So, the image of the double-slit experiment shown in the first picture here makes me really want to watch that talk. But I can't find it.
 
This was just amazing!
 
Thank you for sharing. I have some comments and questions:

• Question: I've read in the past that high-speed raster scanning lasers can be used to project images directly onto the retina. Has any research been done where the laser is replaced with a narrow field-of-view sensor, and retina images are thus acquired via a raster-scanning technique?

• Question: WRT imaging of the retina to infer what image/scene a human is "seeing" or remembering, has multi- or hyper-spectral imaging been explored? (That is, simultaneously imaging the retina at many narrow frequency bands, and determining if the vision correlation/inference algorithm can be improved by analyzing selected weighted subsets of pixel values at different frequencies.)
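(To make the weighted-subset idea above concrete: a minimal, purely illustrative sketch of combining narrow-band slices of a hyperspectral cube with per-band weights. The function name, array shapes, and weights are my own assumptions for illustration, not from any actual retina-imaging study.)

```python
import numpy as np

def weighted_band_composite(cube, weights):
    """Collapse a hyperspectral image cube into a single image.

    cube    : ndarray of shape (H, W, B) -- one slice per narrow spectral band
    weights : length-B sequence -- relative weight of each band; normalized
              here so the output stays in the same intensity range
    """
    cube = np.asarray(cube, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                 # normalize weights to sum to 1
    return np.tensordot(cube, w, axes=([2], [0]))   # weighted sum over bands -> (H, W)

# Toy example: a 2x2 image with 3 bands; select only the middle band
cube = np.stack([np.zeros((2, 2)),          # band 0
                 np.ones((2, 2)),           # band 1
                 np.full((2, 2), 2.0)],     # band 2
                axis=-1)
img = weighted_band_composite(cube, [0.0, 1.0, 0.0])
```

An inference algorithm could then be evaluated on many such composites, searching over the weight vector for the subset of bands that improves the correlation.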

• Comment: It strikes me that the method of reconstructing approximations to the images one "sees" in one's brain when remembering a scene or subject, by scanning the brain or retina, may suggest a similar approach to reconstructing the sounds one is "hearing," imagining, or "saying" as well.

• Question: Has any research been done in this area?

• Questions: Has any research identified the areas within the brain which process the auditory system's signals and which might, perhaps, be similarly scanned to enable an analogous method for computing/inferring the sounds/music/speech we hear or “say” in our brain? Could, perhaps, earbud-shaped transducers be used to sense low-level sounds originating in the brain, but having a very slight physical resonance effect in the inner ear?

• Comment: It seems to me that a deeper human-computer interface should incorporate sound as well as sight. (Assuming that one of the intended applications of the vision-inference technology introduced in the video is to pave the way for a direct, bi-directional vision link between humans and computers.)

• Comment: This is a bit off-topic, but perhaps you or someone else can elaborate on this related neuroscience topic… Being a former musician and avid music lover, over the course of my lifetime I have accumulated audio memories, probably equivalent to many gigabytes of recorded sound, all stored in my brain in some highly-compressed form. But I am periodically amazed at how easily these audio memories return when a musical piece I haven't listened to for perhaps decades is unexpectedly played on the radio. For instance, Haydn's 4th Symphony was played on the classical FM station just last week, and the memories I had imprinted in my brain long ago for this piece, down to very intricate details, were suddenly re-activated in high fidelity. I could "hear" the next few bars in advance, before they were played, in great detail (the timbres, notes, interpretation, dynamics, etc.). That is, just hearing a tiny part of a musical piece can function as a "key" which unlocks long-suppressed, highly-detailed memories.

• Question: Has any research been done to determine whether this sort of key-activated high-fidelity memory recall, or re-activation, offers any clues as to how the brain compresses/decompresses information?

• Comment: Over the years, I have noticed that I think (ideate) visually. That is, when I get an idea, it manifests as a sequence of visual images, sometimes with relational linkages between one or more of these images. In early 2010, recognizing this led me to some interesting ideas which I hope to explore some day.

• (Last) Question: Does your vision for this "Solve for X" project include any components related to capturing (or initiating) ideation via the retina?

Thank you, again, for sharing this interesting presentation.