Medical MRI Mind Reading claim (newspaper)

  • Thread starter: imiyakawa
  • Tags: Mind, MRI, Reading
AI Thread Summary
Researchers have developed a method to reconstruct video clips from brain activity using fMRI technology, marking a significant advancement beyond previous studies that focused on still images. The process involves scanning participants' brains while they watch movies and using a computer to predict and reconstruct the viewed content based on neural activity associated with motion. The computer utilizes a database of 18 million one-second YouTube clips to match brain activity patterns, resulting in blurry reconstructions rather than clear images. This approach highlights the complexity of visual processing in the brain, as it relies on encoding motion rather than static images. Overall, the findings demonstrate the potential of brain decoding devices to interpret dynamic stimuli.
imiyakawa
http://www.smh.com.au/technology/sc...structs-videos-from-brain-20110923-1ko5s.html

It sounds like science fiction: while volunteers watched movie clips, a scanner watched their brains. And from their brain activity, a computer made rough reconstructions of what they viewed.

...

The new work was published online on Thursday by the journal Current Biology. It's a step beyond previous work that produced similar results with still images.



I can't locate the primary source.

Didn't want to put this in Skepticism or another subsection; too interesting. Move it if that's wrong, thanks.
 
Here's what it sounded like to me: the scanner picks up some sort of information from the brain which is then fed to a computer. The computer is then given a multiple choice of video clips to match the information to. It picks one and superimposes that clip onto the information somehow to create the image we were shown. Not as interesting as it seems at first.
 
zoobyshoe said:
Here's what it sounded like to me: the scanner picks up some sort of information from the brain which is then fed to a computer. The computer is then given a multiple choice of video clips to match the information to. It picks one and superimposes that clip onto the information somehow to create the image we were shown. Not as interesting as it seems at first.

Why are the images all fuzzy? :s

Why can't they just pull up the actual image that they're guessing? Why do they have to "recreate" it?

Thanks.
 
Here's a link to the primary source, for those interested in reading the paper: http://www.cell.com/current-biology/fulltext/S0960-9822(11)00937-7?switch=standard

Abstract:
Quantitative modeling of human brain activity can provide crucial insights about cortical representations [1,2] and can form the basis for brain decoding devices [3,4,5]. Recent functional magnetic resonance imaging (fMRI) studies have modeled brain activity elicited by static visual patterns and have reconstructed these patterns from brain activity [6,7,8]. However, blood oxygen level-dependent (BOLD) signals measured via fMRI are very slow [9], so it has been difficult to model brain activity elicited by dynamic stimuli such as natural movies. Here we present a new motion-energy [10,11] encoding model that largely overcomes this limitation. The model describes fast visual information and slow hemodynamics by separate components. We recorded BOLD signals in occipitotemporal visual cortex of human subjects who watched natural movies and fit the model separately to individual voxels. Visualization of the fit models reveals how early visual areas represent the information in movies. To demonstrate the power of our approach, we also constructed a Bayesian decoder [8] by combining estimated encoding models with a sampled natural movie prior. The decoder provides remarkable reconstructions of the viewed movies. These results demonstrate that dynamic brain activity measured under naturalistic conditions can be decoded using current fMRI technology.
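For anyone who wants to see the shape of the two-stage idea in the abstract, here's a minimal numerical sketch in Python. It is not the authors' code: the motion-energy features are reduced to pooled frame-difference energies (standing in for their Gabor filter pyramid), the HRF is a toy gamma shape, and the per-voxel fit is plain least squares rather than their regularized regression. All array sizes are made up for the demo.

```python
# Sketch of the paper's two-stage idea: fast motion-energy features of
# the movie are computed per frame, then convolved with a slow
# hemodynamic response function (HRF) before fitting each voxel.
# The filters and HRF here are simplified placeholders, not the ones
# used in the study.
import numpy as np

rng = np.random.default_rng(0)

def motion_energy(frames):
    """Crude motion-energy features: squared frame-to-frame differences,
    pooled over four spatial quadrants. Stands in for the Gabor pyramid."""
    diffs = np.diff(frames, axis=0) ** 2             # (T-1, H, W)
    diffs = np.concatenate([diffs[:1], diffs])       # pad back to T frames
    h, w = frames.shape[1] // 2, frames.shape[2] // 2
    pools = [diffs[:, :h, :w], diffs[:, :h, w:],
             diffs[:, h:, :w], diffs[:, h:, w:]]
    return np.stack([p.mean(axis=(1, 2)) for p in pools], axis=1)  # (T, 4)

def hrf(length=16, peak=5.0):
    """Toy gamma-shaped hemodynamic response (1 Hz sampling)."""
    t = np.arange(length)
    h = (t / peak) ** 2 * np.exp(-(t - peak) / 2.0)
    return h / h.sum()

# Fake "movie" and a fake voxel response, just for the demonstration.
T, H, W = 200, 8, 8
frames = rng.random((T, H, W))
feats = motion_energy(frames)                        # fast features (T, 4)
slow_feats = np.column_stack(                        # slow predicted drive
    [np.convolve(feats[:, j], hrf())[:T] for j in range(feats.shape[1])])

true_w = rng.normal(size=4)
bold = slow_feats @ true_w + 0.1 * rng.normal(size=T)   # one voxel's signal

# Fit the voxel's weights by ordinary least squares (the paper fits a
# regularized model separately to every voxel).
w_hat, *_ = np.linalg.lstsq(slow_feats, bold, rcond=None)
print("recovered weights:", np.round(w_hat, 2))
```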
 
zoobyshoe said:
Here's what it sounded like to me: the scanner picks up some sort of information from the brain which is then fed to a computer. The computer is then given a multiple choice of video clips to match the information to. It picks one and superimposes that clip onto the information somehow to create the image we were shown. Not as interesting as it seems at first.

Not quite; they are actually predicting based only on neural activity. They associate (through a Bayesian method) that neural activity with elementary motions and then reconstruct the movie the subjects are actually watching out of those elements of motion.

They do this by fitting the encoding model separately to individual voxels for each subject.

The neurons they're recording encode motion, not a static image, so rather than a bank of colors (which comes standard with every computer nowadays), they need a bank of motions (which does not). They gathered that bank of motions from YouTube (and the clips were not the ones the subjects actually saw; they're just a bank of elementary motions).

researchers fed the computer 18 million one-second YouTube clips that the participants had never seen. They asked the computer to predict what brain activity each of those clips would evoke.
Then they asked it to reconstruct the movie clips using the best matches it could find between the YouTube scenes and the participants' brain activity.
The reconstructions are blends of the YouTube snippets, which makes them blurry. Some are better than others. If a human appeared in the original clip, a human form generally showed up in the reconstruction.
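A toy version of that matching step, to make the "blend of snippets" point concrete. Everything here is invented for illustration (a thousand-clip library instead of 18 million, one frame per clip, a random linear encoding model): predict the BOLD pattern each library clip should evoke, rank the clips by correlation with the observed pattern, and average the top matches. That averaging is exactly why the reconstructions come out blurry.

```python
# Toy decoding step: predict the BOLD pattern each library clip should
# evoke, rank clips by how well the prediction matches the observed
# activity, then average the top matches into a blurry blend.
# All sizes and the encoding weights are made up for the demo.
import numpy as np

rng = np.random.default_rng(1)

n_clips, n_voxels = 1000, 50       # stand-in for 18 million YouTube clips
clip_frames = rng.random((n_clips, 8, 8))          # one frame per clip
W = rng.normal(size=(64, n_voxels))                # "fitted" encoding weights

def predict_bold(frame):
    """Predicted voxel pattern for one clip (linear encoding model)."""
    return frame.ravel() @ W

library_bold = np.stack([predict_bold(f) for f in clip_frames])

# Pretend the subject watched clip 123; observe its noisy BOLD pattern.
target = 123
observed = predict_bold(clip_frames[target]) + 0.5 * rng.normal(size=n_voxels)

# Rank library clips by correlation between predicted and observed BOLD.
z_obs = (observed - observed.mean()) / observed.std()
z_lib = (library_bold - library_bold.mean(axis=1, keepdims=True)) \
        / library_bold.std(axis=1, keepdims=True)
scores = z_lib @ z_obs / n_voxels

ranking = np.argsort(scores)[::-1]
top = ranking[:30]                                 # best 30 matches
reconstruction = clip_frames[top].mean(axis=0)     # the blurry blend

print("true clip ranked:", int(np.where(ranking == target)[0][0]))
print("blend error:", float(np.abs(reconstruction - clip_frames[target]).mean()))
```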
 
Pythagorean said:
Not quite; they are actually predicting based only on neural activity. They associate (through a Bayesian method) that neural activity with elementary motions and then reconstruct the movie the subjects are actually watching out of those elements of motion.

They do this by fitting the encoding model separately to individual voxels for each subject.

The neurons they're recording encode motion, not a static image, so rather than a bank of colors (which comes standard with every computer nowadays), they need a bank of motions (which does not). They gathered that bank of motions from YouTube (and the clips were not the ones the subjects actually saw; they're just a bank of elementary motions).
Yes, I see from your explanation that I took the "bank" to be much simpler than it actually is, and therefore misunderstood what the result demonstrated. It is more interesting than I thought. Thanks for clarifying.
 