Medical MRI Mind Reading claim (newspaper)

  • Thread starter: imiyakawa
  • Tags: Mind, MRI, Reading
AI Thread Summary
Researchers have developed a method to reconstruct video clips from brain activity using fMRI technology, marking a significant advancement beyond previous studies that focused on still images. The process involves scanning participants' brains while they watch movies and using a computer to predict and reconstruct the viewed content based on neural activity associated with motion. The computer utilizes a database of 18 million one-second YouTube clips to match brain activity patterns, resulting in blurry reconstructions rather than clear images. This approach highlights the complexity of visual processing in the brain, as it relies on encoding motion rather than static images. Overall, the findings demonstrate the potential of brain decoding devices to interpret dynamic stimuli.
imiyakawa
http://www.smh.com.au/technology/sc...structs-videos-from-brain-20110923-1ko5s.html

It sounds like science fiction: while volunteers watched movie clips, a scanner watched their brains. And from their brain activity, a computer made rough reconstructions of what they viewed.

...

The new work was published online on Thursday by the journal Current Biology. It's a step beyond previous work that produced similar results with still images.

::I can't locate the primary source::

Didn't want to put this in Skepticism or another subsection... too interesting. Move it if this is the wrong place, thanks.
 
Here's what it sounded like to me: the scanner picks up some sort of information from the brain which is then fed to a computer. The computer is then given a multiple choice of video clips to match the information to. It picks one and superimposes that clip onto the information somehow to create the image we were shown. Not as interesting as it seems at first.
 
zoobyshoe said:
Here's what it sounded like to me: the scanner picks up some sort of information from the brain which is then fed to a computer. The computer is then given a multiple choice of video clips to match the information to. It picks one and superimposes that clip onto the information somehow to create the image we were shown. Not as interesting as it seems at first.

Why are the images all fuzzy? :s

Why can't they just pull up the actual image that they're guessing? Why do they have to "recreate" it?

Thanks.
 
Here's a link to the primary source, for those interested in reading the paper: http://www.cell.com/current-biology/fulltext/S0960-9822(11)00937-7?switch=standard

Abstract:
Quantitative modeling of human brain activity can provide crucial insights about cortical representations [1,2] and can form the basis for brain decoding devices [3,4,5]. Recent functional magnetic resonance imaging (fMRI) studies have modeled brain activity elicited by static visual patterns and have reconstructed these patterns from brain activity [6,7,8]. However, blood oxygen level-dependent (BOLD) signals measured via fMRI are very slow [9], so it has been difficult to model brain activity elicited by dynamic stimuli such as natural movies. Here we present a new motion-energy [10,11] encoding model that largely overcomes this limitation. The model describes fast visual information and slow hemodynamics by separate components. We recorded BOLD signals in occipitotemporal visual cortex of human subjects who watched natural movies and fit the model separately to individual voxels. Visualization of the fit models reveals how early visual areas represent the information in movies. To demonstrate the power of our approach, we also constructed a Bayesian decoder [8] by combining estimated encoding models with a sampled natural movie prior. The decoder provides remarkable reconstructions of the viewed movies. These results demonstrate that dynamic brain activity measured under naturalistic conditions can be decoded using current fMRI technology.
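For anyone curious what "fit the model separately to individual voxels" looks like computationally, here is a minimal sketch. The random filter bank (standing in for the paper's spatiotemporal Gabor filters), the toy hemodynamic response, and plain least squares are my own simplifying assumptions, not the authors' actual pipeline.

Code:
import numpy as np

# Sketch of a motion-energy encoding model, loosely following the abstract:
# fast visual features and slow hemodynamics handled by separate components.
# The random filter bank, toy HRF, and plain least squares below are
# illustrative assumptions, not the paper's actual pipeline.

def motion_energy_features(movie, n_filters=64, seed=0):
    """movie: (n_seconds, height, width) luminance frames (1 frame/s here).
    Returns (n_seconds, n_filters) motion-energy features. The paper uses
    banks of spatiotemporal Gabor filters; random projections stand in."""
    rng = np.random.default_rng(seed)
    t = movie.shape[0]
    filters = rng.standard_normal((n_filters, movie[0].size))
    responses = movie.reshape(t, -1) @ filters.T
    # "Motion energy": squared change of filter outputs over time.
    return np.diff(responses, axis=0, prepend=responses[:1]) ** 2

def canonical_hrf(length=20):
    """Toy hemodynamic response: slow rise and fall over ~20 s."""
    t = np.arange(length, dtype=float)
    return t**5 * np.exp(-t) / 120.0

def fit_voxel(features, bold):
    """Convolve each feature with the HRF (the slow component), then fit
    one voxel's BOLD time course by least squares (the fast weights)."""
    hrf = canonical_hrf()
    design = np.column_stack(
        [np.convolve(f, hrf)[:len(f)] for f in features.T])
    weights, *_ = np.linalg.lstsq(design, bold, rcond=None)
    return weights, design

# Tiny synthetic check: 300 s of 16x16 "movie" driving one simulated voxel.
rng = np.random.default_rng(1)
movie = rng.standard_normal((300, 16, 16))
feats = motion_energy_features(movie)
true_w = rng.standard_normal(feats.shape[1])
bold = (np.convolve(feats @ true_w, canonical_hrf())[:300]
        + 0.1 * rng.standard_normal(300))
w_hat, design = fit_voxel(feats, bold)
print("fit correlation:", np.corrcoef(design @ w_hat, bold)[0, 1])

The design point from the abstract is visible even in this toy: the fast visual features and the slow hemodynamics are separate stages, which is how the model gets around the sluggishness of the BOLD signal.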
 
zoobyshoe said:
Here's what it sounded like to me: the scanner picks up some sort of information from the brain which is then fed to a computer. The computer is then given a multiple choice of video clips to match the information to. It picks one and superimposes that clip onto the information somehow to create the image we were shown. Not as interesting as it seems at first.

Not quite: they are actually predicting based only on neural activity. But they associate (through a Bayesian method) that neural activity with elementary motions, and then reconstruct the movie the subject is actually watching from those elements of motion.

They do this by training the voxel models separately on each individual subject.

The neurons they're recording encode motion, not a static image, so rather than a bank of colors (which comes standard with every computer nowadays), they need a bank of motions (which does not come standard with computers). They gathered that bank of motions from YouTube (and these were not the same clips the subjects actually saw; they are just a bank of elementary motions).

researchers fed the computer 18 million one-second YouTube clips that the participants had never seen. They asked the computer to predict what brain activity each of those clips would evoke.
Then they asked it to reconstruct the movie clips using the best matches it could find between the YouTube scenes and the participants' brain activity.
The reconstructions are blends of the YouTube snippets, which makes them blurry. Some are better than others. If a human appeared in the original clip, a human form generally showed up in the reconstruction.
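As a toy illustration of that matching-and-blending step (a sketch under my own assumptions: a small random library instead of the 18 million YouTube clips, and a fixed linear map standing in for the fitted encoding models):

Code:
import numpy as np

# Toy sketch of the decoding step described above: predict the brain
# activity each library clip would evoke, rank clips by how well that
# prediction matches the measured activity, and average the top matches.
# The library here is random; in the paper it was ~18 million one-second
# YouTube clips the subjects had never seen.

rng = np.random.default_rng(0)
n_clips, n_voxels = 10_000, 200
clip_frames = rng.random((n_clips, 32, 32))  # one frame per clip

# Stand-in for the fitted encoding models: a fixed linear map from clip
# pixels to predicted voxel responses (an assumption, for illustration).
encoder = rng.standard_normal((32 * 32, n_voxels)) / 32.0
predicted_activity = clip_frames.reshape(n_clips, -1) @ encoder

# "Measured" activity: the response to a held-out target clip, plus noise.
target = rng.random((32, 32))
measured = target.reshape(-1) @ encoder + 0.3 * rng.standard_normal(n_voxels)

# Rank library clips by correlation between predicted and measured activity.
pz = predicted_activity - predicted_activity.mean(1, keepdims=True)
pz /= pz.std(1, keepdims=True)
mz = (measured - measured.mean()) / measured.std()
scores = pz @ mz / n_voxels
top = np.argsort(scores)[-100:]

# Reconstruction = weighted blend of the best-matching clips. Averaging
# many roughly similar clips is exactly why the results look blurry.
w = scores[top] / scores[top].sum()
reconstruction = np.tensordot(w, clip_frames[top], axes=1)
print("reconstruction vs target correlation:",
      np.corrcoef(reconstruction.ravel(), target.ravel())[0, 1])

No single library clip matches the measured activity exactly, so the output is a blend of many near-misses, which is where the fuzziness comes from.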
 
Pythagorean said:
Not quite: they are actually predicting based only on neural activity. But they associate (through a Bayesian method) that neural activity with elementary motions, and then reconstruct the movie the subject is actually watching from those elements of motion.

They do this by training the voxel models separately on each individual subject.

The neurons they're recording encode motion, not a static image, so rather than a bank of colors (which comes standard with every computer nowadays), they need a bank of motions (which does not come standard with computers). They gathered that bank of motions from YouTube (and these were not the same clips the subjects actually saw; they are just a bank of elementary motions).
Yes, I see from your explanation that I took the "bank" to be much simpler than it actually is, and therefore misunderstood what the result demonstrated. It is more interesting than I thought. Thanks for clarifying.
 