MRI Mind Reading claim (newspaper)

  • Context: Medical
  • Thread starter: imiyakawa
  • Tags: mind, MRI, reading

Discussion Overview

The discussion centres on a newspaper claim about a technology that reconstructs video clips from brain activity using fMRI. Participants explore the implications, mechanisms, and limitations of this technology, as well as its representation in media. The conversation includes technical explanations and personal interpretations of the study's findings.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • Some participants express skepticism about the novelty of the technology, suggesting that the process involves matching brain activity to pre-selected video clips rather than direct reconstruction.
  • Others clarify that the technology predicts based on neural activity and associates it with elementary motions, reconstructing the viewed movie from these elements.
  • There is a discussion about the limitations of the reconstruction, noting that the images produced are often fuzzy and not exact replicas of the original clips.
  • Participants mention that the reconstruction process relies on a bank of motion clips sourced from YouTube, which were not the same clips viewed by the subjects.
  • Some participants acknowledge misunderstandings regarding the complexity of the motion bank and express that the technology is more interesting than initially perceived.

Areas of Agreement / Disagreement

Participants do not reach a consensus on the implications and novelty of the technology. There are competing interpretations of how the reconstruction process works and its significance.

Contextual Notes

Limitations include the dependence on the specific methodology used in the study, the nature of the neural activity being recorded, and the challenges in accurately reconstructing dynamic visual stimuli from brain data.

imiyakawa
http://www.smh.com.au/technology/sc...structs-videos-from-brain-20110923-1ko5s.html

It sounds like science fiction: while volunteers watched movie clips, a scanner watched their brains. And from their brain activity, a computer made rough reconstructions of what they viewed.

...

The new work was published online on Thursday by the journal Current Biology. It's a step beyond previous work that produced similar results with still images.


(I can't locate the primary source.)

Didn't want to put this in Skepticism or another subsection... too interesting. Move if wrong, thanks.
 
Here's what it sounded like to me: the scanner picks up some sort of information from the brain which is then fed to a computer. The computer is then given a multiple choice of video clips to match the information to. It picks one and superimposes that clip onto the information somehow to create the image we were shown. Not as interesting as it seems at first.
 
zoobyshoe said:
Here's what it sounded like to me: the scanner picks up some sort of information from the brain which is then fed to a computer. The computer is then given a multiple choice of video clips to match the information to. It picks one and superimposes that clip onto the information somehow to create the image we were shown. Not as interesting as it seems at first.

Why are the images all fuzzy? :s

Why can't they just pull up the actual image that they're guessing? Why do they have to "recreate" it?

Thanks.
 
Here's a link to the primary source, for those interested in reading the paper: http://www.cell.com/current-biology/fulltext/S0960-9822(11)00937-7?switch=standard

Abstract:
Quantitative modeling of human brain activity can provide crucial insights about cortical representations [1,2] and can form the basis for brain decoding devices [3,4,5]. Recent functional magnetic resonance imaging (fMRI) studies have modeled brain activity elicited by static visual patterns and have reconstructed these patterns from brain activity [6,7,8]. However, blood oxygen level-dependent (BOLD) signals measured via fMRI are very slow [9], so it has been difficult to model brain activity elicited by dynamic stimuli such as natural movies. Here we present a new motion-energy [10,11] encoding model that largely overcomes this limitation. The model describes fast visual information and slow hemodynamics by separate components. We recorded BOLD signals in occipitotemporal visual cortex of human subjects who watched natural movies and fit the model separately to individual voxels. Visualization of the fit models reveals how early visual areas represent the information in movies. To demonstrate the power of our approach, we also constructed a Bayesian decoder [8] by combining estimated encoding models with a sampled natural movie prior. The decoder provides remarkable reconstructions of the viewed movies. These results demonstrate that dynamic brain activity measured under naturalistic conditions can be decoded using current fMRI technology.
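To make the abstract's two-stage idea concrete, here is a minimal Python sketch: a fast feature stage standing in for the motion-energy filter bank, a slow stage convolving those features with a toy hemodynamic response, and a per-voxel regression fit. The random filters, array shapes, and ridge penalty below are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: fast visual features. Random spatiotemporal filters stand in
# for the paper's Gabor-like motion-energy filters, just to keep the
# sketch short and self-contained; shapes are illustrative assumptions.
n_frames, n_pixels, n_features = 300, 64, 20
movie = rng.standard_normal((n_frames, n_pixels))    # fake movie stimulus
filters = rng.standard_normal((n_pixels, n_features))
features = movie @ filters                           # (frames, features)

# Stage 2: slow hemodynamics. Convolve each fast feature time course
# with a toy gamma-shaped HRF so the model predicts sluggish BOLD
# responses from fast visual input.
t = np.arange(0.0, 20.0)
hrf = t**5 * np.exp(-t)
hrf /= hrf.sum()
design = np.column_stack(
    [np.convolve(features[:, j], hrf)[:n_frames] for j in range(n_features)]
)

# Fit one voxel: each voxel gets its own weights, estimated here by
# ridge regression on the HRF-convolved features.
true_w = rng.standard_normal(n_features)
bold = design @ true_w + 0.1 * rng.standard_normal(n_frames)
lam = 1.0
w_hat = np.linalg.solve(
    design.T @ design + lam * np.eye(n_features), design.T @ bold
)
print("weight recovery correlation:", np.corrcoef(true_w, w_hat)[0, 1])
```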
 
zoobyshoe said:
Here's what it sounded like to me: the scanner picks up some sort of information from the brain which is then fed to a computer. The computer is then given a multiple choice of video clips to match the information to. It picks one and superimposes that clip onto the information somehow to create the image we were shown. Not as interesting as it seems at first.

Not quite; they are actually predicting based only on neural activity. But they are associating (through a Bayesian method) that neural activity with elementary motions, and then reconstructing the movie the subject is actually watching from those elements of motion.

They do this by training the voxel models on each individual subject.

The neurons they're recording are encoding motion, not a static image, so rather than a bank of colors (which comes standard with every computer nowadays), they need a bank of motions (which does not come standard with computers). They gathered that bank of motions from YouTube (and these were not the same clips the subjects actually saw; they are just a bank of elementary motions).

researchers fed the computer 18 million one-second YouTube clips that the participants had never seen. They asked the computer to predict what brain activity each of those clips would evoke.
Then they asked it to reconstruct the movie clips using the best matches it could find between the YouTube scenes and the participants' brain activity.
The reconstructions are blends of the YouTube snippets, which makes them blurry. Some are better than others. If a human appeared in the original clip, a human form generally showed up in the reconstruction.
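Sketched in Python, that matching-and-blending loop looks something like this: run every bank clip through the fitted encoding model, score the predicted activity against the measured activity, and average the best matches. A minimal sketch, assuming a tiny random "bank" and a linear stand-in for the encoding model; the real bank held ~18 million one-second YouTube clips.

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny stand-in "bank" of motion clips; the real one held ~18 million
# one-second YouTube snippets the subjects had never seen.
n_clips, n_voxels, clip_shape = 100, 50, (16, 16)
bank = rng.random((n_clips, *clip_shape))

# Linear stand-in for the per-subject encoding model: clip -> predicted
# BOLD pattern. W here is an illustrative assumption, not fitted weights.
W = rng.standard_normal((clip_shape[0] * clip_shape[1], n_voxels))

def encode(clip):
    return clip.reshape(-1) @ W

# Predict the brain activity each bank clip would evoke.
predicted = np.stack([encode(c) for c in bank])      # (clips, voxels)

# "Measured" activity: the prediction for a target clip plus noise,
# standing in for an actual fMRI recording of a subject watching it.
target_idx = 7
measured = encode(bank[target_idx]) + 0.5 * rng.standard_normal(n_voxels)

# Rank bank clips by how well their predicted activity matches the
# measurement, then blend the top matches -- averaging many similar
# snippets is what makes the published reconstructions look blurry.
scores = np.array([np.corrcoef(p, measured)[0, 1] for p in predicted])
top = np.argsort(scores)[-10:]
reconstruction = bank[top].mean(axis=0)              # blurry blend
print("target clip ranked in top 10:", target_idx in top)
```

This is a nearest-neighbour caricature: the paper's actual decoder is Bayesian, combining the fitted encoding models with a sampled natural-movie prior. But it captures why the output is a blend of bank snippets rather than a retrieved clip.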
 
Pythagorean said:
Not quite; they are actually predicting based only on neural activity. But they are associating (through a Bayesian method) that neural activity with elementary motions, and then reconstructing the movie the subject is actually watching from those elements of motion.

They do this by training the voxel models on each individual subject.

The neurons they're recording are encoding motion, not a static image, so rather than a bank of colors (which comes standard with every computer nowadays), they need a bank of motions (which does not come standard with computers). They gathered that bank of motions from YouTube (and these were not the same clips the subjects actually saw; they are just a bank of elementary motions).
Yes, I see from your explanation that I misunderstood the "bank" to be much simpler than it actually is, and therefore misunderstood what the result demonstrated. It is more interesting than I thought. Thanks for clarifying.
 
