Can a Program Run on the Human Brain?

  • Thread starter: Warpspeed13
  • Tags: Brain, Program
Summary
The discussion centers on the feasibility of running computer programs on the human brain, drawing parallels to AI and neural network technologies. Participants debate whether the brain can be considered a computer capable of executing software, with some arguing that the brain's nonlinear processing differs fundamentally from traditional computing. The concept of the brain being "Turing complete" is explored, with opinions divided on its ability to simulate other computers or run external programs. There is also mention of existing technologies that connect brain activity to artificial limbs, suggesting a form of interaction between biological and mechanical systems. Overall, the conversation highlights the complexities of understanding brain function in relation to computational processes.
  • #31
Technically, "binary" is not a language - it's more like the pen-strokes that can be combined to make letters or pictures and so on ... the machine language of a CPU is usually represented in binary within the machine's architecture, and different machines have different machine languages.

I don't think anyone expects the machine language of a brain to be written in anything like binary inside the brain "architecture", or even for it to be written in any way similar to the machine language of a digital CPU.
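To illustrate the "pen-strokes" point, here is a toy Python sketch (nothing to do with brains): the same bit pattern means different things depending on the machine or convention reading it.

```python
# Toy illustration: "binary" itself carries no meaning -- the same bit
# pattern is read differently by different machines and conventions.
bits = 0b01001000            # eight bits

print(bin(bits))             # the raw pattern: 0b1001000
print(bits)                  # read as an unsigned integer: 72
print(chr(bits))             # read as an ASCII character: 'H'
# On an x86-64 CPU the same byte value (0x48) also happens to be the REX.W
# instruction prefix -- a different machine language reads it differently again.
```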

But I think I can kinda see what OP is trying to get at here.
It links back to the experiment in the first post: if you used a brain to store some data in the manner of that experiment, is it in principle possible for the brain's owner to subsequently understand the data thus stored? I.e., could data be added to someone's conscious mind, bypassing the usual external sensors?

@Warpspeed13:
Am I close?
 
  • #32
Simon Bridge said:
But I think I can kinda see what OP is trying to get at here.
It links back to the experiment in the first post: if you used a brain to store some data in the manner of that experiment, is it in principle possible for the brain's owner to subsequently understand the data thus stored? I.e., could data be added to someone's conscious mind, bypassing the usual external sensors?

Sure. That's a reasonable question. BUT ... my point is that assuming that such information should be stored in binary seems like a bad idea. I think that just complicates the retrieval when you are talking about something as associative as the brain. We already HAVE a mechanism (even if we have no understanding of the details) for storing words, images, sounds, etc. Why complicate things by throwing "binary" into the mix?
 
  • #33
This is fair - though I suspect "binary" is being used here in terms of "machine language" ... it would be reasonable to bypass the brain's standard IO by using the brain's machine language (whatever that turns out to be: probably not binary) to get data into the brain "directly", so to speak, in an owner-retrievable format.

This would have a valid use in cases where the brain's stdio is not working well - maybe the person is blind?
Then it would be useful to be able to get visual information into their brain in a way that bypasses whatever is malfunctioning in their visual system.

But I really need confirmation from OP before continuing on this line.
I don't know for sure that this is what is intended, since it departs somewhat from the initial wording.
 
  • #34
Yeah, you've pretty much got it, but I was thinking that since there is very little understanding of the brain's I/O, it might be faster to make the brain adapt to the machine, and eventually (assuming the adaptability seen in the rat experiment extends this far) have the brain convert the signals to its own format on the fly. That way, functional prototypes of things such as bionic limbs that provide tactile feedback could be fast-tracked, since there would be no need to fully understand the brain's I/O to design them.
 
  • #35
Warpspeed13 said:
Yeah, you've pretty much got it, but I was thinking that since there is very little understanding of the brain's I/O, it might be faster to make the brain adapt to the machine, and eventually (assuming the adaptability seen in the rat experiment extends this far) have the brain convert the signals to its own format on the fly. That way, functional prototypes of things such as bionic limbs that provide tactile feedback could be fast-tracked, since there would be no need to fully understand the brain's I/O to design them.

Without that understanding of the brain, how would you propose to create ANY kind of meaningful interface of the sort you seem to intend?
 
  • #36
Well, the brain in the rat's case adapted to the new information quite readily. I'm proposing it may be possible that the brain would create its own interface if the information were transmitted in a learnable format such as a computer language. It has already been noted earlier in the discussion that people are capable of "running the code" in their head, so evidently, with training, the brain can learn to comprehend that information format. So it seems to me, given both cases, that it would be worth testing to see if the brain would develop its own interface. As far as I know, no one has ever tried anything similar.
 
  • #37
Warpspeed13 said:
Well, the brain in the rat's case adapted to the new information quite readily. I'm proposing it may be possible that the brain would create its own interface if the information were transmitted in a learnable format such as a computer language. It has already been noted earlier in the discussion that people are capable of "running the code" in their head, so evidently, with training, the brain can learn to comprehend that information format. So it seems to me, given both cases, that it would be worth testing to see if the brain would develop its own interface. As far as I know, no one has ever tried anything similar.

I agree it's an interesting idea; I just think computer language is a bad choice for the implementation. Something learnable, yes, but direct computer-language-type constructs would not be as natural to the brain as material closer to what it has already learned. I don't agree with your extrapolation that because the brain can step through a program, a program is an easy format for the brain. The brain goes through a HUGE number of steps that a computer doesn't have to when "stepping through" a program.
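For concreteness, this is the sort of made-up snippet people mean by "running the code" in their head; even three lines force a mental trace to track every variable at every step, which is the hidden bookkeeping being pointed out above.

```python
# A made-up three-line example of code one might "run" mentally.  Tracing it
# by hand means holding x, i, and the loop bound in working memory at every
# iteration -- far more internal steps than the listing suggests.
x = 0
for i in range(1, 5):
    x += i
print(x)    # a careful mental trace gives 1 + 2 + 3 + 4 = 10
```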
 
  • #38
If you want a good book that goes deep into this subject, get "Going Inside" by McCrone: https://www.amazon.com/dp/0880642629/?tag=pfamazon01-20

It's a great book, and there's a chapter called the hunt for the neural code, or something like that.

There's been much conjecture in this thread but there is a long history of discussion on this subject if one is really interested. McCrone is a good place to start and is accessible reading for the layperson.

In short, the brain "codes" information through the selective strengthening of synapses which connect networks of neurons in primary sensory cortices. This is often referred to as Hebbian learning, and memories consist of sequences of these Hebbian templates that are released, or remembered, temporally in the same fashion that they were stored initially. The mechanism through which this occurs is still, obviously, under investigation, but a leading candidate is a process known as "chaotic itinerancy" (http://www.ncbi.nlm.nih.gov/pubmed/15285053), whereby the chaotic state of the system moves through particular trajectories that resemble flashes of frames in a film reel. (http://www.ncbi.nlm.nih.gov/pubmed/16513196)
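As a rough illustration of what "selective strengthening of synapses" means computationally, here is a minimal, textbook-style Hebbian update in Python. The sizes, rates and learning rate are arbitrary; this is not a model of the work cited above.

```python
import numpy as np

# Minimal Hebbian sketch: synapses between co-active neurons are strengthened,
# leaving behind a weight "template" of the experienced activity pattern.
rng = np.random.default_rng(0)
pre = rng.random(8)             # firing rates of 8 presynaptic neurons
post = rng.random(4)            # firing rates of 4 postsynaptic neurons
w = np.zeros((4, 8))            # synaptic weights, postsynaptic x presynaptic

learning_rate = 0.1
for _ in range(100):            # repeated co-activation
    w += learning_rate * np.outer(post, pre)

print(w.round(2))   # strongest weights sit where pre and post were both active
```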

So, the way the brain works is really "old school." It works much like the way old film cameras and film projectors work. There is even a "frame rate" for how these sensori-motor experiences are stored and retrieved. The details are complicated, of course, but as a gross reading, you can subdivide the frame rates of human "thought" and perception into 3 categories. You have 40 Hz gamma oscillations, which are essentially related to the storage and retrieval of specific sensory percepts. You have beta oscillations (~20 Hz), which are related to intercommunication between secondary and association cortices, and you have alpha oscillations (~10 Hz), which are associated with global, hemisphere-wide cognitive and sensori-motor processes.
(http://www.ncbi.nlm.nih.gov/pubmed/?term=freeman++alpha-theta+rates)
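For the arithmetic behind that "frame rate" reading, the cycle time of each band is just the reciprocal of its frequency:

```python
# Period of each oscillation band mentioned above (1/frequency, in milliseconds).
for band, hz in (("gamma", 40), ("beta", 20), ("alpha", 10)):
    print(f"{band:>5}: ~{hz} Hz  ->  ~{1000 / hz:.0f} ms per cycle")
# gamma ~25 ms, beta ~50 ms, alpha ~100 ms
```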

So, the point here is that, unless you have a way for a serial-based mechanism to selectively strengthen the billions of individual synapses that are responsible for each frame of a remembered percept of a sensori-motor event, you are going to have a difficult time "uploading" a computer program into the brain. It is much more efficient to store memories in the brain and interface with it simply by training a human's eyeballs on the page of a book, and then turning the page.
 
  • #39
phinds said:
I agree it's an interesting idea; I just think computer language is a bad choice for the implementation. Something learnable, yes, but direct computer-language-type constructs would not be as natural to the brain as material closer to what it has already learned. I don't agree with your extrapolation that because the brain can step through a program, a program is an easy format for the brain. The brain goes through a HUGE number of steps that a computer doesn't have to when "stepping through" a program.
Hmm, maybe a language used in fuzzy logic processing, or trinary, would be better. I wonder if this could be tested on a cockroach or a rat in a maze? Maybe teach it just left/right and then, to test whether it is working, throw it in a new maze, transmit directions, and see if it has a better success rate.
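A back-of-the-envelope sketch of how that comparison might be scored (all probabilities here are invented; the only point is the shape of the test):

```python
import random

# Crude simulation of the proposed maze test.  At each junction the animal
# either follows the transmitted left/right cue (with some probability of
# having learned to use it) or guesses at chance.  All numbers are made up.
def runs_maze(n_junctions, p_follow_cue):
    return all(random.random() < p_follow_cue or random.random() < 0.5
               for _ in range(n_junctions))

def success_rate(p_follow_cue, trials=10_000, n_junctions=5):
    return sum(runs_maze(n_junctions, p_follow_cue) for _ in range(trials)) / trials

print("no cue:  ", success_rate(0.0))   # chance performance (~3% for 5 junctions)
print("with cue:", success_rate(0.7))   # if the brain learns to use the signal
```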
 
  • #40
Warpspeed13 said:
Well, the brain in the rat's case adapted to the new information quite readily. I'm proposing it may be possible that the brain would create its own interface if the information were transmitted in a learnable format such as a computer language. It has already been noted earlier in the discussion that people are capable of "running the code" in their head, so evidently, with training, the brain can learn to comprehend that information format. So it seems to me, given both cases, that it would be worth testing to see if the brain would develop its own interface. As far as I know, no one has ever tried anything similar.
I can run code in my head, but not nearly as well as a computer. The only reason I ever do it is to discover why a computer program isn't working as intended - it's a painstaking process.
I think if you wanted to try something out, you would identify something that is already communicated and try to wire it. For example, take three or four dogs and pick some sort of communication - like wagging their tail. Then use fMRI to identify areas of the brain active during tail-wagging, recognizing tail-wagging, and recognizing another dog. Then put a transceiver on each dog, each with a unique code. Encode proximity information to each of the other three dogs and wire it to three spots where "other dog recognition" was noted in the fMRI. Then wire the tail-wagging to allow communication that way.
I don't think we know how well different parts of the brain can rewire themselves to make sense of the new information. But it might be interesting to allow the dogs to interact with these transceivers for a few years.
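As a sketch of what "encode proximity information" could mean in practice (the range, scaling and channel assignment are all assumptions for illustration only):

```python
# Hypothetical mapping from inter-dog distance to a stimulation intensity
# delivered on that dog's channel; closer dogs produce a stronger signal.
MAX_RANGE_M = 50.0   # assumed radio range

def proximity_to_intensity(distance_m):
    """Map distance (metres) to a 0..1 intensity, closer = stronger."""
    clipped = min(max(distance_m, 0.0), MAX_RANGE_M)
    return 1.0 - clipped / MAX_RANGE_M

# Dog A's transceiver reports the other three dogs at these distances:
distances = {"B": 3.0, "C": 20.0, "D": 60.0}
signals = {dog: proximity_to_intensity(d) for dog, d in distances.items()}
print(signals)   # {'B': 0.94, 'C': 0.6, 'D': 0.0}
```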
 
  • #41
Warpspeed13 said:
It has already been noted earlier in the discussion that people are capable of "running the code" in their head, so evidently, with training, the brain can learn to comprehend that information format.
... the brain does not input the computer code. In machine terms, the brain uploads the written form as a series of images and then converts that into its internal language to be thought about. The internal process is unknown in detail. The result is converted back into computer code for output via a vector printer (your hand) and confirmed by another video scan.

The rats accommodated the presence of the new brain state - they did not "know" the information.
It is far more likely that the brain would treat intruding information as a kind of damage - after all, that part of the brain was probably used for something already.

Direct input of data to a person's brain in a way that the person could access would amount to writing memories.
That would involve much more subtlety and more holographic access ... memories need to be written to wide areas of the brain. We don't know how wide an area would be needed ... so now we have entered the realm of wild speculation.
 
