The Quest for Consciousness: A Neurobiological Approach

AI Thread Summary
The discussion revolves around Christof Koch's book "The Quest for Consciousness," which explores the biological basis of consciousness and introduces the concept of neuronal correlates of consciousness (NCC). Participants debate the validity of NCC, with some philosophers arguing that it lacks coherence and does not address the "hard problem" of consciousness—how subjective experiences arise from physical processes. The conversation contrasts scientific and philosophical approaches, with some asserting that philosophy often lacks empirical evidence, while others argue that science has not yet adequately explained consciousness. Key points include the relationship between brain activity and conscious experience, the limitations of current scientific understanding, and the potential existence of non-physical aspects of consciousness. The discussion highlights differing views on whether consciousness can be fully explained through physical processes or if there is something more, with participants expressing skepticism about both mechanistic explanations and supernatural interpretations. The complexity of defining consciousness and the ongoing debates about its nature are central themes, emphasizing the interdisciplinary challenges in understanding this profound topic.
  • #51
Found something interesting that perhaps you guys already know since I've seen Dennett's name about the group:

Daniel Dennett, director of the Centre for Cognitive Studies at the University of Medford, Massachusetts, commented that "the global communication network is already capable of complex behaviour that defies the efforts of human experts to comprehend".

I think I shall have to look him up.
 
  • #52
saltydog said:
Found something interesting that perhaps you guys already know since I've seen Dennett's name about the group:

Daniel Dennett, director of the Centre for Cognitive Studies at the University of Medford, Massachusetts, commented that "the global communication network is already capable of complex behaviour that defies the efforts of human experts to comprehend".

I think I shall have to look him up.

Do you mean Tufts University in Medford? Yeah, there are neural networks floating out there on the web that not only display fairly complex behavior, but are very capable of self-initiation of action and are autonomous entities for all practical purposes.
 
  • #53
saltydog said:
Well, you know "exhaustive" means perfect and that's another issue. I've already stated my claim of "marble mind" elsewhere here that caused . . . some awkwardness: I lack proof. But in response to your question, I think it can. Let's take an ephemeral one: human emotions like love and hate. Some would say that human emotion could never be represented as an equation. I think it can. It's a pattern of, dare I speak the word, "dynamics" of neural assemblies. If we were to artificially replicate similar dynamics in some machine, it too, in my humble opinion, would exhibit qualities we could equate to love and hate. But this is a sophisticated example. Simple ones will occur first, and during this experimental work our concept of "consciousness" will undergo radical changes; we will de-anthropomorphize it!

Are you sure there's nothing more to it? I mean, if we replicated our "love" program into a "machine", would it really be able to "love"? What about the social aspect of our own mental evolution? Can a full-blown conscious program exist without having first evolved (developed) and gained "life-experience"?
 
  • #54
saltydog,
You say you're interested in Dennett? The reason I asked that part about it "evolving" (developing) within the context of "world-experiences" is because Dennett seems to think that this is necessary. He agrees with you about consciousness being nothing more than complex relationships among neurons... but he doesn't think those complex relationships can properly be created ex nihilo; rather, they require the ability to interact (and the past of having interacted) with an environment conducive to the development of conscious abilities.
 
  • #55
loseyourname said:
Do you mean Tufts University in Medford? Yeah, there are neural networks floating out there on the web that not only display fairly complex behavior, but are very capable of self-initiation of action and are autonomous entities for all practical purposes.

That's interesting. I've written neural networks (well, only 256 nodes), just enough to recognize a number with training. I'll search for them. Can you give me a link?
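
For anyone curious, here is a minimal sketch of the kind of toy network I mean, in Python with numpy. The 256 flattened inputs, the made-up training patterns, and the single layer of weights are all illustrative assumptions, not the actual program I wrote:

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Two made-up "digit" templates flattened to 256 inputs, plus a little
# noise to build a small training set (hypothetical data, not real images).
zero_template = rng.random(256) < 0.3
one_template = rng.random(256) < 0.3
X = np.array([zero_template ^ (rng.random(256) < 0.05) for _ in range(20)] +
             [one_template ^ (rng.random(256) < 0.05) for _ in range(20)],
             dtype=float)
y = np.array([0.0] * 20 + [1.0] * 20)

# One layer of weights: 256 inputs -> 1 output, trained by gradient descent.
w = rng.normal(scale=0.01, size=256)
b = 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)            # forward pass
    grad = p - y                      # error signal at the output
    w -= 0.1 * (X.T @ grad) / len(y)  # adjust the 256 connection weights
    b -= 0.1 * grad.mean()            # adjust the bias

print("training accuracy:", ((sigmoid(X @ w + b) > 0.5) == y).mean())

A single layer of weights like this is obviously nothing like a mind, but the training loop is where the connections between the nodes actually get established.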

Salty
 
  • #56
Mentat said:
saltydog,
You say you're interested in Dennett? The reason I asked that part about it "evolving" (developing) within the context of "world-experiences" is because Dennett seems to think that this is necessary. He agrees with you about consciousness being nothing more than complex relationships among neurons... but he doesn't think those complex relationships can properly be created ex nihilo; rather, they require the ability to interact (and the past of having interacted) with an environment conducive to the development of conscious abilities.

Well, I'm just starting to read Dennett, but I too believe AI will have to "grow" an artificial mind. I think the time to grow is a "secondary requirement", though: it's needed in natural minds to establish the connections between neurons, and it may be the "easiest" initial approach for us to simulate. Since I suspect a functioning mind is a consequence of architecture (connections), if for some reason the architecture could be established from the start, a growth period would not be needed. It might be a lot easier to just grow it, though.

I must admit, both Chalmers and Dennett, from what I've read so far (and I plan to continue), are, well, not very specific about suggesting experimental models that might shed light on consciousness one way or the other. It seems Chalmers is more interested in "spiritual" connections to consciousness; Dennett, a biochemical one. Me, well, unless I see differently, more a dynamical and architectural one.
 
  • #57
Network security is the best example I can think of. Synchronized encryption is another. I was running a search just now to see what I could find for you, and this came up. It's pretty interesting. University researchers in Britain have developed a robot that can formulate hypotheses and design experiments, and it has been set to work on scientific tasks that are often considered too tedious for humans.
 
  • #58
Mentat said:
Are you sure there's nothing more to it? I mean, if we replicated our "love" program into a "machine", would it really be able to "love"? What about the social aspect of our own mental evolution? Can a full-blown conscious program exist without having first evolved (developed) and gained "life-experience"?

Yeah, I know love is a tough one. That's why I chose it. We may well have to grow an artificial mind and, as Rodney Brooks said, "not want to turn it off". But I still think the growth period is a secondary consequence of the time needed to establish an architecture. Surely there are biochemical changes which occur during development as well, but I'm not aware of anyone suggesting that mind is contained in the biochemistry inside the neuron. Certainly the biochemistry affects the function of neurons, but it seems to me to be a "localized" effect and not a global one; consciousness seems to be a "global phenomenon", don't you think? Everything I've read points to the neural architecture as the seat of consciousness. If this turns out to be true, then if the architecture could be established from the start, growth would not be needed. And yes, I do believe that a sufficiently complex architecture could exhibit behavior that is similar to love.
 
  • #59
Mentat said:
saltydog,
You say you're interested in Dennett? The reason I asked that part about it "evolving" (developing) within the context of "world-experiences" is because Dennett seems to think that this is necessary. He agrees with you about consciousness being nothing more than complex relationships among neurons... but he doesn't think those complex relationships can properly be created ex nihilo; rather, they require the ability to interact (and the past of having interacted) with an environment conducive to the development of conscious abilities.

To elaborate on what salty said, imagine that we can reconstruct your own brain, neuron by neuron, to the point where the architecture and functionality of the second brain was exactly the same as yours. Would it not believe it was you? It would have your memories filed away and your behavioral tendencies programmed into it (including any tendency to love in a particular way). It would also hold all of the same beliefs that you do. Now imagine we did this same thing, but instead of using organic neurons, we used silicon chips that performed exactly the same computations and behaved exactly like human neurons. Wouldn't the outcome be the same? We'd have created a robot Mentat, complete with your past and your social constructs. (You'll have to put aside the practical impossibility of ever doing this, of course.)
 
  • #60
loseyourname said:
Network security is the best example I can think of. Synchronized encryption is another. I was running a search right now to see what I could find for you, and this came up. It's pretty interesting. University researchers in Britain have developed a robot that can infer hypotheses and experiments and has been set to work on scientific tasks that are often considered too tedious for humans.

Thanks, I've done a lot of work with RSA encryption as well.
 
  • #61
A conscious moment

I've been reading John Searle's "Consciousness".

He proposes the "Unified Field Theory", suggesting that consciousness is spread across a portion of the brain called the thalamocortical system. Searle states, "we should look for consciousness as a feature of the brain emerging from the activities of large masses of neurons, and which cannot be explained by the activities of individual neurons".

In my humble opinion, that statement hints at "emergence". Allow me to offer a slightly changed version of an analogy I stated earlier:

Imagine all the ways thousands of butterflies trapped in a 3-D matrix could flap their wings in synchronicity. Not just all at once, but in a symphony of "flowing" patterns across the large 3-D matrix. Imagine that in its past, a breeze passed through the matrix. The pattern of beating wings shifted in response to the force of the breeze propagating through the matrix. Now the wings, in the absence of a breeze, shift back to that pattern as the matrix experiences a conscious moment.
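
To make the analogy a bit more concrete, here is a minimal sketch, assuming a Kuramoto-style grid of coupled phase oscillators in Python (the grid size, coupling strength, and time step are arbitrary choices, and this is only an illustration of emergence, not a model of the brain). Each "wing" is nudged only by its four nearest neighbours, yet a global rhythm appears:

import numpy as np

rng = np.random.default_rng(1)
N = 20                                     # a 20 x 20 grid of "wings"
theta = rng.uniform(0, 2 * np.pi, (N, N))  # each wing's current phase
omega = rng.normal(1.0, 0.05, (N, N))      # each wing's natural flapping rate
K, dt = 2.0, 0.05                          # coupling strength and time step

def synchrony(phases):
    # Order parameter: 0 means incoherent flapping, 1 means perfect unison.
    return abs(np.exp(1j * phases).mean())

print("synchrony at the start:", round(synchrony(theta), 3))
for _ in range(2000):
    # Each oscillator feels only its four nearest neighbours
    # (the grid wraps around at the edges).
    pull = sum(np.sin(np.roll(theta, s, axis=a) - theta)
               for a in (0, 1) for s in (1, -1))
    theta += dt * (omega + (K / 4.0) * pull)
print("synchrony after settling:", round(synchrony(theta), 3))

The point is only that a global, flowing pattern can arise from purely local rules; whether anything like that amounts to a conscious moment is exactly the open question.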
 
  • #62
loseyourname said:
To elaborate on what salty said, imagine that we can reconstruct your own brain, neuron by neuron, to the point where the architecture and functionality of the second brain was exactly the same as yours. Would it not believe it was you? It would have your memories filed away and your behavioral tendencies programmed into it (including any tendency to love in a particular way). It would also hold all of the same beliefs that you do. Now imagine we did this same thing, but instead of using organic neurons, we used silicon chips that performed exactly the same computations and behaved exactly like human neurons. Wouldn't the outcome be the same? We'd have created a robot Mentat, complete with your past and your social constructs. (You'll have to put aside the practical impossibility of ever doing this, of course.)

Quite correct (and admirably succinct).
 
