
Daniel Dennett's Consciousness Explained

  1. Jul 27, 2011 #1
    In Consciousness Explained, Daniel Dennett argues that subjective experience, qualia, or consciousness in general is just a result of biology and nothing more. The word "consciousness" is very broad and can become vague, so let's focus on qualia, i.e. internal subjective experience. Could it really be a result of pure brain biology? Is there any recent research on how a neural algorithm or network property could give rise to it? Could we someday create a purely mechanical sentient being with internal subjective experience? That would be definite proof that consciousness and qualia are just special properties of certain network algorithms or processes. Is this possible? I'm quite undecided: one day I believe our qualia are due to something extra, the next that they are just the result of neural and glial processes, pure biology based on cells and thermodynamics and nothing more. What is the best evidence and latest research on this issue? And is there a definite argument, for example, for why biology (or mere computational methods based on neural networks) should be insufficient to produce qualia or internal subjective experience?
     
  3. Jul 28, 2011 #2

    apeiron

    Gold Member

    The answer to your question is not agreed. But at least it can be clearly stated.

    As you say, the guts of it is whether consciousness is "computational" or whether it "involves something more".

    If it is computational - a software pattern - then the presumption that follows is that it could run on any kind of hardware in principle. That hardware could be biological, or it could be anything else doing the same job. You could do it in silicon chips, and if you made the same patterns, your machine would be aware.

    Now we already can say that the brain, as a biological organ, certainly looks computational in many aspects. Neurons fire in ways that look like digital signals. Axons conduct the signals point-to-point. It looks like information processing.

    So the question is whether it is "pure computation" (rather than pure biology). Or whether it is something more entangled, more subtle, more difficult (even impossible) to replicate at the hardware level.

    We can imagine the brain as crisply divided by the software/hardware distinction - as a Turing machine is, and the Turing machine is the basis of our concept of computation - but it is then a big if whether the brain actually is divided in this "pure" fashion.

    I believe, having studied this issue plenty :smile:, that the mind is NOT pure computation. It is not a Turing machine. There is not a clean software~hardware divide that would allow you to identify some set of consciousness algorithms that you pick up (or download) and implement on some other general purpose hardware.

    This is just a wrong mental image, although a very common one.

    Instead, what we should be focused on is generalising our notion of consciousness as a living process. So in fact, forget the machine model that comes from technology and actually apply some biological thinking.

    Theoretical biology explains living systems with a whole bunch of concepts - anticipation, adaptation, semiosis, dissipation, hierarchy, modelling relations - that make consciousness seem much less of a mystery.

    So computer analogies are fine as far as they go. Which isn't very far when it comes to living systems. Life needs to be described in its own terms at the end of the day.

    For this reason, I think you make a big mistake when saying consciousness seems too vague and hard to define, so let's switch the conversation to qualia.

    The idea of qualia is that there can be such a thing as "atoms of experience". Now you have not just a software/hardware divide, but a dualistic mind/matter divide. Effectively you have painted yourself into an intellectual corner with no way out (except by all sorts of crackpot magic like quantum coherence, panpsychism, etc).

    So it is right to think that consciousness is a rather squishy and ill-defined concept once you examine it. You have started trying to generalise something specific - your current state of experience - to something more general, your brain biology. So you just need to keep going down that path, using actual concepts from biology.

    BTW, Dennett ain't much of a biologist despite the fact he writes a lot about Darwin machines. It gives the game away that he treats biology as "just machinery".
     
  4. Jul 28, 2011 #3
    Do you agree with Antonio Damasio and Gerald Edelman? They share the belief that it's all biological circuitry and nothing more. Sometimes I tend to believe them, sometimes I don't. Edelman is building machines with the same neural circuits as humans; one can be taught like an infant, and it seems to be working as modelled. Maybe if the computation is complex enough, internal subjective experience can occur as an emergent property? It has to do with sensors (or senses), memory and internal analysis. Is there any proof of why this can't give rise to internal subjective experience? Damasio develops a very sophisticated version of this idea in his new book Self Comes to Mind, that our neural circuits may do just this.
     
  5. Jul 28, 2011 #4

    apeiron

    Gold Member

    Neither of them has particularly good models. Neither has anything startling to say. One is an adequate populariser, the other a famous egotist. Neither carries any particular weight in the field.

    If I were pointing you to the best work that is computational in its language but tries to capture the biological essence, then historically I would look to cyberneticists like Ashby and MacKay, then to the ART neural nets of Stephen Grossberg, and most recently to the Bayesian brain approach of Friston, Hinton and others.
    http://en.wikipedia.org/wiki/Bayesian_brain

    A predictive coding or anticipatory approach to consciousness explains pretty directly why such a system would have "internal states". It has to have them in order to work.

    The standard computational model gets it back to front by thinking the brain works by turning sensory input into experiential output - some kind of internal display that arises for no particular reason while the brain is trying to generate suitable motor output.

    But the biological model I'm talking about says systems guess the state of the world from experience. They create running predictive models, then respond to the errors of prediction to update that model. So it is output before input: you need a state of experience in order to experience.
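    To make the loop concrete, here is a minimal sketch in Python of that output-before-input idea. It is only a toy: the scalar world, the learning rate and all the names are my own illustrative assumptions, not any published model.

    ```python
    import random

    # Minimal predictive-coding loop: the system holds a running model
    # (a prediction), commits to a guess first, then updates the model
    # from the prediction error. Output comes before input.

    def run_predictive_loop(world_state=5.0, steps=20, learning_rate=0.3):
        model = 0.0  # the system's current guess about the world
        for t in range(steps):
            prediction = model                            # output first: commit to a guess
            sensed = world_state + random.gauss(0, 0.1)   # noisy sensory sample
            error = sensed - prediction                   # only the mismatch is "news"
            model += learning_rate * error                # update the running model
            print(f"step {t}: prediction={prediction:.3f} error={error:.3f}")
        return model

    run_predictive_loop()
    ```

    Note that the system never passively receives the world; it only ever registers how the world differs from what it expected.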

    It is not complicated at all. There is certainly no need to invoke "computational complexity". You just need to turn your notions of processing round so they face the right way.

    Dennett of course is sort of talking about this with his "intentional stance". But that is the problem with Dennett: he sort of gets a lot of things vaguely right and then thinks he is being original, when it all seems obvious, and better described, to those already at work in the field.

    Edelman had a similar "my big idea" approach. He made himself very unpopular even though he had raised big bucks for a research institute.

    Damasio is just a decent neuroscientist who knows his stuff, but had no sharply focused theory of the kind that would produce actual models.
     
  6. Jul 28, 2011 #5

    ConradDJ

    Gold Member


    I think there are two almost unrelated issues being confused in this kind of discussion. One is about the kind of consciousness we humans experience. To say this is “biological circuitry and nothing more” would simply be foolish, since it’s obvious that language and culture play a major part in shaping not only what we experience, but also the unconscious background-processing that goes on in our brains.

    The other issue is about “internal subjective experience” or “sentience”. This seems like a very deep mystery IF we treat it as an objective property that some things (like people and other animals) “have” and other things don’t.

    There is no threshold of “complex computation” that determines whether something “has” its own point of view on the world. In our culture, we’re used to thinking of rocks and trees and molecules as “objects” – something seen from the outside. We think of ourselves and other humans as “subjects” each of which has its own “internal” point of view. But these are not objective categories! It’s just a question of the viewpoint we choose to take.

    We can very reasonably view a person as an object – say, from the standpoint of biology or sociology or economics – if for these purposes the person’s own point of view is unimportant.

    But we could also try to understand what the world looks like from an atom’s point of view, in its relationships with other atoms. Physicists don’t in fact do this, but I think eventually they will need to. This does not amount to assuming atoms “have consciousness” or any special “sentience”, beyond the many kinds of physical interactions we know about. It’s just a matter of rethinking what we know from a different point of view.

    Unfortunately there’s a lot of valuable work in brain science getting mixed up with what to me are meaningless questions about “the neural basis of consciousness”. There is no objective truth as to whether something “has its own point of view” or “is merely an object.” The confusion arises only because we’re much more used to imagining the world from another person’s point of view than from the point of view of a cat or a worm or a tree.
     
  7. Jul 28, 2011 #6

    ConradDJ

    Gold Member

    This makes sense to me. And it clearly applies to all kinds of biological systems, not just ones we would call “conscious”. In fact, something a lot like this happens in physics as well. In quantum mechanics, the wave function that describes the “state” of any physical system is a statistical projection of possibilities that gets updated – from the point of view of that particular system – to the extent actual information is received.
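    As a toy illustration of that kind of updating (my own sketch with made-up numbers, not anything from quantum theory proper), here is a discrete Bayesian update of a “projection of possibilities” over two hypothetical states as a piece of information arrives:

    ```python
    # Toy Bayesian update over two hypothetical states: the "projection
    # of possibilities" (the prior) is reweighted by how well each state
    # explains the observation, then renormalised. Numbers are made up.

    def bayes_update(prior, likelihoods):
        posterior = {s: prior[s] * likelihoods[s] for s in prior}
        total = sum(posterior.values())
        return {s: p / total for s in posterior}

    prior = {"state_A": 0.5, "state_B": 0.5}         # before the observation
    likelihoods = {"state_A": 0.9, "state_B": 0.2}   # fit of each state to the datum
    print(bayes_update(prior, likelihoods))          # after information is received
    ```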
     
  9. Jul 28, 2011 #8

    apeiron

    Gold Member

    I agree with this of course.

    And this is why I say that the generalisation of human/animal awareness is the important step in a theory of consciousness. You can't just say it is a result of an arrangement of physical atoms, or some pattern of information. You have to focus on the process, the systematic aspects, in generalising from our minds as a particular biological system to systems in general.
     
  10. Jul 28, 2011 #9
    Some people like to model, whereas others would like to know.



    Communication. It's only possible between minds/selves. Even what happens between two routers/modems is not communication, it's just interaction. There must be a conscious human being at one end for interaction to become information and communication. The theory that biology can produce internal subjective experience is dead, and its body is being kept ventilated until one day a better theory comes to light.

    Ask yourself (that's what philosophers do all the time anyway) what commonly used terms like "underlying reality", "emergent" and "hidden variable(s)" might refer to.
     
  11. Jul 28, 2011 #10

    apeiron

    Gold Member

    This is an assertion rather than an argument, which is precisely the problem.

    If you had an argument to offer, you would be able to say on what general grounds routers are "only interacting" and what extra is involved in "consciously communicating".

    Saying there must be this, or must be that, is not philosophy or science but rhetoric.

    A strong computationalist might argue here along the lines that interaction is the exchange of information, so the material cause is exactly the same. And so all that differs is the informational complexity - or some other information-based measure.

    I would dispute this and suggest instead that we have to be able to measure the meaning being exchanged, which involves the degree of information discarded, and so gets us into more biological concepts like semiosis.
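    To show what I mean by a measure, here is a crude toy (my own illustration, with made-up signals): treat "meaning" as a many-to-one interpretation of raw signals, and measure the information discarded as the drop in Shannon entropy.

    ```python
    import math
    from collections import Counter

    # Crude toy: information discarded when raw signals are interpreted
    # into fewer meaningful categories, measured as the entropy drop.

    def entropy(symbols):
        counts = Counter(symbols)
        n = len(symbols)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    raw = ["r1", "r2", "r3", "r4", "r1", "r2", "r3", "r4"]   # raw signal states
    interpret = {"r1": "food", "r2": "food", "r3": "threat", "r4": "threat"}
    meanings = [interpret[s] for s in raw]                   # many-to-one mapping

    print(entropy(raw) - entropy(meanings))  # bits discarded: 1.0
    ```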

    But at least that would be an argument. Not bald assertions that are unsupported by either philosophy or science.
     
  12. Jul 28, 2011 #11


    I didn't, as it was self-evident and simple, and I deemed it not to require further explanation.
    Two, three or two billion particles interacting carry no information in and of themselves. Information doesn't exist as such UNTIL a mind is present, for information is a defining characteristic of mind, not of particles (routers deal exclusively with electricity, not with information; whereas human beings deal with what is perceived as information, though it's still JUST electricity and electrical impulses). The icons on your desktop are electricity and carry no intrinsic information when there is no conscious mind around - they are not even icons without consciousness. I'll step a bit further and say that no human being has ever dealt with particles or fields, but only with information about particles and fields. We still don't know what 'information' is, as we don't know what a conscious mind is, so it's hard to speculate about what might be involved in "consciously communicating". As always, it's far easier to spot how things are not than to say how things really are.

    What you have been proposing so far are models. I have nothing against this approach (which you might deem scientific), as long as you realize how crippled those attempts really are when tested against reality (you are unconsciously injecting your mind, basing all your theses on the inner workings of your conscious mind when you build your new model, and then you go on to reject the very basis of your thesis).
     
  13. Jul 28, 2011 #12

    apeiron

    Gold Member

    How do you characterise your approach if it is not scientific? It can't therefore be natural philosophy either, so I'm not sure where that leaves you.

    What is really ironic is that consciousness IS a model of reality. You yourself are saying we don't directly know the world, but represent it in terms of modelled concepts - like "particles", "waves", "bits".

    So if you don't accept the epistemology~ontology divide, I really have no idea what kind of thought system you are appealing to here.

    The OP was about the scientific/natural philosophy approach to reality. I'm not hearing anything from you of any substance as to why we should not stick to it.
     
  14. Jul 28, 2011 #13

    You are right, this is difficult (though some may find my attitude leaning towards idealism). As I said earlier in my defense, it's always easier to shoot down a model than to propose a viable one. Which is to say, I am about as clueless as the next person on these issues.




    I can't help, sorry. I have to observe the rules on overly speculative posts, and what I might say would probably be meaningless anyway. These topics are like walking and balancing on a knife's edge.
     
  15. Jul 28, 2011 #14

    apeiron

    Gold Member

    At least idealism IS a legitimate philosophical position :smile:, even if difficult to argue for.

    But if the OP is asking "how do we know if computationalism is enough?", then bringing in idealism does defocus the discussion.

    I think the question about computationalism is a live one. In mind science, it is up against other "natural explanations" like non-linear dynamics of various stripes (Kelso, Freeman, Nunez, Harth). And then approaches like neural networks which sit sort of in-between.

    So it seems clear - to science - that the brain/mind is not just about computation. But that larger general basis for theory is still a matter of much discussion.
     
  16. Jul 28, 2011 #15
    Let's discuss this in terms of hard data, neuroscience and parts of the brain. What is the minimal part of the brain that can exhibit qualia? Damasio says the brain stem is sufficient; the spine and brain stem can already feel even without a frontal cortex. Zeki, who wrote A Vision of the Brain, believes that perceptual centres like the visual cortex are themselves conscious, based on evidence from synaesthesia, and that this holds even without frontal cortex feedback: an isolated V4, for example, can perceive colours and their qualia. But Edelman believes reentrant communication between the different modules of the brain is necessary for qualia. What is your position on this? Try to describe it in terms of neuroscience and brain parts so we can lock onto the mechanism of interaction.
     
  17. Jul 28, 2011 #16

    apeiron

    Gold Member

    It all depends on how you define qualia. If you choose some operational definition like "shows a behavioural response to stimulus", then even a spinal reflex is covered by that. On those grounds, E. coli measurably responds to the world and is properly sentient.

    But if you believe consciousness is something very special and all about the "ineffable redness of red", then you are asserting a subjective private definition that is not scientifically measurable. Nor even philosophically credible, I would argue.

    This notion of qualia is inherently dualistic. And so not even worth bothering to discuss in terms of "hard data and neuroscience".

    For instance, claims about V4 being able to "perceive colour in isolation" are nonsense. V4 is clearly part of an integrated visual hierarchy.

    The proper question is instead how do we describe V4 in terms of computational notions of modularity vs distributed function? Or more biologically, in terms of the dichotomous drives of differentiation~integration?

    If Zeki is saying colour perception is about this brain location, and Edelman is saying it is about this brain connection, are they arguing for different mechanisms or simply pointing to complementary parts of the same differentiation~integration process?

    The same with Damasio. Is it the either/or of lower brain vs higher brain? Or is it the wholeness that is apparent at every level of the brain (with varying degrees of plasticity~stability, the lower brain being more hardwired, the higher brain more adaptable)?

    To step back, we have here a situation where there are a number of well-qualified scientists (Edelman, Damasio, Zeki, Crick, Freeman, etc) who are publishing popular books to advertise their personal speculations about "a theory of consciousness".

    In every case, they presume an unsophisticated monism. Consciousness "just is" the emergent result of some arrangement of neural mechanism. And given the brain is such a complex thing to explain to people, well let's play up this particular aspect and make it sound key.

    This is trite. But neuroscience has for a very long time been an off-shoot of medicine, and so a very simple-minded reductionism is embedded in the culture of these people. It is also the kind of answer that the lay pop science reader is looking for. Hence there is a market for top neuroscientists to write neuroscience books that confirm rather than challenge prevailing mechanistic views of biological systems.

    So again, what is your (or their) definition of qualia as something generally measurable? Only then would it be possible to speak to the neuro-evidence.
     
  18. Jul 28, 2011 #17
    According to Zeki (one of the original discoverers of the colour function of area V4 in the brain) in http://www.scribd.com/doc/49691724/The-Disunity-of-Consciousness-Zeki :

    "Processing sites are also perceptual sites. One conclusion from the clinical evidence is that a micro-consciousness for colour or visual motion is generated through activity at a distinct processing site,and therefore that a processing site is also a perceptual site. Such a conclusion is reinforced by studies of the visual motioncentre, area V5, which receives a direct visual input that bypasses the primary visual cortex (area V1)"

    Please let me know how far you agree with the paper. I'm still trying to think of a good definition of qualia that won't be vague.

     
  19. Jul 29, 2011 #18

    apeiron

    Gold Member

    This paper is OK by me. It is just conceptually rather clumsy in the way it tries to express the basic idea that the brain is a nested hierarchy of processing activity.

    So as I said, the brain is doing both integration and differentiation. It is organised into modules, yet just as much functioning as a coherent whole. It does both at the same time - and most people want to say it is either doing the one, or the other.

    Zeki is correctly arguing against the idea - the Cartesian theatre model - that the activities of the brain must be outputted to some final consciousness display area. Instead, everything happens where it happens, and is connected as a whole already.

    To me, this is arguing against a straw man. It is obvious that the brain does not output to a pineal gland or prefrontal zone to turn unconscious processing into conscious experience. But some do hold this really naive view, and Zeki is arguing with evidence against it.

    But should the activity of V4 or V5 then be described as micro-consciousnesses? That is where I say the terminology is clumsy. But in context, I would happily live with it.

    Zeki does not even use the term qualia. And he is not asserting that you could cut out V4 or V5 as chunks and they would generate subjective states that were colourful, or fast moving. They have to be part of the whole hierarchy of activity to contribute to such states.

    To be able to experience the redness of a red object, you still need the bottom-up activity of the visual pathways - the retinas alone are carrying out four levels of processing. And you need just as much, if not more, top-down activity to create an attentive state, shaped by expectation and memory.

    Look at it like trying to scoop a whorl of turbulence out of a stream with a bucket. V4 goes into some measurable state when it sees red. A whorl develops that seems distinctive. But try and isolate it and you discover it was being formed by everything else that happened around it.
     
  20. Jul 29, 2011 #19

    ConradDJ

    Gold Member


    I’m working on a post for the BSM or Quantum forum on “The Observer”. In short, I think attempts to combine quantum theory and relativity while ignoring the viewpoint of the observer will probably go nowhere. The alternative is to describe the physical world both from the objective “God’s-eye view” of classical physics and from the local standpoint of individual systems in relation to one another. I will argue that while the objective viewpoint is immensely useful, we have neither logical nor empirical grounds for believing that it could be physically fundamental.


    Because my point was that “having a point of view” is “not a real predicate,” in Kant’s phrase.

    From your subjective standpoint, of course it’s true that you have your own point of view. But objectively...? When we look at things as “objects”, seen from outside, we can (if we want) ascribe “their own viewpoint” to them by trying to imagine the world from their viewpoint. There is no truth involved.

    It’s often argued – “well, I have very good objective evidence that another person has consciousness, because they behave much like me, and I know I have consciousness.” This makes some sense IF what we mean is human consciousness.

    But here’s Thomas Nagel from his famous discussion of “What it’s like to be a bat” –
    http://instruct.westvalley.edu/lafave/nagel_nice.html

    Conscious experience is a widespread phenomenon. It occurs at many levels of animal life, though we cannot be sure of its presence in the simpler organisms, and it is very difficult to say in general what provides evidence of it.

    ... fundamentally an organism has conscious mental states if and only if there is something that it is to be that organism—something it is like for the organism. We may call this the subjective character of experience.

    This “definition” is nonsense, to me. Apparently Nagel can very remotely imagine being a bat, so he says there is “something it is like” to be a bat. But he doesn’t want to try imagining being a tree, or an atom, so it’s obvious to him that they’re not “conscious”.

    “Subjective” means nothing more than “from one’s own point of view”. It has nothing to do with the “character of experience”... as if some experiences were “objective” and some “subjective”.

    So the basic problem in debates about consciousness, the basic reason why the word itself is never well defined, is that we confuse the very special, extremely rich and articulate experience we have (as highly cultured humans) with the simple notion of “point of view” – which to me just means being somewhere in particular in space at a certain moment.

    If by “consciousness” we mean, say, the experience of an organism that interacts with the world via a neural system – fine. But if we actually define “consciousness” in some specific way like this, instead of just pointing to it as something undefinable that we know we “have”, then the kind of question posed in this thread does not come up.
     
  21. Jul 29, 2011 #20
    I think part of the trouble with this question is that we run into problems of excessive self-reference. Take, for example, how we model time. It is useful to isolate points on a line representing time as snapshots of the evolving state of the system we are studying. Of course, there is no "single unit of time"; the evolution of any system is a continuing process that we can conceptually break down into smaller and smaller units, but we eventually run into a Zeno's-paradox type of problem.
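    A quick numerical toy (my own, with made-up numbers) shows the snapshot picture at work: integrating a simple decay process with ever-finer time steps gives better and better snapshot sequences, yet no finite subdivision ever is the continuous process itself.

    ```python
    import math

    # Model continuous change dx/dt = -x as discrete snapshots of step dt.
    # Halving dt improves the approximation, but never closes the gap
    # between the snapshots and the continuum.

    def euler_final_value(dt, t_end=1.0, x0=1.0):
        x, steps = x0, int(round(t_end / dt))
        for _ in range(steps):
            x += dt * (-x)   # snapshot-to-snapshot update
        return x

    exact = math.exp(-1.0)   # the continuous-time answer
    for dt in [0.5, 0.25, 0.125, 0.0625]:
        approx = euler_final_value(dt)
        print(f"dt={dt}: x(1) ~ {approx:.5f}, error = {abs(approx - exact):.5f}")
    ```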

    I think consciousness is similar. We will never find an atom of perception. It is a dynamic process, dependent on time. And of course the tools we have for studying it are dependent on the thing itself. Perhaps there is a Gödelian problem here. Or perhaps, to understand "subjective experience", we need intellectual tools that aren't yet available to us.
     