Can Everything be Reduced to Pure Physics?

  • Thread starter: Philocrat
  • Tags: Physics, Pure
AI Thread Summary
The discussion centers on the claim that everything in the universe can be explained solely by physics. Participants express skepticism about this assertion, highlighting the limitations of physics and mathematics in fully capturing the complexities of reality, particularly concerning consciousness and life. The conversation touches on the uncertainty principle, suggesting that while physics can provide approximations, it cannot offer absolute explanations due to inherent limitations in measurement and understanding.

There is a debate about whether all phenomena, including moral and religious beliefs, can be explained physically. Some argue that even concepts like a Creator could be subject to physical laws, while others assert that there may be aspects of reality that transcend physical explanations. The idea that order can emerge from chaos is also discussed, with participants questioning the validity of this claim in light of the unpredictability observed in complex systems.

Overall, the consensus leans towards the notion that while physics can describe many aspects of the universe, it may not be sufficient to explain everything, particularly when it comes to subjective experiences and the nature of consciousness.

In which other ways can the Physical world be explained?

  • By Physics alone?

    Votes: 144 48.0%
  • By Religion alone?

    Votes: 8 2.7%
  • By any other discipline?

    Votes: 12 4.0%
  • By Multi-disciplinary efforts?

    Votes: 136 45.3%

  • Total voters
    300
  • #351
Fliption said:
BTW, when we say a zombie "believes", we're talking purely functional behaviour.


Meaning what, exactly? Does the Zombie have any inner life? He sees blue things, remembers blue things, can compare what he sees with what he remembers, can imagine a blue thing he has never seen, and so on. Right? But he just doesn't experience, or miss, blueness.
 
  • #352
Why Should the So-Called 'Hard Problem' Hold Us Back Intellectually?

We are heading in the wrong direction intellectually with this Qualia issue. Let's call a truce and start thinking differently. For why should the indescribability of qualia to each other in the public realm hold us back? As I have pointed out many times above, the only fundamental benchmark in the measure and understanding of qualia is whether it fails us in the most important aspect of human existence: COLLECTIVE RESPONSIBILITY. If we fail to collectively look at things, recognise what they are and act upon them in the 'same' or equivalent way, then this would be the most useful way to know that qualia, as part of the human conscious existence, is playing dirty tricks on us...and we may very legitimately declare its presence in our being fundamentally useless.

Yes, it would be fundamentally useless, as it would have nothing significant to contribute to collective existence.

LANGUAGE AND QUALIA. Which language are we talking about? Verbal? Written? What about BODY LANGUAGE? Tell me which scientific discipline has made any attempt to conceive of it, let alone study it? Tell me! Yes, oral or spoken language cannot explain qualia from one person to the next because qualia is self-explanatory. If you see a red car, just point at it and say 'that is a red car'! If a bystander points at it and says the same thing, 'that is a red car', we ought to accept that they are seeing, recognising and understanding the same thing. Qualia in this sense serves only a 'discriminatory value' to the overall human existence. If qualia fundamentally fails to discriminate between the different states of the physical world, then it's in deep trouble, as it will fail to be a reliable part of a conscious human self.

The job of the eye is to see and discriminate between differing visual states, not to explain them. It must not only be able to discriminate between the visual states that are already known to the perceiver, but also between new visual states that become available as the perceiver goes about his or her daily life. All that the spoken or written language does is to label things that come through the eyes and all the other sensory organs - the nose, the ears, the tongue, the skin, the memory cells etc. - and use them in inquisitive, acquisitive and precautionary visual activities, or should I vaguely say conscious activities.

CHANGE AND QUALIA. Qualia, like all other visual states, obey causal and relational laws and, most importantly, they stick to the rules of logic. They rely on, and deteriorate with, visual states or organs. Well, I don't want to go down that route of things not being there when we are not looking at them, not feeling them or simply not experiencing them. I believe that they remain very much there, except that faulty bodily organs just fail to display them. Qualia change according to the corresponding changes in the physical states of the body organs that perceive them. The reliance of qualia on the proper functioning of the visual organs that display them makes qualia an engineering problem and quite naturally prone to change. The question now is: which type of change? FUNCTIONAL CHANGE or STRUCTURAL CHANGE? So far, we tend to habitually waste a great deal of time concentrating entirely on the functional change of things around us and naively hope that they stay changed or that they improve the physical states of things for good. But we do know that that simply isn't the case. Structural change is entirely ignored for the usual naive reason of not wishing to interfere with nature. But I am arguing that the configurational relation between qualia and the physical body organs that display them can be improved by structural re-engineering of the entire human reality, or should I say the human physical state. I predict that several aspects of qualia may very well be re-engineered out of place or existence unless they can prove their continual usefulness in the end-state of man.

If Mary came out of the black and white room and she's confronted with new visual information...so what? What is the big deal? The inquisitive mode of consciousness treats all knowledge of enquiry as cumulative. In the human realm, all knowledge is classified into (1) useful and (2) non-useful. Mary coming out to see colours other than black and white is just an addition to her stock of knowledge. But the most important question that should concern Mary is this: HOW USEFUL IS THIS NEW KNOWLEDGE TO HER IN THE PUBLIC REALM, WHERE SHE MUST PHYSICALLY SUCCEED IN SURVIVING? That is the fundamental issue at stake here. Whatever happens to this knowledge inside her is irrelevant. The only significant use of this knowledge is that she must be able to discriminate between different colours in the real world that she lives in. As she goes about her daily life, she will continue to come across new visual information that is either obvious and self-explanatory, or is explainable by means of our natural language, or by a combination of both.

SCIENCE AND QUALIA. The attitude and approach of science to dealing with qualia must change. Science must treat qualia as an engineering issue that is capable of being altered when the visual organs are physically interfered with at the structural engineering or re-engineering level. The only significant scientific research that is of any use to humans is the structural engineering of bodily parts and seeing precisely how it affects the visual states of all kinds, not the analysable components of qualia. This sort of experiment would make a huge difference. Also, it is the duty of science to investigate whether an increase or decrease in the number of visual organs in the human body has any effect on the quality of visual data or visual perception. For all we know, the current physical configuration of man, with the current number of visual organs, may very well be inadequate for climbing to a higher or superior state of being. Science must rescue the human race from total destruction...for the naive claim that we must leave everything to nature is profoundly dangerous, if not wholly suicidal! For me, this amounts to what I habitually call 'DANGEROUS CONTENTMENT'.

QUESTION: Must science explain qualia first before making a genuine attempt to improve the physical state of man? Science of man or science of needs: which one should science pursue?
 
  • #353
selfAdjoint said:
Meaning what, exactly? Does the Zombie have any inner life? He sees blue things, remembers blue things, can compare what he sees with what he remembers, can imagine a blue thing he has never seen, and so on. Right? But he just doesn't experience, or miss, blueness.

No inner life. It means that all the brain functions associated with believing are working but nothing else. Here's the way it was put in the thread I linked.

"Belief here is used strictly in a functional sense, i.e. one's disposition to make certain verbal utterances, and does not refer to any experiential aspect of belief -- e.g. the subjective feelings associated with believing something."
 
  • #354
selfAdjoint said:
As it was previously stated here, the Zombie could not only state that it had p-consciousness, but believe it to be so. We are presumed able to read the mind of a Zombie for purposes of discussion, I guess. If that is so, seeing that our only evidence for p-consciousness at all is introspection, we are all in the position of such a Zombie, and thus p-consciousnesss becomes an epiphenomenon, which makes no detectable difference in our inner lives, to say nothing of our behavior.
I think an interesting point is implied here... that is, that consciousness is simply a trait of the ability to analyze "inwards" just like we analyze "outwards"...
I've read a paper about consciousness recently, where a couple of scientists claimed the awareness of action to simply be a matter of degree of intelligence and of our ability to analyze ourselves and our environment.
Self-awareness comes at the point when we intellectually discover that we are thinking and feeling.
 
  • #355
Fliption said:
No inner life. It means that all the brain functions associated with believing are working but nothing else. Here's the way it was put in the thread I linked.

"Belief here is used strictly in a functional sense, i.e. one's disposition to make certain verbal utterances, and does not refer to any experiential aspect of belief -- e.g. the subjective feelings associated with believing something."

Well in that case it seems that you are using petitio principii to define p-consciousness; that is, you are assuming on the one hand that p-consciousness is the presence of qualia, and on the other hand your definition of zombies as without p-consciousness takes away ALL inner life except belief. So p-c includes the features I mentioned, which have good neurochemical substrates: sensation, memory, imagination, mental comparison.

A materialist zombie would have to have those (sensation, memory of sensation, imagination of sensation, comparison of differently generated sensations) because the brain features that produce them are being actively studied, and if the hard problem means anything, it has to confine itself to the complement of those features.
 
  • #356
selfAdjoint said:
Well in that case it seems that you are using petitio principii to define p-consciousness; that is, you are assuming on the one hand that p-consciousness is the presence of qualia, and on the other hand your definition of zombies as without p-consciousness takes away ALL inner life except belief. So p-c includes the features I mentioned, which have good neurochemical substrates: sensation, memory, imagination, mental comparison.

I'm not understanding the definitional problem you're pointing out. Perhaps too much is being made of the word "belief"? The point is simply that there is no reason to believe that a zombie with identical A-consciousness to you would behave any differently from you. So if you believe you have P-consciousness, a zombie with identical A-consciousness must also behave as if it has the same belief. To suggest it really "believes" is a stumbling block because it implies an inner life, of which by definition there is none. That's why I posted the clarification above that when we say belief, we are talking only about the functional aspects of it. It is probably best that the word not be used at all.
 
  • #357
Egmont said:
You are smart and I think you can understand what I'm trying to say.

If I can summarise our two different positions:
-you say that consciousness is an explanatory concept people have invented to explain the behavior of humans, until we will find out in more detail how they really work and can describe their behavior in "simpler" terms, at which point the concept of consciousness becomes irrelevant (in the same way phlogiston is).

-I claim that consciousness is something that exists in my world (and probably in yours too) which has nothing to do with the explanation of the behavior of humans, but which, in itself, needs an explanation, and that the non-behavioral property of consciousness makes that explanation very hard in scientific terms.

After that, we got into the issue of whether we are using the word "consciousness" in the same way.

Would you agree with me then, that whatever _that_ property is, it's something that can't be communicated? And would you also agree that Chalmers and his followers think they are successful at communicating what the "hard problem" is about?

I don't know Chalmers. I will not agree with you that that property is something that cannot be communicated. I hope that it can - even easily - be communicated between two entities who both have consciousness, and hence, after some thinking, should have the same "problem", and recognize that's what is being talked about.
However, it cannot be communicated in formal terms (you agreed upon that). In order to communicate it, you can only "reach out a helping hand" and hope that it clicks on the other side.

You seem to be close to grasping that the world is full of hard problems, but none of them are about things we can talk about. I hope you give the issue some more thought; you would be glad you did.

I don't know what you are talking about. I only see one hard problem in _that_ category. And as I pointed out, you CAN talk about it. Maybe you can enlighten me.

Now keep this in mind: there is something we can't talk about, but I can't tell you what that something is, you have to figure out by yourself. Anything I can describe to you in words is not something we can't talk about. I have no way to talk about this thing, but there is a way to see it, and it's possible to guide people so they can also see it for themselves.

I can talk about it, I can try to tell you what it is, but I cannot come up with a formal definition. I can also not - and that's the whole problem here - come up with an OBJECTIVE operational description. But I can come up with a subjective description, assuming that this also fits closely with YOUR subjective experiences. So it is not that I cannot talk about it at all. I only need "a little help from my friends".

I didn't accuse you of anything. All I said was, if I started talking about bewustness with you without fully understanding what you mean by it, we would end up disagreeing at some point. Which is exactly the situation concerning most philosophical issues.

But you DO know what I mean, because at a certain point you said you didn't see the point in defining a new word if it was "consciousness" that I was talking about. A Freudian slip ? :-)

You are absolutely correct, but is "consciousness" one such thing or not? That is, is "consciousness" a concept that is related to "things in the world", or is it not?

We're getting close. I am conscious. So it IS a thing in the world. I *HAVE* subjective experiences. I *DO* feel pain. It is something that exists in MY world. But - I think we agreed on this - so it must be in YOUR world.

I am really impressed with that statement. Seriously. So you see, we need a lot of concepts which lack formal definition in order for our communication to be meaningful, but at the same time concepts without a formal definition cannot be subject to scientific study.

Wrong. All the objects of study of all natural sciences are such. It is only in mathematics (and in linguistics and law) that concepts have a formal definition. But that is because they don't describe things in the world, but formal systems (maths and languages). Take the concept of "electron". I can simply say that it is the particle we accept as the particle of the QED Dirac spinor. I can formally define what we mean by a Dirac spinor in QED, but that doesn't define the "physical" electron. To do that, I'd have to give you lots of descriptions, some of them experimental ones: they are the particles that come out of a hot cathode in vacuum, and they happen to be the same particles we find in the outer regions of atoms etc..., they have charge -1, they have a mass (at low energies!) of about 511 keV etc... but I cannot DEFINE what an electron is, because there will always be instances where my definition will flunk. I do exactly the same with consciousness, except for one thing: I cannot describe OBJECTIVE measurements with instruments dealing with it.
And it is thanks to this lack of formal definition that scientific progress is possible. If we had FORMALLY DEFINED an electron as Thomson could have done, then it would not have been compatible with its quantum mechanical or relativistic description! We still mean the same "thing in the world" as Thomson did with "electron", but its theoretical description has seriously altered.


Doesn't that make you think? Doesn't it sound like science is only true to the extent that it restricts itself to formal logic?

I tried to explain to you exactly the opposite!

I hope we get a chance, one day, to talk about why I think physics doesn't have as much to do with "things in the world" as we usually think. The truths of physics, from my perspective, seem to come from formal logic, not from the nature of reality. But that's a discussion way ahead.

But I think you misunderstood what physics is about in that case! It consists in setting up RELATIONSHIPS between formally defined concepts in theories (Dirac spinors) and "things out there" (electrons).

So can we take the fact that someone understands our descriptions of consciousness as proof that they are conscious?

Yes, but we're back to the same difficulty. It is not because it APPEARS as if someone understands the concept, from its behavioural point of view, that he also DOES understand it. A very smart computer program might be generating all of what I'm typing here, and as such have no clue as to what it is talking about.

You are correct about that, but as stated it is a problem like any other. Like any scientific problem, it will take time to be worked on, a final, absolute answer will never be found, but there's nothing preventing us from learning a lot more than we currently know.

Ah, something we can agree upon. Only, the way things present themselves, we haven't even started. As I wrote somewhere, interconnecting consciousnesses could be a first step. If it can be done.

That really depends on what you mean by behaviourism. Using a computer to send messages to an internet forum on metaphysics sounds like "behaviour" to me. Granted, mention of behaviour is absent in your description, but the description itself is manifested behaviour of a conscious entity (yourself).

No, absolutely not. Our message exchanges are (to me) absolutely no indication that either of us has consciousness. The only thing that indicates to me that you have consciousness is that you are a human being.

This is what many people don't see. Consciousness is related to behaviour, but in a very abstract way. The more abstract a concept, the harder it is to think about it, and the easier it is to get confused and see problems where they don't exist.

As I pointed out, I don't think that consciousness has much to do with behavior. I even envision the possibility that consciousness IN NO WAY influences our behavior which is probably dictated by the running of a biochemical computer program. Even our thinking is not influenced by our consciousness. Our consciousness just subjectively observes what our (non-conscious) body is doing and thinking.
I acknowledge that this is an extreme viewpoint, but I consider it an interesting thought that consciousness CANNOT influence the behavior of a human being. It's just there, passively observing what's being done, said and thought, and undergoing feelings.

cheers,
Patrick.
 
  • #358
Fliption said:
I have a particular feature of my existence that I observe. I can then observe that I'm not sure anyone else has this same feature. I can inductively decide they probably do. But the nature of this feature forces me to decide this inductively and it is this nature that results in the inability to reductively understand it. I don't see where definitions change any of this that I've written.

EXACTLY. I think I'm on the same wavelength as Fliption (but he's putting his arguments in a much more professional way :-)

I would like to point out that the reasoning:
<<
a) with concept A we mean such and such.

b) clearly, concept A has property B.

c) now from property B, we can derive a difficult problem

so there's something wrong with the way you define concept A >>

is a wrong way of reasoning.

It is almost as if in mathematics, you write down a function,
f(x) = integral sin(t)/t dt

and then you say, yeah, well there's something wrong with your
definition of f(x) because I don't know how to work out the integral !

It is not because from some concepts follows a difficult problem that the statement of the problem is wrong (or the concepts).

cheers,
Patrick.
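An editorial aside on the analogy itself: sin(t)/t has no elementary antiderivative, yet the function it defines (the sine integral, Si) is perfectly well defined and easy to evaluate numerically. A minimal sketch in Python, using a plain trapezoid rule; the step count n is an arbitrary choice for the example:

```python
import math

def si(x, n=10_000):
    """Numerically evaluate Si(x) = integral from 0 to x of sin(t)/t dt,
    using the trapezoid rule with n subintervals."""
    # sin(t)/t has a removable singularity at t = 0, where its limit is 1.
    sinc = lambda t: math.sin(t) / t if t != 0 else 1.0
    h = x / n
    total = 0.5 * (sinc(0.0) + sinc(x))
    for k in range(1, n):
        total += sinc(k * h)
    return total * h

# Si(pi) is known to be about 1.851937; this prints 1.85194
print(round(si(math.pi), 5))
```

Not knowing how to "work out the integral" in closed form is no defect in the definition of f; that is exactly the point of the analogy.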
 
  • #359
selfAdjoint said:
because the brain features that produce them are being actively studied, and if the hard problem means anything, it has to confine itself to the complement of those features.

I think the ultimate conscious experience is the fact that pain hurts. Pain is the physiological manifestation (neurotransmitters etc...) and the behavioural consequences (trying to avoid it, and screaming if we can't avoid it); but the fact that it HURTS cannot actually be studied (except by ASKING "did it hurt?" and assuming the answer is honest ;-)

For instance, I am pretty convinced that trying to factorise big numbers on my PC causes it pain (it gets hot, it takes a long time to answer, everything seems to run slowly etc...). My PC even regularly reboots in order to avoid it (or I might have a virus). But I don't think my PC FEELS the pain. Although my program prints out that it does if the number is really big...

cheers,
Patrick.
 
  • #360
If you stick a pin in a baby, it will respond with behavior, but it can't tell you that it hurts. Nevertheless, because the baby is human, we INFER that it hurts, and say "Nasty man! Stop hurting that baby". When your PC indicates harm with behavior by getting warm, you don't infer pain because it is a machine. Maybe you should? After all, it wouldn't be much of a programming job to adapt some natural language program to produce "Ow! That hurts!" from your PC's speakers when it overheats.
 
  • #361
selfAdjoint said:
If you stick a pin in a baby, it will respond with behavior, but it can't tell you that it hurts. Nevertheless, because the baby is human, we INFER that it hurts, and say "Nasty man! Stop hurting that baby". When your PC indicates harm with behavior by getting warm, you don't infer pain because it is a machine. Maybe you should? After all, it wouldn't be much of a programming job to adapt some natural language program to produce "Ow! That hurts!" from your PC's speakers when it overheats.

Right. That's exactly what the hard problem is all about :-)
In fact, I don't know if a newborn baby is conscious and feels pain. To be safe, I assume it does (because legally I think I'm in trouble if I act as if it isn't 8-). But it might very well not, and only slowly turn on its consciousness at, say, 1 or 2 years old. How can we know?

cheers,
patrick.
 
  • #362
Fliption said:
I'm not understanding the definitional problem you're pointing out. Perhaps too much is being made of the word "belief"? The point is simply that there is no reason to believe that a zombie with identical A-consciousness to you would behave any differently from you. So if you believe you have P-consciousness, a zombie with identical A-consciousness must also behave as if it has the same belief. To suggest it really "believes" is a stumbling block because it implies an inner life, of which by definition there is none. That's why I posted the clarification above that when we say belief, we are talking only about the functional aspects of it. It is probably best that the word not be used at all.

It is not the issue of belief but the definition of a zombie. Could you say clearly whether a zombie, in your definition, does or does not possess the properties of sensation, memory of particular sensations, imagination of sensation, and the ability to compare remembered, sensed and imagined sensations? I claim that AIs can be programmed to do these things (perhaps poorly, but it's the categories I'm talking about, not the efficiency). If your zombie has some of these but not others, would you indicate which?

Thank you.
 
  • #363
selfAdjoint said:
It is not the issue of belief but the definition of a zombie. Could you say clearly whether a zombie, in your definition, does or does not possess the properties of sensation, memory of particular sensations, imagination of sensation, and the ability to compare remembered, sensed and imagined sensations? I claim that AIs can be programmed to do these things (perhaps poorly, but it's the categories I'm talking about, not the efficiency). If your zombie has some of these but not others, would you indicate which?

Thank you.

I would say it can do the functional aspects of all those things. But it has no experience of doing them.
 
  • #364
vanesch said:
Right. That's exactly what the hard problem is all about :-)
In fact, I don't know if a newborn baby is conscious and feels pain. To be safe, I assume it does (because legally I think I'm in trouble if I act as if it isn't 8-). But it might very well not, and only slowly turn on its consciousness at, say, 1 or 2 years old. How can we know?

cheers,
patrick.
Did Homo erectus experience pain? Do chimpanzees experience pain? Do cats? mice? lizards? trees? mosquitos?

If you say that they do (never mind why you say they do), does that mean they also possess p-consciousness?

On the operating table, you do not experience pain (unless the anaesthetist fails to do her job!), and you are not conscious of your lack of consciousness. Do you possess p-consciousness? What if you're in a coma?
 
  • #365
Fliption said:
I would say it can do the functional aspects of all those things. But it has no experience of doing them.

Ummm, OK. That leaves me with a problem. For in doing those things, it IS experiencing them in what I would call a reasonably not overspecialized use of the verb "to experience". Perhaps we could agree that it is not AWARE of experiencing them?

But awareness doesn't seem to require unphysical assumptions:

It is not impossible to bring the fact of experience into an AI system as data, and to allow it to be "sensed, imagined, remembered, compared". I recall a couple of years ago a plan for self-repairing satellites and rovers that would do just this: monitor their own behavior, compare it with norms, and apply problem-solving algorithms to search the behavior stream to find and repair the cause of any deviance. Yes, this is what our autonomic nervous systems do more or less, but I would argue that, pace your philosophy, there is no sharp line between this kind of thing and perceiving one's feelings.

Feelings have recently been a hot study area in the fMRI brain scan field. Quite simple physical processes in the hippocampus have resulted in complex reported feelings.
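The monitor/compare/repair loop described above can be sketched in a few lines of Python. This is a toy illustration only; the channel names, norm ranges and repair actions are invented for the example, not taken from any real satellite or rover system:

```python
# Toy self-monitoring loop: the system observes its own behaviour,
# compares it with stored norms, and selects a repair action on deviation.
# All names and thresholds here are invented for illustration.
NORMS = {"temperature": (10.0, 60.0), "voltage": (11.5, 12.5)}

REPAIRS = {
    "temperature": "throttle workload",
    "voltage": "switch to backup supply",
}

def monitor(readings):
    """Return the list of repair actions triggered by out-of-norm readings."""
    actions = []
    for channel, value in readings.items():
        low, high = NORMS[channel]
        if not (low <= value <= high):  # deviation from the stored norm
            actions.append(REPAIRS[channel])
    return actions

print(monitor({"temperature": 72.0, "voltage": 12.1}))
# prints ['throttle workload']
```

The philosophical question, of course, is whether anything in such a loop, however elaborate, amounts to the system perceiving its own state rather than merely registering it.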
 
  • #366
selfAdjoint said:
Ummm, OK. That leaves me with a problem. For in doing those things, it IS experiencing them in what I would call a reasonably not overspecialized use of the verb "to experience". Perhaps we could agree that it is not AWARE of experiencing them?

But awareness doesn't seem to require unphysical assumptions:

It is not impossible to bring the fact of experience into an AI system as data, and to allow it to be "sensed, imagined, remembered, compared". I recall a couple of years ago a plan for self-repairing satellites and rovers that would do just this: monitor their own behavior, compare it with norms, and apply problem-solving algorithms to search the behavior stream to find and repair the cause of any deviance. Yes, this is what our autonomic nervous systems do more or less, but I would argue that, pace your philosophy, there is no sharp line between this kind of thing and perceiving one's feelings.

Feelings have recently been a hot study area in the fMRI brain scan field. Quite simple physical processes in the hippocampus have resulted in complex reported feelings.

I think there are some semantic issues with using the words this way. Of course, you can use them however you like but I don't think using them in this context makes any philosophical issues go away. From what I've seen in discussions in this forum, I think people might reverse your use of the words awareness and experience. For example I've seen people say that a video camera is aware of the data it receives. But I've never seen the word experience used in the same way. Regardless of which word it is we use, there is a feature that seems to have no functional explanation such as "the hippocampus does x". That is the feature that we're calling P-consciousness.
 
  • #367
selfAdjoint said:
But awareness doesn't seem to require unphysical assumptions:

It is not impossible to bring the fact of experience into an AI system as data, and to allow it to be "sensed, imagined, remembered, compared". I recall a couple of years ago a plan for self-repairing satellites and rovers that would do just this: monitor their own behavior, compare it with norms, and apply problem-solving algorithms to search the behavior stream to find and repair the cause of any deviance. Yes, this is what our autonomic nervous systems do more or less, but I would argue that, pace your philosophy, there is no sharp line between this kind of thing and perceiving one's feelings.
In another 'artificial machine' area, such awareness is already alive and flourishing ... in modern communications networks, the 'self-healing network' has been extensively researched, standards written, and commercial companies sell such systems to large telecom companies, who hire teams of SI experts to tweak these systems, so as to reduce even further the number of human techs needed to monitor and maintain such systems. Do such systems actually work? Yes, and you bet your life on them every day that you make a 000 (911 in the US) call!
 
  • #368
Nereid said:
Did Homo erectus experience pain? Do chimpanzees experience pain? Do cats? mice? lizards? trees? mosquitos?

If you say that they do (never mind why you say they do), does that mean they also possesses p-consciousness?

I don't know what is meant with a-consciousness and p-consciousness. I'd only say that *IF* they experience pain, then they are conscious.
And the difficult problem is indeed, to find out if chimps, cats, mice, lizards, trees, and mosquitos feel pain. I'm not talking about their behavior that would "indicate us they'd feel pain".

As I said, I don't know these definitions of a- and p-consciousness. But I can guess: it seems from what is said above that "a-consciousness" is just the intelligence of a computer program to dictate behavior "as if" the entity were conscious, and "p-consciousness" is what I simply call consciousness, namely the awareness of it and the subjective experiences. I think p-consciousness (for me, for short, consciousness) doesn't influence behavior, and a-consciousness is not consciousness but the physical description of the input-response mechanism, be it a computer program, a brain or whatever.

cheers,
Patrick.
 
  • #369
vanesch said:
I think the ultimate conscious experience is the fact that pain hurts. Pain is the physiological manifestation (neurotransmitters etc...) and the behavioural consequences (trying to avoid it, and screaming if we can't avoid it) ; but the fact that it HURTS cannot be studied actually (except for ASKING "did it hurt?" and assuming the answer is honest ;-)

For instance, I am pretty convinced that trying to factorise big numbers causes pain to my PC (it gets hot, it takes a long time to answer, everything seems to run slowly, etc...). My PC even regularly reboots in order to avoid it (or I might have a virus). But I don't think my PC FEELS the pain. Although my program prints out that it does if the number is really big...

cheers,
Patrick.

Correct...the computer probably does not feel any pain. Have you, or any of the learned members of this gathering, thought of any additional ability or abilities, at the engineering level, that could be given to this computer to enable it to feel pain? There is equally another consideration...perhaps pain may not be a requirement of an efficient or perfect state of being. Robots are now being made not only physically flexible but are also being empowered with more abilities that closely resemble those of humans.

The debate tends to be moving from Mary to the Zombie to computers, without anyone being ready to commit him- or herself as to what additional abilities are needed to make these different systems structurally and functionally more efficient. And in terms of the human system, there are so many displayed abilities and functions that cannot stand the test of efficiency, let alone be grounded as fundamentally necessary.
 
Last edited:
  • #370
Philocrat said:
Correct...the computer probably does not feel any pain. Have you, or any of the learned members of this gathering, thought of any additional ability or abilities, at the engineering level, that could be given to this computer to enable it to feel pain?


We can think of the following: I take a big metal box (say, 2m on 2m on 2m), in which I put my PC, with the original program, but with the display, speakers and keyboard outside of the box. I also bribe one of the doctors of the nearby hospital so that when there's a hopeless case coming into the emergency room, with a broken spine, paralyzed and without a voice, he quickly does the necessary repairs to the victim and then hands her over to me. I put her in the box, put a few electrodes on her body and connect them to my computer. Now when I ask my computer to factorize a large number, it not only prints out "Aw, that hurt" on my screen, but also connects (through a simple controller card) the mains (220V) to the electrodes on the victim's body, which is conscious. She can't move and can't scream; I don't see her, because she's inside the big box. But I'd say now that my "box computer", when it prints out "Aw, that hurt", feels pain...

cheers,
Patrick.
 
  • #371
Many contemporary philosophers have already identified 'awareness of being aware', or 'self-awareness', as the essential component of consciousness in general. Those of you who understand computers down to the programming and engineering levels should know that many new generations of computers are already 'environmentally aware'. In fact, on this score, many of these computers would outsmart or outfox humans as far as safety, or the avoidance of environmental dangers, is concerned. There are now so many sophisticated devices that, if you fit them onto modern computers, they would cause these computers to become 'super-aware' of their external environments.

The BIG question now is:

What technical difficulties do we have to overcome, both at the detailed hardware-engineering level and at the detailed schematic programming level, in order to empower computers with self-awareness?

The issue is no longer about arguing whether computers can think or be conscious. The computer is nearly human! The question should therefore concentrate on what is left to be done to make computers fully human, given that being human is thought to be the benchmark or measure of being alive. For all we know, being human may after all not be the only route to designing superbeings. For it seems as if we currently think that we must first design human-like machines before setting about the important yet well-overdue project of structurally and functionally improving the physical state of those human-like beings. I don't know why we think this way, but so it seems. Bad habits die hard!
 
Last edited:
  • #372
Philocrat said:
The BIG question now is:

What technical difficulties do we have to overcome, both at the detailed hardware-engineering level and at the detailed schematic programming level, in order to empower computers with self-awareness?

The problem still stands: how would you know you've succeeded ?

There's no behavioral way to know. Look at my "computer in a box". The output on the screen (the only behavioral access I have) is identical: it prints out "aw that hurt!". But when the victim is connected to the mains, there is an awareness of pain in my box, and when the victim is not connected, it is a simple C program line that printed out the message. The computer works in identical ways in both cases.
If you now replace that human victim (of whom we can assume that she consciously experiences pain) by a machine, how can we know? The behavior is identical.

cheers,
Patrick.
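The behavioural symmetry vanesch describes can be sketched in a few lines of hypothetical code (the function names are invented for illustration): two implementations with identical observable output, where the only difference is a hidden side effect that no outside observer can detect.

```python
# A sketch of the "computer in a box": from outside the box, the two
# versions are behaviourally indistinguishable.

def respond_plain(n):
    """A simple program line prints the message; nothing feels anything."""
    return "aw that hurt!"

def respond_with_victim(n):
    """Same message, but in the thought experiment a hidden side effect
    would cause real, felt pain inside the box."""
    # apply_mains_to_electrodes()  # the unobservable difference
    return "aw that hurt!"

# Behaviourally, the two boxes produce identical output:
print(respond_plain(2**61 - 1) == respond_with_victim(2**61 - 1))  # -> True
```

This is the whole of vanesch's point: no test on the return value can distinguish the box that contains an experiencer from the box that doesn't.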
 
Last edited:
  • #373
vanesch said:
As I pointed out, I don't think that consciousness has much to do with behavior. I even envision the possibility that consciousness IN NO WAY influences our behavior, which is probably dictated by the running of a biochemical computer program. Even our thinking is not influenced by our consciousness. Our consciousness just subjectively observes what our (non-conscious) body is doing and thinking.
I acknowledge that this is an extreme viewpoint, but I consider it an interesting thought that consciousness CANNOT influence the behavior of a human being. It's just there passively observing what's being done, said and thought. And undergoes feelings.

Why would a biochemical computer program have self-destruction written into it?

How would you account for the fact that I pushed my wife out of the way of an oncoming car and almost got killed myself? Why would I want to do that? What influenced my choice then?
 
  • #374
Rader said:
How would you account for the fact that I pushed my wife out of the way of an oncoming car and almost got killed myself? Why would I want to do that? What influenced my choice then?

Heroic behavior can be naturally selected for, in that related groups of individuals, some of whom have "heroic behaviour" (running the risk of sacrificing themselves for the well-being of the group), have a survival advantage over a "bunch of cowards". The heroic subject of course diminishes his own chances of getting his genetic material into the next generation, but his relatives will have a higher chance of doing so.
Also, if a heroic subject *survives* his heroic deed, there is often a lot of compensation, and even a survival advantage (success with members of the opposite sex).
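The kin-selection argument above has a standard quantitative form, Hamilton's rule, which makes the trade-off explicit (the symbols below are the textbook ones, added here for reference; they are not from the original post):

```latex
% Hamilton's rule: an altruistic ("heroic") trait can spread when the
% relatedness-weighted benefit to kin exceeds the cost to the actor:
%   r -- genetic relatedness between actor and beneficiary
%   B -- fitness benefit to the beneficiary
%   C -- fitness cost to the heroic actor
r\,B > C
```

Note that the rule is a statement about gene frequencies, not about conscious deliberation, which is exactly the point being made in the post.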

What makes you think that this behavior is unthinkable without consciousness ?

But the mere behavioural observation of "altruistic self-destruction" cannot be proof of consciousness.
Dogs do this too. Some security systems do it too. Even a fuse does it, inside electronic equipment. Are fuses conscious?

cheers,
Patrick.
 
Last edited:
  • #375
vanesch said:
Heroic behavior can be naturally selected for, in that related groups of individuals, some of whom have "heroic behaviour" (running the risk of sacrificing themselves for the well-being of the group), have a survival advantage over a "bunch of cowards". The heroic subject of course diminishes his own chances of getting his genetic material into the next generation, but his relatives will have a higher chance of doing so.
Also, if a heroic subject *survives* his heroic deed, there is often a lot of compensation, and even a survival advantage (success with members of the opposite sex).

I would agree with you that all those factors could be computed in my brain subconsciously, but they were apparently overridden. My subconscious actions had nothing to do with calculations about the survival of the human race. What went through my head was anxiety, fear, hate, relief, love, in that order.

It seems we never get past KP to KP4 with this issue of who is conscious. What if we could guess what is in each other's heads? Bobby Fischer seemed to; what of his competitors? Why did Deep Blue beat Kasparov? Could anything be conscious, meat or machine?

What makes you think that this behavior is unthinkable without consciousness?

I am aware of being aware; that is one primary reason. The second reason I would give is that I have never seen anyone walking around doing these things who had no consciousness and was dead. I realize I have no proof that anything is either conscious or alive. This could have consequences, as you stated in your previous post. Maybe you're right: consciousness is observing, but something is aware of being observed. I know that from my own experience. The world is weird enough now without giving consciousness the property of being able to discriminate, whereby only I, or maybe only humans, are conscious.

But the very behavioural observation for "altruistic selfdestruction" cannot be the proof of consciousness.

Nor can you or I know that.

Dogs do this too. Some security systems do that too. Even a fuse does it, inside electronic equipment. Are fuses conscious?

You know, from your posts you seem to be interested and educated enough to answer your last question and this one. Is not the basic difference between measurement of coherent states and non-coherent states the observer? HUMANS, DOGS and FUSES show the same results only if we can determine whether they observe. Does it not come down to the fact that all electromagnetic waves observe each other?
 
  • #376
Rader said:
My subconscious actions had nothing to do with calculations of survival of the human race. What went through my head was anxiety fear hate relief love, in that order.

You misunderstood my point. If there is natural selection for a certain behavior, then that behavior is not necessarily instilled with a conscious thought of "I have to optimize my natural selection" :-) You asked how it could be that you showed altruistic, heroic behavior if it weren't for a conscious decision (against all odds) to act that way. I pointed out that your "biochemistry computer" could have been programmed to behave that way by natural selection, and that such behavior is no proof of consciousness.
There are now 2 possibilities left: one is that (as I propose) your "biochemistry computer" runs its unconscious program as any other computer does, and your consciousness is just passively watching and having feelings associated with it, without the possibility of intervening. The other possibility is that your consciousness is "in charge" of your brain and influences behavior.

cheers,
Patrick.
 
  • #377
vanesch said:
You misunderstood my point. If there is natural selection for a certain behavior, then that behavior is not necessarily instilled with a conscious thought of "I have to optimize my natural selection" :-) You asked how it could be that you showed altruistic, heroic behavior if it weren't for a conscious decision (against all odds) to act that way. I pointed out that your "biochemistry computer" could have been programmed to behave that way by natural selection, and that such behavior is no proof of consciousness.

I think I understand you correctly, but do you understand me? I could have let the car run over her whether or not I was programmed for this trait. I chose not to. If I were getting a divorce, maybe I would have had second thoughts and let the car run over her. Now do you understand my point? That takes a conscious thought.

There are now 2 possibilities left: one is that (as I propose) your "biochemistry computer" runs its unconscious program as any other computer, and your consciousness is just passively watching and having feelings associated with it, without the possibility of intervening. The other possibility is that your consciousness is "in charge" of your brain, and influences behavior.

01 - The world would be totally deterministic and there would be no choice. You're claiming, then, that a "biochemistry computer" (I take that to mean the "brain parts") would cause consciousness while the consciousness it produced looks on. This would be a classical explanation; and what if the "biochemistry computer" were quantum in nature?

02 - If your consciousness is "in charge" of your brain and influences behavior, then all behavior would be totally deterministic only if there were a classical explanation of the brain. If the brain were quantum in nature, then it would seem more understandable why we make choices.
 
  • #378
Rader said:
I think I understand you correctly, but do you understand me? I could have let the car run over her whether or not I was programmed for this trait. I chose not to. If I were getting a divorce, maybe I would have had second thoughts and let the car run over her. Now do you understand my point? That takes a conscious thought.

What is a conscious thought? All of the brain and biochemical activity required for you to "think" about this decision can, in principle, be completely accounted for. None of these activities have anything to do with consciousness. What Vanesch is saying is that there is no way for you to know whether your consciousness is actually participating in the process or whether it is just experiencing the physical activities that participate in the process. The "conscious thought" you're referencing can be completely explained using physical processes of the brain, none of which are associated with consciousness. This is why there is a 'hard problem'.
 
  • #379
vanesch said:
We can think of the following: I take a big metal box (say, 2m on 2m on 2m), in which I put my PC, with the original program, but with the display, speakers and keyboard outside of the box. I also bribe one of the doctors of the nearby hospital so that when there's a hopeless case coming into the emergency room, with a broken spine, paralyzed and without a voice, he quickly does the necessary repairs to the victim and then hands her over to me. I put her in the box, put a few electrodes on her body and connect them to my computer. Now when I ask my computer to factorize a large number, it not only prints out "Aw, that hurt" on my screen, but also connects (through a simple controller card) the mains (220V) to the electrodes on the victim's body, which is conscious. She can't move and can't scream; I don't see her, because she's inside the big box. But I'd say now that my "box computer", when it prints out "Aw, that hurt", feels pain...

cheers,
Patrick.

The scenario you are describing here may very well reflect the current state of our progress at the design, engineering and programming levels. True, this may well be so, but it still doesn't alter the fact that we need to clearly state and classify the notions of (1) intelligence, (2) thinking and (3) consciousness. For example, given that we knew what (1), (2) or (3) clearly means, we would need to take stock of all the things that humans can do that computers cannot, and vice versa, and of the things that both can equally do that come under (1), (2) or (3). All I have seen so far is people arguing away in a point-scoring manner without much attention to these questions. This problem is captured much more clearly in my next posting below.

The state you are describing is admittedly problematic, but I am saying that we need to move away from this level of sentiment and take hard stock of what is going on at the detailed engineering and programming levels. As to the puzzle of why we want first to replicate human-like intelligence, thinking or consciousness in machines before thinking about any form of progress in the subject, well, that's another matter. I leave that to your imagination.
 
  • #380
vanesch said:
The problem still stands: how would you know you've succeeded ?

There's no behavioral way to know. Look at my "computer in a box". The output on the screen (the only behavioral access I have) is identical: it prints out "aw that hurt!". But when the victim is connected to the mains, there is an awareness of pain in my box, and when the victim is not connected, it is a simple C program line that printed out the message. The computer works in identical ways in both cases.
If you now replace that human victim (of whom we can assume that she consciously experiences pain) by a machine, how can we know? The behavior is identical.

cheers,
Patrick.

How could we not know? Yes, I agree with you that behaviourism has some drawbacks, but it never completely undermines successful existence. In terms of humans, we are naturally lazy and reluctant about taking control of things on our causal and relational pathways. The claim that we cannot intervene in our own nature and make an effort to re-engineer and improve our state of being is not only wrong but fundamentally dangerous. We do know, and have always known, when we succeed in the public realm, even behaviourally. If we could not do this, we would probably not be here today. Perhaps the measure is only in degrees, or minimal, but at least we are still here. By the same token, when we do succeed in replicating human-like intelligence in other, non-human systems, I personally see nothing that would stop us from knowing it. In fact, this is all the more reason why we must have the courage to take control and use the right and clear approach in dealing with this issue.
 
Last edited:
  • #381
The Turing Universal Machine and Consciousness

The dispute is not, and has never been, about whether a machine can think or act intelligently, because the original Turing Machine had all the necessary ingredients to do so. Rather, it is wholly about whether thinking or acting intelligently is a conscious act. The notion of awareness (introspective or extrospective) ought already to have been captured by the notion of thinking or intelligence, given that we knew what this meant in the first place. I am saying that it is more than well overdue for all the interdisciplinary researchers to commence the process of schematically, yet quite naturally, coming to a concrete agreement on this subject. The agreement I am referring to here could be captured in the following schema:

SCHEMA I

(1) A conscious act is an intelligent act
(2) All intelligent acts are conscious acts
(3) Anything that can produce an intelligent act is conscious
(4) A computer can produce intelligent acts
-------------------------------------------------------------------------------
Therefore, a computer is conscious

Immediately after this argument, the next most important question to ask is this:

What then constitutes an intelligent act?

In an honest and genuine response to this question, the researchers on this subject should then move on to create a ‘reference table’ of all the things that count as intelligent acts.

This argument may equivalently be stated as:

(1) A conscious act is an act of thinking
(2) All acts of thinking are conscious acts
(3) Anything that can think is conscious
(4) A computer can think
-------------------------------------------------------------------------------------
Therefore, a computer is conscious

You are then required to state clearly:

What constitutes thinking?

The researcher must then create a reference table of all the things classed under thinking.
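Schema I (in either wording) can be put in predicate-logic form, which makes explicit that the argument is formally valid and that all of the weight rests on the premises, particularly premise (2). The encoding below is one possible formalization, not part of the original schema:

```latex
% I(x): x is an intelligent act,  C(x): x is a conscious act,
% P(m,x): machine m produces act x,  K(m): m is conscious,  c: the computer.
\begin{align*}
&\forall x\,\bigl(I(x) \rightarrow C(x)\bigr)
    && \text{premises (1), (2)}\\
&\forall m\,\bigl(\exists x\,(P(m,x) \wedge I(x)) \rightarrow K(m)\bigr)
    && \text{premise (3)}\\
&\exists x\,\bigl(P(c,x) \wedge I(x)\bigr)
    && \text{premise (4)}\\
&\therefore\; K(c)
    && \text{conclusion}
\end{align*}
```

Written this way, rejecting the conclusion requires rejecting a specific premise, which is exactly the "reference table" commitment the schema asks for.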

SCHEMA II

On the other hand, if it turns out that there are some thinking or intelligent acts that are conscious and some that are not, the schema should take the form:


(1) Some acts of thinking are conscious acts
(2) Thinking is conscious if you are aware not only of what you are thinking about but also of the fact that you are thinking
(3) Anything that can do this is conscious
(4) A computer has some thinking acts that are conscious
-------------------------------------------------------------------------------------
Therefore, a computer is conscious

The researchers who opt for this alternative schema must classify thinking or intelligent acts into (1) those that are conscious and (2) those that are not. Perhaps there may be a third or more schemas to prove otherwise, but I am going to leave it at this point for now.


NOTE: The implication of the Universal Turing Machine is that it does not presuppose consciousness; therefore any schema a researcher may opt for still has to decide on the relevance or non-relevance of consciousness. Even if he or she successfully avoids the issue of consciousness at the level of engineering or re-engineering to improve the intelligent system in question, he or she may not avoid it at the level of structural and functional comparison of that system to the human system. Researchers must in the end either accept it as relevant or reject it as not.
 
Last edited:
  • #382
Rader said:
I think I understand you correctly but do you understand me. I could have let the car run over her if I was programed or not for this trait. I choose not to. If I was getting a divorce maybe I would have had a second thought about it and let the car run over her. Now do you understand my point. That takes a conscious thought.
How do you know your consciousness was MAKING the decision and your body was acting accordingly, rather than your body deciding to act that way while your consciousness felt all right with that decision (without any means of intervening) and merely "thought" it took it?

01 - The world would be totally deterministic and there would be no choice. You're claiming, then, that a "biochemistry computer" (I take that to mean the "brain parts") would cause consciousness while the consciousness it produced looks on. This would be a classical explanation; and what if the "biochemistry computer" were quantum in nature?

02 - If your consciousness is "in charge" of your brain and influences behavior, then all behavior would be totally deterministic only if there were a classical explanation of the brain. If the brain were quantum in nature, then it would seem more understandable why we make choices.

This is indeed, more or less, the point. Although I do not need the idea of determinism: you can have randomly generated phenomena without conscious influence. I also tend to think - but I'm very careful here - that quantum theory might have something to say about the issue. But I think we are still very far from finding out; it is the "open door" in present-day physics to consciousness.

Our mutual understanding of our viewpoints is converging, I think.

cheers,
patrick.
 
  • #383
Philocrat said:
(2) All intelligent acts are conscious acts

I do not agree. I do not see the link between intelligence (the ability to solve difficult problems) and consciousness.


The researchers who opt for this alternative schema must classify thinking acts or intelligent acts into (1) those that are conscious and (2) those that are not conscious.

Hehe, yes, they have to solve the hard problem :-)
Because it is neither the problem category nor the problem-solving strategy that will indicate this. So what remains of the intelligent act on which we can base the separation? What will be the criterion? Also, assuming we're talking about a Turing machine, do you mean it is the _software_ that is conscious? Independent of the machine on which it runs? When it is written on a CD?
I have a hard time believing that a Turing machine, no matter how complex, can be conscious. But I agree that I cannot prove or disprove this.

But we should avoid the confusion between intelligence and consciousness here. Now it might very well be that certain levels of intelligence are only attainable if the entity is conscious. But personally, I do not see a link, especially if consciousness is just sitting there passively watching. You could just as well look at power consumption and say that if you reach the density of power consumption of a human brain, the machine is conscious, and then jump into the research on power resistors. I think that "intelligence" (the ability to solve difficult problems) is a property just as power consumption, when related to consciousness.

cheers,
Patrick.
 
  • #384
Fliption said:
What is a conscious thought?

Cognitive awareness. http://www.hedweb.com/bgcharlton/awconlang.html

All of the brain and biochemical activity required for you to "think" about this decision can, in principle, be completely accounted for. None of these activities have anything to do with consciousness. What Vanesch is saying is that there is no way for you to know whether your consciousness is actually participating in the process or whether it is just experiencing the physical activities that participate in the process. The "conscious thought" you're referencing can be completely explained using physical processes of the brain, none of which are associated with consciousness. This is why there is a 'hard problem'.

Fliption, actually there seems to be evidence of both. When something is born into existence, it appears to be conscious; at some point it can say "I am conscious". If this is somehow explainable some day, it will eliminate the "hard problem". It would explain what is conscious and what physical states determine how conscious something is. Consciousness would have to be a fundamental property of nature.
 
  • #385
vanesch said:
How do you know your consciousness was MAKING the decision and your body was acting accordingly, rather than your body deciding to act that way while your consciousness felt all right with that decision (without any means of intervening) and merely "thought" it took it?

Good question. The only way for me to answer that is that my consciousness is aware of being aware and has evolved to an understanding of an order of the way the world ought to be. Sometimes my consciousness acts right but my body says no. Sometimes my body acts right when my consciousness knows better. So it appears that consciousness is watching and we make the decision how to act. :wink:

This is indeed, more or less, the point. Although I do not need the idea of determinism: you can have randomly generated phenomena without conscious influence. I also tend to think - but I'm very careful here - that quantum theory might have something to say about the issue. But I think we are still very far from finding out, it is the "open door" in actual physics to consciousness.
Our mutual understanding of our viewpoints is converging, I think.

That happens sometimes, to our later disappointment, when it turns out that nobody has the same view.
 
  • #386
Rader said:

I've read (part of) the article, and I think it completely misses the point. Not that I say the scientific part of the article is wrong, but - unless I misunderstood it - in my opinion it doesn't address the issue of consciousness as it has been addressed here on this forum. It is a technical description of brain functions.


Some quotes:
"On the other hand consciousness is an ordinary fact of life - babies are born without it and develop it over the first few years of life."

"The question of consciousness can therefore be approached by considering the general phenomenon of awareness, of which consciousness is one particular example. "

"And awareness has a quite exact definition: it is the ability selectively to direct attention to specific aspects of the environment, and to be able cognitively to manipulate these aspects over a more prolonged timescale than usual cognitive processing will allow. To hold in mind selected aspects of the perceptual landscape. Technically, awareness is attention plus working memory - ie. the ability to attend selectively among a range of perceived stimuli and a short term memory store into which several of these attended items can be 'loaded', held simultaneously, and combined. "
...
"While awareness is found in animals right across the animal kingdom; consciousness is of much more limited distribution. I suggest that consciousness is probably confined to a small number of recently-evolved social animals such as the great ape lineage - especially common chimpanzees and bonobos - and perhaps a few other recently-evolved social mammals such as elephants and dolphins."


"Consciousness arises when body state information becomes accessible to awareness. "
----

My comments:
Clearly, awareness as defined above has nothing to do with what has been meant here by consciousness. I just need something that can hold information in memory for a rather long time and access it selectively, and it has to be able to select amongst several stimuli.

In that case, I can make a machine with "awareness" using a PC, and, say, a webcam on a motor !

Moreover, if I write regularly information about power consumption, memory and CPU usage, temperature, fan speed etc... into the working memory of my PC, it is now conscious !

Come on !
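vanesch's objection can be made concrete. Under the quoted definition (selective attention among stimuli plus a small working-memory store into which attended items are loaded and combined), a few lines of code already qualify as "aware". The sketch below is purely illustrative; the class and method names are hypothetical:

```python
# "Awareness" per the quoted definition: selectively attend among stimuli
# and hold several attended items in a short-term store for combination.
from collections import deque

class AwareMachine:
    def __init__(self, capacity=4):
        self.working_memory = deque(maxlen=capacity)  # short-term store

    def attend(self, stimuli, salient):
        """Selectively attend: load only the stimuli matching the salient cue."""
        for s in stimuli:
            if salient in s:
                self.working_memory.append(s)

    def combine(self):
        """Cognitively 'manipulate' the held items together."""
        return " + ".join(self.working_memory)

m = AwareMachine()
m.attend(["red ball", "blue sky", "blue car", "green leaf"], salient="blue")
print(m.combine())  # -> blue sky + blue car
```

If this counts as awareness, then the definition is clearly too weak to carry the weight of "consciousness", which is the point of the objection.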
 
  • #387
vanesch said:
I do not agree. I do not see the link between intelligence (the ability to solve difficult problems) and consciousness.

Does it matter? Or are you implying that when you are solving complex problems (given that this is all the term 'intelligence' means) you are not conscious of the complex problem you are solving, let alone of the fact that you are actually doing so?


Hehe, yes, they have to solve the hard problem :-)
Because it is not the problem category, nor the problem solving strategy, that will indicate this. So what remains of the intelligent act on which we base the separation ? What will be the criterion ? Also, assuming we're talking about a Turing machine, do you mean it is the _software_ that is conscious ? Independent of the machine on which it runs ? When it is written on a CD ?

The schemas I am suggesting make no claims about anything. They are a mere guide, setting the stage for further argument. Or you could say it is an invitation for those going around in circles to commit themselves and commence the process of landing the argument on safer ground. Either we accept that consciousness has some accountable relationship with intelligent or thinking acts, or there is no such relationship. Either way, we still have to say what counts as intelligence or thinking, and subsequently state whether systems other than the human system are capable of possessing such an ability. I am inviting those involved in this subject to come clean on this. We cannot just let the argument loose and let it run without taking a concrete stand and taking stock. It is therefore irrelevant whether such an ability (intelligence) is successfully replicated as software, as a hardwired system, or as a combination of several kinds.

I have a hard time believing that a Turing machine, no matter how complex, can be conscious. But I agree that I cannot prove or disprove this.

As I have pointed out above, the original Turing Machine does not presuppose consciousness...and it may not rule it out either. That's why I am calling for an agreement on the whole subject. We cannot just independently and non-directionally debate it away without eventually agreeing on something. Either consciousness is part of an intelligent system or it is simply not.

But we should avoid the confusion between intelligence and consciousness here. Now it might very well be that certain levels of intelligence are only attainable if the entity is conscious. But personally, I do not see a link, especially if consciousness is just sitting there passively watching. You could just as well look at power consumption and say that if you reach the density of power consumption of a human brain, the machine is conscious, and then jump into research on power resistors. I think that, in relation to consciousness, "intelligence" (the ability to solve difficult problems) is a property much like power consumption.

Well, this drags you into an interactionist nightmare!..."especially if consciousness is just sitting there passively watching". It seems as if you are inviting me to think that when aspects of physical states or events approach critical functional states, mysterious immaterial entities manifest to interplay. Right? Critical functional states of the physical material world do not necessarily presuppose non-physicality, non-existence, or any independence from their physical sources. I do not want to go down that route of the so-called 'existence and independence' of immaterial, non-physical or otherwise mysterious kinds of states. That route is just too messy and delusory for me.
 
Last edited:
  • #388
Philocrat said:
Either we accept that consciousness has some accountable relationship with intelligent or thinking acts or that there is no such relationship. Either way, we still have to say what counts as intelligence or thinking and subsequently state whether other systems, other than a human system, are capable of possessing such an ability.

Intelligence (defined as the ability to solve "difficult" problems) has, to me, no a priori relationship with consciousness. When I solve a difficult integral with paper and pencil, I absolutely do not have the feeling of going through an "algorithmic mantra", but creatively find substitution rules etc... to solve it. It is as much an intellectual challenge as any. Well, if I type the expression into Mathematica on my PC, it solves the same problem. I think that if you had told someone in the 19th century that a machine could solve an integral, they would have classified that as an "intelligent act". So I'm pretty convinced that no matter where you put the bar for "human intelligence", a computer program will pass it, now, or in the near future.
So I think that the answer to your request above is clearly that "other systems than the human system can possess this ability".
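The point about mechanised integration can be made concrete with a toy example. The sketch below is purely illustrative (the polynomial representation and the function name are my own assumptions, not anything from the thread): a blind, rule-based "integrator" applying the power rule, the kind of mechanical procedure that a 19th-century observer might well have called an intelligent act.

```python
# A minimal rule-based symbolic integrator for polynomials.
# Polynomials are represented as {exponent: coefficient} dicts
# (an assumed representation chosen for this sketch).
from fractions import Fraction

def integrate_poly(poly):
    """Power rule, term by term: c*x^n -> c/(n+1) * x^(n+1)."""
    return {n + 1: Fraction(c, n + 1) for n, c in poly.items()}

# Integrate 3x^2 + 2x + 1; the result represents x^3 + x^2 + x.
print(integrate_poly({2: 3, 1: 2, 0: 1}))
```

No creativity is involved anywhere: the rule is applied blindly, which is exactly why such an example says nothing either way about consciousness.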

But it doesn't indicate anything at all about consciousness. I think from an engineering point of view, you don't give a damn about consciousness. You want intelligence! You want your machines to behave in certain ways. It doesn't matter if they behave "as if they were conscious" or if they "are conscious". My point here is that we don't know how to find out ! So we could ignore the problem, and we don't need an IEEE standard for consciousness. But a moral issue comes up: if machines ARE conscious, should they have rights ? Is the concept of "torturing a conscious being" applicable ? Nobody in his right mind would be shocked by "torturing an intelligent being", because the concept doesn't make sense.

cheers,
Patrick.
 
  • #389
Philocrat said:
It seems as if you are inviting me to think that when aspects of physical states or events approach critical functional states, mysterious immaterial entities manifest to interplay. Right?

I do not necessarily claim that. Call it an "emerging property", in the same way phonons are an emerging property of the solid state. My hope is that it is somehow part of physics, but that is not certain. But the problem is not whether or not it IS physics; the problem is that we have no way of finding out, once we realize that consciousness has no necessary link to behavior.
In ALL "solutions" I've seen proposed, people end up _redefining_ consciousness as something else, in order to have an operational definition.
Computer science people who work on artificial intelligence usually redefine it as 1) intelligence or 2) pure behaviorism, usually as a social intelligence (the Turing test, for instance).
If you read the article by the psychiatrist, to him, consciousness is "working memory that has access to internal body information". If you give that to a computer engineer, he quickly solders you some SRAM and a few sensors on a PCB :-)
Moreover, these people do useful work with their definition, because the concept they define IS interesting. But they miss the original meaning of consciousness. I think the philosophical problem stands there, untouched.

cheers,
Patrick.
 
  • #390
vanesch said:
Intelligence (defined as the ability to solve "difficult" problems) has, to me, no a priori relationship with consciousness. When I solve a difficult integral with paper and pencil, I absolutely do not have the feeling of going through an "algorithmic mantra", but creatively find substitution rules etc... to solve it. It is as much an intellectual challenge as any. Well, if I type the expression into Mathematica on my PC, it solves the same problem. I think that if you had told someone in the 19th century that a machine could solve an integral, they would have classified that as an "intelligent act". So I'm pretty convinced that no matter where you put the bar for "human intelligence", a computer program will pass it, now, or in the near future.
So I think that the answer to your request above is clearly that "other systems than the human system can possess this ability".

But it doesn't indicate anything at all about consciousness. I think from an engineering point of view, you don't give a damn about consciousness. You want intelligence! You want your machines to behave in certain ways. It doesn't matter if they behave "as if they were conscious" or if they "are conscious". My point here is that we don't know how to find out ! So we could ignore the problem, and we don't need an IEEE standard for consciousness. But a moral issue comes up: if machines ARE conscious, should they have rights ? Is the concept of "torturing a conscious being" applicable ? Nobody in his right mind would be shocked by "torturing an intelligent being", because the concept doesn't make sense.

cheers,
Patrick.

The answers to the moral questions are already automatically expected. It's just a matter of when, not if. There will be no surprises. If many intelligent groups within the human system have fought for their rights in the past and succeeded, what would stop computers or other systems that possess genuine consciousness, or something equivalent to it, from doing the same? Human beings have always had prejudices of this kind, out of fear or a lack of understanding of change. Well, that's normal.

Yes, in a way it doesn't make sense, and what is even worse about this whole episode is why we want to replicate human intelligence or thinking or consciousness or whatever you wish to call it in other systems first, before setting about the pressing need of correcting structural and functional errors and inadequacies in ourselves. Why build these imitation machines first before re-engineering our own originally erroneous reality? This, for me, is the puzzling bit and the most difficult one to comprehend.
 
  • #391
vanesch said:
I do not necessarily claim that. Call it an "emerging property", in the same way phonons are an emerging property of the solid state. My hope is that it is somehow part of physics, but that is not certain. But the problem is not whether or not it IS physics; the problem is that we have no way of finding out, once we realize that consciousness has no necessary link to behavior.
In ALL "solutions" I've seen proposed, people end up _redefining_ consciousness as something else, in order to have an operational definition.
Computer science people who work on artificial intelligence usually redefine it as 1) intelligence or 2) pure behaviorism, usually as a social intelligence (the Turing test, for instance).
If you read the article by the psychiatrist, to him, consciousness is "working memory that has access to internal body information". If you give that to a computer engineer, he quickly solders you some SRAM and a few sensors on a PCB :-)
Moreover, these people do useful work with their definition, because the concept they define IS interesting. But they miss the original meaning of consciousness. I think the philosophical problem stands there, untouched.

cheers,
Patrick.

But this is one discipline's response, which must be welcomed. Though physicalist and behaviourist in scope, that guy does have something to contribute. He is stating physicalist-behaviourist arguments, and you know as well as I do that, as inadequate as this might seem, it can never be completely ruled out. In fact no one can successfully rule it out. His memory interpretation gives memory a better and more realistic role to play. From my own investigation, the natural functions of genes and memory centres in our physical material bodies are the most underrated and neglected, despite being very powerful multi-function, multipurpose coding and display systems. We naively attribute to them less than they are capable of doing. That is the problem that has tormented me over the years. Another area of gross negligence in the subject is the wrong classification of conscious states, which I have been struggling in this very thread and elsewhere to draw everyone's attention to, without much success.

I am saying that the time for debate is over...we should start classifying and then schematically map the results onto the underlying states. There, some real links should be found. I have also looked at the whole concept of the independence, non-existence, non-physical, interactionist or immaterial nature of consciousness, but I have always found it quantitatively, analytically and logically absurd. Call me naive or any name you wish, I just have not found the link, and that's why I find it very difficult to accept.

Don't forget that consciousness is now a multi-disciplinary project. It's no longer exclusive to philosophy. Nearly every discipline now wants a slice of it. And that's why we can no longer afford to be snobbish. I urge that all the research data from all the disciplines be respected and rigorously but cautiously examined and collated.
 
  • #392
Natural Law, Man-made Law and Consciousness (Part I)

In the Book of Nature there are many classes of laws that are interfaced with man-made laws when it comes to things and the natural laws that govern or affect them; consciousness, in my opinion, is no exception. The fundamental law that I specifically intend to invoke from the book of nature as governing consciousness is the 'LAW OF RATIONALITY'. This law does not in any way assume the falsity or truthfulness of any type of explanation or notion that we may already have about consciousness; rather it merely states the fundamental purpose and logical specification of it as applied at the practical human level (see the notes below for clarification).

In the first part of this piece, I deliberately make blind logical assumptions that collect into what looks like a logical argument about the nature of thinking or intelligence. In the second part I am interested in finding out not only what counts as consciousness but also whether any of the things listed about thinking or intelligence in the first part is conscious. So is a thinking or intelligent act a conscious act?

What are the things that count as Intelligence or Thinking?

I will start this marathon task by making very simple but very open-minded assumptions:

1) That any act of intelligence or thinking must be construed as an ability of some sort (presumably a functional kind)

2) That this ability allows anything that possesses it to function and behave the way it was originally designed to, regardless of whether such a thing was self-created, randomly or accidentally created, or created by the so-called intelligent designer.

3) Intelligent acts are purposive, empowering and progressive.

4) Storing and recalling information (memorising and remembering) is a thinking or intelligent act.

5) Feeling something and reacting to or acting upon that thing is a thinking or intelligent act.

6) Seeing something and reacting to or acting upon it is a thinking or intelligent act.

7) Hearing something and reacting to or acting upon it is a thinking or intelligent act.

8) Counting and calculating are thinking or intelligent acts

9) Following rules is a thinking or intelligent act.

10) Reasoning and making choices is a thinking or intelligent act.

11) Anything that possesses some or all of these abilities is intelligent or can think

12) Machines possess some or all of these abilities

13) Humans possess some or all of these abilities

14) Other living organisms possess some or all of these abilities
_____________________________________________________________
Therefore, machines, humans and other living organisms are intelligent or can think.
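For what it is worth, the schema above can be caricatured in a few lines of code. This is a toy formalization only, with ability names invented for illustration; it makes no claim about real intelligence:

```python
# Premises 1-10: a fixed catalogue of abilities that count as
# "thinking or intelligent acts" (names are illustrative).
ABILITIES = {"memorise", "feel", "see", "hear", "count",
             "follow_rules", "reason", "choose"}

def is_intelligent(entity_abilities):
    """Premise 11: possessing some or all of the abilities suffices."""
    return bool(ABILITIES & entity_abilities)

# Premises 12-14: each kind of system possesses some of the abilities.
entities = {
    "machine": {"memorise", "count", "follow_rules"},
    "human": set(ABILITIES),
    "organism": {"feel", "see"},
}
# The conclusion follows mechanically: all three come out intelligent.
print({name: is_intelligent(a) for name, a in entities.items()})
```

The code also makes the schema's weak point visible: the conclusion is only as good as premise 11, which grants intelligence for possessing *any* listed ability.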


NOTE: The problem with seeing, hearing or feeling something and reacting to it is that it is fundamentally governed by the Law of Rationality, upon which man-made law itself is based. The Law of Rationality says that there must be sufficient time intervals between seeing something, thinking about that thing, forming a belief about it, memorising it and physically acting upon it. But in practice this is never really the case. In man-made law this is even more problematic, as the law of rationality is not clear by interpretation. This implies that the time intervals between the listed mental and physical events can collapse into oblivion, where they become temporally and functionally inadequate for a rational being to exercise self-restraint or control.

In criminal law no crime is committed unless premeditated or intended; hence a criminal act devoid of intention (the mental aspect), unless proven otherwise, is no crime. Conversely, a criminal intention devoid of a criminal act (the physical aspect), unless proven otherwise, is no crime. And the standard presumption here is that to commit a physical criminal act one ought to have thought of it and formed a clear intention to do so. When the prosecution raises this charge in court, the burden of proof is upon the defence to prove otherwise. However, there is a controversial aspect of this with regard to the idea of 'seeing and thinking before you act'. There is an aspect of the law which assumes that there are occasions when we can act without thinking, such as during 'provocation'. When provocation is raised in a criminal case and successfully proved, the immediate consequence is either to acquit the accused or to limit the severity of the punishment. It therefore seems as though the lawmakers believe that the coincidence of thought and action, as in the case of provocation, is a violation of the law of rationality, and hence anyone who finds him or herself in this situation has no case to answer.

I see no benefit in intensifying this controversy, but it is important to understand the correct sequence of things in the process of seeing, thinking and acting. The correct sequence is this:

(1) You see something or experience something;
(2) You think using the contents of your experience (immediate, historical or a combination of both);
(3) You memorize by forming a belief or beliefs from your thought about that thing;
(4) And you act for or against the thing on the basis of that belief or beliefs.

According to the law of rationality, not only are there clearly quantifiable time intervals between thinking, forming and memorizing the resultant beliefs, and acting on those beliefs, but these time intervals must also be sufficient. However, what is not clear is whether the lawmakers believe that during provocation there is a complete absence of time intervals between thinking, belief formation and action, or whether they believe that although the time intervals do exist during provocation, they are nevertheless insufficient for a rational man to exercise self-control. The question I have always asked is this: does it make any difference whether the law as it currently stands is based on the former or the latter? This is the question that has occupied my mind for some years now.
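The four-step sequence and its 'sufficient intervals' condition can be sketched as a toy model. The threshold value below is a hypothetical parameter of my own, not anything the law specifies:

```python
# Toy model of the "law of rationality": an act is treated as fully
# deliberate only if every interval in the see -> think -> believe -> act
# chain is at least some sufficient minimum (a made-up threshold here).
SUFFICIENT = 0.5  # seconds; purely illustrative

def deliberate(timestamps):
    """timestamps: times of (see, think, believe, act), in order."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return all(g >= SUFFICIENT for g in gaps)

print(deliberate([0.0, 1.0, 2.0, 3.5]))   # ample intervals -> True
print(deliberate([0.0, 0.1, 0.15, 0.2]))  # provocation-like collapse -> False
```

Note that the former/latter question in the text (zero intervals versus merely insufficient ones) makes no difference in this toy model: both fail the same test.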

Continued in Part II below...
 
Last edited:
  • #393
Natural Law, Man-made Law and Consciousness (Part II)

What are the things that count as consciousness?

I will equally make the following open-minded assumptions without any commitment to their falsity or validity. I am merely setting the stage for classification and analysis proper to begin:

1) Consciousness is an ability to be aware of what you are doing. If you are thinking, you must be aware not only of what you are thinking about but also of the fact that you are thinking. If you are acting in an intelligent way, you must likewise be aware not only of what you are intelligently acting upon, but also of the fact that you are doing so.

2) To see is to be aware

3) To hear is to be aware

4) To feel is to be aware

5) To store and recall (memorise and remember) is to be aware

6) To repeat a task or a set of tasks a number of times is to be aware

7) To follow rules is to be aware

8) To pass information from one position in space and time to the next (communicate) is to be aware

9) To count and calculate is to be aware

10) To reason following a set of logical steps or procedures is to be aware

11) To make a choice or a decision from a list of alternative choices is to be aware.

12) Anything that can see, feel, hear, store and recall information, count and calculate, reason, and make choices is conscious

13) Human beings can do all these things

14) Computers can do all these things

15) Animals, insects and fishes can do all these things

16) Microscopic organisms can do all these things
____________________________________________________________
Therefore, Human beings, Computers, Animals, insects, Fishes and Microscopic organisms are conscious.


I am not claiming that this argument is accurate, let alone that any of its premises are. But I think the amount of information I have packed into it is enough for us, or those involved in this subject, to start classifying, analysing and making concrete judgements about the nature and purpose of human consciousness, and to state whether systems other than humans can have it and make purposeful use of it.

NOTE: The problem we have in this discussion is that man-made law recognises the distinction between mental events and physical events and treats them as outwardly purposive and useful for measuring, monitoring and controlling the rational behaviour needed for administering and enforcing 'COLLECTIVE RESPONSIBILITY', without making judgements about their true natures, causal roles, causal relations and origins. The Law of Rationality, though scientifically reducible and clear, seems to be given a very limited interpretation at the outer practical level. Perhaps this is the reason why scientific opinion is usually sought in some very hard and complicated cases, such as sleepwalking and mental illness cases. The lawmaker leaves the gap between the mental and the physical unclosed, and does not pretend in any shape or form to close it, but nevertheless assumes a substantial degree of consciousness in the process. For the lawmaker, to commit an offence, both the MENTAL CONTENT (Intention) and the PHYSICAL CONTENT (Action) must exist, and one must flow from the other... and one devoid of the other is no crime. Either way the lawmaker assumes the existence of consciousness... and in terms of the provocation example that I gave above, it seems as if the lawmaker takes provocation, if proved, to be an absence of consciousness - that is, the absence of a conscious, fully thought-out decision to bring about a criminal or unlawful act. In other words, the lawmaker does not wait to establish how, say, pain is represented in the minds or bodies of the offender and the offended before stating and enforcing the law, nor does he wait to know whether the offender and the offended feel pain in the same way or differently before acting as so stated.
He resumes his interpretation and enforcement of the law from the point (the practical, non-scientific level) where he thinks it 'reasonable' and 'fair', without assuming any knowledge of what is going on at the scientific level.

This problem that I have just highlighted is summarised in law as:

“every man is presumed to intend the consequences of his act”

The fundamental question that confronts all the multi-disciplinary researchers on this subject is this:

When will they turn ordinary assumptions into hard scientific facts?

If we consider the lawmaker’s standpoint (the interpretation and enforcement of collective responsibility by a device of behaviourism alone) functionally inadequate, then we ought to look for a way of scientifically correcting the occasional errors in the process, such as those that usually plague the system and subsequently manifest as wrongful convictions and miscarriages of justice. Conversely, if we reject the system as being structurally and functionally fatalistic, or very dangerous in scope and in substance, then we ought to consider the total overhauling and re-engineering of it at the structural underlying level. I cannot imagine anyone immediately opting for the former, let alone the latter... but they are unquestionably very important issues that sooner or later we will unavoidably have to confront in a multi-disciplinary manner.

And so is the notion of 'COLLECTIVE CONSCIOUSNESS', if it is structurally and functionally possible at all in the first place.
 
Last edited:
  • #394
Philocrat said:
Don't forget that consciousness is now a multi-disciplinary project. It's no longer exclusive to philosophy. Nearly every discipline now wants a slice of it. And that's why we can no longer afford to be snobbish.

*We* cannot be snobbish as philosophers - though for the record, I'm not a philosopher, I'm a physicist :-) However, the "multidisciplinary" effort seems to me, for its own convenience, to *redefine* consciousness in the behavioral way. I can very well understand the idea: otherwise one quickly runs out of "things to do", and at the end of the year, one needs to write papers!
In a lot of situations, as with the psychiatrist, "awareness" is set equal to "having access to data and responding in a 'thoughtful manner' to these data". And, as I said, all this is fine and well, and gives rise to useful scientific knowledge.

But what all these people seem to miss is that the ONLY reason for consciousness to exist as a concept is that *I AM CONSCIOUS*. If I weren't conscious, as far as any scientifically falsifiable statement goes, the concept doesn't make sense, exactly because it doesn't have any operational definition, besides the obvious fact that I know I'm conscious. And the strategy in all these multidisciplinary efforts is to, rather arbitrarily, add an operational definition to it (memory function/Turing test/behaviorism/intelligence...) in order to render it a concept that can enter into falsifiable theories. The funny thing is that each discipline has its own added operational definition, and they are sometimes incompatible. I'm sure that, if one follows your way of reasoning, we will soon have an IEEE standard for when a machine is conscious.

But the content of the hard problem is untouched, exactly because of this: without an added operational definition, consciousness hasn't got any, because its operational definition is purely subjective. It is this notion by itself, the fact that what we call consciousness is inherently subjective, that means it cannot be part of a falsifiable scientific theory. Normally one rejects such notions. The only reason I cannot reject it is that subjectively, the concept that I'm conscious DOES make sense. In fact, it is the ONLY scientifically non-falsifiable notion that cannot be rejected.

Now I'm a physicist, so I can also contribute to the multidisciplinary effort :-) I'm very happy, because in physics, there is another _hard problem_. It is called the measurement problem in quantum theory.
A well-known strategy for impressing your environment is to take two impossible-to-solve problems and say that they are the same thing, so that's what I do, for the moment. I think - though I remain open on this - that, just as Penrose does, there is a link between the measurement problem in quantum theory and the problem of consciousness.

cheers,
Patrick.
 
  • #395
vanesch said:
Now I'm a physicist, so I can also contribute to the multidisciplinary effort :-) I'm very happy, because in physics, there is another _hard problem_. It is called the measurement problem in quantum theory.
A well-known strategy for impressing your environment is to take two impossible-to-solve problems and say that they are the same thing, so that's what I do, for the moment. I think - though I remain open on this - that, just as Penrose does, there is a link between the measurement problem in quantum theory and the problem of consciousness.

cheers,
Patrick.
The difficult we can do immediately, the impossible takes just a little longer.

Other than that they're both 'impossible-to-solve', why link the two? Does linking the two allow you to have an objective criterion for determining the presence of consciousness? For example, "if it says it's conscious, and there's clearly some quantum measurement thingy involved in how it works, then it truly is conscious."
 
  • #396
It's just a search strategy - parsimony. Suppose all our unsolvable problems are unsolvable for the same reason. Penrose made quite a run with it, but seems to have become seduced into doing QFT on microtubules.
 
  • #397
vanesch said:
*We* cannot be snobbish as philosophers - though for the record, I'm not a philosopher, I'm a physicist :-) However, the "multidisciplinary" effort seems to me, for its own convenience, to *redefine* consciousness in the behavioral way. I can very well understand the idea: otherwise one quickly runs out of "things to do", and at the end of the year, one needs to write papers!
In a lot of situations, as with the psychiatrist, "awareness" is set equal to "having access to data and responding in a 'thoughtful manner' to these data". And, as I said, all this is fine and well, and gives rise to useful scientific knowledge.

But what all these people seem to miss is that the ONLY reason for consciousness to exist as a concept is that *I AM CONSCIOUS*. If I weren't conscious, as far as any scientifically falsifiable statement goes, the concept doesn't make sense, exactly because it doesn't have any operational definition, besides the obvious fact that I know I'm conscious. And the strategy in all these multidisciplinary efforts is to, rather arbitrarily, add an operational definition to it (memory function/Turing test/behaviorism/intelligence...) in order to render it a concept that can enter into falsifiable theories. The funny thing is that each discipline has its own added operational definition, and they are sometimes incompatible. I'm sure that, if one follows your way of reasoning, we will soon have an IEEE standard for when a machine is conscious.

But the content of the hard problem is untouched, exactly because of this: without an added operational definition, consciousness hasn't got any, because its operational definition is purely subjective. It is this notion by itself, the fact that what we call consciousness is inherently subjective, that means it cannot be part of a falsifiable scientific theory. Normally one rejects such notions. The only reason I cannot reject it is that subjectively, the concept that I'm conscious DOES make sense. In fact, it is the ONLY scientifically non-falsifiable notion that cannot be rejected.

Now I'm a physicist, so I can also contribute to the multidisciplinary effort :-) I'm very happy, because in physics, there is another _hard problem_. It is called the measurement problem in quantum theory.
A well-known strategy for impressing your environment is to take two impossible-to-solve problems and say that they are the same thing, so that's what I do, for the moment. I think - though I remain open on this - that, just as Penrose does, there is a link between the measurement problem in quantum theory and the problem of consciousness.

cheers,
Patrick.

I understand all that you are saying and what your key concerns about the 'hard problem' of consciousness are...and I totally appreciate your own effort on the subject. But the questions that still bother me are these:

1) SUBJECTIVITY

If consciousness or an aspect of it is wholly subjective, why does it make persistent objective demands on the physical world? Why does it interplay in the public realm?

2) PRODUCTION AND FORM

If consciousness or an aspect of it has no physical origin, where does it come from? What produces it? If it is non-physical, what is it made of?

3) INDEPENDENCE AND INTERACTION

How does it, in its non-physical, non-material state, interact with a purely but clearly accountable physical system like the human one?

I am sorry to be this persistent, but my main concern is that by dwelling too much on the hard problem of consciousness, we may waste so much time that we lose sight of the fact that the volumes of research data accumulated on the subject in different disciplines must start converging. These data must start converging, not yesterday, but now!

Or we might as well retire into ourselves and admit that qualia need a brand new discipline to explain them.
 
Last edited:
  • #398
Nereid said:
The difficult we can do immediately, the impossible takes just a little longer.

Other than that they're both 'impossible-to-solve', why link the two?

As I said, because that's a technique to impress your environment :-p

Does linking the two allow you to have an objective criterion for determining the presence of consciousness? For example, "if it says it's conscious, and there's clearly some quantum measurement thingy involved in how it works, then it truly is conscious."

No, it is not in that way that I saw the link. The problem in QM is that, apparently, you need "other physics" for observers than for "systems", namely the Born probability rule (and projection) versus unitary evolution. You could argue that the observation is not made by a conscious being but by a measurement apparatus, but - as you know - decoherence theory shows us that you can simply allow unitary evolution, and then your system+measurement apparatus ends up in a superposition of states which, for all practical purposes, will not interfere anymore. We call the non-interfering component vectors branches. If, on a higher level, a projection is made and probabilities are assigned, then we will select a probable branch.
In that case we will also find that the measurement apparatus, as selected by that projection, applied exactly the same scheme. So you're free to postpone the projection up to conscious observation, or to consider that it happened already at the level of the apparatus. But you need a projection somewhere between here and there, because it is the only way to get probabilities out in the von Neumann way. (The MWI says otherwise, but I think it is flawed, in that whenever its proponents find that you should have a history according to the von Neumann rule, they have sneaked in a preference for a high-probability state according to von Neumann!)
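The two-step picture (unitary entangling into non-interfering branches, then a projection that assigns Born probabilities) can be illustrated numerically. This is my own toy sketch with assumed amplitudes, not Patrick's formalism:

```python
# Toy illustration: a two-level system measured by an apparatus.
import math

# System starts in the superposition a|0> + b|1> (amplitudes assumed).
a, b = 1 / math.sqrt(3), math.sqrt(2 / 3)

# Unitary interaction correlates system and pointer:
# a|0>|ready> + b|1>|ready>  ->  a|0,p0> + b|1,p1>  (two branches).
branches = {"0,p0": a, "1,p1": b}

# Projection assigns each branch its Born probability |amplitude|^2.
probs = {k: amp ** 2 for k, amp in branches.items()}
print(probs)                # roughly {'0,p0': 0.333, '1,p1': 0.667}
print(sum(probs.values()))  # probabilities sum to ~1
```

Nothing in the unitary step singles out a branch; the probabilities only appear once a projection is postulated, which is exactly the point at issue.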

Now let us compare.
The essential element of consciousness (that made it difficult to handle) was that there was a subjective "observation". The essential difficulty in QM is that somewhere along the chain, you need to postulate "an observation".
Allow me to point out that it is tempting to say that both are linked. So consciousness is "that which applies the projection in QM". In that case, there need only be one consciousness in the world, namely mine (and I should stop talking to you guys about it, because you haven't got any and so you can't possibly know what I'm talking about :biggrin:).
The nasty thing is that decoherence then also has as a corollary that anything else which applies the projection (and hence has a consciousness) cannot be recognized (because once branches decohere, everything happens in exactly the same way whether we consider that a machine projected or not, as long as we project afterwards). So associating consciousness with "that which projects" also carries over the fundamental difficulty of recognizing something similar somewhere else (a consciousness has difficulty recognizing another consciousness, and a projector has difficulty recognizing another projector).
I don't claim "truth" for my statements here, but I think they are intriguing, no?

cheers,
patrick.
 
  • #399
Philocrat said:
How true is the claim that everything in the whole universe can be explained by Physics and Physics alone? How realistic is this claim? Does our ability to mathematically describe physical things in spacetime give us sufficient grounds to admit or hold this claim? Or is there more to physical reality than a mere ability to mathematically describe things?

You can always describe sounds as waveforms, and yes, you get a pretty much complete picture, but that will never tell you how to use a harmonic minor scale, or how abstract art is interpreted.
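The waveform side of that claim is easy to make concrete. A harmonic minor scale is, physically, just a list of frequencies and sine waves - the sketch below (equal temperament on A4 = 440 Hz, an illustrative choice, not something from the thread) generates it completely, yet nothing in the numbers says what the raised seventh is *for* musically:

```python
import numpy as np

# A harmonic minor scale on A (A B C D E F G# A), in equal temperament:
# semitone offsets above A4 = 440 Hz.
steps = [0, 2, 3, 5, 7, 8, 11, 12]
freqs = [440.0 * 2 ** (s / 12) for s in steps]

# Physics gives a complete description of each note as a waveform:
# half a second of each tone sampled at 44.1 kHz.
t = np.linspace(0.0, 0.5, 22050, endpoint=False)
waveforms = [np.sin(2 * np.pi * f * t) for f in freqs]

print([round(f, 1) for f in freqs])
```

Every audible property of the scale is in `freqs` and `waveforms`; the question in the thread is whether its musical function is.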
 
  • #400
vanesch said:
As I said, because that's a technique to impress your environment :-p

No, that is not the way I saw the link. The problem in QM is that, apparently, you need "other physics" for observers than for "systems", namely the Born probability rule (and projection) versus unitary evolution. You could argue that the observation is made not by a conscious being but by a measurement apparatus, but - as you know - decoherence theory shows us that you can simply allow unitary evolution, and then your system+measurement apparatus ends up in a superposition of states which, for all practical purposes, will no longer interfere. We call the non-interfering component vectors branches. If, on a higher level, a projection is made and probabilities are assigned, then we will select a probable branch.
In that case we will also find that the measurement apparatus, as selected by that projection, applied exactly the same scheme. So you're free to postpone the projection up to conscious observation, or to consider that it happened already at the level of the apparatus. But you need a projection between here and there, because it is the only way to get probabilities out in the von Neumann way. (The MWI says otherwise, but I think it is flawed, in that whenever its proponents find that you should have a history according to the von Neumann rule, they have sneaked in a preference for a high-probability state according to von Neumann!)

Now let us compare.
The essential element of consciousness (that made it difficult to handle) was that there was a subjective "observation". The essential difficulty in QM is that somewhere along the chain, you need to postulate "an observation".
Allow me to point out that it is tempting to say that the two are linked. So consciousness is "that which applies the projection in QM". In that case, there need only be one consciousness in the world, namely mine (and I should stop talking to you guys about it, because you haven't got any and so you can't possibly know what I'm talking about :biggrin:).
The nasty thing is that decoherence then also has as a corollary that anything else which applies the projection (and hence has a consciousness) cannot be recognized (because once branches decohere, everything happens in exactly the same way whether we consider that a machine projected or not, as long as we project afterwards). So associating consciousness with "that which projects" also carries over the fundamental difficulty of recognizing something similar somewhere else (a consciousness has difficulty recognizing another consciousness, and a projector has difficulty recognizing another projector).
I don't claim "truth" for my statements here, but I think they are intriguing, no?
At the risk of horribly oversimplifying (or maybe distorting?) - and leaving completely aside the problem of you (or me, or Radar, etc.) determining that there's only one consciousness in the universe - this QM measurement thingy just moves the objective criterion a tiny bit on from what I said earlier, which was Version001: "if it says it's conscious, and there's clearly some quantum measurement thingy involved in how it works, then it truly is conscious."
Version002: "if it says it's conscious, and says that every time it looked, there was either a dead cat or a live one (never a superposition of dead and live cat), and there's clearly some quantum measurement thingy involved in how it works, then it truly is conscious."
 
Back
Top