Can Everything be Reduced to Pure Physics?

  • Thread starter: Philocrat
  • Tags: Physics, Pure

Summary
The discussion centers on the claim that everything in the universe can be explained solely by physics. Participants express skepticism about this assertion, highlighting the limitations of physics and mathematics in fully capturing the complexities of reality, particularly concerning consciousness and life. The conversation touches on the uncertainty principle, suggesting that while physics can provide approximations, it cannot offer absolute explanations due to inherent limitations in measurement and understanding.

There is a debate about whether all phenomena, including moral and religious beliefs, can be explained physically. Some argue that even concepts like a Creator could be subject to physical laws, while others assert that there may be aspects of reality that transcend physical explanation. The idea that order can emerge from chaos is also discussed, with participants questioning the validity of this claim in light of the unpredictability observed in complex systems.

Overall, the consensus leans toward the notion that while physics can describe many aspects of the universe, it may not be sufficient to explain everything, particularly when it comes to subjective experiences and the nature of consciousness.

In which other ways can the Physical world be explained?

  • By Physics alone?

    Votes: 144 48.0%
  • By Religion alone?

    Votes: 8 2.7%
  • By any other discipline?

    Votes: 12 4.0%
  • By Multi-disciplinary efforts?

    Votes: 136 45.3%

  • Total voters
    300
  • #361
selfAdjoint said:
If you stick a pin in a baby, it will respond with behavior, but it can't tell you that it hurts. Nevertheless, because the baby is human, we INFER that it hurts, and say "Nasty man! Stop hurting that baby". When your PC indicates harm with behavior by getting warm, you don't infer pain because it is a machine. Maybe you should? After all, it wouldn't be much of a programming job to adapt some natural language program to produce "Ow! That hurts!" from your PC's speakers when it overheats.

Right. That's exactly what the hard problem is all about :-)
In fact, I don't know if a newborn baby is conscious and feels pain. To be safe, I assume it does (because legally I think I'm in trouble if I'd act as if it weren't 8-). But it might very well not, and only slowly turn on its consciousness at, say, 1 or 2 years old. How can we know?

cheers,
patrick.
 
  • #362
Fliption said:
I'm not understanding the definitional problem you're pointing out. Perhaps too much is being made of the word "belief"? The point is simply that there is no reason to believe that a zombie with identical A-consciousness to you would behave any differently from you. So if you believe you have P-consciousness, a zombie with identical A-consciousness must also behave as if it has the same belief. To suggest it really "believes" is a stumbling block because it implies an inner life, which by definition there is none. That's why I posted the clarification above that when we say belief, we are talking only about the functional aspects of it. It is probably best that the word not be used at all.

It is not the issue of belief but the definition of a zombie. Could you say clearly whether a zombie in your definition does or does not possess the properties of sensation, memory of particular sensations, imagination of sensations, and the ability to compare remembered, sensed and imagined sensations? I claim that AIs can be programmed to do these things (perhaps poorly, but it's the categories I'm talking about, not the efficiency). If your zombie has some of these but not others, would you indicate which?

Thank you.
 
  • #363
selfAdjoint said:
It is not the issue of belief but the definition of a zombie. Could you say clearly whether a zombie in your definition does or does not possess the properties of sensation, memory of particular sensations, imagination of sensations, and the ability to compare remembered, sensed and imagined sensations? I claim that AIs can be programmed to do these things (perhaps poorly, but it's the categories I'm talking about, not the efficiency). If your zombie has some of these but not others, would you indicate which?

Thank you.

I would say it can do the functional aspects of all those things. But it has no experience of doing them.
 
  • #364
vanesch said:
Right. That's exactly what the hard problem is all about :-)
In fact, I don't know if a newborn baby is conscious and feels pain. To be safe, I assume it does (because legally I think I'm in trouble if I'd act as if it weren't 8-). But it might very well not, and only slowly turn on its consciousness at, say, 1 or 2 years old. How can we know?

cheers,
patrick.
Did Homo erectus experience pain? Do chimpanzees experience pain? Do cats? mice? lizards? trees? mosquitos?

If you say that they do (never mind why you say they do), does that mean they also possess p-consciousness?

On the operating table, you do not experience pain (unless the anaesthetist fails to do her job!), and you are not conscious of your lack of consciousness. Do you possess p-consciousness? What if you're in a coma?
 
  • #365
Fliption said:
I would say it can do the functional aspects of all those things. But it has no experience of doing them.

Ummm, OK. That leaves me with a problem. For in doing those things, it IS experiencing them in what I would call a reasonably not overspecialized use of the verb "to experience". Perhaps we could agree that it is not AWARE of experiencing them?

But awareness doesn't seem to require unphysical assumptions:

It is not impossible to bring the fact of experience into an AI system as data, and to allow it to be "sensed, imagined, remembered, compared". I recall a couple of years ago a plan for self-repairing satellites and rovers that would do just this: monitor their own behavior, compare it with norms, and apply problem-solving algorithms to search the behavior stream to find and repair the cause of any deviance. Yes, this is what our autonomic nervous systems do, more or less, but I would argue that, pace your philosophy, there is no sharp line between this kind of thing and perceiving one's feelings.

Feelings have recently been a hot study area in the fMRI brain scan field. Quite simple physical processes in the hippocampus have resulted in complex reported feelings.
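The satellite/rover plan described above (monitor one's own behavior, compare it with norms, search the behavior stream for the cause of any deviance) can be sketched in a few lines. The function names, numeric readings, norm and tolerance below are illustrative assumptions, not details of any real system:

```python
# A minimal sketch of the self-monitoring scheme described above: sense a
# stream of the system's own behavior, compare each reading against a norm,
# and hand deviations to a problem-solving/repair routine.

def find_deviance(stream, norm, tolerance):
    """Search the behavior stream for readings that deviate from the norm."""
    return [i for i, reading in enumerate(stream)
            if abs(reading - norm) > tolerance]

def self_monitor(stream, norm, tolerance, repair):
    """Monitor own behavior; call the repair routine on each deviant reading."""
    for i in find_deviance(stream, norm, tolerance):
        repair(i, stream[i])  # locate and (attempt to) repair the cause
```

On a stream like [1.0, 1.1, 3.0, 0.9] with norm 1.0 and tolerance 0.5, only the third reading is flagged. Nothing here requires awareness, only sensing, comparison and search, which is exactly the autonomic-nervous-system parallel drawn above.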
 
  • #366
selfAdjoint said:
Ummm, OK. That leaves me with a problem. For in doing those things, it IS experiencing them in what I would call a reasonably not overspecialized use of the verb "to experience". Perhaps we could agree that it is not AWARE of experiencing them?

But awareness doesn't seem to require unphysical assumptions:

It is not impossible to bring the fact of experience into an AI system as data, and to allow it to be "sensed, imagined, remembered, compared". I recall a couple of years ago a plan for self-repairing satellites and rovers that would do just this: monitor their own behavior, compare it with norms, and apply problem-solving algorithms to search the behavior stream to find and repair the cause of any deviance. Yes, this is what our autonomic nervous systems do, more or less, but I would argue that, pace your philosophy, there is no sharp line between this kind of thing and perceiving one's feelings.

Feelings have recently been a hot study area in the fMRI brain scan field. Quite simple physical processes in the hippocampus have resulted in complex reported feelings.

I think there are some semantic issues with using the words this way. Of course, you can use them however you like, but I don't think using them in this context makes any philosophical issues go away. From what I've seen in discussions in this forum, I think people might reverse your use of the words awareness and experience. For example, I've seen people say that a video camera is aware of the data it receives. But I've never seen the word experience used in the same way. Regardless of which word we use, there is a feature that seems to have no functional explanation such as "the hippocampus does x". That is the feature we're calling P-consciousness.
 
  • #367
selfAdjoint said:
But awareness doesn't seem to require unphysical assumptions:

It is not impossible to bring the fact of experience into an AI system as data, and to allow it to be "sensed, imagined, remembered, compared". I recall a couple of years ago a plan for self-repairing satellites and rovers that would do just this: monitor their own behavior, compare it with norms, and apply problem-solving algorithms to search the behavior stream to find and repair the cause of any deviance. Yes, this is what our autonomic nervous systems do, more or less, but I would argue that, pace your philosophy, there is no sharp line between this kind of thing and perceiving one's feelings.
In another 'artificial machine' area, such awareness is already alive and flourishing: in modern communications networks, the 'self-healing network' has been extensively researched, standards have been written, and commercial companies sell such systems to large telecom companies, who hire teams of SI experts to tweak them so as to reduce even further the number of human techs needed to monitor and maintain them. Do such systems actually work? Yes, and you bet your life on them every day that you make a 000 (911 in the US) call!
 
  • #368
Nereid said:
Did Homo erectus experience pain? Do chimpanzees experience pain? Do cats? mice? lizards? trees? mosquitos?

If you say that they do (never mind why you say they do), does that mean they also possesses p-consciousness?

I don't know what is meant by a-consciousness and p-consciousness. I'd only say that *IF* they experience pain, then they are conscious.
And the difficult problem is indeed to find out whether chimps, cats, mice, lizards, trees, and mosquitos feel pain. I'm not talking about their behavior that would "indicate to us that they feel pain".

As I said, I don't know these definitions of a- and p-consciousness. But I can guess: it seems from what is said above that "a-consciousness" is just the intelligence of a computer program to dictate behavior "as if" the entity were conscious, and "p-consciousness" is what I simply call consciousness, namely the awareness of it, and the subjective experiences. I think p-consciousness (for me, for short, consciousness) doesn't influence behavior, and a-consciousness is not consciousness but the physical description of the input-response mechanism, be it a computer program, a brain or whatever.

cheers,
Patrick.
 
  • #369
vanesch said:
I think the ultimate conscious experience is the fact that pain hurts. Pain is the physiological manifestation (neurotransmitters etc...) and the behavioural consequences (trying to avoid it, and screaming if we can't avoid it); but the fact that it HURTS cannot actually be studied (except by ASKING "did it hurt?" and assuming the answer is honest ;-)

For instance, I am pretty convinced that trying to factorise big numbers causes pain to my PC (it gets hot, it takes a long time to answer, everything seems to run slowly etc...). My PC even regularly reboots in order to avoid it (or I might have a virus). But I don't think my PC FEELS the pain. Although my program prints out that it does if the number is really big...

cheers,
Patrick.

Correct... the computer probably does not feel any pain. But have you, or any of the learned members of this gathering, thought of any additional ability or abilities, at the engineering level, that could be given to this computer to enable it to feel pain? There is equally another consideration... perhaps pain may not be a requirement of an efficient or perfect state of being. Robots are now being made not only physically flexible but are also being empowered with more abilities that closely resemble those of humans.

The debate tends to be moving from Mary to zombies to computers without any readiness on anyone's part to commit to what additional abilities are needed to make these different systems structurally and functionally more efficient. And in terms of the human system, there are so many displayed abilities and functions that cannot stand the test of efficiency, let alone be grounded as fundamentally necessary.
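The kind of program Patrick describes, one whose "pain" is nothing but an output line, is easy to sketch. The pain threshold and the printed message below are illustrative assumptions:

```python
# A sketch of a factorizer that "complains" on big inputs. The complaint
# is just a branch on input size; the observable behavior is identical
# whether or not anything is felt, which is Patrick's point.

def trial_factor(n):
    """Factor n into primes by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def factor_with_complaint(n, pain_threshold=10**12):
    # "Pain report": printed purely because the input crossed a threshold.
    if n >= pain_threshold:
        print("Aw, that hurt!")
    return trial_factor(n)
```

Nothing about the printed complaint tells us anything beyond the branch that produced it, so the engineering question of what would have to be *added* for the machine to feel pain remains untouched.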
 
  • #370
Philocrat said:
Correct... the computer probably does not feel any pain. But have you, or any of the learned members of this gathering, thought of any additional ability or abilities, at the engineering level, that could be given to this computer to enable it to feel pain?


We can think of the following: I take a big metal box (say, 2 m by 2 m by 2 m), in which I put my PC, with the original program, but with the display, speakers and keyboard outside of the box. I also bribe one of the doctors of the nearby hospital so that when a hopeless case comes into the emergency room, with a broken spine, paralyzed and without a voice, he quickly does the necessary repairs to the victim, and then hands her over to me. I put her in the box, put a few electrodes on her body and connect them to my computer. Now when I ask my computer to factorize a large number, it not only prints out "Aw, that hurt" on my screen, but also connects (through a simple controller card) the mains (220V) to the electrodes on the victim's body, which is conscious. She can't move and can't scream; I don't see her, because she's inside the big box. But I'd say now that my "box computer", when it prints out "Aw, that hurt", feels pain...

cheers,
Patrick.
 
  • #371
Many contemporary philosophers have already singled out the concept of 'awareness of being aware', or 'self-awareness', as the essential component of consciousness in general. For those of you who understand computers down to the programming and engineering levels, you should know that many new generations of computers are already 'environmentally aware'. In fact, on this aspect many of these computers would outsmart or outfox humans, as far as the notion of safety or avoidance of environmental dangers is concerned. There are now so many sophisticated devices that, if you fit them onto modern computers, would cause these computers to become 'super-aware' of their external environments.

The BIG question now is:

What technical difficulties do we have to overcome, both at the detailed hardware engineering level and at the detailed schematic programming level, in order to empower computers with self-awareness?

The issue is no longer about arguing whether computers can think or be conscious. The computer is nearly human! The question should therefore concentrate on what is left to be done to make computers fully human, given that being human is thought to be the benchmark or measure of being alive. For all we know, being human may after all not be the only route to designing superbeings. For it seems as if we are currently thinking that we must first design human-like machines before setting about the important yet well-overdue project of structurally and functionally improving the physical state of human-like beings. I don't know why we think this way, but so it seems. Bad habits die hard!
 
  • #372
Philocrat said:
The BIG question now is:

What technical difficulties do we have to overcome, both at the detailed hardware engineering level and at the detailed schematic programming level, in order to empower computers with self-awareness?

The problem still stands: how would you know you've succeeded?

There's no behavioral way to know. Look at my "computer in a box". The output on the screen (the only behavioral access I have) is identical: it prints out "aw that hurt!". But in the case where the victim is connected to the mains, there is an awareness of pain in my box, and if the victim is not connected, it is a simple C-program line that printed out the message. The computer works in identical ways.
If you now replace that human victim (who we can assume consciously experiences pain) by a machine, how can we know? The behavior is identical.

cheers,
Patrick.
 
  • #373
vanesch said:
As I pointed out, I don't think that consciousness has much to do with behavior. I even envision the possibility that consciousness IN NO WAY influences our behavior, which is probably dictated by the running of a biochemical computer program. Even our thinking is not influenced by our consciousness. Our consciousness just subjectively observes what our (non-conscious) body is doing and thinking.
I acknowledge that this is an extreme viewpoint, but I consider it an interesting thought that consciousness CANNOT influence the behavior of a human being. It's just there, passively observing what's being done, said and thought. And it undergoes feelings.

Why would a biochemical computer program have written into itself an instruction to self-destruct?

How would you account for the fact that I pushed my wife out of the way of an oncoming car, almost getting killed myself? Why would I want to do that? What influenced my choice then?
 
  • #374
Rader said:
How would you account for the fact that I pushed my wife out of the way of an oncoming car, almost getting killed myself? Why would I want to do that? What influenced my choice then?

Heroic behavior can be naturally selected for, in that related groups of individuals, some of whom show "heroic behaviour" (running the risk of sacrificing themselves for the well-being of the group), have a survival advantage over a "bunch of cowards". The heroic subject of course diminishes his own chances of getting his genetic material to the next generation, but his relatives will have a higher chance of doing so.
Also, if a heroic subject *survives* his heroic deed, there is often a lot of compensation, and even a survival advantage (success with members of the opposite sex).

What makes you think that this behavior is unthinkable without consciousness ?

But the very behavioural observation of "altruistic self-destruction" cannot be the proof of consciousness.
Dogs do this too. Some security systems do that too. Even a fuse does it, inside electronic equipment. Are fuses conscious ?

cheers,
Patrick.
 
  • #375
vanesch said:
Heroic behavior can be naturally selected for, in that related groups of individuals, some of whom show "heroic behaviour" (running the risk of sacrificing themselves for the well-being of the group), have a survival advantage over a "bunch of cowards". The heroic subject of course diminishes his own chances of getting his genetic material to the next generation, but his relatives will have a higher chance of doing so.
Also, if a heroic subject *survives* his heroic deed, there is often a lot of compensation, and even a survival advantage (success with members of the opposite sex).

I would agree with you that all those factors could be computed in my brain subconsciously, but they were apparently overridden. My subconscious actions had nothing to do with calculations about the survival of the human race. What went through my head was anxiety, fear, hate, relief, love, in that order.

It seems we never get past KP to KP4 with this issue of who is conscious. What if we could guess what is in each other's heads? Bobby Fischer seemed to; what of his competitors? Why did Deep Blue beat Kasparov? Could anything be conscious, meat or machine?

What makes you think that this behavior is unthinkable without consciousness?

I am aware of being aware; that is one primary reason. The second reason I would give is that I have never seen anyone walking around doing these things who had no consciousness and was dead. I realize I have no proof that anything is either conscious or alive. This could have consequences, as you have stated in your previous post. Maybe you're right: consciousness is observing, but something is aware of being observed. I know that from my own experience. The world is weird enough now without giving consciousness the property of being able to discriminate, whereby only I, or maybe only humans, are conscious.

But the very behavioural observation for "altruistic selfdestruction" cannot be the proof of consciousness.

Nor can you or I know that.

Dogs do this too. Some security systems do that too. Even a fuse does it, inside electronic equipment. Are fuses conscious?

You know, from your posts you seem to be interested and educated enough to answer your last question and this one. Is not the basic difference between measurement of coherent states and non-coherent states the observer? HUMANS, DOGS, FUSES show the same results only if we can determine whether they observe. Does it not come down to the fact that all electromagnetic waves observe each other?
 
  • #376
Rader said:
My subconscious actions had nothing to do with calculations about the survival of the human race. What went through my head was anxiety, fear, hate, relief, love, in that order.

You misunderstood my point. If there is natural selection for a certain behavior, then that behavior is not necessarily instilled with a conscious thought of "I have to optimize my natural selection" :-) You asked how it could be that you showed altruistic heroic behavior if it weren't for a conscious decision (against all odds) to act that way. I pointed out that your "biochemistry computer" could have been programmed to behave that way by natural selection, and that such behavior is no proof of consciousness.
There are now 2 possibilities left: one is that (as I propose) your "biochemistry computer" runs its unconscious program as any other computer, and your consciousness is just passively watching and having feelings associated with it, without the possibility of intervening. The other possibility is that your consciousness is "in charge" of your brain, and influences behavior.

cheers,
Patrick.
 
  • #377
vanesch said:
You misunderstood my point. If there is natural selection for a certain behavior, then that behavior is not necessarily instilled with a conscious thought of "I have to optimize my natural selection" :-) You asked how it could be that you showed altruistic heroic behavior if it weren't for a conscious decision (against all odds) to act that way. I pointed out that your "biochemistry computer" could have been programmed to behave that way by natural selection, and that such behavior is no proof of consciousness.

I think I understand you correctly, but do you understand me? I could have let the car run over her whether or not I was programmed for this trait. I chose not to. If I were getting a divorce, maybe I would have had second thoughts about it and let the car run over her. Now do you understand my point? That takes a conscious thought.

There are now 2 possibilities left: one is that (as I propose) your "biochemistry computer" runs its unconscious program as any other computer, and your consciousness is just passively watching and having feelings associated with it, without the possibility of intervening. The other possibility is that your consciousness is "in charge" of your brain, and influences behavior.

01 - The world would be totally deterministic and there would be no choice. You're claiming, then, that a "biochemistry computer" (I take that to mean the "brain parts") would cause consciousness while consciousness, thus produced, looks on. This would be a classical explanation; and what if the "biochemistry computer" were quantum in nature?

02 - If your consciousness is "in charge" of your brain and influences behavior, then all behavior would be totally deterministic only if there were a classical explanation of the brain. If the brain were quantum in nature, then it would seem more understandable why we make choices.
 
  • #378
Rader said:
I think I understand you correctly, but do you understand me? I could have let the car run over her whether or not I was programmed for this trait. I chose not to. If I were getting a divorce, maybe I would have had second thoughts about it and let the car run over her. Now do you understand my point? That takes a conscious thought.

What is a conscious thought? All of the brain and biochemical activity required for you to "think" about this decision can, in principle, be completely accounted for. None of these activities have anything to do with consciousness. What Vanesch is saying is that there is no way for you to know whether your consciousness is actually participating in the process or whether it is just experiencing the physical activities that participate in the process. The "conscious thought" you're referencing can be completely explained using physical processes of the brain, none of which are associated with consciousness. This is why there is a 'hard problem'.
 
  • #379
vanesch said:
We can think of the following: I take a big metal box (say, 2 m by 2 m by 2 m), in which I put my PC, with the original program, but with the display, speakers and keyboard outside of the box. I also bribe one of the doctors of the nearby hospital so that when a hopeless case comes into the emergency room, with a broken spine, paralyzed and without a voice, he quickly does the necessary repairs to the victim, and then hands her over to me. I put her in the box, put a few electrodes on her body and connect them to my computer. Now when I ask my computer to factorize a large number, it not only prints out "Aw, that hurt" on my screen, but also connects (through a simple controller card) the mains (220V) to the electrodes on the victim's body, which is conscious. She can't move and can't scream; I don't see her, because she's inside the big box. But I'd say now that my "box computer", when it prints out "Aw, that hurt", feels pain...

cheers,
Patrick.

The scenario that you are describing here may very well reflect the current state of our progress at the design, engineering and programming levels. True, this may very well be so, but it still doesn't alter the fact that we need to clearly state and classify the notions of (1) intelligence, (2) thinking, and (3) consciousness. For example, given that we knew what (1), (2) or (3) clearly means, we would need to take stock of all the things that humans can do that computers cannot do (and vice versa), and the things that both can equally do, under (1), (2) or (3). All that I have seen so far is people arguing away in a point-scoring manner without much attention to these questions. This problem is captured much more clearly in my next posting below.

The state that you are describing is admittedly problematic, but I am saying that we need to move away from this level of sentiment and take hard stock of what is going on at the detailed engineering and programming levels. As to the puzzle of why we want to first replicate the human-like intelligence, or thinking or consciousness in machines before thinking about any form of progress in the subject, well that's another matter. I leave that to your imagination.
 
  • #380
vanesch said:
The problem still stands: how would you know you've succeeded?

There's no behavioral way to know. Look at my "computer in a box". The output on the screen (the only behavioral access I have) is identical: it prints out "aw that hurt!". But in the case where the victim is connected to the mains, there is an awareness of pain in my box, and if the victim is not connected, it is a simple C-program line that printed out the message. The computer works in identical ways.
If you now replace that human victim (who we can assume consciously experiences pain) by a machine, how can we know? The behavior is identical.

cheers,
Patrick.

How could we not know? Yes, I agree with you, behaviourism does have some drawbacks, but it never completely undermines successful existence. In terms of humans, we are naturally lazy and reluctant about taking control of things on our causal and relational pathways. The claim that we cannot intervene in our own nature and make an effort to re-engineer and improve our state of being is not only wrong but fundamentally dangerous. We do know, and have always known, when we succeed in the public realm, even behaviourally. If we could not do this, we would probably not be here today. Perhaps the measure is only in degrees, or minimal, but at least we are still here. By this very same token, when we do succeed in replicating human-like intelligence in other non-human systems, I personally see nothing that would stop us from knowing it. In fact, this is all the more reason why we must have the courage to take control and use the right and clear approach in dealing with this issue.
 
  • #381
The Turing Universal Machine and Consciousness

The dispute is not, and has never been, about whether a machine can think or act intelligently, because the original Turing Machine had all the necessary ingredients to do so. Rather, it's wholly about whether thinking or acting intelligently is a conscious act. The notion of awareness (introspective or extrospective) ought already to have been captured by the notion of thinking or intelligence, given that we knew what this meant in the first place. I am saying that it is more than well overdue for all the interdisciplinary researchers to commence the process of schematically yet quite naturally coming to a concrete agreement on this subject. The agreement that I am referring to here could be captured in the following schema:

SCHEMA I

(1) A conscious act is an intelligent act
(2) All intelligent acts are conscious acts
(3) Anything that can produce an intelligent act is conscious
(4) Computers can produce intelligent acts
-------------------------------------------------------------------------------
Therefore, computers are conscious

Immediately after this argument, the next most important question to ask is this:

What then constitutes an intelligent act?

In an honest and genuine response to this question, the researchers on this subject should then move on to create a ‘reference table’ of all the things that count as intelligent acts.

This argument may equivalently be stated as:

(1) A conscious act is an act of thinking
(2) All acts of thinking are conscious acts
(3) Anything that can think is conscious
(4) Computers can think
-------------------------------------------------------------------------------------
Therefore, computers are conscious

You are then required to state clearly:

What constitutes thinking?

The researcher must then create a reference table of all the things classed under thinking.

SCHEMA II

On the other hand, if it turns out that there are some thinking or intelligent acts that are conscious and some that are not, the schema should take the form:


(1) Some acts of thinking are conscious acts
(2) Thinking is conscious if you are aware not only of what you are thinking about but also of the fact that you are thinking
(3) Anything that can do this is conscious
(4) Computers have some thinking acts that are conscious
-------------------------------------------------------------------------------------
Therefore, computers are conscious

The researchers who opt for this alternative schema must classify thinking acts or intelligent acts into (1) those that are conscious and (2) those that are not conscious. Perhaps there may be a third or more schemas to prove otherwise, but I am going to leave it at this point for now.
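The deductive core of Schema I can be made precise. A minimal sketch in Lean, where the predicates `Intelligent` and `Conscious` and the constant `computer` are hypothetical placeholders standing in for premises (3) and (4), not established facts:

```lean
-- Schema I as a formal deduction: the premises are assumed, not proven.
variable (Agent : Type)
variable (Intelligent Conscious : Agent → Prop)
variable (computer : Agent)

-- h3: premise (3), anything that produces an intelligent act is conscious.
-- h4: premise (4), the computer produces intelligent acts.
example (h3 : ∀ a : Agent, Intelligent a → Conscious a)
        (h4 : Intelligent computer) : Conscious computer :=
  h3 computer h4
```

The inference itself is trivially valid, which shows the real dispute lies entirely in whether the premises hold, exactly the classification work the 'reference table' is meant to do.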


NOTE: The implication of the Universal Turing Machine is such that it does not presuppose consciousness; therefore, any schema that a researcher may opt for still has to decide on the relevance or non-relevance of consciousness. Even if he or she successfully avoids the issue of consciousness at the level of engineering or re-engineering to improve the intelligent system in question, he or she may not avoid it at the level of structural and functional comparison of that system to the human system. Researchers must in the end either accept it as relevant or reject it as not.
 
  • #382
Rader said:
I think I understand you correctly, but do you understand me? I could have let the car run over her whether I was programmed for this trait or not. I chose not to. If I was getting a divorce, maybe I would have had second thoughts about it and let the car run over her. Now do you understand my point? That takes a conscious thought.
How do you know whether your consciousness was MAKING the decision and your body was acting on it, or whether your body decided to act that way and your consciousness was merely feeling all right with that decision (without any means of intervening) and "thought" it had made it?

01-The world would be totally deterministic and there would be no choice. You're claiming, then, that a "biochemistry computer", which I take to mean the "brain parts", would cause consciousness while consciousness, once produced, looks on. This would be a classical explanation; and what if the "biochemistry computer" was quantum in nature?

02-If your consciousness is "in charge" of your brain and influences behavior, then all behavior would be totally deterministic only if there were a classical explanation of the brain. If the brain were quantum in nature, it would seem more understandable why we make choices.

This is indeed, more or less, the point, although I do not need the idea of determinism: you can have randomly generated phenomena without conscious influence. I also tend to think - but I'm very careful here - that quantum theory might have something to say about the issue. But I think we are still very far from finding out; it is the "open door" in current physics to consciousness.

Our mutual understanding of our viewpoints is converging, I think.

cheers,
patrick.
 
  • #383
Philocrat said:
(2) All intelligent acts are conscious acts

I do not agree. I do not see the link between intelligence (the ability to solve difficult problems) and consciousness.


The researchers who opt for this alternative schema must classify thinking acts or intelligent acts into (1) those that are conscious and (2) those that are not conscious.

Hehe, yes, they have to solve the hard problem :-)
Because it is not the problem category, nor the problem solving strategy, that will indicate this. So what remains of the intelligent act on which we base the separation ? What will be the criterion ? Also, assuming we're talking about a Turing machine, do you mean it is the _software_ that is conscious ? Independent of the machine on which it runs ? When it is written on a CD ?
I have a hard time believing that a Turing machine, no matter how complex, can be conscious. But I agree that I cannot prove or disprove this.

But we should avoid the confusion between intelligence and consciousness here. Now it might very well be that certain levels of intelligence are only attainable if the entity is conscious. But personally, I do not see a link, especially if consciousness is just sitting there passively watching. You could just as well look at power consumption and say that if you reach the density of power consumption of a human brain, the machine is conscious, and then jump into the research on power resistors. I think that "intelligence" (the ability to solve difficult problems) is a property just as power consumption, when related to consciousness.

cheers,
Patrick.
 
  • #384
Fliption said:
What is a conscious thought?

Cognitive awareness. http://www.hedweb.com/bgcharlton/awconlang.html

All of the brain and biochemical activity required for you to "think" about this decision can, in principle, be completely accounted for. None of these activities have anything to do with consciousness. What Vanesch is saying is that there is no way for you to know whether your consciousness is actually participating in the process or whether it is just experiencing the physical activities that participate in the process. The "conscious thought" you're referencing can be completely explained using physical processes of the brain, none of which are associated with consciousness. This is why there is a 'hard problem'.

Fliption, actually there seems to be evidence of both. When something is born into existence, it does not appear to be conscious until such time as it can say "I am conscious". If this is somehow explainable some day, it will eliminate the "hard problem": it would explain what is conscious and what physical states determine how much something is conscious. Consciousness would have to be a fundamental property of nature.
 
  • #385
vanesch said:
How do you know whether your consciousness was MAKING the decision and your body was acting on it, or whether your body decided to act that way and your consciousness was merely feeling all right with that decision (without any means of intervening) and "thought" it had made it?

Good question. The only way for me to answer that is that my consciousness is aware of being aware and has evolved to an understanding of an order of the way the world ought to be. Sometimes my consciousness acts right but my body says no. Sometimes my body acts right when my consciousness knows better. So it appears that consciousness is watching while we make the decision how to act. :wink:

This is indeed, more or less, the point, although I do not need the idea of determinism: you can have randomly generated phenomena without conscious influence. I also tend to think - but I'm very careful here - that quantum theory might have something to say about the issue. But I think we are still very far from finding out; it is the "open door" in current physics to consciousness.
Our mutual understanding of our viewpoints is converging, I think.

That sometimes happens, to our later disappointment, when it turns out that nobody holds the same view.
 
  • #386
Rader said:

I've read (part of) the article, and I think it completely misses the point. Not that I say the scientific part of the article is wrong, but - unless I misunderstood it - in my opinion it doesn't address the issue of consciousness as it has been addressed here on this forum. It is a technical description of brain functions.


Some quotes:
"On the other hand consciousness is an ordinary fact of life - babies are born without it and develop it over the first few years of life."

"The question of consciousness can therefore be approached by considering the general phenomenon of awareness, of which consciousness is one particular example. "

"And awareness has a quite exact definition: it is the ability selectively to direct attention to specific aspects of the environment, and to be able cognitively to manipulate these aspects over a more prolonged timescale than usual cognitive processing will allow. To hold in mind selected aspects of the perceptual landscape. Technically, awareness is attention plus working memory - ie. the ability to attend selectively among a range of perceived stimuli and a short term memory store into which several of these attended items can be 'loaded', held simultaneously, and combined. "
...
"While awareness is found in animals right across the animal kingdom; consciousness is of much more limited distribution. I suggest that consciousness is probably confined to a small number of recently-evolved social animals such as the great ape lineage - especially common chimpanzees and bonobos - and perhaps a few other recently-evolved social mammals such as elephants and dolphins."


"Consciousness arises when body state information becomes accessible to awareness. "
----

My comments:
Clearly, awareness as defined above has nothing to do with what has been meant here by consciousness. By that definition, I just need something that can hold information in memory for a rather long time, access it selectively, and select amongst several stimuli.

In that case, I can make a machine with "awareness" using a PC, and, say, a webcam on a motor !

Moreover, if I regularly write information about power consumption, memory and CPU usage, temperature, fan speed etc... into the working memory of my PC, it is now conscious !

Come on !
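Vanesch's reductio can be made concrete: the quoted operational definition ("attention plus working memory") is trivially implementable in a few lines. A minimal sketch in Python, with entirely illustrative names:

```python
from collections import deque

class AwareMachine:
    """'Awareness' per the quoted definition: the ability to attend
    selectively among stimuli plus a short-term working memory store.
    The class and method names here are illustrative only."""

    def __init__(self, capacity=7):
        # Bounded short-term store; old items fall out as new ones load.
        self.working_memory = deque(maxlen=capacity)

    def attend(self, stimuli, interesting):
        # Selectively direct attention to specific aspects of the input...
        selected = [s for s in stimuli if interesting(s)]
        # ...and hold the attended items simultaneously in working memory.
        self.working_memory.extend(selected)
        return list(self.working_memory)

pc = AwareMachine()
readings = ["cpu_temp=80C", "fan=2000rpm", "cat_on_keyboard"]
print(pc.attend(readings, lambda s: "temp" in s or "fan" in s))
# -> ['cpu_temp=80C', 'fan=2000rpm']
```

That such a trivial program satisfies the definition is exactly his point: the operational definition captures something useful, but not what this thread has been calling consciousness.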
 
  • #387
vanesch said:
I do not agree. I do not see the link between intelligence (the ability to solve difficult problems) and consciousness.

Does it matter? Or are you implying that when you are solving complex problems (given that this is all the term 'intelligence' means) you are not conscious of the complex problem you are solving, let alone of the fact that you are actually doing so?


Hehe, yes, they have to solve the hard problem :-)
Because it is not the problem category, nor the problem solving strategy, that will indicate this. So what remains of the intelligent act on which we base the separation ? What will be the criterion ? Also, assuming we're talking about a Turing machine, do you mean it is the _software_ that is conscious ? Independent of the machine on which it runs ? When it is written on a CD ?

The schemas that I am suggesting make no claims about anything. They are a mere guide setting the stage for further arguments. Or you could say it is an invitation for those going around in circles to commit themselves and begin landing the argument on safer ground. Either we accept that consciousness has some accountable relationship with intelligent or thinking acts, or there is no such relationship. Either way, we still have to say what counts as intelligence or thinking, and subsequently state whether systems other than the human system are capable of possessing such an ability. I am inviting those involved in this subject to come clean on this. We cannot just let the argument loose and let it run without taking a concrete stand and taking stock. It is therefore irrelevant whether such an ability (intelligence) is replicated as software, as a hardwired system, or as a combination of several kinds.

I have a hard time believing that a Turing machine, no matter how complex, can be conscious. But I agree that I cannot prove or disprove this.

As I have pointed out above, the original Turing Machine does not presuppose consciousness...and it may not rule it out either. That is why I am calling for an agreement on the whole subject. We cannot just debate it away, independently and non-directionally, without eventually agreeing on something. Either consciousness is part of an intelligent system or it simply is not.

But we should avoid the confusion between intelligence and consciousness here. Now it might very well be that certain levels of intelligence are only attainable if the entity is conscious. But personally, I do not see a link, especially if consciousness is just sitting there passively watching. You could just as well look at power consumption and say that if you reach the density of power consumption of a human brain, the machine is conscious, and then jump into the research on power resistors. I think that "intelligence" (the ability to solve difficult problems) is a property just as power consumption, when related to consciousness.

Well, this drags you into an interactionist nightmare!..."especially if consciousness is just sitting there passively watching". It seems as if you are inviting me to think that when aspects of physical states or events approach critical functional states, mysterious immaterial entities manifest to interplay. Right? Critical functional states of the physical material world do not necessarily presuppose non-physicality, nor non-existence, nor any independence from their physical sources. I do not want to go down that route of the so-called 'existence and independence' of immaterial, non-physical, or any other mysterious kinds of states. That route is just too messy and delusory for me.
 
  • #388
Philocrat said:
Either we accept that consciousness has some accountable relationship with intelligent or thinking acts or that there is no such relationship. Either way, we still have to say what counts as intelligence or thinking and subsequently state whether other systems, other than a human system, are capable of possessing such an ability.

Intelligence (defined as the ability to solve "difficult" problems) has, to me, no a priori relationship with consciousness. When I solve a difficult integral with paper and pencil, I absolutely do not have the feeling of going through an "algorithmic mantra", but creatively find substitution rules etc... to solve it. It is an intellectual challenge like any other. Well, if I type the expression into Mathematica on my PC, it solves the same problem. I think that if you had told someone in the 19th century that a machine could solve an integral, they would have classified that as an "intelligent act". So I'm pretty convinced that no matter where you put the bar for "human intelligence", a computer program will pass it, now or in the near future.
So I think the answer to your request above is clearly that "other systems than the human system can possess this ability".
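Patrick's point can be illustrated even without Mathematica: a toy table-driven integrator already solves integrals "mechanically", by rule lookup rather than by anything resembling insight. A minimal sketch in Python (the rule table is illustrative; a real computer algebra system applies the same idea at vastly larger scale):

```python
# A toy "machine that solves integrals": antiderivatives found by pure
# table lookup and one mechanical rewrite rule -- no creativity involved.

RULES = {
    "sin(x)": "-cos(x)",
    "cos(x)": "sin(x)",
    "exp(x)": "exp(x)",
    "1/x": "log(x)",
}

def integrate(term):
    """Return an antiderivative of a single term, as a string."""
    if term in RULES:
        return RULES[term]
    if term.startswith("x**"):              # power rule: x**n -> x**(n+1)/(n+1)
        n = int(term[3:])
        return f"x**{n + 1}/{n + 1}"
    raise ValueError(f"no rule for {term}")

print(integrate("x**2"))    # -> x**3/3
print(integrate("sin(x)"))  # -> -cos(x)
```

Whether such rule application, scaled up, ever amounts to consciousness is exactly the question the thread leaves open.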

But it doesn't indicate anything at all about consciousness. I think that from an engineering point of view, you don't give a damn about consciousness. You want intelligence! You want your machines to behave in certain ways. It doesn't matter if they behave "as if they were conscious" or if they "are conscious". My point here is that we don't know how to find out ! So we could ignore the problem, and we don't need an IEEE standard for consciousness. But a moral issue comes up: if machines ARE conscious, should they have rights ? Is the concept of "torturing a conscious being" applicable ? Nobody in his right mind would be shocked by "torturing an intelligent being", because the concept doesn't make sense.

cheers,
Patrick.
 
  • #389
Philocrat said:
It seems as if you are inviting me to think that when aspects of physical states or events approache critical functional states mysterious immaterial entities manifest to interplay. Right?

I do not necessarily claim that. Call it an "emerging property", in the same way phonons are an emerging property of the solid state. My hope is that it is somehow part of physics, but that is not certain. But the problem is not whether or not it IS physics; the problem is that we have no way of finding out, once we realize that consciousness has no necessary link to behavior.
In ALL "solutions" I've seen proposed, people end up _redefining_ consciousness as something else, in order to have an operational definition.
Computer science people who work on artificial intelligence usually redefine it as 1) intelligence or 2) pure behaviorism, usually as a social intelligence (the Turing test, for instance).
If you read the article by the psychiatrist, to him, consciousness is "working memory that has access to internal body information". If you give that to a computer engineer, he quickly solders you some SRAM and a few sensors on a PCB :-)
Moreover, these people do useful work with their definition, because the concept they define IS interesting. But they miss the original meaning of consciousness. I think the philosophical problem stands there, untouched.

cheers,
Patrick.
 
  • #390
vanesch said:
Intelligence (defined as the ability to solve "difficult" problems) has, to me, no a priori relationship with consciousness. When I solve a difficult integral with paper and pencil, I absolutely do not have the feeling of going through an "algorithmic mantra", but creatively find substitution rules etc... to solve it. It is an intellectual challenge like any other. Well, if I type the expression into Mathematica on my PC, it solves the same problem. I think that if you had told someone in the 19th century that a machine could solve an integral, they would have classified that as an "intelligent act". So I'm pretty convinced that no matter where you put the bar for "human intelligence", a computer program will pass it, now or in the near future.
So I think the answer to your request above is clearly that "other systems than the human system can possess this ability".

But it doesn't indicate anything at all about consciousness. I think that from an engineering point of view, you don't give a damn about consciousness. You want intelligence! You want your machines to behave in certain ways. It doesn't matter if they behave "as if they were conscious" or if they "are conscious". My point here is that we don't know how to find out ! So we could ignore the problem, and we don't need an IEEE standard for consciousness. But a moral issue comes up: if machines ARE conscious, should they have rights ? Is the concept of "torturing a conscious being" applicable ? Nobody in his right mind would be shocked by "torturing an intelligent being", because the concept doesn't make sense.

cheers,
Patrick.

The answers to the moral questions are already automatically expected. It is just a matter of when, not if. There will be no surprises. If many intelligent groups within the human system have fought for their rights in the past and succeeded, what would stop computers or other systems that possess genuine consciousness, or something equivalent to it, from doing the same? Human beings have always had prejudices of this kind because of fear or a lack of understanding of change. Well, that's normal.

Yes, in a way it doesn't make sense, and what is even worse about this whole episode is why we want to replicate human intelligence or thinking or consciousness, or whatever you wish to call it, in other systems first, before setting about the pressing need of correcting the structural and functional errors and inadequacies in ourselves. Why build these imitation machines first, before re-engineering our own originally erroneous reality? This, for me, is the puzzling bit and the most difficult one to comprehend.
 
