Weak and Strong Emergence, what is it?

  • Thread starter: Q_Goest
  • Tags: emergence, weak

Thread summary:
Weak emergence refers to phenomena that can be understood through the interactions of micro-level components, where local causes lead to local effects, as defined by Bedau. In contrast, strong emergence describes high-level phenomena that arise from low-level interactions but cannot be deduced from them, suggesting the need for additional fundamental laws. The discussion highlights the challenges in identifying strongly emergent phenomena and the implications of downward causation, where higher-level effects influence lower-level processes. The complexity of consciousness and subjective experience is raised as a potential example of strong emergence, questioning whether such phenomena can arise from purely deterministic systems. Ultimately, the conversation seeks to clarify the definitions and contexts of emergence in both philosophical and scientific frameworks.
  • #61
Hi Tournesol,
I already have all the information. Agents within the GoL cannot
have information I do not have.
I think that's an excellent point. But a computationalist would suggest the GoL is experiencing the information, and you are not. So the information must be experiencing itself. Would you agree this leads to panpsychism? What other conclusions could you come to given the assumption that we might have the same information, but no way to know if that information is experiencing anything? Are there any other ways to resolve this paradox and still maintain computationalism other than what Chalmers has come to?
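To make Tournesol's premise concrete, here is a rough sketch (Python; purely my own illustration, and the glider pattern and names are just examples) of why an outside observer holds every bit of information in a GoL universe: the whole "world" is nothing but the grid, and stepping the rules forward exposes every fact about it.

```python
from collections import Counter

def step(live_cells):
    """Advance Conway's Game of Life by one generation.
    live_cells is a set of (x, y) coordinates of live cells."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbours,
    # or if it is currently alive and has exactly 2 live neighbours.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# A glider: the complete state of this "universe" is just this set, so an
# outside observer who holds it lacks no information about anything inside it.
world = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for generation in range(4):
    print(generation, sorted(world))
    world = step(world)
```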

Panpsychism is an explicitly metaphysical position.
Regardless of what position it is, would you agree or disagree that computationalism leads to panpsychism if we have no way of detecting whether a machine is conscious or not?
 
  • #62
Q_Goest said:
To say there are undeducible truths means that under certain assumptions, the something that we call subjective experience can't be deduced from what we can measure about the device allegedly harboring this experience.
To say that there is an undeducible truth behind a question implies (imho) that the question has an answer, but we simply cannot know what that answer is. In the case of questions such as "what is the largest natural number?" there IS no answer, because the question is meaningless. Thus there is no inaccessible "truth" behind the question, and claiming there is an "undeducible truth" in questions such as this is incorrect.

Q_Goest said:
I think you've misconstrued this point. I don't see the point in suggesting that asking questions about how we can possibly detect conscious experience on some level is meaningless.
You seem to misunderstand the point I am making. I am NOT saying that "questions about how we can possibly detect conscious experience on some level are meaningless", I am saying that questions such as "what would moving finger's phenomenal consciousness be like if Q_Goest were to experience that phenomenal consciousness?" are meaningless.

Q_Goest said:
Can we deduce if a computer is having an experience simply by observing the actions of its components? The answer is yes if you accept strong emergence per Chalmers, no if we use most other assumptions.
First define what you mean by "having an experience" (this is what I asked in my previous reply to you) :
moving finger said:
Before we can discuss possible answers to the question "is the computer having any experience at all?" you will first need to define precisely what you mean by "having an experience".
….then explain how you would go about confirming that the computer is “having an experience” assuming we accept Chalmers' version of strong emergence.

Q_Goest said:
Depending on what assumptions you make, you may come to the conclusion there is no way to know from examining the actions of a computer, if it is experiencing anything or not. Weak emergence says we can't. Strong emergence, depending on how you define it, either says you can (Chalmers) or no you can't (other, less well defined concepts of strong emergence).
Ditto above.

Q_Goest said:
I believe if you follow down this road of not being able to detect from examining the states of the computer, that the machine is experiencing anything, then we are led to concluding panpsychism is true.
Can you show how you arrive at this conclusion (because I believe your logic is faulty, but I cannot show you where you have erred unless you explain your premises and inferences in clear logical steps)?

Q_Goest said:
We can't say one thing is cognizant and another is not if we have no way of proving it. Note also this statement suggests the Turing test is not acceptable as proof, which would be a good discussion for a separate thread.
Inability to prove whether an agent is cognizant or not is NOT proof that cognizance does not exist - it is simply an indication that we cannot detect it.

Q_Goest said:
How do you define "strong emergence" then?
I would use a definition similar to the one you attribute to Chalmers in the OP :

A phenomenon P* is strongly emergent with respect to another set of phenomena P when P* arises from (ie is supervenient on) P, but some properties of P* are not deducible (even in principle) simply from knowledge of the properties of P.

Best Regards
 
  • #63
MF said: I am NOT saying that "questions about how we can possibly detect conscious experience on some level are meaningless", I am saying that questions such as "what would moving finger's phenomenal consciousness be like if Q_Goest were to experience that phenomenal consciousness?" are meaningless.
That's fine, let's get away from the discussion about "what would moving finger's phenomenal consciousness be like if Q_Goest were to experience that phenomenal consciousness?" and instead discuss how we can possibly detect conscious experience on some level.

MF said: First define what you mean by "having an experience"
By "having an experience" or when we say "consciousness" I'm referring to any of the many subjective experiences such as the phenomenon of unity, or of self awareness, or any experience that occurs in a human, but not in a rock for example.

MF said: ….then explain how you would go about confirming that the computer is “having an experience” assuming we accept Chalmers' version of strong emergence.
Chalmers is suggesting there are "new physical laws" which need to be discovered. For example, stone age man didn't understand fire to the degree we do. They couldn't understand how molecules interacted. They had new physical laws to uncover, so for stone age man, fire was a phenomenon which they had no way of explaining. Similarly, Chalmers is suggesting that we need to uncover new physical laws to explain the phenomena of consciousness. If we knew what these physical laws were, we'd then be able to apply them to anything and deduce if that something was conscious or not. For example, stone age men may have believed the sun was a ball of fire. We now know that not to be the case because we know more about the physical laws that govern fire. The two look the same, the sun and fire, but they aren't the same phenomenon. Similarly, Chalmers is suggesting that if we had physical laws to apply to conscious phenomena, we might be able to deduce if something is conscious or not.

Q said: I believe if you follow down this road of not being able to detect from examining the states of the computer, that the machine is experiencing anything, then we are led to concluding panpsychism is true.
MF said: Can you show how you arrive at this conclusion (because I believe your logic is faulty, but I cannot show you where you have erred unless you explain your premises and inferences in clear logical steps)?
If we assume that we can't know from examining something if it is conscious or not, then given there are a myriad of systems which go from the most simple to the most complex, all of which are equally capable of manipulating information (or performing calculations if you like that phrase better), we have no criteria for determining which of these systems is conscious and which are not. If we have no criteria on which to base a judgment on whether or not something is conscious, and we still claim the more complex ones are conscious, then we must also claim the less complex ones harbor some amount of this phenomenon as well. Conclusion is that every 'thing' is cognizant to some degree or other.

Any small system can be thought of as part of a larger system. A computer as we know it is simply a large number of switches. It could equally be made from water buckets, valves and pipes, or all the Chinese people shaking hands. Here's another example - A coffee pot is part of the galley, and the galley has mechanisms to manipulate coffee pots such as water spigots, electric switches, and people. The galley is part of an aircraft, and the aircraft has mechanisms to manipulate various parts of it. The aircraft is part of an airport, which has mechanisms to manipulate various parts of it, the airport is just part of a larger structure. Each level is shown to have parts which are being manipulated by the whole. Thus, at some level, we have a highly complex system of interactions.

A computer on the other hand is not really "computing" per se, it is merely manipulating parts of itself. We have granted an interpretation to the action of a switch or the filling of a bucket of water, or the lifting of a flag. In each case, we have granted that symbolic gesture some meaning. We have granted this symbolic gesture a computational meaning. We've said a full bucket of water means a 1 and an empty one means a 0. A filled coffee pot could mean a 1, a valve position could mean a 0, fluid in a hydraulic circuit above 1000 psi could mean a 1, the flaps on an aircraft being down could mean a 0, a person in location A can mean 1, etc… We can grant a meaning to anything; it does not need to be a computer switch, because even a computer switch does not truly mean a number. We simply grant it the right to be a number. We say it's a number because it is manipulated in some way that represents a number to us. We've assigned that manipulation a numerical value, but we could equally assign any action a numerical value.
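To illustrate how arbitrary such an assignment is, here is a small sketch (Python; the readings and the two encodings are hypothetical examples of my own, not anyone's formal argument). The same physical history yields two different bit strings under two different conventions we are free to grant:

```python
# One physical history: each entry is (coffee_pot_full, hydraulic_psi, flaps_down)
# sampled at one instant. Nothing here is "about" numbers until we say so.
history = [
    (True, 1200, False),
    (False, 800, True),
    (True, 950, True),
]

def encoding_a(reading):
    """Convention A: a full coffee pot means 1, otherwise 0."""
    pot_full, _, _ = reading
    return 1 if pot_full else 0

def encoding_b(reading):
    """Convention B: hydraulic pressure above 1000 psi means 1, otherwise 0."""
    _, psi, _ = reading
    return 1 if psi > 1000 else 0

print("under convention A:", [encoding_a(r) for r in history])  # [1, 0, 1]
print("under convention B:", [encoding_b(r) for r in history])  # [1, 0, 0]
# Neither bit string is intrinsic to the hardware; each is a meaning we grant.
```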

The aircraft systems similarly are highly dependent on what is causing them to be manipulated. A coffee pot doesn't get filled unless a person turns the water on. The hydraulic circuit doesn't reach 1000 psi unless some valve is in the proper position. Each action can be 'mapped' to its cause and effect. And each of these causes and effects are inter-related. They are not independent of each other. In fact, they are SO inter-related that, from a classical perspective, the interactions are every bit as deterministic as the switches in a computer. So we can't say a coffee pot on an aircraft is independent of the airport, because it can't be turned on or even be there unless there are causal relationships which provide for the coffee pot to be in the aircraft and the aircraft at the airport. From the classical level, these are all deterministic, causal relationships which we could assign numerical values to just like a computer. The two systems, the world's aircraft transportation network and an allegedly conscious computer, are equivalent forms of computational networks, except that the aircraft transportation network has vastly more mass and far more states, so vastly more computational power is needed to describe it; the transportation network is therefore much more complex computationally than an allegedly conscious computer.

So there is no good definition of computation. A computer is not computing, it is manipulating symbols. The airline industry is not computing, it is manipulating symbols. Both are doing similar things, both can have their actions mapped to numbers and we can say these things are calculating, but either neither of these is calculating or both are calculating. We can't say one is calculating and the other is not, because they are both manipulating symbols through causal relationships.

If we say a system is manipulating symbols through causal relationships, and some of these systems are conscious, then we must grant that all of them are potentially conscious. Thus panpsychism. I believe Searle and Putnam have then taken this concept a step further and argued that we can map any FAS to any system, or something along those lines. Long story short, this additional argument shows that any allegedly conscious computer is equivalent to any other system.

I don't think anyone can argue that computers are not symbol manipulators. We can argue that we can map any actions of any system into any numbers we want, and thus suggest that any given system is manipulating symbols along the lines of a computational device, so everything must be assumed to be computational. If everything is computational, we can't simply say "this subsystem here is conscious but this one isn't" unless there is a distinction that can be made. The problem with computationalism right now is that it lacks any significant and meaningful distinction.

The question then for a computationalist is "How do you define a computation?", and "What is the system needed to implement that computation such that the system can create the phenomena of consciousness?". Chalmers sidesteps the issue by suggesting there are other physical laws which might answer these problems more succinctly. Saying consciousness is created by "strong emergence" without describing how that type of emergence physically differs from weak emergence leads to panpsychism.
 
  • #64
moving finger said:
I agree that 3rd person science often approximates to an objective unbiased perspective, but it does not follow that it is always objective.

Yes it does: "objective" means "unbiased".

Measurements in QM may be fundamentally subjective.

There is no evidence that they are.


Science assumes 3rd person perspective always entails objectivity – the results of QM suggest this is not always the case.

What results?

Scientific objectivity is an assumption which may not always be valid.

I have asked previously how it is that you know there are no internal properties of the GoL which are inaccessible from the 3rd person perspective (ie from outside the GoL).


As I have explained, I know that because I have complete information about the GoL.

Can you defend your claim, or is this belief of yours simply an article of faith? Still waiting for an answer.

The actual situation is that you have offered no support for your extraordinary claim that the pixels in "Life" have qualia.


With respect, Tournesol, I have pointed out your misconception about physicalism before, and it seems you simply ignore your error. We are once again going round and round in circles, with apparently no communication between us. You are entitled to your own private definition of physicalism of course, but there isn’t much point continuing the discussion if this is the case, since we are not using the same language.

Chalmers uses "my" definition. perphapsit is your that is private...
 
  • #65
moving finger said:
Invalid inference. The existence of the law of gravity does not reduce the number of variables we need to specify the instantaneous positions and momenta of an N-particle system.

Best Regards

It certainly can do. Allow the particles to fall towards the centre of gravity and they will all end up with the same position. Laws are all about redundancy.
 
  • #66
It might have an answer which we know, but are nonetheless unable to deduce in any mathematical or logical way. That appears to be the case with qualia.

Before we can agree on the answer, we need to understand just what is the question you are trying to answer?

What-it-seems-like questions.

To put it another way, if we should reject the undeducible tout court, we should reject strong emergence (see OP). Yet you claim to be arguing in favour of strong emergence.

I don't "reject the undeducible" - I reject the notion that we can answer meaningless questions.

Some of the questions you reject as meaningless are
answerable -- and therefore meaningful.

"What would moving finger's conscious experience be like from Q_Goest's perspective?" is such a meaningless question. Even Q_Goest now seems to agree with this (which is why the discussion has moved on from "what is the computer experiencing?" to "is the computer experiencing anything?")

The problem with that question is not a problem of perspective alone. In a universe of purely geometrical perspective, it would be quite possible to predict someone else's observations.

We might not exist in such a universe, but that is a fact over and above the existence of observers and (literal) perspectives.
 
  • #67
Q_Goest said:
I think that's an excellent point. But a computationalist would suggest the GoL is experiencing the information, and you are not.

Computationalists aren't required to believe any programme is
conscious.

The deeper problem is that any programme is entirely knowable, from the outside, in principle, which is incompatible with the idea of qualia as intrinsically unknowable from the outside.

So the information must be experiencing itself. Would you agree this leads to panpsychism? What other conclusions could you come to given the assumption that we might have the same information, but no way to know if that information is experiencing anything? Are there any other ways to resolve this paradox and still maintain computationalism other than what Chalmers has come to?

I can't think of any. It is all something of
an argument against computationalism IMO.

Regardless of what position it is, would you agree or disagree that computationalism leads to panpsychism if we have no way of detecting whether a machine is conscious or not?

We can detect whether a machine reports on its internal states and so on. It is only phenomenal consciousness that is problematical. Chalmers' point is that where you have the one (functional/behavioural consciousness), you can expect to have p-consciousness.
 
  • #68
Q_Goest said:
That's fine, let's get away from the discussion about "what would moving finger's phenomenal consciousness be like if Q_Goest were to experience that phenomenal consciousness?" and instead discuss how we can possibly detect conscious experience on some level.
I suggested that already a few posts ago.

Q_Goest said:
By "having an experience" or when we say "consciousness" I'm referring to any of the many subjective experiences such as the phenomenon of unity, or of self awareness, or any experience that occurs in a human, but not in a rock for example.
How do you know whether it occurs in a rock or not? (I am not suggesting it does, I am asking how you can substantiate your claim that it does not)

Q_Goest said:
Chalmers is suggesting there are "new physical laws" which need to be discovered.
I understand that, and I am saying that the physical properties of phenomenal consciousness which you call "having an experience" are by definition inaccessible to 3rd person investigation, hence asking "what are the laws which describe these properties" is a meaningless question.

Q_Goest said:
For example, stone age man didn't understand fire to the degree we do. They couldn't understand how molecules interacted. They had new physical laws to uncover, so for stone age man, fire was a phenomenon which they had no way of explaining. Similarly, Chalmers is suggesting that we need to uncover new physical laws to explain the phenomena of consciousness.
It cannot be done, because phenomenal consciousness is a 1st person subjective experience; it is a category error to think that the "laws" which describe the 1st person perspective properties of subjective phenomenal consciousness can somehow be known or described from a 3rd person perspective.

Q_Goest said:
If we knew what these physical laws were, we'd then be able to apply them to anything and deduce if that something was conscious or not.
Therein lies your problem, because by definition we CANNOT know what these laws are, the properties they describe are inaccessible to the 3rd person perspective.

Q_Goest said:
If we assume that we can't know from examining something if it is conscious or not, then given there are a myriad of systems which go from the most simple to the most complex, all of which are equally capable of manipulating information (or performing calculations if you like that phrase better), we have no criteria for determining which of these systems is conscious and which are not. If we have no criteria on which to base a judgment on whether or not something is conscious, and we still claim the more complex ones are conscious, then we must also claim the less complex ones harbor some amount of this phenomenon as well. Conclusion is that every 'thing' is cognizant to some degree or other.
Does not logically follow.
To say that "X may be conscious" (and we simply cannot tell whether it is conscious or not) is not the same as saying that "X is necessarily conscious".

If you believe we can tell the difference between a conscious and a non-conscious entity from the 3rd person perspective, please explain how you think it can be done.

Q_Goest said:
Saying consciousness is created by "strong emergence" without describing how that type of emergence physically differs from weak emergence leads to panpsychism.
You have still not shown how you arrive at this conclusion (indeed, I have shown above that your logic is faulty - saying "X may be conscious, we have no way of knowing" is not the same as saying "X is necessarily conscious")

Best Regards
 
  • #69
Hi Tournesol

We can detect whether a machine reports on its internal states and so on.
Not sure what that means. How does a machine "report on its internal states" in any meaningful way?



Hi MF

How do you know whether it occurs in a rock or not? (I am not suggesting it does, I am asking how you can substantiate your claim that it does not)
The point being, IFF we don't want to accept panpsychism, then we will make the assumption that a rock is not conscious. I didn't mean to imply that we can know if a rock is conscious or not, only that we preclude this possibility if we want to exclude panpsychism as a possibility.

I understand that, and I am saying that the physical properties of phenomenal consciousness which you call "having an experience" are by definition inaccessible to 3rd person investigation, hence asking "what are the laws which describe these properties" is a meaningless question.
What is the meaningless question? I believe what you're trying to say is that it is meaningless for a 3rd person to investigate whether or not some system is "having an experience". Is that correct? But that's not a "question", so you have me confused.

If that is correct, then what evidence do you have to base this statement on (ie: that it is impossible for a 3rd person to determine if a system is "having an experience")? I don't believe there is any. We can't say there is no evidence simply because computationalism precludes the possibility that something is inaccessible to a 3rd person investigation. What you're saying is that computationalism precludes any evidence of experience by a 3rd person, so you conclude there is no evidence.

It cannot be done, because phenomenal consciousness is a 1st person subjective experience; it is a category error to think that the "laws" which describe the 1st person perspective properties of subjective phenomenal consciousness can somehow be known or described from a 3rd person perspective.

Therein lies your problem, because by definition we CANNOT know what these laws are, the properties they describe are inaccessible to the 3rd person perspective.
Can you provide any logical reasoning to show what you say is true regardless of the assumptions used to base the phenomena of consciousness on? If this is only based on the concept of computationalism, the statements are only valid for computationalism and may be invalid for other theories.

Does not logically follow.
To say that "X may be conscious" (and we simply cannot tell whether it is conscious or not) is not the same as saying that "X is necessarily conscious".
I agree. But I don't need to prove that the aircraft industry is necessarily experiencing something. I only need to prove that there is no differentiator between one computational structure which isn't conscious and another which is allegedly conscious given the concept of computationalism. I only need to prove there is a possibility given the theory on hand (computationalism) that panpsychism is not ruled out. There is no differentiator for computationalism, as you can see. Thus, we can't preclude panpsychism given the assumptions computationalism is based on. If we can't preclude panpsychism given this theory, there is a problem with the theory which needs to be addressed. Although I've attacked this from a slightly different angle, others have already pointed out this problem. The reaction from the computationalists has not provided a firm foundation yet on which to generate a meaningful response to this type of attack.

If you believe we can tell the difference between a conscious and a non-conscious entity from the 3rd person perspective, please explain how you think it can be done.
That's not the point here. It isn't my intent to prove that there is some theory of consciousness that shows from a 3rd person perspective that a system is conscious. In fact, I don't know of one. But that's not important. Computationalism is the primary theory under review in this thread.

Edit: I'll admit my aircraft argument doesn't prove that computationalism necessarily leads to panpsychism. It only shows we can't preclude panpsychism. What I need to prove is what Putnam has stated, that "every ordinary open system realizes every abstract finite automaton." I'll see if there's a better way of putting this into the aircraft example. Regardless, Putnam claims to have proven this, or at least many people in the philosophical community believe he has. It's the reaction from the philosophical community that's interesting, as it shows how much needs to be done to the concept of computationalism to maintain it as a valid paradigm for the mind. Food for thought.
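For what it's worth, here is a toy sketch (Python; my own caricature of the idea, not Putnam's actual proof, and the state names are made up) of the kind of mapping his claim trades on: any run of distinct physical states can be paired, state by state, with the state sequence of whatever abstract automaton we like.

```python
# Any run of distinct physical states...
physical_run = ["cloud drifts", "tide peaks", "rock warms", "leaf falls"]

# ...and the state trace of some abstract finite automaton we want "implemented".
automaton_trace = ["q0", "q1", "q1", "q2"]

# The "implementation" is nothing more than a lookup table pairing them up.
mapping = dict(zip(physical_run, automaton_trace))

assert [mapping[state] for state in physical_run] == automaton_trace
print(mapping)
# Since such a table can always be written down when the physical states are
# distinct, the mapping by itself cannot be what distinguishes a "real" computer
# from the weather, the tides, or the aircraft transportation network.
```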
 
  • #70
Not sure what that means. How does a machine "report on its internal states" in any meaningful way?

How does a human?
 
  • #71
Emergence, Upward and Downward Causation

I would argue that emergence, at least from a scientific viewpoint, is neither strong nor weak...that it springs from characteristics inherent in atomic structure. For instance, molecules of iron oxide are colourless; however, when agglomerated, one of their apparent emergent properties is the familiar rust-red colour. Likewise, atoms of gold are not shiny, yellow and hard, but in sufficient number form the familiar metal with its shiny, yellow appearance...shiny and yellow are apparent emergent properties of the atomic element.

Emergence often carries with it the notion of causality, because it appears that emergent properties are "caused" by the smaller bits necessary to bring about the appearance of the emergent properties. It would appear this may be so, because emergent properties occur regularly under consistent conditions, i.e., when iron (Fe) oxidizes (O2) it always turns rust red (R), and when sufficient numbers of gold atoms bond, a shiny, yellow metal appears. Another way of stating this is:

The direction of emergence is a movement from the micro to the macro, a "growing larger", hence upward causation. Following the idea of causality and emergence, an argument against downward causation can again be found in FeO2, that is, rust does not cause the emergence of unsullied iron or oxygen atoms or molecules.

I would argue that emergence and causation are distinct from one another. Emergent properties are inherent properties of their atomic particles. Therefore, inherent in the molecular bond of iron and oxygen is the reflection of red that human eyes perceive.

Upward causation has been accepted within the realm of philosophy of science for some time...microscopic things build into macroscopic things, or, macroscopic things reduce to microscopic particles. Chemistry reduces to physics (don't get me started on that one :)

The idea of downward causation has been problematic to philosophers of science...for one thing, the whole idea reeks of "God"...anathema to scientists, if not philosophers, although analytical philosophers do not tend toward proofs of God either...but I digress.

But another problem philosophers of science have faced is an apparent lack of physical examples of downward causation, that is, something macro returning to its micro parts, parts unchanged. I would like to now pose a question to the group about, and an example of, downward causation...what about the cycle of water (H2O) (micro) evaporating from the earth, rising, condensing, forming a cloud (macro), rain falling to the earth, and returning to its constituent elements in the soil?

It would appear that in the same way the hydrogen and oxygen combined to eventually "cause" a cloud to form, the cloud eventually "causes" the formation of hydrogen and oxygen. It's a simple example, tidy, and answers to the issue of moving from the macro to the micro without addition or subtraction of "parts". Would appreciate any input on this idea. Thanks.
 
  • #72
Hi Chestnut. Glad to see you didn't get roasted on some fire over Christmas! Welcome to the board.

Chestnut said: I would argue that emergence, at least from a scientific viewpoint, is neither strong nor weak...that it springs from characteristics inherent in atomic structure.

I'd agree with this statement wholeheartedly. In fact, I'd intended to steer this thread in that direction, but never got that far. In "The Middle Way" (http://www.pnas.org/cgi/reprint/97/1/32.pdf), Robert Laughlin et al. argues for emergence on the "mesoscopic" level. By this, I believe he's referring to only quantum phenomena, though he's certainly not talking about classical mechanics. So I believe your concepts here are in line with Laughlin, and also in line with current thinking, with the exception perhaps of your proposed "downward causation" argument.

I also liked the distinction you make between causation and emergence here:

I would argue that emergence and causation are distinct from one another. Emergent properties are inherent properties of their atomic particles. Therefore, inherent in the molecular bond of iron and oxygen is the reflection of red that human eyes perceive.

Nicely stated! I think this is fully in line with most of the current thinking in science and is key to understanding consciousness. Emergence is only a phenomenon which can be associated with quantum mechanics, not classical mechanics.* I also believe it puts consciousness squarely into the quantum mechanical category and out of the classical mechanics category.

I'm not sure this is clear to everybody though. If one still tries to accept emergence on a classical level such as computationalism, you're stuck trying to defend a phenomenon unlike any other we know of and I think this is why Chalmers ended up supporting this mysterious concept of "higher level physical laws" (see OP). Unfortunately, most philosophers supporting computationalism simply don't get it. They don't see a need for higher level physical laws. The entire argument is at 35,000 feet to them and they want to believe in computationalism, which relies strictly on classical mechanics.

Take for example the argument put forward by physicist Henry Stapp in his 1995 paper (http://psyche.cs.monash.edu.au/v2/psyche-2-05-stapp.html), published only a few months later. Stapp writes:
The fundamental principle in classical mechanics is that any physical system can be decomposed into a collection of simple independent local elements each of which interacts only with its immediate neighbors.
In other words, classical mechanics relies only on weak emergence as also defined by Bedau, whose definition I put in the OP. Stapp then points out the fundamental problem with this logic:
The information that is stored in any one of the simple logically independent computers, of which the computer/brain is the simple aggregate, is supposed to be minimal: it is no more than what is needed to compute the local evolution. This is the analog of the condition that holds in classical physics. As the size of the regions into which one divides a physical system tends to zero the dynamically effective information stored in each individual region tends to something small, namely the values of a few fields and their first few derivatives.
Ok, I'll agree his argument needs work here, but his point is still valid IMO. Subjective experience can't be created by classical level elements which hold only one tiny aggregate of information about the experience. There has to be something, some mechanism to tie them together.[1]

Finally, he points out exactly what Chalmers says about higher level physical laws and why that isn't the right answer:
One could imagine modifying classical mechanics by appending to it the concept of another kind of reality; a reality that would be thought like, in the sense of being an eventlike grasping of functional entities as wholes. In order to preserve the laws of classical mechanics this added reality could have no effect on the evolution of any physical system, and hence would not be (publicly) observable. Because this new kind of reality could have no physical consequences it could confer no evolutionary advantage, and hence would have, within the scientific framework, no reason to exist. This sort of addition to classical mechanics would convert it from a mechanics with a monistic ontology to a mechanics with a dualistic ontology. Yet this profound shift would have no roots at all in the classical mechanics onto which it is grafted: it would be a completely ad hoc move from a monistic mechanics to a dualistic one.
To me, that's a beautiful retort! He points out exactly why we can't accept something like what Chalmers is saying about higher level physical laws.

The philosopher Kirk Ludwig, however, doesn't seem to recognize the differences between classical and quantum mechanics from a philosophical perspective, and I think this is what leads him, like so many other philosophers, to attack Stapp. I'd comment on his paper, but this post is getting too long as it is.

~

Ok, why did I write all that? What's the point? The point is that if we accept emergence only at the molecular (mesoscopic to Laughlin) level where quantum mechanics is necessary, computationalism is dead.

I tried to work towards this conclusion in this thread, but unfortunately never quite got there. Maybe we should start over?

~

Chestnut said: But another problem philosophers of science have faced is an apparent lack of physical examples of downward causation, that is, something macro returning to its micro parts, parts unchanged. I would like to now pose a question to the group about, and an example of, downward causation...what about the cycle of water (H2O) (micro) evaporating from the earth, rising, condensing, forming a cloud (macro), rain falling to the earth, and returning to its constituent elements in the soil?

It would appear that in the same way the hydrogen and oxygen combined to eventually "cause" a cloud to form, the cloud eventually "causes" the formation of hydrogen and oxygen. It's a simple example, tidy, and answers to the issue of moving from the macro to the micro without addition or subtraction of "parts". Would appreciate any input on this idea. Thanks.
Sorry Chestnut, but we'll have to part company on this one. Note the definition (in the OP) of downward causation by Chalmers:
Downward causation means that higher-level phenomena are not only irreducible but also exert a causal efficacy of some sort. Such causation requires the formulation of basic principles which state that when certain high-level configurations occur, certain consequences will follow. …

With strong downward causation, the causal impact of a high-level phenomenon on low-level processes is not deducible even in principle from initial conditions and low-level laws. With weak downward causation, the causal impact of the high-level phenomenon is deducible in principle, but is nevertheless unexpected.

The concept of downward causation has nothing to do with something macro returning to its micro parts. It has to do with the macro thing having some causal efficacy over the micro parts. The phenomenon of something returning to its micro parts, be it water or an organism that is born, lives, dies, and whose parts eventually wind up in another living organism, has nothing to do with downward causation.


*** *** ***

*I believe I'm preaching to the choir here.

[1] Note that Searle, Putnam, and many other eminent philosophers basically agree with this, but don't point to QM as the answer. They do however point out that in computationalism, each micro element is a symbol, and if this is true (which it is) the concept of computationalism inevitably devolves into panpsychism. I've also tried using this argument in this thread which is a slightly different attack on computationalism.
 
  • #73
More on Emergence, Reduction, Causation...

Hi Q-Goest...

Thanks for the welcome. Amazingly excellent reply for 6:06 a.m. (at least that's when your message came in here...I'm in the PST...what time zone are you in?)

I'm afraid I must be very brief, partially because I slept in very late and have to get prepared for evening festivities, and partially because I have to go through my materials and find the various citations to support my statements!

Your concurrence, at least on some points, is appreciated. Following are a couple of questions/observations...

In "The Middle Way", Robert Laughlin et al. argues for emergence on the "mesoscopic" level. By this, I believe he's referring to only quantum phenomena, though he's certainly not talking about classical mechanics. So I believe your concepts here are in line with Laughlin, and also in line with current thinking

Could you please write Robert Laughlin's definition of "mesoscopic"? Although it literally means middle view, and, as you say, may be discussing quantum phenomena, how does the middle view translate into emergence?

I also believe it puts consciousness squarely into the quantum mechanical category and out of the classical mechanics category.

I'm afraid I'm a bit reluctant to mix consciousness discussion with quantum theory. Although physicists do this, and more so lately, a la Fred Alan Wolf, for instance, and although I personally believe that some of the connections associated with quantum physics and consciousness may hold true, I believe they fall into the area of belief rather than quantifiability and proof. I am not saying these beliefs are not true, just not proven, which, while allowing for the structure of rational argumentation, fails to provide the content.

Regarding Stapp's commentary, thanks for providing his straightforward argument on the failure of classical mechanics' ability to address consciousness, which is unseeable and potentially unnecessary, evolution-wise.

This whole business of philosophy of science's attempt to address the unseen "otherness" in physical "stuff" is interesting. Consider Paneth's discussion of the elements as basic and simple substances. This has been followed through most recently by Scerri, who describes

"...the notion of an element as a basic substance concerns just its identity and its ability to act as the bearer of properties. A basic substance does not however possesses any properties. The properties of an element however reside in the simple substance and not in the element as a basic substance. According to this view, the identity of an element and its properties are regarding as being quite separate."

This is just one example of a scientist/philosopher attempting to lend legitimacy and substance to that which can't be quantified. Discussion of emergence based on atomic structure is comparatively "a walk in the park".

I tend to think of emergence, reduction, and upward and downward causation as one process. Robin Le Poidevin's paper, "Missing Elements and Missing Premises: A Combinatorial Argument for the Ontological Reduction of Chemistry", has been a rich resource in my thinking. So, too has been Robin Hendry's Chapter 9, entitled "Is There Downward Causation in Chemistry?" from the book "Philosophy of Chemistry: Synthesis of a New Discipline".

Regarding Chalmers:
Downward causation means that higher-level phenomena are not only irreducible but also exert a causal efficacy of some sort. Such causation requires the formulation of basic principles which state that when certain high-level configurations occur, certain consequences will follow. …

With strong downward causation, the causal impact of a high-level phenomenon on low-level processes is not deducible even in principle from initial conditions and low-level laws. With weak downward causation, the causal impact of the high-level phenomenon is deducible in principle, but is nevertheless unexpected.

While Chalmers is firm in the widely-held philosophy regarding downward causation and its impossibility, I would ask, "Must a higher-level phenomenon be irreducible in order to exert a causal efficacy?" "Can a reducible higher-level phenomenon exert a unique causal efficacy that would not otherwise appear if the higher-level phenomenon were reduced? (here's where emergentism comes in)" If the answer to that question is yes, then Chalmers' concern about "certain consequences" is allayed.

With regard to Chalmers' statement that causal impacts of high-level phenomena on low-level processes are not deducible (strong version), and deducible only in principle (weak version)...I must acknowledge Chalmers' sincerity and scholarship on causality, first principles, etc., but I fail to see his logic...perhaps if I had a larger quote to provide greater context of his view? I would argue that causal impacts are deducible as concerns high-level phenomenal impacts on low-level processes.

My last point relates to a bit of the last Chalmers quote:
With weak downward causation, the causal impact of the high-level phenomenon is deducible in principle, but is nevertheless unexpected.

Ironically, one of the most widely held tenets concerning unexpected, undeducible outcomes is that those qualities are absolutely necessary to a property being determined to be an emergent property. Are causal impacts always to be considered surprises?

Chestnut
 
  • #74
Chestnut said:
Ironically, one of the most widely held tenets concerning unexpected, undeducible outcomes is that those qualities are absolutely necessary to a property being determined to be an emergent property. Are causal impacts always to be considered surprises?
From your two posts to date, you seem to have an extremely rational perspective. I am led to ask what you intend by "undeducible". Do you mean "absolutely undeducible" or undeducible via the finite conscious procedures common to ordinary logic?

As far as comprehending the possible extent of “deducible emergent phenomena” you should take a look at my deduction posted quite a while ago in this thread. I am curious as to your possible reactions.

Have fun -- Dick
 
  • #75
Chestnut said: Amazingly excellent reply for 6:06 a.m. (at least that's when your message came in here...I'm in the PST...what time zone are you in?)
It seems this website adjusts the time postings are made depending on your time zone. I'm in PA, so I actually posted that at 9:06 am, but since you're in California, the website displays that as 6:06 am. Ok, maybe 9:06 is still early but I'm not that early! lol

Hope your party went well last night. How come I didn't get an invite? grrrr
<grin>

Chestnut said: Could you please write Robert Laughlin's definition of "mesoscopic"?
Here's a good definition from the U of Minn. school of physics and astronomy:
The Greek word meso means "in between". Mesoscopic Physics refers to the physics of structures of intermediate sizes, ranging from a few atomic radii to a few microns. A mesoscopic sample is too big to study its properties by the methods standard in the physics of individual atoms, and is too small for the application of the familiar physical laws of the macro world. Put another way, a macroscopic device, when scaled down to a meso-size, starts revealing the quantum signatures of conventional characteristics. For example, in the macro world, the conductance of a wire increases continuously with its diameter, but in the meso world the wire's conductance is quantized - i.e., the increases occur in steps.
And also, from the examples given by Laughlin, it seems the focus is on how nature behaves when the number of atoms or molecules is too small to average themselves out, as is the case in classical mechanics, but also too large for us to consider the behavior of individual atoms. The properties of matter, however, still depend on quantum interactions between atoms in the material.
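As an aside (this figure is standard mesoscopic physics, not something taken from the quoted definition or from Laughlin's paper), the size of those conductance steps is set by the conductance quantum:

```latex
% Each fully open quantum channel contributes one conductance quantum:
G = n\,\frac{2e^{2}}{h}, \qquad \frac{2e^{2}}{h} \approx 7.75\times10^{-5}\ \mathrm{S}
\ \ (\text{about } 77.5\ \mu\mathrm{S})
```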

Although it literally means middle view, and, as you say, may be discussing quantum phenomena, how does the middle view translate into emergence?
Emergence is a philosophical concept which is not well defined, but in general we can look at the two forms of emergence, weak and strong, as being two diametrically opposite possibilities. Weak emergence is well defined by Bedau. Strong emergence is defined differently by everybody, so I selected Chalmers' definition for the OP.

It is my view that anything that can be defined by classical mechanics, such as a bridge, a car, a rocket, the orbit of planets, or a computer, can be reduced to its constituent parts, such that any phenomenon you observe is reducible to those parts. This view most closely follows the definition of weak emergence. The phenomena exhibited by those classical objects are weakly emergent and reducible. I believe this view is consistent with what you've said, and also consistent with scientific and engineering philosophy in general.

However, as we examine smaller and smaller 'things', there is a scale at which we find phenomena which might not be considered reducible. Laughlin and many others consider this level to be this mesoscopic level. At this level, phenomena depend on interactions which, debatably, are interdependent in a way that prevents one from reducing things further.

At any rate, this irreducibility at the mesoscopic scale is, IMO, a potential example of 'strong emergence', depending on how you define it. I think Chalmers' definition of strong emergence is acceptable:
Chalmers: We can say that a high-level phenomenon is strongly emergent with respect to a low-level domain when the high-level phenomenon arises from the low-level domain, but truths concerning that phenomenon are not deducible even in principle from truths in the low-level domain.
Unfortunately, Chalmers wants to use this definition on things well above the mesoscopic level, at the level of computers.

Chestnut said: I'm afraid I'm a bit reluctant to mix consciousness discussion with quantum theory. Although physicists do this, and more so lately, a la Fred Alan Wolf, for instance, and although I personally believe that some of the connections associated with quantum physics and consciousness may hold true, I believe they fall into the area of belief rather than quantifiability and proof. I am not saying these beliefs are not true, just not proven, which, while allowing for the structure of rational argumentation, fails to provide the content.
Consider something that exhibits phenomena which can be fully described using classical mechanics such as a computer, an aircraft, or a galaxy. Can that thing also exhibit phenomena which are not reducible to its constituent parts?

That question takes a lot of thinking. There are a few different responses:
1. Chalmers says yes, and that you therefore need additional physical laws to understand those irreducible phenomena.
2. Many computationalists will recognize the quandary and say no. The Turing test for example says all you need to do is examine the behavior of the thing to determine if it is conscious, but a p-zombie could equally well pass the test. In short, there is no solid ground to stand on here IMO.
3. In general, I think the scientific/engineering community would say no for exactly the reason pointed out by Stapp.

Chestnut said: I would ask, "Must a higher-level phenomenon be irreducible in order to exert a causal efficacy?" "Can a reducible higher-level phenomenon exert a unique causal efficacy that would not otherwise appear if the higher-level phenomenon were reduced? (here's where emergentism comes in)"
If something is reducible, then by definition the interaction of those parts is both necessary and sufficient to describe everything that 'thing' is doing. We don't need to propose downward causation, since anything the 'thing' is doing is, by definition, being done because of the interaction of the lower level parts. Similarly, to my point above and to what Stapp has pointed out, we also don't need to propose that there are other phenomena occurring which can't be described by the interaction of those parts and which would require additional physical laws, as Chalmers suggests.

Chestnut said: I would argue that causal impacts are deducible as concerns high-level phenomenal impacts on low-level processes.
Not sure what you meant there. Can you clarify?

Chestnut said: Ironically, one of the most widely held tenets concerning unexpected, undeducible outcomes is that those qualities are absolutely necessary to a property being determined to be an emergent property. Are causal impacts always to be considered surprises?
I think for something to be truly emergent in more than the weak sense, outcomes should not only be surprises, but they must be irreducible as well.

Happy New Year!
 
  • #76
Hi Dr. Dick...

In this context, I mean absolutely not deducible at this time.

Regarding your post, "my deduction", I did open it last night after returning from a late evening...it was a lot of text! Screenfulls and screenfulls. Then I googled you...there are a lot of Richard D. Staffords out there...could only find 2 entries, of posts to websites such as this, that seemed at all relevant. However, you've clearly put a lot of work into your thinking and analysis. Do you have a website listing your publications? Also, do you have an area of specialty? Thanks.

Chestnut
 
  • #77
Causal Forces in Biological Sciences...emergence? downward causation?

Dear Q-Goest,

I've gotten off on a bit of a tangent, I'm afraid...have you read anything by Rupert Sheldrake? I've just begun his "A New Science of Life", and it appears he will be addressing causal forces as well...and it appears he may be in agreement with Chalmers' belief in the necessity of new physical laws. The book jacket said the Royal Academy voted it the book most needing to be burned when it was published, while Nature had kudos for it.

As an aside, I find fascinating the various arguments that God exists, or that there is a purpose or meaning of life, couched in scientific or analytical philosophical terms. From Thomas Aquinas' Five Ways (his proofs of God), to physics-as-explanation for the unprovable in the movie, "What the Bleep do we Know", to everything in between, it seems for centuries there has been a quest to couch the unprovable in scientific or analytic terms. Apparently simple belief isn't enough. I've got a love of science and philosophy, and what I believe to be a knowledge of their limits, at present. What I love to do is read the attempts to discuss the intangibles, the sums that are greater than the parts, something a layperson would call believing, or faith, or even self-evident, and see if it can be done. If it can...what a paper that would make!

At any rate, Sheldrake first discusses the physico-chemical processes/paradigm as
the framework of thought within which questions about the physico-chemico mechanisms of life processes can be asked and answered.
It is mechanistic, but he believes it
will be the only framework available to experimental biologists until another alternative is discovered.

He states
Any new theory capable of extending or going beyond the mechanistic theory will have to do more than assert that life involves qualities or factors at present unrecognized by the physical sciences; it will have to say what sorts of things these qualities or factors are, how they work, and what relationship they have to known physico-chemical processes.

He addresses the idea of a new type of causal factor, unknown to the physical sciences, which interacts with physico-chemical processes within living organisms, and describes the vitalist philosophy, the organismic philosophy and the morphogenetic field philosophy and their contributions to this idea. Vitalist: there exists a new type of causal factor, unknown to the physical sciences which interacts with physico-chemical processes within living organisms. Conclusion: no repeatable results or predictions, therefore not valid scientifically.

Organismic: not everything in the universe can be explained from the bottom up in terms of properties of atoms or hypothetical particles. Recognizes the existence of hierarchically organized systems that possess properties which cannot be fully understood in terms of the properties exhibited by their parts in isolation from one another (not deducible). Sheldrake favors A.N. Whitehead's description of everything as an organism, where "biology is the study of larger organisms...physics is the study of smaller organisms". Conclusion: no testable predictions, therefore not valid.

Morphogentic fields:
the term itself seems to imply a new type of physical field which plays a role in the development of form.
Conclusion:
the concept can only be of practical scientific value if it leads to testable predictions that differ from those of the conventional mechanistic theory.

Sheldrake's book is said to demonstrate the last conclusion. I find this interesting because it feeds right into our discussion of emergence and downward causation. There are hints that his work supports Le Poidevin (cited in another post), so I am looking forward to reading more.

Hope all well in your world!

Chestnut
 
  • #78
Hi Chestnut,
I haven't read anything by Sheldrake, so I looked him up on the net. Wikipedia has this on him:
His best known book, A New Science of Life, was published a week after the New Scientist article. He put forward the hypothesis of formative causation (the theory of morphic resonance)[3], which proposes that phenomena — particularly biological ones — become more probable the more often they occur, and therefore that biological growth and behaviour become guided into patterns laid down by previous similar events. He suggests that this underlies many aspects of science, from evolution to laws of nature. Indeed, he writes that the laws of nature are better thought of as mutable habits that have evolved since the Big Bang.
Ref: http://en.wikipedia.org/wiki/Rupert_Sheldrake

also:

Sheldrake observed:
"The instructors [at university] said that all morphogenesis is genetically programmed. They said different species just follow the instruction in their genes. But a few moments' reflection show that this reply is inadequate. All the cells of the body contain the same genes. In your body, the same genetic program is present in your eye cells, liver cells and the cells in your arms. The ones in your legs. But if they are all programmed identically, how do they develop so differently?"
Sheldrake then became interested in "holistic" ideas after reading Johann Wolfgang von Goethe's works on the topic. He developed a theory to explain this problem of morphology, with its basic concept relying on a universal field encoding the "basic pattern" of an object. He termed it the "morphogenetic field".
The morphogenetic field would provide a force that guided the development of an organism as it grew, making it take on a form similar to that of others in its species. DNA was not the source of structure itself, but rather a "receiver" that translated instructions in the field into physical form.
Ref: http://en.wikipedia.org/wiki/Morphic_resonance

Is that an accurate description of any of his ideas? If so, I guess I'd understand why "The book jacket said the Royal Academy voted it the book most needing to be burned when it was published,".

I see the intro of his book is also given online at Amazon.com:
https://www.amazon.com/gp/product/0892815353/?tag=pfamazon01-20
It seems to confirm what the Wiki article is saying. I don't believe there's any need to resort to morphogenetic fields though. What little I know about biology is that there are chemical concentrations throughout the body during growth which are responsible for organ development. I think someone in the biology area here could shed some light on this if you're interested.

Chestnut said: What I love to do is read the attempts to discuss the intangibles, the sums that are greater than the parts, something a layperson would call believing, or faith, or even self-evident, and see if it can be done. If it can...what a paper that would make!
Yes! Stapp's paper doesn't quite make it, though I'd agree with his conclusions. Do we need something more than reductionism and "weak emergence" to understand life? I believe we do, but I disagree with Sheldrake on the level at which that operates. Sheldrake, from what I understand, is suggesting something along the lines of what Chalmers is suggesting, though Chalmers is careful not to say anything that might get his books burned! lol

I think the level at which 'strong emergence' operates (if you can call it that) is this mesoscopic level. Protein folding is discussed quite a bit in the Laughlin paper I mentioned. Is that physically irreducible? I think so, and I think there are a number of irreducible phenomena at this mesoscopic level. I think consciousness and life in general are also irreducible below the mesoscopic level, but not above. Above that level, I see no reason to resort to irreducible physical laws such as Sheldrake's morphogenetic field.

You quoted Sheldrake however in this statement I believe:
Any new theory capable of extending or going beyond the mechanistic theory will have to do more than assert that life involves qualities or factors at present unrecognized by the physical sciences; it will have to say what sorts of things these qualities or factors are, how they work, and what relationship they have to known physico-chemical processes.
Interesting. It's a valid point, though I don't think it necessarily supports his ideas about morphogenetic fields. I can understand now why you're interested in this topic though. I'd agree his ideas about this field are a form of strong emergence. It seems strong emergence ideas are rather vague. In Sheldrake's case, he seems to be much less vague, but unfortunately I disagree with his concept as I don't believe we need anything more than classical mechanics to account for any phenomena at this level.

Have you given any thought to the difference between classical mechanics and quantum mechanics when it comes to emergence?
 
