# Weak and Strong Emergence: What Are They?

1. Aug 1, 2006

### Q_Goest

The idea of weak versus strong emergence seems to be one that is easily confused. I've seen numerous mentions of "emergence" in the literature without the author classifying which one they are talking about. Many times it seems the author wants to imply strong emergence but is really only looking at a weakly emergent phenomenon. In this thread I'd like to have a discussion about the two definitions.

I see Bedau's paper http://www.reed.edu/~mab/papers/weak.emergence.pdf [Broken] has been cited 46 times according to Google Scholar. He defines weak emergence this way:
Weak emergence is essentially a reductionist philosophy in which local causes create local effects. What exactly is a cause and what exactly is an effect seems intuitive enough for most folks to grasp; however, I'd also like to better define cause and effect, so I've started a separate thread: https://www.physicsforums.com/showthread.php?p=1051592#post1051592

Similarly, strong emergence is defined by Chalmers in http://consc.net/papers/emergence.pdf [Broken]. In that paper, Chalmers suggests there are higher-level physical laws. He also resorts to "downward causation", and even to weak and strong downward causation, though exactly why is just a bit unclear to me.
Note that without some sort of downward causation, we could have strongly emergent phenomena which have no causal efficacy. They would exist but not have any way of interacting with the world. For example, a computer interacts at a local level exactly as Bedau points out. Each switch in a chip acts only because of some electrical signal provided to its control. It does not act because of any other reason. Thus, we can say the computer switch is a "micro-level" part in a system S. The macrostate of the computer exists, and is "constituted wholly out of microstates". Further, there is a microdynamic which we can call D which governs the time evolution of the microstate. This microdynamic is the application of voltage to the switch which makes it change state.
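Bedau's schema (a system S whose macrostate is "constituted wholly out of microstates", with a microdynamic D governing their time evolution) can be illustrated with a toy simulation. The sketch below is my own illustration, not code from Bedau's paper: it uses Conway's Game of Life, where the microstates are the cell values, D is the local update rule, and a macro-level pattern (a "glider") is derived only by iterating D.

```python
from collections import Counter

def step(live):
    """Apply the microdynamic D once: 'live' is the set of live (x, y) cells.
    Each cell changes state only because of its local neighborhood, just as
    each computer switch acts only on the signal at its control."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: a macro-level "object" constituted wholly out of microstates.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

state = glider
for _ in range(4):                    # one full glider period
    state = step(state)

# The macrostate "a glider moved one cell diagonally" is obtained by
# simulating D; no rule at the micro level mentions gliders at all.
print(state == {(x + 1, y + 1) for (x, y) in glider})   # True
```

The point of the toy model is that the glider's motion is weakly emergent in Bedau's sense: it is derivable from the microdynamic, but only by running the simulation.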

If we assume then that there is some kind of 'strongly emergent' phenomenon which arises in a computational device, such as subjective experience, that phenomenon has no causal efficacy over any portion of the system. One need not theorize additional physical laws as Chalmers does. The laws governing the action of each switch are necessary and sufficient, and no further description is needed. Thus, if any strongly emergent phenomenon were to arise, it would seem that downward causation is the only way such a phenomenon could have any kind of causal efficacy over the system.

I believe computationalism sidesteps this issue by simply suggesting that strongly emergent phenomena are 'like the weight of a polar bear's coat'. The purpose of the coat is to keep the polar bear warm, not to create weight. Yet the coat has weight, because hair is made of matter, and a great deal of hair is needed to provide the insulation. Similarly, subjective experience to a computationalist is the weight of the coat: it is not needed, it serves no direct purpose, it is simply there. I found that example somewhere on the net, and it really doesn't strike me as a decent argument. Nevertheless, I suppose it will have to do. Perhaps someone else has seen a better one?

It should be fairly clear that strong emergence and downward causation (strong or weak) can't be taken lightly. The only cases of strong emergence that should be taken seriously are molecular interactions IMO. Even there, it seems most interactions don't need anything like a strongly emergent theory to support them. They can be explained in terms of energy balance, bonds and so forth.

Last edited by a moderator: May 2, 2017
2. Aug 3, 2006

### octelcogopod

The problem as I see it, is that we haven't really defined in what context we want to apply emergence.

For instance, it seems to me that emergence is actually just an emergent property of our minds, that is, we categorize things systematically, and try to make sense of them by themselves as individual objects.

For example, to determine the emergence of a dog, it would not only be a matter of scale and point of view, but also of how the atoms that make up the dog interact, to make an object called a dog.
A dog is a weakly emergent phenomenon, not taking into account its mind, should it have one.

To me, every object in the universe is weakly emergent; it can all be reduced down to its pure fundamental interactions. But then again, when you think about it, can it really?
I mean, it all depends on how you look at it.
For instance if we were to calculate with a computer every atom in a dog, and its interactions, we would automatically get a dog, even if we didn't realize it.

What this implies to me is that all objects in the universe are side effects of smaller interactions. Why these interactions happen the way they do, and why objects exist in the first place, can maybe be explained only by fully understanding the smaller interactions.

However, the problem arises when we get other things, that may seem irreducible, at least at this time.
One problem is of course the whole subjective side of the human mind.
If we were to create an exact brain and body replica in a computer, one that modeled everything down to the smallest quark (or string :P), would all the subjective stuff that we experience right now in our minds arise simply as machine code?

I mean, if we were able to create such a complicated computer program we would most likely already have solved the problem of consciousness, but let's set that aside for the purpose of this discussion.

I won't go too deeply into this, but this also somewhat ties in with determinism.
The problem lies in the fact that IF we created a program like above, and we could get hard output on the monitor as numbers that would represent every facet of the subjective mind, then that would also show that the universe is deterministic.

But even if it was, would that exclude any chance of strongly emergent phenomena?
It's kind of hard when we don't really have an example of a strongly emergent phenomenon...

3. Aug 3, 2006

### Q_Goest

Hi Octelcogopod, I'd agree with everything you've said. The conclusion is that strongly emergent phenomena are hard to find, and that it is even more difficult to have people agree on whether or not some phenomenon is emergent. That really is one of the main thrusts of this thread: to shake out potential strongly emergent phenomena and see if there is ANYTHING that can be termed strongly emergent. Along with that would be to propose what downward causal actions such a phenomenon may have.

I think the reason such phenomena are difficult to identify as being strongly emergent is that there is no conceptual or logical tool with which we can make the determination. To advance such a tool would require some agreement as to what weak emergence entails, and I think Bedau has a very nice definition. Unfortunately it's only a definition, not a tool. But engineering uses tools of the kind Bedau is referring to all the time. They're conceptual tools called finite element analysis, control volumes, and many other things. Problem is, no one has recognized them as being applicable to weak and strong emergence. To do that I've started another thread (https://www.physicsforums.com/showthread.php?p=1052467#post1052467), so feel free to comment in that one also.
If we modeled the brain using finite element analysis or the equivalent, we would be calculating what the physical system is doing using symbols. Do the symbols represent what is actually occurring in the actual gray matter? In the sense that we are able to interpret them, I believe the answer is yes. In the sense that the COMPUTER is able to interpret them, I believe the answer is no (ie: the computer is a p-zombie). But that conclusion must rest on some logical tool, as I've previously mentioned.

Yes, I fully agree. That conclusion though may be hard for some to see. Weak emergence seems to imply a kind of 'bottom up' determinism. It implies that the system S as Bedau calls it, is completely determined by the microdynamics. Further, those microdynamics operate at a local level. We can think of a system as being broken down into small microscopic parts, larger than a molecule such that we can examine interactions at the classical level. If we do this, the classical interactions are essentially determinate and calculable. A switch in a computer for example is completely deterministic, and any system made of them similarly is.

The only thing I'd like to emphasize regarding the modeling of classical phenomena using computational means is that the computer is strictly a symbol manipulator, and does not have the same physical properties as the classical phenomena being modeled. However, if we made a 'physical computer' and considered it in terms of microstates similar to an FEA analysis (following Bedau's line of reasoning), then the conclusion is that many phenomena we perceive as strongly emergent are actually weakly emergent.

Last edited by a moderator: Apr 22, 2017
4. Aug 4, 2006

### Doctordick

If this is indeed what "strong emergence" means to the academic community, then I think one can confidently conclude that "strong emergence" does not exist; see my paper http://home.jam.rr.com/dicksfiles/Explain/Explain.htm [Broken].

With regard to "weak emergence" (that is, with regard to the definition of "weak emergence"), I feel it can also be dispensed with via the following proof. That is, emergence is emergence and there is nothing either weak or strong about it!
In that regard, the following proof is of great interest regarding "emergent" phenomena. The proof concerns a careful examination of the projection of a trivial geometric structure on a one dimensional line element.

The underlying structure will be a solid defined by a collection of n+1 points connected by lines (edges) of unit length embedded in an n dimensional Euclidean space (an n dimensional equilateral polyhedron). The universe of interest will be the projection of the vertices of that polyhedron on a one dimensional line element. The logic of the analysis will follow the standard inductive approach: i.e., prove the result for the cases n=0, 1, 2 and 3, then prove that if the description of the consequence is true for n-1 dimensions, it is also true for n dimensions. The result bears very strongly on the possible complexity of "emergent" phenomena.

First of all, the projection will consist of a collection of points (one for each vertex of that polyhedron) on the line segment. Since motion of that polyhedron parallel to the given line segment is no more than uniform movement of every projected point, we can define the projection of the center of the polyhedron to be the center of the line segment. Furthermore, as the projection will be orthogonal to that line segment and the n dimensional space is Euclidean, any motion orthogonal to that line segment introduces no change in the projection. It follows that the only motion of the polyhedron which changes the distribution of points on the line segment will be rotations of the polyhedron in the n dimensional space.

The assertion which will be proved is that every conceivable distribution of points on the line segment is achievable by specifying a particular rotational orientation of the polyhedron. Before we proceed to the proof, one issue of significance must be brought up. That issue concerns the scalability of the distribution. I referred to the collection of points on the line segment as the "universe of interest" as I want the student to think of that distribution of points as a universe: i.e., any definition of length must be arrived at via some defined characteristic of the distribution itself or some subset of the distribution.

Case n=0 is trivial as the polyhedron consists of one point (with no edges) and resides in a zero dimensional space. Its projection on the line segment is but one point (which is at the center of the line segment by definition) and no variations in the distribution of any kind are possible. Neither is it possible to define length. It follows trivially that every conceivable distribution of a point centered on a line segment (there is one, which can be used to define the origin of the line segment) is achievable by a particular rotational orientation of the polyhedron (of which there are none). Thus the theorem is valid for n=0 (or at least can be interpreted in a way which makes it valid).

Case n=1 is also trivial as the polyhedron consists of two points and one edge residing in a one dimensional space. Since the edge is to have unit length, one point must be a half unit from the center of the polyhedron and the other must be a half unit from the center in the opposite direction. Since rotation is defined as the trigonometric conversion of one axis of reference into another, rotation cannot exist in a one dimensional space. It follows that our projection will consist of two points on our line segment. We can now define both a center (the midpoint between the two points) and a length (the distance between the two points) in this universe, but there is utterly no use for our length definition because there are no other lengths to measure. It follows trivially that every conceivable distribution of two points on a line segment (there is one) is achievable by a particular rotational orientation of the polyhedron (of which there are none). Thus the theorem is valid for n=1.

Case n=2 is the first case which is not utterly trivial. Fabrication of an equilateral n dimensional polyhedron is not a trivial endeavor. In order to keep our life simple, let us construct our equilateral polyhedron in such a manner as to make the initial orientation of the lower order polyhedron orthogonal to the added dimension: move the lower order entity up from the center of our coordinate system and add a new point on the new axis below the center. In this case, the coordinates of the previous polyhedron remain exactly what they were for the established coordinates, and all are shifted by the same distance in the new dimension. The new point has a position of zero in all the old coordinates (it is on the new axis) and an easily calculated position in the negative direction on the new axis (its distance from the center must be equal to the new radius of the vertices of the old polyhedron).

The proper movement is quite easy to calculate. Consider a plane through the new axis and a line through any vertex on the lower order polyhedron. If we call the new axis the x axis and the line through the chosen vertex the y axis, the y position of that vertex will be the old radius of the vertex in the old polyhedron. The new radius will be given by the square root of the sum of the old radius squared and the distance the old polyhedron was moved up in the new dimension squared. That is exactly the same distance the new point must be from the new center. Assuring the new edge length will be unity imposes a second Pythagorean constraint consisting of the fact that the old radius squared plus (the new radius plus the distance the old polyhedron was moved up) squared must be unity.

$$r_n = \sqrt{x_{up}^2 + r_{n-1}^2} \mbox{ and } 1 = \sqrt{r_{n-1}^2 + (x_{up} + r_n )^2 }$$
The solution of this pair of equations is given by

$$r_n = \sqrt{\frac{n}{2(n+1)}} \mbox{ and } x_{up} = \frac{1}{\sqrt{2n(n+1)}}$$

The case n=0 was a single point in a zero dimensional space. The case n=1 can be seen as an addition of one dimension x_1 (orthogonal to nothing) where point #1 was moved up one half unit in the new dimension and a point #2 was added at minus one half in the new dimension (both the new radius and "distance to be moved up" are one half). The case n=2 changes the radius to one over the square root of three and the line segment (the result of case n=1) must be moved up exactly one half that amount. A little geometry should convince you that the result is exactly an equilateral triangle with a unit edge length. Projection of this entity upon a line segment yields three points and the relative positions of the three points are changed by rotation of that triangle.
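The recursion just described can be checked numerically. The sketch below is my own illustration, following the construction in the post (the names `simplex` and `x_up` are mine): it builds the n dimensional equilateral polyhedron by repeatedly lifting the lower order polyhedron by $x_{up}$ along a new axis and adding one vertex at $-r_n$ on that axis, then verifies that every edge has unit length.

```python
import math
from itertools import combinations

def simplex(n):
    """Vertices of the n dimensional equilateral polyhedron (regular
    n-simplex) with unit edges, centered on the origin, built by the
    recursion in the post: lift the (n-1)-simplex by x_up along a new
    axis and add one vertex at -r_n on that axis."""
    pts = [[]]                                    # n = 0: one point, no coordinates
    for k in range(1, n + 1):
        r_k = math.sqrt(k / (2 * (k + 1)))        # new radius of the vertices
        x_up = 1 / math.sqrt(2 * k * (k + 1))     # distance the old polyhedron moves up
        pts = [p + [x_up] for p in pts]           # old vertices, lifted
        pts.append([0.0] * (k - 1) + [-r_k])      # new vertex on the new axis
    return pts

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

pts = simplex(4)                                  # 5 vertices in 4 dimensions
edges = [dist(p, q) for p, q in combinations(pts, 2)]
print(all(abs(e - 1.0) < 1e-12 for e in edges))   # True: all 10 edges are unit length
```

Running the same check for other n (the triangle at n=2, the tetrahedron at n=3) confirms the two formulas for $r_n$ and $x_{up}$ reproduce the unit edge constraint at every step.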

In this case, we have two points to use as a length reference and a third point whose distance from the center of the other two can be specified in terms of that defined length reference. Using those definitions, two of the points can be defined to be one unit apart, and the third point's position can take any value from plus infinity to minus infinity. The infinities occur when the edge defined by the two vertices being used as our length reference is orthogonal to the line segment upon which the triangle is being projected (in which case the defining unit of measure falls to zero): plus infinity when the third point is on the right and minus infinity when the third point is on the left (by common convention, right is taken to be positive and left to be negative). It thus follows that every conceivable distribution of three points on a line segment is achievable by a particular rotational orientation of the polyhedron (our triangle). Thus the theorem is valid for n=2.

Case n=3 consists of a three dimensional equilateral polyhedron consisting of four points, six unit edges and four triangular faces: i.e., what is commonly called a tetrahedron. If you wish, you may show that the radius of the vertices is given by one half the square root of three halves, and the altitude by the radius plus one over two times the square root of six (as per the equations given above). To make life easy, begin by considering a configuration where a line between the center of our tetrahedron and one vertex is parallel to the axis of projection on our reference line segment. Any and all rotations around that axis will leave that vertex at the center of our line segment. Essentially, except for that particular point, we obtain exactly the same results which were obtained in case n=2 (that would be projection of the triangular face opposite the chosen vertex). Using two of the points on that face to specify length, we can find an orientation which will yield the third point in any position from minus infinity to plus infinity while the fourth point remains at the center of the reference segment.

Having performed that rotation, we can rotate the tetrahedron around an axis orthogonal to the first rotational axis and orthogonal to the line on which the projection is being made. This rotation will do nothing to the projection of the first three points except uniformly scale their distance from the center. Since we have defined length in terms of two of those points, the referenced configuration obtained from the first rotation does not change at all. On the other hand, the fourth point (which was projected to the center point) will move from the center towards plus or minus infinity depending on the rotation direction (the infinite positions correspond to the orientation where the line of projection lies in the face opposite the fourth point). It follows that all possible configurations of points in our projection can be reached via rotations of the tetrahedron, and the theorem is valid for n=3.

Since the space in which the n dimensional polyhedron is embedded is Euclidean, we can specify a particular orientation of that polyhedron by listing the n coordinates of each vertex. That coordinate system may have any orientation with respect to the orientation of the polyhedron. That being the case, we are free to set our coordinate system to have one axis (we can call it the x axis) parallel to the line on which the projection is to be made. In that case, except for scale, a list of the x coordinates corresponds exactly to the apparent positions of the projected points on our reference line.

If the theorem is true for an n-1 dimensional polyhedron, there exists an orientation of that polyhedron which will correspond to any specific distribution of n points on a line (where scale is established via some procedure internal to that distribution of points). If that is the case, we can add another axis orthogonal to all n-1 axes already established, move that polyhedron up along that new axis a distance equal to $\frac{1}{\sqrt{2n(n+1)}}$ and add a new point at zero for every coordinate axis except the nth axis where the coordinate is set at $- \sqrt{\frac{n}{2(n+1)}}$. The result will be an n dimensional equilateral polyhedron with unit edge which will project to exactly the same distribution of points obtained from the previous n-1 dimensional polyhedron with one additional point at the center of our reference line segment.

If our n dimensional polyhedron is rotated on an axis perpendicular to both the reference line segment and the nth axis just added, the only effect on the original distribution will be to adjust the scale of every point via the relationship $x\cos\theta$, where $\theta$ is the angle of rotation. Meanwhile, the position of the added point will be given by $\sin\theta$. Once again, the added point may be moved to any position between plus and minus infinity, which occurs at ninety degrees. Once again the length scale is established via some procedure internal to the distribution of points. It follows that the theorem is valid for all possible n.

QED
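The rotation argument can be checked numerically for the triangle case. This is my own sketch, not Doctordick's code (the function name `projected_ratio` is mine): rotating the unit triangle and measuring the third vertex's projected position against the separation of the first two (the internal length standard) works out to $(\sqrt{3}/2)\tan\theta$, which sweeps the whole real line and diverges as the reference edge turns orthogonal to the projection line.

```python
import math

# Unit equilateral triangle centered at the origin (circumradius 1/sqrt(3)).
triangle = [(0.5, 1 / math.sqrt(12)),
            (-0.5, 1 / math.sqrt(12)),
            (0.0, -1 / math.sqrt(3))]

def projected_ratio(theta):
    """Rotate the triangle by theta, project onto the x axis, and return the
    third vertex's offset from the midpoint of the first two, measured in
    units of their separation (the 'internal' length standard)."""
    xs = [x * math.cos(theta) - y * math.sin(theta) for x, y in triangle]
    unit = xs[0] - xs[1]            # length standard defined inside the universe
    mid = (xs[0] + xs[1]) / 2
    return (xs[2] - mid) / unit

# The ratio equals (sqrt(3)/2) * tan(theta): every real value is reached,
# diverging as theta approaches ninety degrees.
print(abs(projected_ratio(1.0) - math.sqrt(3) / 2 * math.tan(1.0)) < 1e-9)  # True
print(projected_ratio(1.5) > 10)                                            # True
```

This makes the key step concrete: because length is defined from within the projected distribution, a rotation of the underlying figure can place one point anywhere at all relative to the others.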

There is an interesting corollary to the above proof. Notice that the rotation specified in the final paragraph changes only the components of the collection of vertices along the x axis and the nth axis. All other components of that collection of vertices remain exactly as they were. Since the order used to establish the coordinates of our polyhedron is immaterial to the resultant construct, the nth axis can be a line through the center of the polyhedron and any point except the first and second (which essentially establish the x axis under our current perspective). It follows that for any such n dimensional polyhedron for n greater than three (any x projection universe containing more than four points) there always exists n-2 axes orthogonal to both the x and y axes. These n-2 axes may be established in any orientation of interest so long as they are orthogonal to each other and the x,y plane. For any point (excepting the first and the second which establish the x axis) there exists an orientation of these n-2 axes such that one will be parallel to the line between that point and the center of the polyhedron. Any rotation in the plane of that axis and the y axis will do nothing but scale the y components of all the points and move that point through the collection, making no change whatsoever in the projection on the x axis.

We can go one step further. Within those n-2 axes orthogonal to the x and y axes, one can choose one to be the z axis and still have n-3 definable planes orthogonal to both the x and the y axes. That provides one with n-3 possible rotations which will leave the projections on the x and y axes unchanged. Since, in the construction of our polyhedron no consequences of rotation had any effect until we got to rotations after addition of the third point, these n-3 possible rotations are sufficient to obtain any distribution of projected points on the z axis without altering the established projections on the x and y axes.

Thus it is seen that absolutely any three dimensional universe consisting of n+1 points, for n greater than four, can be seen as an n dimensional equilateral polyhedron with unit edges projected on a three dimensional space. And "any" means absolutely any configuration of points conceivable. Talk about "emergent" phenomena: this picture is totally open ended. Any collection of points can be so represented! Consider the republican convention at noon of the second day (together with the rest of the world, with all the people and all the plants and all the planets and all the galaxies), where the collection of the positions of all the fundamental particles in the universe is no more than a projection of some n dimensional equilateral polyhedron on a three dimensional space.

On top of that, if nothing in the universe can move instantaneously from one position to another, it follows that the future (another distribution of that collection of positions of all the fundamental particles in the universe) is no more than another orientation of that n dimensional polyhedron. Think about that view of that rather simple construct, and the complex phenomena which emerge directly from the fundamental picture.

Have fun -- Dick

Last edited by a moderator: May 2, 2017
5. Aug 4, 2006

But Dr. Dick, there are many properties of the whole of that group of folks present at 12:00 noon at the convention that cannot be predicted from knowledge of their positions. Thus your example does not explain why emergent properties are not a fundamental reality of cybernetic systems. In fact, the exact opposite is true: when a system becomes large, the properties of the whole are very different from the properties of the parts.

6. Aug 7, 2006

### Doctordick

I simply cannot comprehend your inability to fathom the consequences of what I just proved. The republican convention has nothing to do with the proof at all. I put it the way I did to express the fact that the evolution of the most complex phenomena conceivable, from the exact detailed behavior of an entire collection of individuals and all their intimate environment (a complex community of human beings) all the way to the behavior of the entire universe, can be seen as no more than a projection of the vertices of a rotating n-dimensional equilateral polyhedron on a three dimensional space. And all you say is "when a system becomes large the properties of the whole are very different from the properties of the parts."

I do not know how to reach you -- Dick

7. Aug 7, 2006

### Q_Goest

Hey Dick,
I'd agree strong emergence and downward causation are highly contentious issues, and I'd only seriously consider them at a molecular level, as it's there that we find a discontinuity between quantum theory and classical physics.

I read over your reference as well as other things you've posted at that site. What you wrote there seems like a nice summary of what you're trying to accomplish. Correct me if I'm wrong, but your proof shows that any n dimensional structure can be seen as a projection of an n+1 dimensional structure onto an n dimensional space. Sorry if that's an oversimplification or if I've gotten something mixed up.

Would you agree that if some explanation can be shown to match reality, we still haven't proven that it does in fact match reality? String theory has this issue if I'm not mistaken. How would you prove that the universe is in fact a multidimensional structure? I like the idea and believe such a possibility might hold promise in explaining something about the world, but from what I understand such theories aren't able to predict anything, and therefore they are no better than a strongly emergent phenomenon without downward causation <grin>. That is: additional dimensions may exist, but if there is no benefit derived from theorizing them, if everything can be explained without invoking them, then it seems these additional dimensions serve no physical purpose, just as computationalism supposes conscious phenomena exist while serving no physical purpose.

Have you created a thread to discuss your work? If so can you provide a link? I'd rather not have discussions regarding your work in this thread and retain this one for discussions regarding weak and strong emergence.

*Questions for another thread: What causes the "rotation" and is the cause deterministic? Can all sets of dimensions be known or measured with respect to any other set of dimensions? If not, this might result in some very interesting phenomena that might help explain gaps in our understanding.

8. Aug 7, 2006

Well, it sure would help if you would explain how you came to conclude this: "any three dimensional universe consisting of n+1 points for n greater than four can be seen as an n dimensional equilateral polyhedron with unit edges projected on a three dimensional space". Here is a crackpot who would find seven dimensions to the universe, http://homepages.ihug.co.nz/~brandon1/resources/dim3.htm [Broken], and not your three. Since you state that the correct number of dimensions in the universe MUST BE 3, what use is all your explanation when in fact the correct number is found to be 4, as suggested by Einstein's general relativity (http://en.wikipedia.org/wiki/General_relativity) [Broken], or many, as suggested by string theory (http://en.wikipedia.org/wiki/String_theory) [Broken]?

Last edited by a moderator: May 2, 2017
9. Aug 7, 2006

So, you are saying that your projection allows one to "see" the "exact detailed behavior" of the simultaneous position and momentum of a collection of quantum particles. Is that correct?

10. Aug 8, 2006

### moving finger

OK, someone will need to help me out here, maybe I’m just being dense.

How can one set of phenomena be “determined by” another set of phenomena, and yet not be logically supervenient on that other set?

Can Chalmers, or anyone else, give examples of such strongly emergent phenomena (ones which fit his description)?

The problem here is that by looking at the monitor output as external observers, we have destroyed or circumvented the subjectivity (if there is any) within the machine. Subjective experience, by definition, is 1st person, and it cannot (by definition) be displayed on a monitor. That’s what people like Chalmers cannot accept, and the reason (imho) that they keep tilting at windmills trying to say that we need a whole new physics to explain subjective experience. We don’t.

To turn a definition into a tool, we just need to identify the necessary and jointly sufficient conditions for emergence, then investigate alleged emergent phenomena to see if they satisfy those conditions. So step one would be to identify the necessary and jointly sufficient conditions...

Interesting that we all (Q_Goest, octelcogopod, Doctordick & myself) seem to doubt that strongly emergent phenomena actually exist (Rade has not declared any beliefs one way or the other in this thread). Is there anyone who wants to defend the notion that strongly emergent phenomena exist?

Best Regards

11. Aug 8, 2006

Staff Emeritus
Finger, I am another non-believer in strong emergence, but I just wanted to comment on this

[quote="moving finger"]How can one set of phenomena be “determined by” another set of phenomena, and yet not be logically supervenient on that other set?

Can Chalmers, or anyone else, give examples of such strongly emergent phenomena (ones which fit his description)?[/quote]

This is a good point, and after reading a lot of defenses of strong emergence and downward causation, not just within the consciousness arena, I have yet to see any defender of SE really grapple with it. Either they just present it as a gulp-and-accept primary fact, with handwaving toward sand piles or such, or else they argue in effect that it's technically very difficult to derive the SE phenomena from the lower level ones and that personally THEY can't imagine any way to do it.

Generally speaking I consider folks like that, including Searle, and perhaps Chalmers, to be lacking in imagination and in comprehension of the sheer complexity of the world.

12. Aug 8, 2006

### moving finger

I agree 100% - and I think you've highlighted the real "hard problem" here - the fact that it is indeed often very difficult in practice to derive the emergent phenomena from lower level properties, and some people then jump to the conclusion that "oh! there must be a whole new physics in here!".

Basically the same problems underlie the understanding of causation vs correlation, and of understanding the "emergence" of responsibility within so-called "free agents" - as exemplified in the Quantum Mechanics and Determinism thread here : https://www.physicsforums.com/showthread.php?p=1056559#post1056559

There is no need for any new physics. There's just a need to let go of false intuitions and use common sense.

Best Regards

13. Aug 8, 2006

### Q_Goest

Hi MF.

I'm a bit confused by the use of the term "supervenient", but it seems understandable to me when read in context. I interpret Chalmers as saying that strong emergence postulates there being phenomena that can't, even in principal, be determined by the low level facts or the microstates as Bedau puts it. If this is true, then to maintain physicalism I guess we must postulate additional laws that might govern the interrelationship between the microstates and the system. Chalmers gives an example of what he means:

Note: I've included Chalmers' reference to strong downward causation because I honestly don't see a need to invoke strong emergence without it.

I think a potential explanation for strong emergence might arise from a discussion of multiple dimensions. The concept of more than 4 dimensions is a common one. From the perspective of the proverbial 2 dimensional ant crawling on a 2 dimensional plane, the 3rd dimension intersects that plane at an orthogonal angle - such that from the ant's perspective, there is no 3rd dimension and he has no reason to consider it.* The dimension makes no impact on the world. At least, that's what the ant thinks. The ant cannot see nor measure any 3rd dimension as it crawls around on this plane, and there is no way for the ant to detect this dimension, even in principal.

The fact that one cannot measure a dimension in any way may make some sense of strong emergence and also of strong downward causation. If your yardstick is made of n dimensions, it can't measure n+1 dimensions. If, however, there is another dimension, it is conceivable that it affects or is related somehow to the others.

Chalmers doesn't support this concept of course; he's only suggesting that there may exist higher level configurations which may require new physical laws, but I don't see that at any level above the quantum level. I could potentially accept such a concept at a molecular level. That is, perhaps some additional dimensions have a causal effect on molecules, and potentially those molecules then affect the overall system, but once we have a statistically large group of molecules that interact at a classical level, the outcome is essentially deterministic and governed only by weak emergence.

*The 2 dimensional ant exists in 2 linear dimensions and a time dimension, so actually it is a 3 dimensional ant, but here I've used length as a dimension as is often done for the ant analogy.

14. Aug 8, 2006

Consider the concept "cat". The concept can be viewed either as a "set" (e.g., the set of all cats) or as an element of that set (your pet cat Fluffy). IMO, strong emergence is nothing more than the common sense fact that what may be true about a set may be false (even meaningless) when applied to any element of the set. Thus, consider this statement about the concept "cat" -- it is one million years old. Is this not an example of strong emergence, a higher order phenomenon not possible for any single element of the set? -- for Fluffy may be old, but not that old. An example of weak emergence using the cat concept is this statement -- one half are female, for Fluffy must be either male or female. In this example the higher order phenomenon is thus deduced from basic principles concerning X and Y chromosomes, and so meets the definition of weak emergence. But perhaps I do not understand the motive for the division -- strong vs weak.

15. Aug 8, 2006

### moving finger

Hold on. This seems to directly contradict your earlier quote from Chalmers.

Unless there is some other strange interpretation of the verb “determined” that Chalmers is using here, this means that given antecedent “low-level facts” the “strongly emergent phenomena” arise as nomologically (if not logically) necessary consequences.

Your statement “strong emergence postulates there being phenomena that can't, even in principal, be determined by the low level facts” is thus in contradiction to Chalmers’ statement. If you actually mean “strong emergence postulates there being phenomena that can't, even in principle, be determinable by knowledge of the low level facts” then I would agree this is perhaps correct (but arguable) – because determinability (an epistemic property) is NOT the same as determinism (an ontic property). This once again gets back to the fundamental difference between ontic determinism and epistemic determinability – a recurring theme is so many threads!

This assumes the premise that consciousness "causes" wave function collapse is true – I don’t believe it is.

I tend to agree. Epiphenomena are pretty useless (hence may be ignored as any part of an explanation) by definition.

OK. What you’re saying here is basically that “there may be more laws of nature/physics than we are currently aware of” – and I wouldn’t disagree. But I wouldn’t call this any form of emergence – it only appears like emergence because we have limited knowledge of the underlying physics. If one were to educate the ant about the existence of this 3rd dimension he would presumably say (assuming ants are sentient and can communicate) “ahhhh, I see! That’s how it works” – he wouldn’t say “ohhh, that’s an emergent phenomenon”.

This is not in fact correct. Firstly, in reality the ant is aware of that third dimension. If it starts to measure distances (btw – it has been shown that ants CAN measure distances!), then it will find some very strange geometrical properties of its world (unless it is living on a truly flat plane with no topography), from which it could infer that there exists a 3rd dimension. Secondly, if you wish to imagine truly 2D beings then these beings would have no 3rd dimension at all – thus it would be impossible for them to physically “exist” in any real sense of the word existence.

Even if I were to allow that the ant is aware of only 2 dimensions, if there is no way for the ant even in principle to determine the existence of the 3rd dimension then in what possible way can the 3rd dimension have any impact (via downward causation) upon the ant?

How can it affect the others and at the same time we cannot in principle be aware of its existence? Could you give an example?

I accept there may be some “laws” of nature that we have not yet discovered. If this is all that Chalmers is getting at then I don’t disagree. But to jump from this to “strong emergence” or “downward causation” is (imho) an irrational and unwarranted “wrong-headed” approach. It’s not that higher dimensions have “causal effects” on lower dimensions, it’s that if there are higher dimensions then we will need additional “laws of physics” to explain how all dimensions (lower and higher) interact. To my mind these laws are in principle no more “inaccessible” than laws of quantum physics or relativity or cosmology.

I don’t see why this is “strong” emergence. The concept cat may be 1 million years old, I may not be able to determine how the concept cat arose in the first place (ie where the concept came from is not epistemically determinable), but if I believe in determinism then I simply say that this concept arose as a necessary consequence of the outworking of laws of nature plus antecedent states. I don’t see what emergence has to do with it.

It is logically possible that there be an unequal split in genders – indeed it is logically possible that (ie there exist logically possible worlds where) 99.999999% of cats are female. There even exist logically possible worlds where cats reproduce asexually.

Best Regards

16. Aug 9, 2006

### Doctordick

I simply cannot comprehend your failure to fathom what I said. I merely stated my example as I did to emphasize that the "complex distribution of a collection of positions" can display the specific details of absolutely anything, ranging from the exact details of every aspect concerning the intimate behavior of all arbitrary macroscopic groups of human beings together with their surroundings all the way to the very extent of the universe. And the behavior of it all can be represented by rotation of that polyhedron. From a very simple view emerge extremely complex phenomena.
Is that discontinuity real or merely a figment of your imagination?
I think you have gotten some very important things mixed up. You should have said, "your proof shows that absolutely any collection of three dimensional structures can be seen as a projection of the vertices of an n dimensional equilateral polyhedron with unit edges (the n dimensional version of an equilateral triangle) onto a three dimensional space." Think about what that sentence says carefully.
You would have to explain to me exactly where you find a difference in meaning between "shown to match reality" and "in fact" "match reality". I would normally take "shown to match" to mean that the match is a fact.
The problem with "string theory", as I understand it, is that, although it can produce mathematical relationships found in the experimental results, these relationships cannot be uniquely tied to real experiments. My simple constraints can be directly related to real experiments through analytical definition. I might comment that, in my opinion, if one cannot provide analytical definitions of the terms they use, they do not know what they are talking about; an analytic statement itself. That is exactly why I begin with "undefined sets" A, B, C and D: i.e., working explicitly with undefined things is the only way to talk about something without knowing what you are talking about.
I wouldn't! "IS" is a very strong statement no matter what it refers to and only serves a real purpose in an analytic truth (as per Kant's definition).
I have many times tried to create a little interest in my work and have yet to find anyone both educationally capable of following my arguments and emotionally interested in following them.
You must first define and defend the concept of "cause" before intelligently discussing a cause of any kind. In my opinion, "cause" is no more than the event preceding the event being explained by that cause: i.e., explanations (the methods of obtaining your expectations) introduce the concept of cause. Without explanations, the concept "cause" serves no purpose whatsoever.
I think the gaps in your understanding are a simple consequence of not thinking things out carefully - in particular, of chasing off after poorly defined concepts as if they were facets of reality which require explanation. You need first to be very careful as to what you are talking about.
That is exactly what is presented in the post; however, you seem not to be able to follow the steps of the proof.
You "see" things in your imagination. Whatever it is that you see, in most normal human beings, it is rendered as things dispersed in a three dimensional space which change in various ways as time passes. Theories are hypotheses as to "why" things appear as they do, not proofs of what is! You are a very confused person.
It is quite simple. The presumption is that there is a fundamental law of the universe which requires many many variables to express. First, it is a "fundamental" law in the sense that it expresses a relationship inherent in the universe which is not a consequence of the collection of other fundamental "laws". And second, as expression of this relationship requires many many variables, the existence of the law has no observable consequences until that required collection of variables is under consideration. Paul and others would like to define consciousness to be such a collection, thus introducing a new "fundamental law" to explain the observed behavior.

The fundamental problem with such a concept is that it must be possible to communicate an explanation of the concept to another or it is useless. That is why my analysis of "an explanation" in terms of undefined fundamental entities A, B, C and D still applies. And further, as utterly no causality is required to explain any distribution of fundamental entities (other than "they must be different", enforced by the Dirac delta function, and the set D, that which is hypothesized to exist) in order that the observed physical laws between two fundamental elements be what is physically observed, there exists no evidence for any physical laws outside our imagination.
And I agree with you one hundred percent.
You are exactly right. The "hard problem" is solving any many body problem. I would point out to you that physics is notoriously lacking in analysis of many variable systems. Newtonian mechanics is quite easy to solve for "one" body problems (so long as the forces on that lone body can be expressed) and for "two" body problems so long as those two bodies are the sources of all significant forces (i.e., cases where the problem can be reduced to a one body problem via conservation of center of mass momentum) but general three body problems can only be solved through numerical approximation or for very special cases. What I am trying to point out is that many variable systems are, in general, very difficult to solve and determining the correct emergent behavior (except for something as simple as random gas) is actually very very difficult.
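To make the many-body point concrete, here is a toy sketch (my own, with made-up masses, positions and G=1 units): a three-body gravitational system has no general closed-form solution, so one can only step the micro-dynamics forward numerically.

```python
# Toy three-body gravity in 2D (illustrative values only, G = 1 units).
# No closed-form orbit exists for the general case, so we integrate numerically.

def accelerations(pos, masses, G=1.0):
    """Pairwise Newtonian gravitational acceleration; pos is a list of (x, y)."""
    acc = []
    for i, (xi, yi) in enumerate(pos):
        ax = ay = 0.0
        for j, (xj, yj) in enumerate(pos):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            r3 = (dx * dx + dy * dy) ** 1.5
            ax += G * masses[j] * dx / r3
            ay += G * masses[j] * dy / r3
        acc.append((ax, ay))
    return acc

masses = [1.0, 1.0, 1.0]
pos = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vel = [(0.0, 0.1), (0.0, -0.1), (0.1, 0.0)]
dt = 0.001

# Semi-implicit Euler stepping: update velocities, then positions.
for _ in range(200):
    acc = accelerations(pos, masses)
    vel = [(vx + ax * dt, vy + ay * dt) for (vx, vy), (ax, ay) in zip(vel, acc)]
    pos = [(x + vx * dt, y + vy * dt) for (x, y), (vx, vy) in zip(pos, vel)]

# Internal forces are equal and opposite, so total momentum is conserved
# (up to floating-point error) even though the individual orbits are chaotic.
px = sum(m * vx for m, (vx, vy) in zip(masses, vel))
py = sum(m * vy for m, (vx, vy) in zip(masses, vel))
assert abs(px - 0.1) < 1e-9 and abs(py) < 1e-9
```

The conserved total momentum is an example of a macro-level regularity that is fully fixed by the micro rules, even though the detailed trajectories can only be approximated.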

By the way, I can show that all one body problems (and that would include reducible two body problems) and random gas problems can be accurately modeled by that revolving n dimensional polyhedron so what evidence is there that the observed "emergent" behavior is not also so modeled?
You are quite correct. In the same vein, everyone seems to miss the fact that "every" physical measure (as opposed to selfAdjoint's reference to Lebesgue measure, which is an analytic concept) must be established via references to defined "physical" phenomena internal to the universe under consideration. That fact has some very profound consequences usually missed by everyone.

Have fun -- Dick

17. Aug 9, 2006

Well, no, theories are not hypotheses; a theory FYI is an "explanation" (of facts, hypotheses, laws), not a hypothesis (we can call this Dr Dick's confused lapse #1); I see many things (as do most normal human beings) with my eyes (there is a name for this phenomenon btw--perception), not my imagination (DD confused lapse #2); I cannot see with eyes nor imagine the simultaneous phenomenon of position and momentum of a quantum particle--if you can, please share, as then you can publish your falsification of the Heisenberg Uncertainty Principle (DD confused lapse #3); I never stated that a theory was a "proof"--in fact I did not even use the word in my question to you that you have no idea how to answer (DD confused lapse #4). Confusion indeed in this thread.

18. Aug 9, 2006

### Q_Goest

Hi MF.
One definition of "strong emergence" per Chalmers.
Chalmers also states numerous times that he believes consciousness is strongly emergent, and Chalmers is also a computationalist. Although the mechanism that produces the "high-level phenomenon", as Chalmers puts it, is completely deterministic (any computational mechanism is completely deterministic), he's suggesting that the phenomenon of consciousness is something which can't be deduced in any way from examining the operation of the computer's parts. So I don't think I've misquoted him when I say:
Perhaps the term "determined" is underspecified for a philosophical discussion like this. I mean they can't be figured out or understood, not that they aren't deterministic. We can have perfect knowledge of all the parts, and the phenomena could still not be understandable, even in principal. In the case of a computational machine, strongly emergent phenomena can obviously be created by the deterministic actions of some mechanism.

The really crazy part of all that is someone wanting to suggest there's something more, an add-on, something that is created which we have no need to suspect exists and no way of measuring. Further, it exists without any causal efficacy. This is the computationalist's position. It is a belief, not unlike a religion, which says we need to accept that something along the lines of a strongly emergent phenomenon must be created which has no causal efficacy, can't be understood even in principal, and arises from the actions of numerous, deterministic, knowable parts which we can easily duplicate, simulate, and know everything about - except we can never understand anything about the subjective experience it creates. If one accepts computationalism, you are essentially forced into believing strong emergence exists. I see no way around it, as Chalmers obviously has also concluded. I'd like to know how one can avoid that conclusion.

I'm suggesting that in order for any kind of strong emergence to make sense, we need to step away from our common perceptions of the "laws" of nature and physics. Here you've suggested we can know them. However, if we can't measure something, I'm suggesting we can't know what makes it work, even in principal. Not from this perspective anyway - the perspective of a conscious human living in a 4 dimensional world. Thus, the ant would never be able to say, “ahhhh, I see! That’s how it works”. He would say, "ohhh, that’s an emergent phenomenon”. Why? Because he can't measure a dimension he isn't privy to, despite the fact it may affect him in some way.

Regarding the two dimensional ant analogy, my apologies, I thought you'd have heard it before. It's a very common analogy used in physics to describe additional dimensions: because we live in a 3-d world, it's easier for us to visualize a world with one less dimension as opposed to one more dimension. So yes, the ant is a 2-d ant, not a real 3-d ant that crawls around your yard. Here are a few examples:
http://d0server1.fnal.gov/users/gll/public/edpublic.htm

This gets back to the one example given by Chalmers where he suggests wave function collapse might be an example of downward causation. I'm actually modifying Chalmers a bit and suggesting that this wave function collapse might be the action of another dimension which acts through conscious phenomena. Downward causation in this case is the influence of this other dimension, which has causal efficacy through the strongly emergent phenomenon of consciousness. I don't think this is totally untalked about in the physics community. I might have to look for specific examples, but I'd say this concept is not new - although my use of terminology may be a bit unique here (ie: the use of the terms "downward causation" and "strong emergence" in conjunction with the more common suggestion that consciousness may have some causal efficacy over wave function collapse, about which there have been many discussions within the physics community).

Yes, he's saying there are additional 'natural laws' which we might potentially uncover, but note that these are NOT reductionist type laws as we've already uncovered. I opened the other thread regarding FEA to point out that physical laws at the classical level can all be seen to be reductionist type laws, laws of cause and effect at a local level. That's exactly what strong emergence and downward causation are NOT. We can model anything at a classical level assuming only cause and effect or reductionism. We can't do this at the molecular level, but I don't think anyone can really say why.

I think this has been the approach all along, but as far along as we are in being able to 'calculate' quantum phenomena, we have absolutely no philosophy for what it means, and no way to exactly determine some phenomena such as radioactive decay. It may be that such things are impossible to determine because we simply do not occupy the other dimension, and thus do not have tools with which to measure it. I'm sure you'll think that if we DID then we COULD and everything would be DETERMINISTIC. That's overly simplistic though. It doesn't look at the reality of trying to measure an orthogonal dimension if you don't have tools which can reach that dimension. We can't create ideas or knowledge around things which are not accessible to us.

Where did you get this? I've not heard of anyone suggesting this before.

19. Aug 13, 2006

### moving finger

I don’t see how you get from this to the conclusion that determination does not entail supervenience. By definition, if X determines Y, then Y is supervenient on X. In other words :

Supervenience is the relationship between two sets X and Y (usually sets of properties or propositions), where fixing one set -- the supervenience base -- fixes the other -- the supervening set.

Determinism is the relationship between two states of the world X and Y where fixing one state – the antecedent state – fixes the other – the consequent state.

How can it be that determinism does not entail supervenience?
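One way to see the entailment (a toy sketch of my own, not from Chalmers): treat the macrostate as any function whatsoever of the microstate. Then fixing the microstate fixes the macrostate, which is just the supervenience relation described above.

```python
# Toy model (illustrative only): the macrostate as a function of the microstate.
# If the macrostate is *any* deterministic function f of the microstate, then
# two systems with identical microstates must have identical macrostates --
# which is exactly the supervenience relation.

def macrostate(microstate):
    """A hypothetical coarse-graining map: here, just the sum of the parts
    (any deterministic function of the microstate would make the point)."""
    return sum(microstate)

m1 = (1, 4, 2, 7)
m2 = (1, 4, 2, 7)   # identical microstate...
assert macrostate(m1) == macrostate(m2)   # ...forces an identical macrostate

# The converse need not hold: distinct microstates can share a macrostate,
# so fixing the macro level does not fix the micro level (multiple realizability).
m3 = (7, 2, 4, 1)
assert m3 != m1 and macrostate(m3) == macrostate(m1)
```

The second assertion is why supervenience is weaker than identity: the macro level is fixed by the micro level, but not vice versa.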

OK, I can go with this.

OK, and I can go with this too, up to a point (but not because I agree with Chalmers’ ideas about consciousness). Any particular conscious state S is a unique system configuration, with no other conscious state perfectly identical to it, and as such S will have many “unique” properties which are determined by the particular configuration of S. The more interesting properties as far as consciousness is concerned are the self-referential ones – the properties of the conscious state S as judged by the consciousness itself. Now since each conscious state is unique it follows that there is indeed in principle no way (as long as we analyse finely enough) that an external observer can deduce the self-referential properties of any particular conscious state S from knowledge of the microstates. This simple fact (ie that the content of consciousness is not epistemically determinable by an external observer) is the one that Chalmers cannot accept, and upon which his argument for a "whole new physics" is based. He's misguided.

I think this agrees pretty much with what I have said above.

The use of the phrase “can't be determined by” in the above is misleading and strictly incorrect, since it possibly implies a reference to lack of ontic determinism. If determinism is true (and we do not know if it is or not), then all phenomena are determined. This applies to my example above of the impossibility of deducing the self-referential properties of any one conscious state S when one is an observer external to that conscious state S. Just because we cannot deduce the self-referential properties, it does not follow that these properties are not (ontically) determined by low-level facts – it simply means that the phenomena are not (epistemically) determinable.

I think it would be correct (therefore better) to use “determinable from” rather than “determined by”, in which case the wording becomes :

“strong emergence postulates there are phenomena that are not, even in principle, determinable from knowledge of the low level facts or the microstates”

(if we include in phenomena “subjective conscious experiences” and we assume that “determinable by” means epistemically determinable by an agent external to the conscious experience in question.)

This reflects the fact that the lack of determinability is an epistemic obstacle rather than an ontic one.

I answered the above before I read this bit. We think alike.

In which case I suggest that instead of using “determined by” (which may be incorrectly interpreted in the strict ontic causal determinism sense), it would be better to use “determinable from” – to ensure that we all understand we are talking of limits to epistemic determinability here as opposed to limits to ontic determinism.

Agreed, if we substitute “completely knowable” for “understandable” – there are senses in which I can claim to understand something without knowing all the details of that thing. (btw – I think the word you want is principle – a principal is something different). Thus we need to make sure we are clear in referring not to determinism but to determinability.

Agreed

Is this Chalmers’ position?

With some slight changes in the wording, I would agree with all of the above EXCEPT the part about lack of causal efficacy. Is the computationalist necessarily committed to believing that certain phenomena are epiphenomena? I’m not sure that follows.

What I am saying is that I agree certain phenomena, such as subjective conscious experience, can be classed as strongly emergent, by virtue of the fact that the precise details of those phenomena are not determinable by any external agent. But it does not follow from this that the phenomena in question are not causally determined by “low level facts”, and it also does not follow that they are epiphenomena.

Everything we think we “know” about the world is based on inferences made from assumptions. We can in principle infer properties of other dimensions even when we have no direct access to those other dimensions. It’s really no different in principle to inferring the structure of the atom when we have no direct (in the normal sense of the word) access to the interior of the atom, or inferring the temperature of the interior of the sun when we have no direct access to the interior of the sun. Just because we cannot measure the interior temperature of the sun directly does not mean that this temperature is an emergent phenomenon.

I disagree. Imagine the ant is living on the surface of a very large sphere (but doesn’t know it), and imagine the ant is intelligent and starts investigating trigonometry. He eventually discovers a "law" which says that the internal angles of a triangle always add up to 180 degrees. As long as his triangles are very small in relation to the sphere, he won’t find any significant discrepancy with this "law". But if he makes a very large triangle, one that is of the same order as the radius of the sphere, he will find some very strange results. If he is a very intelligent ant he may be able to deduce that the "triangle law" assumes flat Euclidean space, and one possible explanation for his strange results is that he is not in fact living in such a flat space. He could even estimate the radius of the sphere on which he is living from his measurements, even though he is restricted to working (and experiencing directly) just two dimensions. Having done all this, the ant would indeed say “ahhhh, that’s how it works!”. Nothing at all to do with strong emergence.
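The ant's inference can be sketched numerically (all numbers here are made up for illustration). Girard's theorem says a spherical triangle's area equals R² times its angular excess (the amount by which the angle sum exceeds π), so measured angles plus measured area yield the radius of a dimension the ant cannot directly access.

```python
import math

# Girard's theorem: on a sphere of radius R, a triangle's area = R**2 * E,
# where E (the "spherical excess") is the angle sum minus pi.
# An ant who can measure angles and areas can therefore estimate R.

R_true = 100.0  # the sphere's radius, unknown to the ant

# Take the octant triangle: three right angles, one eighth of the sphere.
angles = [math.pi / 2] * 3
area = 4 * math.pi * R_true**2 / 8

excess = sum(angles) - math.pi          # E = pi/2 for the octant
R_inferred = math.sqrt(area / excess)   # invert Girard: R = sqrt(area / E)

assert abs(R_inferred - R_true) < 1e-9  # the ant recovers the hidden radius
```

The point is the one made in the text: an in-principle inaccessible dimension can still leave measurable traces (here, the angular excess) from which its geometry is deducible.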

OK. But I don’t accept that consciousness causes wave function “collapse”, either in our familiar 3 spatial dimensions or via any extra dimensions. How can consciousness be emergent and at the same time be responsible for wave function collapse? You have a chicken-and-egg situation there - for consciousness to emerge in the first place there must be wave function collapse, but there also needs to be consciousness around in order to cause wave function collapse.....? Which comes first? It seems to me that one can postulate either that consciousness is emergent, or that consciousness causes wave function collapse, but the two together is a contradiction.

I disagree. Strong emergence (as discussed above) has nothing necessarily to do with “non-reductionist type laws”. I believe we can have strongly emergent phenomena (consciousness is an example we have discussed) in the presence of strict determinism (obeying reductionist laws), but I don’t see any evidence here for downward causation in the sense of “non-reductionist type laws”.

We therefore need to be very careful when lumping strong emergence and downward causation together – because they are very different things.

Agreed – but this may simply be due to lack of epistemic determinability, not necessarily lack of ontic determinism.

We have plenty of philosophies (which attempt to explain what is going on in QM) – the problem is in deciding which one to choose. The physicist says that we cannot know which one is correct, so just shut up and calculate (SUAC). The philosopher is not happy with that state of affairs, but she cannot find any way to show which one, if any, of the many different interpretations might be the correct one.

Either everything is deterministic or it is not – the truth of determinism is not a function of what we do know or even can know about the world. I think the word you mean to use here is “determinable” – and no, I don’t think that everything would or even could be determinable, even if we could occupy additional dimensions. The issue is not that we are restricted to particular dimensions, the issue is that we are part of the problem we are trying to solve. We cannot “step outside” the system and examine it objectively from the outside.

Max Planck summed it up very well when he said: "Science cannot solve the ultimate mystery of nature. And that is because, in the last analysis, we ourselves are a part of the mystery that we are trying to solve."
I’ve shown exactly how it can be done with the ant example above.

Of course we can (we just don’t know for certain if our ideas are correct or not – but that’s true of almost everything). The interior of the sun is not accessible to us, but we can create a very detailed theory of what is going on in there. The interior of the atom is not accessible to us, but we can create a very detailed theory of the sub-atomic realm. The "past" is not accessible to us, but we can construct detailed histories right up to the Big Bang. The important point here is that one does not need direct access to something in order to model it - one needs only to be able to obtain some information relevant to that thing so that one can construct models. This is just what the ant on the sphere does - she does not have direct access to the 3rd dimension, but she can still construct a mathematical model of the shape of her world in that 3rd dimension. We could do the same for the 4th dimension if our 3D world was curved in the 4th dimension.

The main point I would like to make is that it is NOT simple black and white. It is not the case that we cannot model realms which are not directly accessible to us (even the interior of consciousness). Instead it is the case that we have access to a limited amount of information that allows us to construct certain models describing those realms (the interior of the atom, interior of the sun, history, consciousness, and the ant's 3rd dimension). Those models are not complete, I agree, they are missing some information. But this is not the same as saying categorically that we can't create ideas about those realms.

This follows quite logically from the observations above about the emergence of consciousness. Since any particular conscious state S is a unique system configuration, with no other conscious state perfectly identical to it, there is in principle no way (as long as we analyse finely enough) to deduce the self-referential properties of S when one is an observer external to that conscious state. Thus we (as external observers) cannot hope to deduce the subjective properties of S by simply displaying some objectively measured information about S on a monitor.

Best Regards

Last edited: Aug 13, 2006
20. Aug 14, 2006

### Q_Goest

Computationalism requires Strong Emergence

I believe Chalmers is correct in arguing that consciousness is a strongly emergent phenomenon. He also correctly concludes that if computationalism is to be accepted as the paradigm for consciousness, computationalism must assume strong emergence. Further, it seems as if strong emergence must imply irreducible laws of physics are at work - laws very much unlike the laws we're accustomed to seeing at a classical level. These seem to be laws that govern large assemblages of matter, not the local cause and effect we see at the micro level. It's a rather unsettling view, one I feel shines a rather disheartening light on computationalism.

First, we have to understand weak emergence. Bedau provides an excellent discussion of this, and the quote from his paper in the OP should be sufficient to provide us a definition. In that paper, he takes the game of Life as his example and explains in detail how the game is weakly emergent. We can know everything about the game of Life if we simulate it. We can see "gliders" take off, we can see other patterns emerge, and every phenomenon that comes out of these patterns is understandable and predictable (in principle) from knowledge of the rules of the game and the actions of the parts. There is nothing "extra", nothing unknowable or not understandable about this game, even though it may be highly unpredictable. We can see how each pixel changes and how an image of a "glider" appears from the action of the microdynamics of the elements, which are completely dependent on the rules of the game. We wouldn't say the game of Life is "conscious" or has subjective experience, nor would we say there is some phenomenon about this game that can't be understood. To do so would be to propose that the game had some EXTRA quality or property which had nothing to do with the microdynamics of the microstates and which would also be completely unknowable even in principle. If someone were to suggest the game creates the phenomenon of zortnore, but that we can't see this phenomenon simply by looking at the parts, we might suggest this person seek professional help!
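To make the point concrete, here is a minimal sketch (not Bedau's code, just an illustration) of the game of Life: each cell's next state depends only on its eight neighbours, yet the "glider" macrostate reappears, shifted, every four ticks. The coordinates and helper names are my own choices.

```python
from collections import Counter

def step(live):
    """Apply the Life rules once to a set of live (x, y) cells."""
    # Count live neighbours of every cell -- a purely local rule.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is live next tick iff it has 3 live neighbours,
    # or it is currently live and has exactly 2.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# The classic glider; after 4 ticks the same shape reappears,
# translated by (1, 1) -- a macro-level regularity produced
# entirely by the micro-level rule above.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Nothing about the glider's motion is "extra": it is derived, in principle and in practice, by iterating the local rule.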

Chalmers recognizes Bedau's work toward the end of his paper when he writes:
This is key to understanding the problem, because here we see that Life is completely knowable and has no additional features, nor does it create any phenomenon that isn't completely understandable by examining the rules and running a simulation.

But then Chalmers notes another example of "weak emergence":

He doesn't say much about this, but what he's implying is, I believe, fairly straightforward. Any computer network has "simple interactions between simple threshold logic units": for example, the interactions of switches, which are logic units that operate only above a certain threshold voltage. The interaction of any large set of switches is determined strictly by the application of power to a control wire on the switch, which then operates the switch. Thus, the high-level 'cognitive' behavior which emerges is seen to depend on the interaction of the switches. The microdynamics of any set of switches is certainly knowable, and thus how that set of switches behaves is understandable by examining the microdynamics. Everything we want to know about the seemingly 'cognitive' behavior which emerges from computations, whether or not they are consciously aware, can be understood simply by examining the microdynamics of these "threshold logic units".
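A hedged sketch of what a "threshold logic unit" amounts to: a unit outputs 1 iff its weighted input sum meets a threshold. The particular weights, and the choice of building XOR out of NAND units, are my own illustrative assumptions, not Chalmers' example.

```python
def unit(weights, threshold):
    """A threshold logic unit: fires iff the weighted sum of its
    inputs reaches the threshold."""
    return lambda *inputs: int(
        sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# A NAND gate realised as a single threshold unit:
# it fires unless both inputs are 1.
nand = unit((-1, -1), -1)

# NAND is functionally complete, so any higher-level Boolean
# behavior can be wired up from this one local rule; here, XOR.
def xor(a, b):
    m = nand(a, b)
    return nand(nand(a, m), nand(b, m))

print([xor(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

The "high-level" XOR behavior is fixed entirely by the local threshold rule and the wiring, which is exactly the sense in which such systems are only weakly emergent.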

Note the strong similarity between Chalmers' examples A and B. The game of Life has pixels, the computer has switches. The pixels operate according to local interactions, just as a switch does. Both are reducible.

We don't need to suggest that the computer has anything extra, some special phenomenon which we can't understand from observing the interaction of the switches. The behavior of such a computer is specified completely by examining the microdynamics of the system. If we didn't claim there was anything more, we might call such a computer a p-zombie, because although it might behave exactly as a person would, it would have no subjective experience; it would not have that "something more", some phenomenon which can't be deduced even in principle by examining the parts. Why can't we understand it? That should be fairly obvious now: subjective experience requires something more than simply observing this interaction of the parts. We are at a loss to say what subjective experience the computer is having, and we can't appeal to such things as "self-referential properties", because to suggest such properties explain anything presumes they exist because of the interaction of these switches. It is impossible to understand or deduce in any way the subjective experiences of "red", "pain", or "zortnore" from the interactions of the switches.

Chalmers is a computationalist, and if you've read much of his work, you'll find he is driven to find a way around the difficulties and allow computationalism to move forward as the paradigm for consciousness, very much like Dennett. And so, once he recognizes that computationalism is in trouble because it can't explain subjective experience through reductionist-type laws, he looks to "strong emergence" for a way out.

Chalmers recognizes the problems such an issue may raise, and tries to find a way around it:
Further, Chalmers contrasts the game of Life with the COBOL system and notes that "all the complexity of the high-level behavior is due to the complex structure that is given to the low-level mechanisms." Throughout the paper, Chalmers refers to some connection between low- and high-level properties, and this is consistent with others who discuss strong emergence. Here's another quote: "Still, this suggests the possibility of an intermediate but still radical sort of emergence, in which high-level facts and laws are not deducible from low level laws."

Certainly, one has to accept that strong emergence of the type proposed by Chalmers and others cannot be seen in relatively small, simple systems. Strong emergence is characterized by large and very complex systems, and cannot come about in smaller, less complex ones. Chalmers provides ample discussion of this using his COBOL example.

So if these new physical laws can't find any place in simple systems, and only emerge within complex ones, then any strongly emergent phenomenon must be characterized by physical laws which govern large, complex systems rather than the local interactions of numerous parts. Such laws sound irreducible to me: they would apply only to large, complex, high-level systems of particles, so they could not be reduced, and reductionism would have to fail.