Analytical and Physical Separation Below the Mesoscopic Scale

  • #1
Q_Goest
There seems to be a fundamental issue that prevents one from applying truly reductionist concepts below the mesoscopic scale, the scale of molecules and molecular interactions. Note that we could also look at this scale separation as being at the level between classical mechanics and quantum mechanics.

I'm going to separate the concept of physical reducibility from that of analytical reducibility and look at how each differs above and below this mesoscopic scale. I'm interested in feedback on concepts used in physics, especially below the mesoscopic scale, that would help flesh out an argument which essentially says that there is both a physical and an analytical separation in the world at this scale. Above this scale we can easily reduce any system, but below it systems are not reducible in a truly reductionist sense.

Physical Reducibility - above mesoscopic scale
Physical reducibility would be the ability to separate things physically into small chunks or volumes while maintaining some kind of causal effect on the volume which is independent of that effect's source. In other words, physical reducibility is obtained by replacing the causal effects that act on some finite volume in one physical system with equal causal effects, as is done in a lab to reproduce the effect within some experimental volume. This can be as simple as reproducing on a bench top a chemical reaction that would otherwise occur elsewhere, such as deep in the ocean or within a chemical reactor in a refinery. Another example might be a fatigue test on a piece of aluminum. We don't need to put an engine bracket from an aircraft into the actual aircraft and fly it around for thousands of hours under conditions of low temperature or salt spray to understand whether it will crack; we can do this in a lab, in a test chamber. Such classical mechanical interactions can all be seen to be physically reducible simply by taking some small volume of a system and subjecting it to equivalent causal actions. The main reason we can duplicate a given volume like this is because the behavior of anything at the classical scale is due to an aggregate of molecules or atoms.

Physical Reducibility - below mesoscopic scale
On the other hand, we can't do this with molecules or atoms. For example, we can't physically separate out the nucleus and see how one atom might react to another by subjecting the nucleus to some kind of electric field that simulates the electrons. We can't physically separate the nucleus from the electrons in an atom and reproduce the behavior of one part by subjecting it to causal actions, so that its interaction with other atoms could be duplicated. Similarly, I believe it is impossible to separate out matter and apply causal actions on molecules the way we do to large objects. We could take C2H6, for example, and end up with CH4 by cutting the carbon-carbon bond (and capping the fragment with hydrogen), but I don't know how we might then make the CH4 molecule 'act like' a C2H6 molecule. I don't believe we can physically simulate one part (part A) of a molecule by removing part of it and subjecting it to intramolecular forces identical to those that part A would have been subjected to had it actually been attached to the remainder of the molecule.

So for physical reducibility, we have the ability to reduce a system that operates at a classical scale but we don’t have the ability to reduce such a system below roughly the mesoscopic scale. Would you agree with this or not? How would you argue that such a physical separation exists? Any papers that might address this would be appreciated.

Analytical Reducibility - above mesoscopic scale
The second part of this is what I'll call "analytical reducibility". Just as classical mechanical structures such as an aircraft or a chemical refinery can be physically reduced, we have concepts that allow analytical reduction. We can cut any classical-level object up and apply FEA (finite element analysis) or CFD (computational fluid dynamics) to it, as is commonly done in engineering. In basic physics courses, we learn concepts such as "free body diagrams" which allow us to separate out a truss in a bridge, for example, and apply forces and moments to the various points on the boundaries so that the remaining portion of the truss can be analytically solved for forces and moments. I'm unaware of any system above the mesoscopic scale that we can't analytically reduce in this way.
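To make the free-body-diagram idea concrete, here is a minimal sketch (the member angles and the load are invented purely for illustration): cut a single pin joint out of a truss, replace everything outside the cut with the unknown member forces, and the joint can be solved entirely on its own.

```python
import numpy as np

# Free body diagram of a single pin joint cut out of a larger truss.
# Two members meet at the joint at known angles; an external load P acts on it.
# Everything outside the cut is represented only by the two unknown member forces,
# so the joint can be solved in isolation.  (Geometry and load are made up.)

theta1 = np.radians(30.0)   # angle of member 1 from horizontal
theta2 = np.radians(135.0)  # angle of member 2 from horizontal
P = 10.0e3                  # downward external load on the joint, N

# Equilibrium of the isolated joint:
#   F1*cos(t1) + F2*cos(t2) = 0
#   F1*sin(t1) + F2*sin(t2) = P
A = np.array([[np.cos(theta1), np.cos(theta2)],
              [np.sin(theta1), np.sin(theta2)]])
b = np.array([0.0, P])

F1, F2 = np.linalg.solve(A, b)
print(f"Member 1 force: {F1/1e3:.2f} kN, Member 2 force: {F2/1e3:.2f} kN")
```

Everything outside the cut enters only through those two member forces, which is exactly the "boundary conditions on the cut" move described above.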

Analytical Reducibility - below mesoscopic scale
Below the mesoscopic scale, I believe things get a bit more tricky. I don't believe a molecule can be reduced to its constituent atoms, but I would like to hear comments on whether it can. Can one, for example, reduce the CH4 molecule to the equations governing those atoms, and then place some other boundary conditions on that model such that we have duplicated a C2H6 molecule? Another example might involve amino acid sequences that fold. Can one determine the bending moment at the fold simply by reducing all the atoms on one side of the hinge to a single set of equations such that the analysis of the folding point is accurate, or do we actually need to have all atoms calculated on both sides to understand how and where it will fold? Are there techniques that conceptually reduce a molecule at an arbitrary cut, similar to a free body diagram? Can we argue that there is no analytical method to create 'boundary conditions' for parts of molecules or between molecules, and if not, why not? Where is the analytical separation, if there is one?

I've read Laughlin's paper "The Middle Way" which addresses some of these concerns indirectly. I've also read similar papers but nothing that really touches directly on the questions above. Suggestions for papers like these would be appreciated.
 
  • #2
I think I've found the answer to the question about analytical reducibility below the mesoscopic scale, which is perhaps the most important part, from reading this:
There have been, at times, heroic attempts to find tractable approximations to solve the Schrodinger equation for larger and larger molecules. We’ve also realized that the Schrodinger equation is actually an insufficient principle to do chemistry because in the end, the Schrodinger equation is a single electron equation. Reality is made up of more than 1 electron. However, there does not exist an authoritative multi-electron theory, there are several competing multi-electron theories, from quantum field theory, fractional electron solids, Wilson effective field theories, to the Walter-Kohn approximations.

Quantum chemistry is a demandingly technical field where computers are pushed to the limit, in order to calculate the detailed properties of molecules on an electronic level. Every electron is modeled as a wave function, a kind of diffuse cloud of charge that oozes into 3-dimensional space. As a rule of thumb, there are as many electrons in an atom as the number of the atom in the periodic table. The complexity quickly blows up in your face.
Ref: http://boscoh.com/protein/the-schrodinger-equation-in-action

The answer to my question seems to lie in quantum chemistry, a field I have no knowledge in. It sounds like the Schrodinger equation must be applied to every electron in order to calculate how molecules can react to each other, or even to calculate certain molecular properties. But to use the Schrodinger equation on every electron in a molecule means there is no way to reduce a molecule further, even in an analytic sense. The entire molecule must be considered, in its exact configuration, in order to determine molecular properties and how it might interact. Is that correct? Are there alternative, equally accurate methods in which one need not consider every particle in a molecule in order to calculate how it will react with other molecules or to determine molecular properties?
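To get a feel for why the complexity "blows up", here is a back-of-the-envelope sketch of my own (not from the linked article, and ignoring spin, symmetry, and every clever approximation): storing the full N-electron wavefunction on even a very coarse real-space grid grows exponentially with N.

```python
# Rough cost of storing a full N-electron wavefunction psi(r1, ..., rN)
# sampled on a coarse real-space grid.  Each electron contributes 3 coordinates,
# so the grid has n_points**(3*N) entries.  Numbers are purely illustrative.

n_points = 10             # grid points per coordinate axis (very coarse)
bytes_per_amplitude = 16  # one complex double per grid value

for n_electrons in (1, 2, 5, 10):   # 1 = H, 2 = He, ..., 10 = all electrons in CH4
    entries = n_points ** (3 * n_electrons)
    print(f"{n_electrons:2d} electrons: {entries:.3e} grid values "
          f"(~{entries * bytes_per_amplitude:.3e} bytes)")
```

Already at 10 electrons (methane) the naive count is of order 10^30 values, which is the sense in which "the complexity quickly blows up in your face".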
 
  • #3
You seem not to be aware of the existence of many-body physics. How do you think we solved many-body systems such as superconductivity, fractional quantum hall effect (i.e. what is the Laughlin wavefunction?), etc.? The whole field of condensed matter deals with many-body systems as complex as quantum chemistry. And Laughlin is a condensed matter physicist.

Zz.
 
  • #4
How do you think we solved many-body systems such as superconductivity, fractional quantum hall effect (i.e. what is the Laughlin wavefunction?), etc.?
If I knew, why would I be asking? Do you really think I want a rhetorical question as a response?
 
  • #5
That wasn't a rhetorical question. I asked because you were reading one of Laughlin's papers, and he is a condensed matter physicist. In fact, he mentioned several aspects of condensed matter in "The Middle Way" paper. That's why I was puzzled that you didn't catch it.

Laughlin's Nobel prize speech might be something you want to read to familiarize yourself with emergent properties in condensed matter. Considering that the largest number of practicing physicists work in this area of study, it would, I think, be a considerable omission not to consider this field as an example of such emergent properties, and of how people like Laughlin, Anderson, and David Pines have continuously argued against reductionism.

Zz.
 
  • #6
Z, Thanks for the suggestion. I believe this is the speech you’re referring to:
http://nobelprize.org/nobel_prizes/physics/laureates/1998/laughlin-lecture.pdf

I’ve read through it, but unfortunately fractional quantization and the fractional quantum hall effect are too far from my area of expertise to understand very well. Nevertheless, the paper brings out the typical problems I see between reductionism and emergent phenomena.

Though Laughlin never defines what “emergent” means to him, and he doesn’t specifically say what type of emergence he’s thinking of, philosophers might be tempted to call it “strong emergence”. But it seems to me that physicists have a slightly different meaning for this term which is based on a system's reducibility.

Of those who have tried to define the concept of emergence, I like Mark Bedau's paper "Downward Causation and the Autonomy of Weak Emergence" best, as it is the most direct and understandable. Nevertheless, I don't agree with everything he says. Bedau is a professor of philosophy at Reed College. His home page is here:
http://people.reed.edu/~mab/
and the particular (unpublished) version of his paper is here:
http://people.reed.edu/~mab/publications/papers/principia.pdf

In this paper, he talks about three types of emergence: nominal, weak, and strong.
- Nominal is his acknowledgment to boneheaded philosophers who want a term for the emergence of a circle from points equidistant from a single location. Let's ignore this one.
- Weak emergence is also very bland, but it is “all the emergence to which we are now entitled”. Weak emergence is the reductionist version or engineering perspective of classical mechanics. It requires that “macro causal powers are wholly constituted and determined by micro causal powers”. It also says that such phenomena can only be deduced through simulation. He uses “The Game of Life” as an example of weak emergence. (Note: this is roughly equivalent to Stapp’s definition of classical mechanics quoted below.)
- Strong emergence “is scientifically irrelevant” because it “... adds the requirement that emergent properties are supervenient properties with irreducible causal powers” meaning a given object as a whole has an effect on the individual parts. “This is the worry that emergent macro-causal powers would compete with micro-causal powers for causal influence over micro events, and that the more fundamental micro causal powers would always win this competition”.

The vast majority of philosophers and many scientists would probably agree with this set of definitions. The philosophical literature is replete with similar definitions. But it’s the definitions that I believe are at fault. Here’s where I’d like some feedback. . .

Note that Laughlin’s Nobel Lecture states, “Superfluidity, like the fractional quantum Hall effect, is an emergent phenomenon – a low-energy collective effect of huge numbers of particles that cannot be deduced from the microscopic equations.” Does this sound like weak or strong emergence? He goes on to describe solitons in polyacetylene and equation 1 shows an equation which requires N electrons be considered to determine H (whatever H is). It seems (to me) he means something slightly different than weak / strong emergence. He means the system can not be broken down as I’ve pointed out in the OP. The entire system must be considered, and it is the entire system that is irreducible, both physically and analytically.

Unfortunately for us engineering buffoons, Laughlin only ever talks about emergent phenomena in regards to those phenomena which occur due to intermolecular and intramolecular interactions (micro causal powers). Note he also talks about things being ‘broken down’ when he says, “the phonon ceases to have meaning when the crystal is taken apart, of course, because sound makes no sense in an isolated atom.” Laughlin seems to be concerned with emergent phenomena only at or below the mesoscopic level, the level of molecules and atoms. Perhaps this is an incorrect assumption. Perhaps this fractional quantum hall effect varies as larger and larger masses are considered (ie: kilograms, not molecules), but I can’t tell from reading this or any other reference on the topic.

Systems of large numbers of particles, such as those we are typically interested in as engineers, do not exhibit any kind of strongly emergent phenomena. They exhibit only weakly emergent phenomena. Per Stapp (a physicist), "The fundamental principle in classical mechanics is that any physical system can be decomposed into a collection of simple independent local elements each of which interacts only with its immediate neighbors." That's the perfect description! The elements are independent and local, each interacting only with immediate neighbors. Laughlin's work is not on this scale though. Each element would seem to be aware not only of its immediate neighbor, but of distant ones also. The phenomena are not local and the elements (micro-constituents) are interdependent!

I believe this is where there's a difference between the philosophical version of emergence and the physicists' version, but I want to see if any physicists would agree with this:
The physicist would say there are NO higher-level laws that come into play depending on the number of particles. They would say there is no "downward causation", or that the body of particles as a whole has no causal influence on the individual electrons, protons, neutrons, etc. themselves, such as a philosopher might suggest for 'strongly emergent phenomena'. The physicists might, however, argue that the system as a whole cannot be considered piecemeal as "simple independent local elements". And if the system as a whole cannot be considered piecemeal in this way, then it can't be considered a weakly emergent phenomenon either. It seems as if there's an alternative way of looking at emergence which is not given by the definitions the philosophers have provided, and which is not well defined by physicists as far as I can tell. The definition might state that emergent phenomena occur where a system of particles must all be considered as a whole in order to understand the phenomenon as a whole. The system cannot be reduced to individual and independent local elements, and thus the system is irreducible. It also seems to me that physicists would only argue this point at or below the mesoscopic scale, but I can't tell from anything I've read yet whether this is true, and I see no references that specifically make the connection between emergence and various scales.
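A rough way to write down the contrast (my own illustrative notation, not Stapp's or Laughlin's): in the Stapp-style classical picture each local element carries its own state and evolves from its immediate neighbors alone, so every element can be followed on its own given its neighbors as boundary conditions.

```latex
% Stapp-style classical decomposition (illustrative notation only):
% each local element x_i evolves from its immediate neighbours alone,
\[
  \dot{x}_i \;=\; f\!\left(x_{i-1},\, x_i,\, x_{i+1}\right), \qquad i = 1, \dots, N,
\]
% so the system is fully described by the list of local states (x_1, \dots, x_N),
% each of which can be tracked separately once its neighbours are supplied as
% "boundary conditions".  The collective phenomena Laughlin describes are the
% cases where no such element-by-element bookkeeping captures the behavior.
```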
 
  • #7
Er.. I don't think you have understood even what is meant by "emergent behavior" as stated by Laughlin. For example, the issue on phonons is to illustrate that this is a "collective" property. You cannot create phonons when you have 1, 2, 3, 4, ... 28, 29... particles. It requires the whole solid to participate. Only then do you have a well-defined concept called "phonons".

I have written about this many times before, and I've summarized my opinion here. It also contains several other references to emergent phenomena. Laughlin has also written a book A Different Universe on this issue where he took aim at reductionism. He elaborates further on what he has written in the PNAS papers. You might want to read that.

Zz.
 
  • #8
Er.. I don't think you have understood even what is meant by "emergent behavior" as stated by Laughlin.
The point is that the term is ill defined. It does not seem to correlate to any specific definition as provided by philosophers, and as used to discuss such things as “consciousness”. For example, what is emergent about a group of classical level objects such as switches? The only emergence that we can apply here is the philosophical concept of “weak emergence” which doesn’t correlate to the term physicists might use, let’s call that term “phy emergence”. The two concepts are obviously very different.

It requires the whole solid to participate.
Would you say the “whole solid” somehow has causal control over the particles? If not, and I don’t think you mean that, then you are also not talking about “strong emergence”. It seems to me we agree on these points.

Note also that philosophers don’t restrict their use of the term to any scale, and I’m not sure that you do either. It seems however, that Laughlin does when he writes:
However, the fact is that the length scale between atoms and small molecules on the one hand and macroscopic matter on the other is a regime into which we cannot presently see and about which we therefore know very little. This state of affairs would not be of much concern if there were a desert of physical phenomena between the very large and the very small. But as we all know, there is life in the desert.
Laughlin repeatedly uses this separation of levels. He talks about particle interactions, talks about the “mesoscopic scale” and entitles the paper, “The Middle Way”.

Is there a length scale above which no new intensive properties of matter phy emerge, and if so, how do you define it? Is it this mesoscopic scale? Or do you feel that as one increases to well above this scale, where “gazillions” of molecules are interacting (ie: a swimming pool size), there are new and unique intensive properties that are phy emergent?
 
  • #9
Q_Goest said:
The point is that the term is ill defined. It does not seem to correlate to any specific definition as provided by philosophers, and as used to discuss such things as “consciousness”. For example, what is emergent about a group of classical level objects such as switches? The only emergence that we can apply here is the philosophical concept of “weak emergence” which doesn’t correlate to the term physicists might use, let’s call that term “phy emergence”. The two concepts are obviously very different.

Notice that I said "emergent behavior AS STATED by Laughlin". I wasn't trying to fit it into whatever it is that philosophers have defined. Since we ARE dealing with what Laughlin has written, that is the only relevant viewpoint I was trying to clarify.

Would you say the “whole solid” somehow has causal control over the particles? If not, and I don’t think you mean that, then you are also not talking about “strong emergence”. It seems to me we agree on these points.

I have no idea what you mean by the whole solid having "causal control". This is not something we deal with in physics. The phonon modes are a collective behavior of the whole solid, not just of a few atoms or ions. That is the point that needs to be understood.

Note also that philosophers don’t restrict their use of the term to any scale, and I’m not sure that you do either. It seems however, that Laughlin does when he writes:

Laughlin repeatedly uses this separation of levels. He talks about particle interactions, talks about the “mesoscopic scale” and entitles the paper, “The Middle Way”.

Is there a length scale above which no new intensive properties of matter phy emerge, and if so, how do you define it? Is it this mesoscopic scale? Or do you feel that as one increases to well above this scale, where “gazillions” of molecules are interacting (ie: a swimming pool size), there are new and unique intensive properties that are phy emergent?

You need to read his other PNAS paper. He isn't emphasizing "length scale". He's emphasizing the two extremes, which don't quite resemble each other. It has nothing to do with length scale. Quasiparticles are as small as electrons. Yet they are collective excitations governed by many-body physics that simply fall apart as you try to examine them one particle at a time.

As Phil Anderson has been quoted many times, "More is Different" (http://www.cmp.caltech.edu/~motrunch/MoreIsDifferent.pdf). In physics, that is as simply as one can state what is meant by an emergent phenomenon.

Zz.
 
  • #10
ZapperZ said:
I have written about this many times before, and I've summarized my opinion here.

Zz.


Hey Zapper, I just read your blog and came away a little bit confused due to my own limitations with the terminology you are using there. I thought a "Grand Unified Theory" was a theory in which only three of the four interactions (gravity being the exception) are unified, whereas a TOE is a theory in which all four interactions are unified. I came to this confusion after reading this passage in your blog:

"Gravity might be the last and most difficult. However, assuming that it can be unified with the others, one then have what is called the Grand Unified Theory (GUT)."

So, could you please clarify that for me?
 
  • #11
GUT is predominantly sold as the theory that can unify all four fundamental interactions. I've seen gravity included in it, and when this is done, the result is often regarded as the TOE.

However, it is true that in some instances GUT has been used to mean the unification of only the three forces other than gravity.

Zz.
 
  • #12
Q_Goest said:
Such classical mechanical interactions can all be seen to be physically reducible simply by taking some small volume of a system and subjecting it to equivalent causal actions. The main reason we can duplicate a given volume like this is because the behavior of anything at the classical scale is due to an aggregate of molecules or atoms.

Are you familiar with A. Einstein's analysis of the black-body radiation curve? (Rhetorical question.)

Q_Goest said:
So for physical reducibility, we have the ability to reduce a system that operates at a classical scale but we don’t have the ability to reduce such a system below roughly the mesoscopic scale. Would you agree with this or not? How would you argue that such a physical separation exists? Any papers that might address this would be appreciated.

Not. That is what elementary particle physics is all about. As a semi-popular paper I suggest S. Weinberg, "Towards the Final Laws of Physics", Camb. Univ. Press (1987). There is also a popular version of it. Apparently, he also knows the philosophy.

“Quantum chemistry is a demandingly technical field where computers are pushed to the limit, in order to calculate the detailed properties of molecules on an electronic level… The complexity quickly blows up in your face.”

That is what usually happens when you look for a stupid answer to a stupid question.

Q_Goest said:
“The fundamental principle in classical mechanics is that any physical system can be decomposed into a collection of simple independent local elements each of which interacts only with its immediate neighbors.” That’s the perfect description!

Yes, but you missed a point. Stapp (a physicist) is talking only about classical mechanics (I. Newton) and not about classical physics (ED, GR and statistical mechanics).

Sorry, my level in philosophy is approximately equivalent to yours in physics. QT emerged from classical physics. I wrote the post only to illustrate Zz's statements. The philosophers should fit their notions to what the physicists do.

Regards, Dany.
 
  • #13
Zz – Thanks for the input and pointing to Anderson’s paper. I don’t think his paper really has anything to do with what I have in mind however. Also, I’ve read Laughlin’s paper “The Theory of Everything” before but still don’t see any discrepancy in the view I’m proposing.

This thread was intended to discuss what I considered a relatively straightforward division between levels of nature. Unfortunately, we never got there. The thread is getting sidetracked by a discussion of emergence, which has bearing on the OP but is also causing confusion. I'd like to get away from the discussion of emergence and focus on reducibility.

Engineers such as myself generally perceive nature broken up into finite elements or "control volumes", though this model of nature is often shared by physicists as well. This is a strictly reductionist viewpoint of course, but that is in fact what classical mechanics is all about, and what engineers must be best acquainted with. If, for example, I want to analyze stresses in a spring that's used in a valve that is operating inside a reciprocating pump that is supplying LOX to a rare gas purification system, I don't need to consider the system as a whole to determine stress. All I need to do is examine how much the spring deforms. I can place 'boundary conditions', if you will, on the ends of the spring and know everything I need to know about the spring. I can calculate stress, force, fatigue life, and many other things simply by considering the spring as a separate entity with specific conditions placed on it. The spring is reducible to a single object.

If I want to determine how the valve operates with this spring, I only need to include the boundary conditions acting on the valve. Similarly, the pump can be considered in isolation. For any of these individual parts, I only need to write the equations that pertain to that particular item. If I do this, I can determine what is happening to any part of any subassembly of the system without having to look at other parts. This is what I'll call reducible, though this might not be exactly what others have in mind for reducible, so I'll attempt a definition by borrowing some words from Bedau, rewritten by myself:
Definition of Reducibility: Reducibility applies in contexts in which there is a system, call it S, composed out of "micro-level" parts; the number and identity of these parts might change over time. S has various "macro-level" states (macrostates) and various "micro-level" states (microstates). S's microstates can be considered independently from the macrostate, and these microstates will evolve over time depending only on local causal actions acting on the microstate.
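To make the spring example concrete, here is a minimal sketch (all numbers invented; the formulas are the standard helical-spring relations): given nothing but the boundary condition on the spring's ends, its force and stress follow, with no reference to the valve, the pump, or the rest of the system.

```python
import math

# The spring treated as an isolated element: only the boundary condition
# (how far its ends are pushed together) matters.  All numbers are invented
# for illustration.

G = 79.0e9            # shear modulus of spring steel, Pa
d = 2.0e-3            # wire diameter, m
D = 20.0e-3           # mean coil diameter, m
n = 8                 # number of active coils
deflection = 5.0e-3   # boundary condition: end displacement, m

k = G * d**4 / (8 * D**3 * n)              # spring rate
F = k * deflection                          # force carried by the spring
C = D / d                                   # spring index
Kw = (4*C - 1) / (4*C - 4) + 0.615 / C      # Wahl correction factor
tau = Kw * 8 * F * D / (math.pi * d**3)     # corrected shear stress in the wire

print(f"k = {k:.1f} N/m, F = {F:.2f} N, max shear stress = {tau/1e6:.1f} MPa")
```

Nothing about the rest of the pump enters the calculation; the spring's microstate evolves from the boundary conditions alone, which is the sense of "reducible" in the definition above.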

This concept of reducibility is not acceptable as a concept for quantum mechanics, and I’m trying to understand better why that is. Perhaps this section from the book, “The Road to Reality” sheds some light. Penrose writes (pg 578):
The quantum-Hamiltonian approach, which provides us with the Schrodinger equation for the evolution of the quantum state vector, still applies when there are many particles, possibly interacting, possibly spinning, just as well as it did with a single particle without spin. All we need is a suitable Hamiltonian to incorporate all these features. We do not have a separate wavefunction for each particle; instead, we have one state vector, which describes the entire system. In a position-space representation, this single state vector can still be thought of as a wavefunction (W), but it would be a function of all the position coordinates of all the particles – so it is really a function on the configuration space of the system of particles …

What this indicates to me is that there is a single description of this system of particles which must include all the particles in the system, very much unlike the description of a system at a classical level. This description is certainly not analytically reducible. At some dimensional level, however, we can generally dismiss this approach and use the larger-scale equations of classical mechanics. It seems the classical level and the quantum level can be distinguished by this concept of reducibility.
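In symbols (a schematic restatement on my part, not a quote from Penrose): a particle-by-particle description would amount to a product of separate wavefunctions, but quantum mechanics only guarantees a single wavefunction on the joint configuration space, which in general does not factor that way.

```latex
% Schematic contrast (my notation).  A particle-by-particle description would be
% a product of separate one-particle wavefunctions:
\[
  \Psi(x_1, x_2, \dots, x_N) \;\stackrel{?}{=}\; \psi_1(x_1)\,\psi_2(x_2)\cdots\psi_N(x_N),
\]
% but for interacting particles the single wavefunction on configuration space
% generally cannot be factored this way, which is the sense in which the
% description is not reducible particle by particle.
```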

I understand there are additional concerns that we’ve discussed regarding emergent phenomena such as superconductivity and the fractional quantum hall effect, but let’s steer away from these. I’d like to understand this quantum level irreducibility better.

Do you know of any good discussions regarding this separation of classical and quantum mechanics? I’m looking for a few good references. Thanks again.
 
  • #14
Q_Goest said:
Zz – Thanks for the input and pointing to Anderson’s paper. I don’t think his paper really has anything to do with what I have in mind however. Also, I’ve read Laughlin’s paper “The Theory of Everything” before but still don’t see any discrepancy in the view I’m proposing.

This thread was intended to discuss what I considered a relatively straightforward division between levels of nature. [...]

This concept of reducibility is not acceptable as a concept for quantum mechanics, and I'm trying to understand better why that is. Perhaps this section from the book, "The Road to Reality", sheds some light. Penrose writes (pg 578): [...]

What this indicates to me is that there is a single description of this system of particles which must include all the particles in the system, very much unlike the description of a system at a classical level. [...]

Do you know of any good discussions regarding this separation of classical and quantum mechanics? I'm looking for a few good references. Thanks again.

I'm very puzzled here by your comment, especially the one you made at the very beginning, in which you don't think the points I made regarding emergent phenomena have anything to do with "reducibility" (or not). I think they DO, and I also think you missed it.

For example, in that Penrose quote, that is EXACTLY what I (and Laughlin) have been trying to get across to you. The Hamiltonian for a problem can either be constructed for a single-particle system (i.e. for a particle in an interaction), or you have to start with the many-particle system right away. You simply cannot describe a many-particle system by starting with the single particle system and then adding more and more complexity and interactions. That is impossible to do, and the proof is the inability of anyone so far to do that and derive superconductivity and other emergent phenomena. Period!

Your impression that QM is "irreducible" is wrong, because it depends on the situation being solved. When we teach kids intro QM, we ARE teaching them the simplest, reductionist approach. Why? Because dealing with only a few interactions is easier. However, when one starts to solve a system with a gazillion interactions, one can no longer do that. You now have to start from many-body interactions in the Hamiltonian. So you switch gears from dealing with interactions for each particle to dealing with many-body interactions. Laughlin, Anderson, and others are trying to argue that (i) there is no way one can go from one to the other and (ii) the many-body picture is as fundamental (if not more so) as the individual particle picture.
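Schematically (my own notation, offered as an illustration of the point rather than anything taken from Laughlin): a conventional many-body Hamiltonian is a sum of single-particle pieces plus an interaction term that couples every particle to every other, and it is that second term that keeps you from assembling the N-particle solution out of one-particle solutions.

```latex
% A conventional non-relativistic many-body Hamiltonian (illustrative only):
\[
  H \;=\; \sum_{i=1}^{N} \left( \frac{p_i^{2}}{2m} + V(x_i) \right)
        \;+\; \sum_{i<j} U\!\left(x_i - x_j\right).
\]
% The first sum is a stack of independent single-particle problems; the pairwise
% interaction term couples all of them, and the interesting many-body states are
% not simple products of the single-particle solutions.
```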

Zz.
 
  • #15
I'm very puzzled here by your comment, especially the one you made at the very beginning, in which you don't think the points I made regarding emergent phenomena have anything to do with "reducibility" (or not). I think they DO, and I also think you missed it.
Hi again. I guess I’m not surprised that we’re talking past each other here. Let’s talk about emergence once more just so there’s no confusion about what it is. My apologies for the length of this, but to really define what emergence is and how it relates to reductionism, we’ll need to review the literature.

You'll find there are key points about emergence and reducibility that are the same, but there are also key points that differ. Note that both Bedau and Chalmers, whom I'll be quoting, are quite familiar with physics, and I assure you it is nature itself they are attempting to define, not just some esoteric philosophical concept. We can't dismiss their definitions simply because they are not strictly scientists or physicists. In fact, Bedau uses micelles as an example, just as Laughlin does, to discuss emergence:
Macro entities and micro entities each have various kinds of properties. Some of the kinds of properties that characterize a macro entity can also apply to its micro constituents; others cannot. For example, consider micelles. These are clusters of amphiphilic polymers arranged in such a way that the polymers' hydrophilic ends are on the outside and their hydrophobic tails are on the inside. Those polymers are themselves composed out of hydrophilic and hydrophobic monomers. In this context, the micelles are macro objects, while the individual monomeric molecules are micro objects. The micelles and the monomers both have certain kinds of physical properties in common (having a location, mass, etc.). By contrast, some of the properties of micelles (such as their permeability) are the kind of properties that monomers simply cannot possess. (3)
Bedau is referring to weak emergence when he talks about micelles. The only point I make here is that the intent of Bedau and others is to define emergence and give examples, and not simply create logical arguments without application to the real world.

Bedau defines three types of emergence as I’d mentioned above. The first one, nominal emergence, I’m going to skip because it really isn’t very interesting and isn’t pertinent to the discussion. We are then left with weak and strong emergence. I’ll start with Bedau’s definition of weak emergence:
Weak emergence applies in contexts in which there is a system, call it S, composed out of "micro-level" parts; the number and identity of these parts might change over time. S has various "macro-level" states (macrostates) and various "micro-level" states (microstates). S's microstates are the intrinsic states of its parts and its macrostates are structural properties constituted wholly out of microstates. Interesting macrostates typically average over microstates and so compress microstate information. Further, there is a microdynamic, call it D, which governs the time evolution of S's microstates. (1)
Note that this definition does not include the word "reducible" nor any form of it. One could interpret this definition as saying that the macrostates reduce to the individual microstates, but that is not all Bedau has in mind here.

Bedau does see emergent phenomena as being essentially reducible, but I don’t think even he has all the bugs worked out yet. He says,
However, weak emergence postulates just complicated mechanism with context-sensitive micro-level interactions. Rather than rejecting reduction, it [weak emergence] requires (ontological and causal) reduction, for these are what make derivation by simulation possible.

The phrase “derivation by simulation” might seem to suggest that weak emergence applies only to what we normally think of as simulations, but this is a mistake. Weak emergence also applies directly to natural systems, whether or not anyone constructs a model or simulation of them. A derivation by simulation involves the temporal iteration of the spatial aggregation of local causal interactions among micro elements. That is, it involves the local causal processes by which micro interactions give rise to macro phenomena. The notion clearly applies to natural systems as well as computer models. (2)
So yes, he's insistent that weak emergence is local and causal. That would seem to indicate reducibility, and indeed I believe that's what he is suggesting, even for such things as micelles. Weakly emergent phenomena are reducible in principle. Weak emergence is deducible were we simply to have sufficient computational ability. There would be no weakly emergent phenomena we could not deduce in principle.

I’d like to touch on one other thing Bedau mentions regarding cellular automata (ie: the Game of Life) which he uses extensively to describe and build his case for weak emergence. Here’s an interesting comment on the GoL:
In fact, Conway proved that these gates can even be cunningly arranged so that they constitute a universal Turing machine (Berlekamp et al. 1982). Hence, the Game of Life can be configured in such a way that it can be interpreted as computing literally any possible algorithm operating on any possible input. As Poundstone vividly puts it, the Game of Life can “model every precisely definable aspect of the real world” (Poundstone 1985, p. 25)
But cellular automata and the Game of Life are also fully reducible per the definition I've provided. In fact, Turing machines are so reducible that they are deterministic (i.e., they are simply performing computations). I'm stressing all this about weak emergence to emphasize that there is a definition, and a viewpoint, on which weak emergence is reducible in principle by examining the local, causal actions at each point, and also so that weak emergence might be contrasted with strong emergence, because it is with strong emergence that we run into a bit of trouble. We need a solid grasp on what is being called weak emergence and how it relates to reducibility before we move on to the strong variety.
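For concreteness, here is a minimal Game of Life step (a standard toy implementation of my own, not taken from Bedau): each cell's next state depends only on its eight immediate neighbours, which is exactly the "local causal actions at each point" being described.

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One Game of Life step: each cell's fate depends only on its 8 neighbours."""
    # Count live neighbours by summing the 8 shifted copies of the grid
    # (periodic boundaries via np.roll).
    neighbours = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth: dead cell with exactly 3 neighbours.  Survival: live cell with 2 or 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

# A "glider" evolving on a small periodic board.
board = np.zeros((8, 8), dtype=int)
board[1, 2] = board[2, 3] = board[3, 1] = board[3, 2] = board[3, 3] = 1
for _ in range(4):
    board = life_step(board)
print(board)
```

The glider that appears to "move" across the board is a macro pattern, yet every step of the evolution is computed from purely local rules, which is why this counts as weakly emergent and fully reducible in the sense above.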

Strong Emergence:
Bedau talks about two hallmarks of emergence:
1. Emergent phenomena are dependent on underlying processes.
2. Emergent phenomena are autonomous from underlying processes.

Taken together, the two hallmarks explain the controversy over emergence, for viewing macro phenomena as both dependent on and autonomous from their micro bases seems metaphysically problematic: inconsistent or illegitimate or unacceptably mysterious. It is like viewing something as both transparent and opaque. The problem of emergence is to explain or explain away this apparent metaphysical unacceptability. (2)
Doesn't this sound like what Laughlin is referring to? It also sounds a lot like something that is irreducible. Laughlin is referring to emergent phenomena that can't be deduced, even in principle, due to configurational organizing principles that emerge. Laughlin's description of emergent phenomena does not seem to fit the definition of weak emergence. It seems straightforward to suggest that the underlying processes of quantum mechanics can produce emergent phenomena that are autonomous in some sense, doesn't it? Superconductivity and the fractional quantum hall effect, for example, might be viewed as phenomena that are autonomous from the movement of specific particles. Here's another definition, by Chalmers, that also seems to allow such phenomena to be termed "strongly emergent".
We can say that a high-level phenomenon is strongly emergent with respect to a low-level domain when the high-level phenomenon arises from the low-level domain, but truths concerning that phenomenon are not deducible even in principle from truths in the low-level domain. (3)
This seems to fit the definition of emergence provided by Laughlin, right? Maybe... Note that he says "deducible", not "reducible", which means to me that one can't determine the emergent phenomena analytically, even in principle. We'll use the term "deducible" instead of "reducible" because I believe this is the term Laughlin would also use here. What do you think?

As we look closer, there seems to be a rub. Chalmers continues:
Strong emergence has much more radical consequences than weak emergence. If there are phenomena that are strongly emergent with respect to the domain of physics, then our conception of nature needs to be expanded to accommodate them. That is, if there are phenomena whose existence is not deducible from the facts about the exact distribution of particles and fields throughout space and time (along with the laws of physics), then this suggests that new fundamental laws of nature are needed to explain these phenomena. (3)
This is an interesting suggestion, but personally, I don’t think Chalmers has the best handle on the problems here, so I’m going back to Bedau who I feel does the best job at explaining this issue.

Bedau quotes someone that’s been extremely influential in the area of the philosophy of emergence. Hopefully, it will be clear soon why I want to use Bedau’s quote. He starts this section with the title, “Problems with Strong Emergence”
To glimpse the problems with stronger forms of emergence, consider the conception of emergence defended by Timothy O'Connor (1994). O'Connor's clearly articulated and carefully defended account falls squarely within the broad view of emergence that has dominated philosophy this century. His definition is as follows: Property P is an emergent property of a (mereologically-complex) object O iff P supervenes on properties of the parts of O, P is not had by any of the object's parts, P is distinct from any structural property of O, and P has a direct ("downward") determinative influence on the pattern of behavior involving O's parts.
The pivotal feature of this definition, to my mind, is the strong form of downward causation involved. O'Connor (pp. 97f) explains what he wants
to capture a very strong sense in which an emergent’s causal influence is irreducible to that of the micro-properties on which it supervenes; it bears its influence in a direct ‘downward’ fashion, in contrast to the operation of a simple structural macro-property, whose causal influence occurs via the activity of the micro-properties which constitute it. (2)
Consider typical emergent phenomena such as superconductivity and the like. Does this sound like a definition which can be used for that? The problem with this definition is explained:
… strong emergence adds the requirement that emergent properties are supervenient properties with irreducible causal powers. These macro-causal powers have effects at both macro and micro levels, and macro-to-micro effects are termed “downward” causation.

One problem is the so-called "exclusion" argument emphasized by Kim (1992, 1997, 1999). This is the worry that emergent macro-causal powers would compete with micro-causal powers for causal influence over micro events, and that the more fundamental micro-causal powers would always win this competition. . . . By definition, such causal powers cannot be explained in terms of the aggregation of the micro-level potentialities; they are primitive or "brute" nature powers that arise inexplicably with the existence of certain macro-level entities. This contravenes causal fundamentalism – the idea that macro causal powers supervene on and are determined by micro causal powers, that is, the doctrine that "the macro is the way it is in virtue of how things are at the micro . . . Still, causal fundamentalism is not a necessary truth, and strong emergence should be embraced if it has compelling enough supporting evidence. But this is where the final problem arises: strong emergence is scientifically irrelevant. Virtually all attempts to provide scientific evidence for strong emergence focus on one isolated moribund example: Sperry's explanation for consciousness from over thirty years ago (e.g., Sperry 1969). There is no evidence that strong emergence plays any role in contemporary science. (2)

Hopefully this helps explain why I'd like to steer clear of talking about "emergence" in this thread and talk only of definitions of reducibility. They are, in fact, different concepts, depending on whom you ask. What I'm most interested in is why interactions between molecules cannot be reduced such that they can be considered independently, as the spring in the valve example given earlier can. I understand it has to do with a single wavefunction, as Penrose discusses, but I haven't finished reading this book yet (I just got it over the weekend) and probably won't finish it any time soon, so any additional help in understanding this, and how we might better define reducibility and irreducibility, would be of interest.

(1) Bedau, “Weak Emergence” http://www.reed.edu/~mab/papers/weak.emergence.pdf
(2) Bedau, “Downward Causation and the Autonomy of Weak Emergence” http://people.reed.edu/~mab/publications/papers/principia.pdf
(3) Chalmers, “Strong and Weak Emergence” http://consc.net/papers/emergence.pdf
 
  • #16
Hi Q...it's been too long. I don't believe you can find an argument based on tangibles that will "flesh out" an argument for reductionism in the particle world, primarily because quantum physics is based on probabilities and predictions regarding invisible, and only probable, bits and actions. As you're well aware, the only way quantum physics "proves" quanta exist (and all the other quantum-esque things) is through formulae that have been applied to something measurable...if the tangibles behave as the probabilities of quantum physics formulae dictate they will, then, voila!, the formula--and quantum physics--must be true. Surely, there are semantic arguments...logical arguments to support your view, but they would miss your point. Believing in the probabilities of quantum physics, and looking for some way to "prove" these are true, or reducible, is a bit like believing in God...it's an act of faith. Other than indirect methods of proving quantum laws work, there is no proof, or picture, of atomic particles, orbitals, strings. Please don't get me wrong, I do believe in theories of probability, etc., but it is based on applied outcomes and the notion that so far, it's the best we've got.
 
  • #17
Because of quantum mechanics' statistical nature, it is by definition a holistic theory, a theory that emphasizes wholes rather than parts. Also by definition, holistic theories describe more than reductionist theories, so much more that reductionist theories can be found within holistic theories.
 
  • #18
What the Bleep??

wuliheron said:
Because of quantum mechanics' statistical nature, it is by definition a holistic theory, a theory that emphasizes wholes rather than parts. Also by definition, holistic theories describe more than reductionist theories, so much more that reductionist theories can be found within holistic theories.
Okay, I agree that in the generalist realm of things, reductionist theories and reductionism may be part of a greater whole, that is, may help define part of a greater whole, but even the sentence I just wrote seems nebulous. Can I ask a couple questions? Why does the statistical nature of quantum mechanics make it holistic? Are you saying it is, or holds, the explanation for everything? Also, since quantum mechanics is very much about the probabilities of actions of parts (particles), why do you say it emphasizes the whole? Are you saying that, "...as the particles go, so goes the universe?" Finally, you wouldn't happen to be a fan of that movie, "What the Bleep Do We Know?" What I'm sensing you're saying, albeit unclearly, is what many non-scientists believe, and that is that quantum physics holds the key to explaining the universe, and self-actualization. For instance, there is a belief that because two particles "seem" to be in the same place at the same time, we human beings can somehow make ourselves be in two places at the same time. Akin to this is the belief that because two particles "seem" to be aware of what the other is doing...no matter how far apart, we humans are capable of the same. I honestly do not believe the micro translates to the macro in such instances. I will grant you that there is much we do not know about the universe, or self-actualization, and while quantum physics may yet hold those answers, it doesn't now. And please, if I have totally misinterpreted your post, let me know.
 
  • #19
Hi Chestnut,
Hi Q...it's been too long. I don't believe you can find an argument based on tangibles that will "flesh out" an argument for reductionism in the particle world,
Actually, I'm not trying to find an argument for reductionism in the particle world; in fact, I'm trying to find an argument against it. Penrose seems to be my best reference so far. He points out the need for a single wavefunction for all the particles in the system. This is very different from the classical world, where no single, global equation is needed to describe an aircraft, for example. FEA, for example, treats the elements as independent of each other apart from the conditions imposed at their shared boundaries, so that the equations for each element can be handled locally.

I think this is where the definition of reductionism fails us. P. Anderson, for example, is a reductionist. He points out that everything can be found to be made up of smaller parts, and those parts of smaller parts still. There are no scale changes such that something at one level might obey different laws, as the concept of emergence dictates. Reductionism simply says that everything can be reduced to a fundamental set of physical laws, specifically "simple electrodynamics and quantum theory" (per P. Anderson).

The problem I see is that reductionism is ill defined. What matters is not whether something is made up of smaller parts, but whether or not those smaller parts can be analytically or physically reduced such that the various micro-level parts can be considered to be acting independently of each other. Many engineering and physics concepts such as the control volume approach, free body diagrams, and finite element analysis, make this distinction. They show that what happens within some volume of space can be analyzed using only those local causal actions acting on a specific micro-level part.

This philosophy at the classical level is different than the philosophy used at the quantum mechanical level. As wuliheron notes,
wuliheron said: Because of quantum mechanics' statistical nature, it is by definition a holistic theory, a theory that emphasizes wholes rather than parts.
Penrose seems to provide the best explanation of this that I can find right now, but I’d be interested in hearing any other explanations, such as why QM must be a holistic theory, and what that really means with respect to the philosophy of science.

Thanks, Q.
 
  • #20
Chestnut said:
Why does the statistical nature of quantum mechanics make it holistic? Are you saying it is, or holds, the explanation for everything? Also, since quantum mechanics is very much about the probabilities of actions of parts (particles), why do you say it emphasizes the whole? Are you saying that, "...as the particles go, so goes the universe?" Finally, you wouldn't happen to be a fan of that movie, "What the Bleep Do We Know?"

Quantum mechanics cannot tell us what any individual quantum is going to do. In contrast, Newtonian mechanics can easily predict what an individual cannon ball is going to do. Being statistical means quantum mechanics can only provide exact answers for groups of things, in fact rather large groups of things. Hence, it provides answers for wholes, for groups.

Relativity is also a holistic theory. It does not provide answers for just space, time, mass, or energy. It provides answers for the space-time continuum, for mass/energy, etc. Thus the entire foundation of modern physics is a holistic one.

I have never seen the movie, but I have heard of it and have no desire whatsoever to see the movie. Perhaps some day when I'm in a humorous mood. :?)

Personally, I think metaphysical beliefs are garbage. However, saying something is holistic is not synonymous with either metaphysics or New Age beliefs. It is merely a description of what it emphasizes. For example, when people talk about the environment they are discussing a holistic viewpoint, but one that has immediate and practical uses.
 
  • #21
What is Reductionism?

What is reductionism? I don’t think there’s a single answer for this, and in fact, I think there are quite a variety of things people mean when they talk about reductionism. Stuart Kauffman notes this when he says, “Like any other world view, reductionism is hard to pin down.” I have to agree with Mr. Kauffman.

Steven Weinberg is a Nobel Laureate who wrote “Dreams of a Final Theory”. He talks about “naïve forms of reductionism”.
The opponents to reductionism occupy a wide ideological spectrum. At its most reasonable end are those who object to the more naïve forms of reductionism. I share their objections.
I believe Philip Anderson, another Nobel Laureate, has a very similar perspective. I’ll explain more later, but I’ll also quote Philip as saying:
Before beginning this I wish to sort out two possible sources of misunderstanding. First, when I speak of scale change causing fundamental change I do not mean the rather well understood idea that phenomena at a new scale may obey actually different fundamental laws – as, for example, general relativity is required on the cosmological scale and quantum mechanics on the atomic. I think it will be accepted that all ordinary matter obeys simple electrodynamics and quantum theory, and that really covers most of what I shall discuss. (As I said, we must start with reductionism, which I fully accept.)

I think both of these people have similar views on reductionism. In both cases, it seems obvious that although they are both reductionist, they would probably agree that they are not “constructionists”. Anderson dislikes constructionism because:
The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe.

The constructionist hypothesis breaks down when confronted with the twin difficulties of scale and complexity. The behavior of large and complex aggregates of elementary particles, it turns out, is not to be understood in terms of a simple extrapolation of the properties of a few particles.


So constructionism is the concept that "a Laplacean super-being could, in principle, deduce all the high-level facts about the world, including the high-level facts about chemistry, biology, economics and so on." <Chalmers, “Strong and Weak Emergence”>
I think one could also go one step farther and state that this Laplacean super-being could, in principle, predict the future and calculate the past, given perfect knowledge of the world at some point in time (and assuming deterministic physical laws, which are questionable). But let’s skip this potential aspect of constructionism. Let’s suppose that there are truly random aspects of particle physics, and move on.

I’d agree that a constructionist has a daunting task if they seriously think there’s a snowball’s chance in hell of being able to deduce higher-level laws given the most fundamental physical laws (simple electrodynamics and quantum theory, as Anderson would say), even in principle. So it is because there is so much difficulty in deducing higher-level laws (or rules) that people such as Weinberg say:
I consider myself a reductionist, but I do not think that the problems of elementary particle physics are the only interesting and profound ones in science, or even physics. I do not think that the chemists should drop everything else they are doing and devote themselves to solving the equations of quantum mechanics for various molecules. I do not think that biologists should stop thinking about whole plants and animals and think only about cells and DNA.

Bravo! Very well said, Mr. Weinberg. Scientific diversity is good.

So if reductionism is not “constructionism”, what is it? Weinberg continues:
For me, reductionism is . . . an attitude toward nature itself. It is nothing more or less than the perception that scientific principles are the way they are because of deeper scientific principles (and, in some cases, historical accidents) and that all these principles can be traced to one simple connected set of laws. At this moment in the history of science it appears that the best way to approach these laws is through the physics of elementary particles, but that is an incidental aspect of reductionism and may change.

It's interesting that Weinberg should call it "an attitude toward nature". For example, Alwyn Scott works heavily in nonlinear physical phenomena, and so his attitude is that “nonlinear phenomena are those for which the whole is greater than the sum of its parts.” <ref> Scott’s perspective is that of a nonlinear scientist. I don’t believe his concept of reductionism is the same as the one I’ve outlined above, and I can’t say I agree with his definition of nonlinear phenomena; that said, I actually agree with most of what Scott says.

Anderson and Weinberg seem to be saying that reductionism today is the concept that there is a set of deeper principles at each level we examine, all the way down to “the physics of elementary particles”. Is that reductionism? Does a reductionist say that various scientific principles (laws, or ‘rules’ if you will) must be deduced at the level at which they are found, and not from lower levels, despite the fact that they are dependent on such lower levels? Is it only the ‘twin difficulties’ of scale and complexity that prevent us from determining higher-level laws? Or are there other definitions of reductionism that might be applicable? Or is reductionism false altogether?
 
  • #22
Q_Goest said:
I think this is where the definition of reductionism fails us. P. Anderson, for example, is a reductionist. He points out that everything can be found to be made up of smaller parts, and those parts of smaller parts still. There are no scale changes such that something at one level might obey different laws, as the concept of emergence would dictate. Reductionism simply says that everything can be reduced to a fundamental set of physical laws, specifically “simple electrodynamics and quantum theory” (per P. Anderson).

Whoa! Just remember to read this thread. This is false: Phil Anderson is not a reductionist. In fact, he's the one who coined the phrase "More Is Different", and being a condensed matter physicist, he certainly would not subscribe to the reductionism of Steven Weinberg.

While he "accepts" reductionism, you need to make sure you realize in what CONTEXT he is 'accepting' it in. This is because if you don't, then his statement to this and "More is different" would be contradictory. Read carefully what he wrote in "More is Different":

The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe

Reductionism works in elementary particle physics. This is why Laughlin has even said that "The End of Physics" may in fact be just the end of "Elementary Particle Physics", or the end of reductionism, and not the end of physics itself. There are many aspects of other areas of physics where the elementary description cannot predict, or even arrive at, the emergent phenomena. This is a fact that both camps agree upon.

Zz.
 
  • #23
Whoa? hmmm… I see this argument has gone on for a long time.
https://www.physicsforums.com/showthread.php?t=68265&page=14

I agree with vanesch and Stingray. Note that post #196 by vanesch describes what might be called “weak emergence” in its point #1 and “strong emergence” in its point #2. Like Anderson, I don’t disagree that, “The constructionist hypothesis breaks down when confronted with the twin difficulties of scale and complexity.” I don’t even disagree that whatever Nature does isn’t computable; I agree it’s not computable, at least not with a classical computer, short of actually simulating nature with the actual pieces (see Feynman, “Simulating Physics with Computers” *). But I have to agree with vanesch when he rejects this particular idea (i.e., strong emergence):
the laws describing the elementary phenomena ARE NOT THE CAUSE of the collective phenomena

I understand your argument, but I disagree with it. Let’s just agree to disagree, and move on to what is important for this particular thread.

Classical mechanics can describe virtually any phenomenon on a scale larger than <insert level here>. Engineers do this every day using FEA and CFD. Whatever is inside some given element is known to interact through a given set of laws. Further, we can calculate phenomena of any given size using the laws of classical physics without adding or changing any of the physical laws when we extrapolate to larger dimensions. We only need to consider such things as shear loads, forces, and the momentum and energy interactions that cross some given surface around an element, and we can then determine, with accuracy commensurate with our knowledge of those interactions, exactly what will happen within that element. The laws that govern flow in very small tubes, for example, are the same ones that govern ocean currents. In this sense, more is not different; it is simply more.
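For a minimal sketch of what I mean, here is a 1D finite-volume heat-conduction update (illustrative only, not any particular engineering code): each cell is advanced using nothing but the fluxes crossing its two faces, and the identical rule works whether the cells are millimetres or kilometres across.

Code:
import numpy as np

def diffuse(T, alpha, dx, dt, steps):
    """Explicit finite-volume update for dT/dt = alpha * d2T/dx2.
    Each cell changes only through the fluxes crossing its two faces;
    the end cells are held fixed (simple Dirichlet boundaries)."""
    T = T.copy()
    for _ in range(steps):
        flux = -alpha * np.diff(T) / dx                 # Fourier flux across each interior face
        T[1:-1] += (dt / dx) * (flux[:-1] - flux[1:])   # net flux into each interior cell
    return T

T0 = np.linspace(100.0, 0.0, 50)                        # initial temperature profile
small = diffuse(T0, alpha=1e-7, dx=1e-3, dt=1e-3, steps=1000)   # millimetre-scale rod
large = diffuse(T0, alpha=1e-7, dx=1e+3, dt=1e+9, steps=1000)   # kilometre-scale slab
print(np.allclose(small, large))                        # True: same law, same dimensionless numbers

Nothing in the update rule changes with scale; only the boundary interactions and the size of the element do.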

Quantum mechanics, on the other hand, is a different type of description. It is neither physically nor analytically reducible to smaller ‘chunks’ or elements the way classical physics is. What I’d really like to understand better is why. Why are particles not reducible, analytically or physically, the way classical systems are? I’ve quoted Penrose:

The quantum-Hamiltonian approach, which provides us with the Schrödinger equation for the evolution of the quantum state vector, still applies when there are many particles, possibly interacting, possibly spinning, just as well as it did with a single particle without spin. All we need is a suitable Hamiltonian to incorporate all these features. We do not have a separate wavefunction for each particle; instead, we have one state vector, which describes the entire system. In a position-space representation, this single state vector can still be thought of as a wavefunction (Ψ), but it would be a function of all the position coordinates of all the particles – so it is really a function on the configuration space of the system of particles …

How, then, are we to treat many-particle systems according to the standard non-relativistic Schrödinger picture? As described in 21.2, we shall have a single Hamiltonian, in which all momentum variables must appear for all the particles in the system. Each of these momenta gets replaced, in the quantization prescription of the position-space (Schrödinger) representation, by a partial differential operator with respect to the relevant position coordinate of that particular particle. All these operators have to act on something and, for consistency of their interpretation, they must all act on the same thing. This is the wavefunction. As stated above, we must indeed have one wavefunction Ψ for the entire system, and this wavefunction must indeed be a function of the different position coordinates of all the separate particles.

Penrose goes on to explain how each particle can’t be represented simply by its own separate wavefunction. I’m still digesting this, and would be interested in any additional comments, explanations, or references like Penrose that try to explain this at a level for those of us with a scientific background who do not specialize in quantum mechanics.
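A minimal way to state the point (my paraphrase of the standard textbook situation, not Penrose’s wording): for just two particles, the Schrödinger equation acts on a single wavefunction defined on configuration space,

$$ i\hbar\,\frac{\partial \Psi}{\partial t} = \left[-\frac{\hbar^2}{2m_1}\nabla_1^2 - \frac{\hbar^2}{2m_2}\nabla_2^2 + V(\mathbf{x}_1,\mathbf{x}_2)\right]\Psi(\mathbf{x}_1,\mathbf{x}_2,t), $$

and in general this $\Psi(\mathbf{x}_1,\mathbf{x}_2,t)$ does not factor into a product $\psi_1(\mathbf{x}_1,t)\,\psi_2(\mathbf{x}_2,t)$ once the potential couples the particles (this is entanglement). That, as far as I can tell, is exactly the sense in which the quantum description cannot be split into independent per-particle pieces.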

*If you’d like a copy, I can email.
 
  • #24
Q_Goest said:
Classical mechanics can describe virtually any phenomenon on a scale larger than <insert level here>. Engineers do this every day using FEA and CFD. Whatever is inside some given element is known to interact through a given set of laws. Further, we can calculate phenomena of any given size using the laws of classical physics without adding or changing any of the physical laws when we extrapolate to larger dimensions. We only need to consider such things as shear loads, forces, and the momentum and energy interactions that cross some given surface around an element, and we can then determine, with accuracy commensurate with our knowledge of those interactions, exactly what will happen within that element. The laws that govern flow in very small tubes, for example, are the same ones that govern ocean currents. In this sense, more is not different; it is simply more.

This is where you make your fatal error.

The classical mechanics laws that are used to describe large objects are just that - laws that describe LARGE OBJECTS! Try using classical mechanics to model the stability of a building starting from the individual particles of the steel that makes up that building. You won't get that building.

Similarly, with fluid mechanics, you use classical mechanics to describe the fluid AS A WHOLE, not the small chunks of the fluid. In other words, you use those laws starting at the COLLECTIVE scale, not the individual particle scale!

The same with QM. It isn't JUST a description of a single, microscopic "particle". It can be used at a "large particle" scale. Examples: superconductivity, Fermi liquid theory, etc. They all start from the many-body ground state and go on from there.
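As one standard illustration of that many-body starting point (textbook form, not anything specific to this thread): the BCS ground state of a superconductor is written for the entire electron system at once,

$$ |\Psi_{\mathrm{BCS}}\rangle \;=\; \prod_{\mathbf{k}}\left(u_{\mathbf{k}} + v_{\mathbf{k}}\,c^{\dagger}_{\mathbf{k}\uparrow}c^{\dagger}_{-\mathbf{k}\downarrow}\right)|0\rangle, $$

where the coefficients $u_{\mathbf{k}}$ and $v_{\mathbf{k}}$ are fixed self-consistently for the whole condensate rather than particle by particle.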

So just because these laws work at the large scale doesn't imply anything with regard to your ability to reduce them.

Zz.
 

1. What is reducibility?

Reducibility is the ability to break down complex systems or phenomena into simpler components in order to better understand and explain them.

2. How is reducibility related to reductionism?

Reductionism is the philosophical stance that all complex phenomena can be explained by reducing them to their most basic components. Reducibility is the practical application of reductionism in scientific inquiry.

3. What are the benefits of reductionism in science?

Reductionism allows scientists to study complex phenomena in a systematic and organized manner. It also helps in identifying the underlying mechanisms and principles that govern these phenomena.

4. What are the limitations of reductionism?

Reductionism may oversimplify complex systems, leading to a limited understanding of the phenomenon as a whole. It also neglects the emergent properties that arise from the interactions of the components in a system.

5. How is reductionism applied in different scientific disciplines?

Reductionism is commonly used in fields such as biology, chemistry, and physics to study living organisms, chemical reactions, and physical processes, respectively. However, it can also be applied in other disciplines such as psychology, economics, and sociology.
