Questions About Quantum Theory: What's Wrong?

  • Thread starter reilly
In summary, after 75 years of success, some people still have issues with Quantum Theory. However, it is widely considered the most successful and tested physics theory. The problem lies in confusion between interpretation and formalism, as well as misconceptions about the randomness of QM events. QM was developed through experiments and it is necessary for understanding many aspects of physics.
  • #176
Davorak said:
I would like to point out, Stingray, that a TOE may mean that you can simulate any and every phenomenon with enough computing power. This does not immediately mean that it is possible to derive any and every phenomenon.

Are you defining "derive" to mean something that can be done by hand, whereas "simulate" necessarily involves a computer? If so, I agree with your statement, but the method by which a conclusion is reached isn't important for verifying logical statements (math).

It might also be worth noting that an equation doesn't have to be solved for one to show that experimental results follow from a given theory. Take a pendulum for a trivial example. We can write down the differential equation for its motion, but make believe that we don't know how to solve it. Some experimentalist can come along and measure displacement versus time for various pendula. His data can then be substituted into the DE, and one can check whether left-hand side=right-hand side to within experimental error. For more complicated systems, this is much simpler than a traditional solution. It is also just as valid in showing whether an equation is correct (though not nearly as satisfying).
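
A rough sketch of that kind of check is below (the "data" are generated here by integrating the equation itself, standing in for a real measurement, and the parameters are made up):

[code]
# Sketch: verify the pendulum equation theta'' = -(g/L) sin(theta) directly from
# "measured" data, without ever solving it. Here the data are faked by integrating
# the equation (in place of a real experiment), so all parameters are made up.
import numpy as np
from scipy.integrate import solve_ivp

g, L = 9.81, 1.0                                   # assumed pendulum parameters

def pendulum(t, y):
    theta, omega = y
    return [omega, -(g / L) * np.sin(theta)]

t = np.linspace(0.0, 10.0, 4000)                   # stand-in for measured time stamps
sol = solve_ivp(pendulum, (t[0], t[-1]), [1.0, 0.0], t_eval=t, rtol=1e-9, atol=1e-9)
theta = sol.y[0]                                   # stand-in for measured displacement

# Estimate theta'' numerically from the "data" alone and compare both sides of the DE.
d2theta = np.gradient(np.gradient(theta, t), t)
residual = np.sqrt(np.mean((d2theta + (g / L) * np.sin(theta)) ** 2))
print(f"RMS of theta'' + (g/L)*sin(theta): {residual:.2e}  (should be ~0 if the DE holds)")
[/code]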
 
  • #177
Crosson said:
I found a book this past week; it is called "Who is afraid of Schrodinger's cat?" and is intended to explain modern physics to laymen. Fortunately for this discussion, this book gave me a focused example of exactly what I think is wrong with Quantum Physics. Here is the first paragraph in the book:

Schrodinger's cat is the mascot of the new physics. Conceived to illustrate some of the apparently impossible conundrums associated with quantum reality, he has become a symbol of much that is "mind boggling" about 20th century physics...would think the cat is either alive or dead, but this is a quantum cat so things don't work that way. In the quantum world, all possibilities have a reality of their own, ensuring the cat is both alive and dead.


This is my problem: physicists embracing nonsense. This view is all too popular, certainly in the general public, and much more importantly in the physics community. To say that the cat is alive and dead because of QM is asinine, because QM is a theory of observations. It does not make claims about the way things "are".

The attitude is "look, we can get people interested in physics by making it seem weird and exotic". But what these sellouts (like Brian Greene) end up saying is such nonsense that it turns me off entirely. Most of you are unlikely to identify with this extreme nonsense view, but I have sat through lectures with professors who have subscribed to the "spooky quantum world" flavor of physics.

What exactly is wrong with this view? Would it help if the cat were not a flesh-and-blood cat but rather a quantum cat that can have the states dead or alive? Somewhat like an electron can have spin up or down. This makes the argument seem much more plausible, does it not?

Quantum tunneling allows for the possibility of me walking through a brick wall; however, if I tried to do so I would just get a lot of bruises. Why? Because I am an object made of a number of constituents, each of which has a very small chance of tunneling through the brick wall.

Applying quantum rules like superposition and tunneling on a macroscopic level is not mumbo-jumbo; it is just very impractical.
 
  • #178
Although I'm not sure that I'd agree with all of Crosson's issues regarding Schrodinger's cat, I'm a little annoyed at it as well. It has become popular for general physics writers to make a lot of statements designed solely to sound as outrageous as possible. This is obviously done so that people think that physics is "cool," but I think it is a bit of a disservice to those who try to think deeply about what they've read. The books rarely make an attempt to convince the reader that physics is indeed rational.

I think I first noticed this when talking to a student in a freshman physics class I was TA'ing. He was an engineering major, but had read some popular books. Anyway, he told me that these books basically made him equate physicists with theologians. One of the main things he couldn't believe was the concept of virtual particles. I gave him an idea of what they really were, but I didn't blame him. "Brian Greene virtual particles" are indeed pretty crazy.
 
  • #179
Stingray said:
Are you defining "derive" to mean something that can be done by hand, whereas "simulate" necessarily involves a computer? If so, I agree with your statement, but the method by which a conclusion is reached isn't important for verifying logical statements (math).

It might also be worth noting that an equation doesn't have to be solved for one to show that experimental results follow from a given theory. Take a pendulum for a trivial example. We can write down the differential equation for its motion, but make believe that we don't know how to solve it. Some experimentalist can come along and measure displacement versus time for various pendula. His data can then be substituted into the DE, and one can check whether left-hand side=right-hand side to within experimental error. For more complicated systems, this is much simpler than a traditional solution. It is also just as valid in showing whether an equation is correct (though not nearly as satisfying).

By derive I mean predict, before doing the experiment, what will happen. This experiment can happen in a computer or it can happen in the real world. The computer experiment is not necessarily accurate, though, since it is a numerical simulation.

Somewhat like how you cannot derive the motion of a chaotic pendulum. You perform numerical methods and get its motion to a certain accuracy.

The unpredictability of the chaotic pendulum increases with the time you let it run. The unpredictability of emergent phenomena increases with the number of interacting particles.
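
A quick sketch of the first point, using the standard driven, damped pendulum (the parameter values are just a common chaotic choice, not from any experiment): two runs starting 1e-8 apart separate roughly exponentially, so the usable prediction horizon shrinks the longer you run.

[code]
# Two numerical runs of a driven, damped pendulum with initial angles 1e-8 apart.
# The separation grows roughly exponentially until it saturates, illustrating how
# the unpredictability of a chaotic pendulum grows with the run time.
import numpy as np
from scipy.integrate import solve_ivp

def driven_pendulum(t, y, q=0.5, F=1.2, w_d=2.0 / 3.0):
    theta, omega = y
    return [omega, -np.sin(theta) - q * omega + F * np.cos(w_d * t)]

t = np.linspace(0.0, 200.0, 20001)
a = solve_ivp(driven_pendulum, (0, 200), [0.2, 0.0], t_eval=t, rtol=1e-10, atol=1e-10)
b = solve_ivp(driven_pendulum, (0, 200), [0.2 + 1e-8, 0.0], t_eval=t, rtol=1e-10, atol=1e-10)

sep = np.abs(a.y[0] - b.y[0])
for T in (10, 50, 100, 200):
    i = np.searchsorted(t, T)
    print(f"t = {T:3d}:  |delta theta| ~ {sep[i]:.1e}")
[/code]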
 
  • #180
What exactly is wrong with this view? Would it help if the cat were not a flesh-and-blood cat but rather a quantum cat that can have the states dead or alive?

Somewhat like an electron can have spin up or down. This makes the argument seem much more plausible, does it not?


It wouldn't help this discussion, because I am talking about a cat and you are talking about an electron. Quantum Mechanics correctly describes only one of these things.


Quantum tunneling allows for the possibility of me walking through a brick wall; however, if I tried to do so I would just get a lot of bruises. Why? Because I am an object made of a number of constituents, each of which has a very small chance of tunneling through the brick wall.

Applying quantum rules like superposition and tunneling is not mumbo-jumbo; it is just very impractical.

It is mumbo jumbo, because there is no experimental evidence or mathematical proof that a macroscopic object could pass through a brick wall. Remember, we have no reason to believe in QM any further than experiments can confirm its predictions (because its postulates are ad hoc).

In the bolded sentence you assume that the whole shares the properties of the parts, a common error in these "explanations" of quantum mechanics. Perhaps the macroscopic object has zero probability of being on the other side of the wall even though each of its parts has a nonzero probability of being there. Unknown composite {whole more than parts} effects could play a role.

Here is the big problem: Cats cannot be both alive and dead. This shows that the superposition of states does not apply to cats, whatsoever. The most rational conclusion is: "QM is useful for predicting some things, but is not meant to be a literal description of reality." In their enthusiasm for selling books and attracting students, these authors and professors assert that QM supersedes logic! (Nothing can have both the property of being, and not being, at the same time.)
 
  • #181
Crosson said:
In the bolded sentence you assume that the whole shares the properties of the parts, a common error in these "explanations" of quantum mechanics. Perhaps the macroscopic object has zero probability of being on the other side of the wall even though each of its parts has a nonzero probability of being there. Unknown composite {whole more than parts} effects could play a role.
Well, if by definition each part has an independent probability of being on the other side, then the whole has a probability of being on the other side of the barrier. Each tunneling event could affect the probability of the next tunneling event, but this effect would never bring the probability of the next particle tunneling to zero. This argument should certainly allow for my entire body to tunnel through a barrier.

You seem to be suggesting a hypothetical rule(new physics) that would limit quantum theory to the microscopic world. Is there evidence of this?

This suggests some extra rule that limits the density of tunneling events in space or in time? This would be interesting new physics.

I do not see the problem when I write down an N-body Schrödinger Hamiltonian.
[tex]
H = \sum_{i=1}^N \frac{P_i^2}{2 m_i} + \sum_{i=1}^N \sum_{j>i} V_{ij}
[/tex]

I know that the solution must be continuous and its derivative must be continuous. If these conditions are met, the solution can only equal zero at infinity and at isolated points between minus infinity and plus infinity.

What would be a function that goes to zero over a larger stretch than an isolated point and still has a continuous derivative? That type of function would allow for zero probability on the other side of the barrier. All other functions do not satisfy this, and would allow the possibility of tunneling.
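
For what it's worth, here is a back-of-the-envelope sketch of the numbers this implies, treating the constituents as independent (a big simplification that ignores the binding holding a body together) and using made-up barrier parameters:

[code]
# Back-of-the-envelope sketch, not a real many-body calculation: a WKB-style estimate
# of one constituent's tunneling probability through a wall-like barrier (particle
# energy neglected next to the barrier height), then the joint probability for ~10^27
# constituents treated, unrealistically, as independent. All parameters are made up.
import math

hbar = 1.054571817e-34          # J s
m    = 1.67262192e-27           # kg, roughly one nucleon
V0   = 1.0 * 1.602e-19          # assumed 1 eV barrier height
d    = 0.1                      # assumed 0.1 m wall thickness
N    = 1e27                     # rough number of constituents in a person

kappa = math.sqrt(2.0 * m * V0) / hbar                # decay constant inside the barrier
log10_T_single = -2.0 * kappa * d / math.log(10.0)    # T ~ exp(-2 kappa d), in log10
log10_T_all = N * log10_T_single                      # product of N "independent" factors

print(f"one constituent:      T ~ 10^({log10_T_single:.3g})")
print(f"all {N:.0e} together: T ~ 10^({log10_T_all:.3g})")
[/code]

Nonzero, but so small that "possible in principle" and "never happens" are, for all practical purposes, the same statement.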
 
  • #182
Stingray said:
I was indeed unaware that any physicists held this view. It still seems logically impossible to me. I think that understanding things at the mesoscopic scale will make condensed matter people change their minds, but it's clear that our current argument isn't going anywhere :smile:.

Hi,

I have to say I agree 100% with what you write and I'm even amazed that some physicists hold these "holistic" views, which remind me more of 19th century ideas that in order to do "organic chemistry" you need (emergent?) "forces of life". Maybe it is based upon a misunderstanding, which you outline very well: it is not because you can write down the microscopic laws that there is any evident way to bring the "collective phenomena" into evidence, and in most cases a phenomenological model of the phenomenon in question is far more instructive and useful than the microscopic laws. This is something which is now widely recognized, but maybe was not before the "chaos" hype of the '80s (style Mandelbrot and Co). So people were right to point out that it is not because you can write down the microscopic laws that you can "easily extract" collective behaviour.

But the reason - I thought that was almost universally accepted - is not that those microscopic laws do not "contain" the phenomenon, only that in order to derive that phenomenon from the microscopic laws, we would need derivations that we have no clue how to carry out. Usually we can derive "macroscopic properties" from microscopic laws by making certain approximations and simplifications ; usually emergent properties arise when certain of these approximations are NOT allowed anymore. But often 1) we don't know when we cannot make them (such as when long-range correlations must be introduced), and 2) when we do not make those approximations, the calculations become too difficult to continue.

Also the "chaos revolution" is more an indication of the validity of reductionism than the opposite: indeed, as has been demonstrated there is that even VERY SIMPLE basic laws can give rise to extremely complex behaviour, at least in mathematical toy systems.
So I'm really amazed to learn that some physicists seem to think the opposite: that the basic laws of the constituents (which are much richer than the simple laws mentioned above) would NOT be sufficient to describe collective phenomena!
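
(As a toy illustration of the "very simple basic laws, extremely complex behaviour" point above, here is the logistic map, pure mathematics and not physics: one line of dynamics, yet two nearby starting points decorrelate after a few dozen iterations.)

[code]
# Logistic map x -> r x (1 - x): a one-line "law" that, for r = 4, produces chaotic
# behaviour; two starting points 1e-10 apart become completely uncorrelated.
r = 4.0
x, y = 0.3, 0.3 + 1e-10

for n in range(1, 61):
    x = r * x * (1.0 - x)
    y = r * y * (1.0 - y)
    if n % 10 == 0:
        print(f"n = {n:2d}:  x = {x:.6f}  y = {y:.6f}  |x - y| = {abs(x - y):.2e}")
[/code]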

cheers,
Patrick.
 
  • #183
Stingray said:
I've only read the first page of this where he talks about giving his students the "impossible" problem. His reference for the claim that superconductivity cannot be derived from microscopics is a paper by Anderson in 1972. I looked that up, and found no such statements. Instead, it was filled with claims that it is IMPRACTICAL to deduce the properties of macroscopic systems from microscopic laws. This is essentially given as a defense that condensed matter physics is a "worthwhile" endeavor. He even says "we must all start from reductionism, which I fully accept." I completely agree with Anderson's viewpoint, but it is very different from yours (and apparently Laughlin's). Am I just skimming things too quickly here?

I certainly do not get the same story out of that as you did. In fact, this is the paper in which the now frequently used phrase "More is Different" was coined. Note that Anderson made his stand even clearer in the book "More is Different: Fifty Years of Condensed Matter Physics" (Princeton, 2001), and in his review of Weinberg's book "Facing Up" in the July 2002 Physics Today.

What is the difference between N-body and many body? Does the latter just mean N->infinity? The analytic solution uses an infinite number of "particles."

This is irrelevant to my point, though. I was not giving the Ising model as an example of concrete physics, but of mathematics. It shows that you can get a phase transition from something very simple that doesn't appear to have any interesting features.

But again, you cannot predict where, along any variation of a parameter, the phase transition is going to occur. You vary one parameter by hand and then look at the state of the system. You keep doing this until (if you have a sufficiently large N) you get such a transition.
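
A rough sketch of that "vary a parameter by hand and look at the state" procedure, using a tiny 2D Ising model with Metropolis updates (the lattice size and sweep counts below are placeholders, far too small to show a sharp transition):

[code]
# Scan the temperature "knob" of a small 2D Ising model (J = 1) and look at the state.
# Metropolis updates; lattice size and sweep counts are placeholders, so the change in
# magnetisation around T ~ 2.27 is only a blurred hint of the true phase transition.
import numpy as np

rng = np.random.default_rng(0)
n = 16                                        # lattice is n x n
spins = rng.choice([-1, 1], size=(n, n))

def sweep(beta):
    for _ in range(n * n):
        i, j = rng.integers(0, n, size=2)
        nn = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j] +
              spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = 2.0 * spins[i, j] * nn           # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

for T in (1.5, 2.0, 2.269, 2.5, 3.0):         # exact Tc ~ 2.269 for the infinite lattice
    for _ in range(300):                      # crude equilibration at this temperature
        sweep(1.0 / T)
    print(f"T = {T:5.3f}   |magnetisation per spin| ~ {abs(spins.mean()):.2f}")
[/code]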

The many-body problem and the N-body problem are two different areas of study in physics. The N-body problem starts with the exact Hamiltonian of every member of that system. The many-body problem does no such thing and, again, as I've said, starts off right away with a many-body Hamiltonian.

No, of course QM should work here. You seemed to be implying that the FQHE makes no sense because you get fractional charges when electrons should be indivisible. I was replying that, while this effect is interesting and surprising, it doesn't say anything for reductionism. QM is a wave theory. Our intuition of electrons as being little balls flying around is not remotely rigorous. The particle picture itself isn't even fundamental in field theory. Quantum "particles" are remnants of perturbation theory, in case you don't recall.

But in an electron gas, as in a conductor, one never expects that the smallest unit of transmission would be less than the single unit of a charge carrier. If this isn't that astounding, as you seem to imply above, then we would have seen this frequently, and such a discovery would not warrant a Nobel Prize (or 3, in this particular case).

On that note, you might be interested to know how field theorists and relativists define "emergent" phenomena: Something is "emergent" if nobody can figure out how to understand it perturbatively (but nonperturbative methods work). QCD is an example of this, I believe. Anyway, it is well-known to mathematicians that perturbation theory does not generally agree with the theory from which it was derived. This is true even in regimes where "physics math" would claim otherwise. So the existence of emergent phenomena in this sense is not surprising. I also think that this is the definition that your field should be using.

And note that superconductivity (as in the BCS case) cannot be derived via perturbative methods because the expansion diverges. So this isn't unknown.

I was indeed unaware that any physicists held this view. It still seems logically impossible to me. I think that understanding things at the mesoscopic scale will make condensed matter people change their minds, but it's clear that our current argument isn't going anywhere :smile:.

And that's ALL I wish to indicate. You are not alone. A lot of people are not aware that such disagreement exists. The media attention on physics has not given much to such a large and diverse field, even though CM holds the largest percentage of practicing physicists, and in spite of the fact that many advancements in field theory, particle physics, etc. came right out of CM theory (Anderson's broken gauge symmetry and the Higgs mechanism, just to name a few). It is presumptuous to assume that GUT=TOE, especially when many prominent and practicing physicists do not buy into this.

Zz.
 
  • #184
You seem to be suggesting a hypothetical rule(new physics) that would limit quantum theory to the microscopic world. Is there evidence of this?

In my opinion, the idea that QM allows for macroscopic tunneling is unjustified. There is no experimental proof, and we have no reason to trust quantum mechanics any further than experiment can confirm it. (Please, give me a reason.) Look at the following example from statistical mechanics:

Think of the air in your room as an ideal gas. Since each microstate of the gas is equally probable, there is a small chance that all of the gas molecules will collect in a dense clump in the middle of the room. Fortunately, the actual probability of this is zero, because we have ignored intermolecular forces and the sizes of individual molecules.

Others in this thread are talking about emergent behavior of composite systems. Macroscopic objects have properties that their quantum constituents do not necessarily share; so macroscopic tunneling is another example of embracing bogus physics because it seems neat.
 
  • #185
Crosson said:
In my opinion, the idea that QM allows for macroscopic tunneling is unjustified. There is no experimental proof, and we have no reason to trust quantum mechanics any further than experiment can confirm it. (Please, give me a reason.)

Because we haven't got any replacement for it ! If you stick rigorously to your reasoning, then NO theory whatsoever has ANY predictive power, because you can only trust any theory for those specific cases where it has been experimentally tested ; no interpolation, no extrapolation. That reduces physics to a catalogue of experiments performed in the past.
Of course that existential doubt is there to a certain degree, and it is reasonable to ask yourself if you're not working outside the scope of a given theory, but as long as there is no alternative, I don't see why you cannot reasonably talk about predictions of a theory, even in a domain where it is not tested.

Look at the following example from statistical mechanics:

Think of the air in your room as an ideal gas. Since each microstate of the gas is equally probable, there is a small chance that all of the gas molecules will collect in a dense clump in the middle of the room. Fortunately, the actual probability of this is zero, because we have ignored intermolecular forces and the sizes of individual molecules.

True, but those forces would INCREASE that (very small) probability !
I don't see what's wrong with a theory that predicts that the probability of an unobserved phenomenon is 10^(-134534567). I would say that it agrees quite well with observation, and that you will have a damn hard time devising an experiment that contradicts this prediction, in favour of your prediction which is that the probability is equal to 0.
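
Just to show where such absurd exponents come from, here is the naive ideal-gas estimate for the (much easier) event that all the molecules are found in one half of the room at a given instant; the "dense clump in the middle" is rarer still. The room size and density are round numbers:

[code]
# Naive ideal-gas estimate: each molecule is in the left half of the room with
# probability 1/2, independently, so all N of them are there with probability (1/2)^N.
import math

n_air = 2.5e25          # rough number density of air, molecules per m^3
V     = 30.0            # assumed 30 m^3 room
N     = n_air * V

log10_P = N * math.log10(0.5)
print(f"N ~ {N:.2e} molecules")
print(f"P(all in one half) ~ 10^({log10_P:.3g})")
[/code]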

Others in this thread are talking about emergent behavior of composite systems. Macroscopic objects have properties that their quantum constituents do not necessarily share; so macroscopic tunneling is another example of embracing bogus physics because it seems neat.

I really don't see the logic in this reasoning. First of all, it is, I think, recognised by all that there are emergent properties if you put a lot of individual microscopic systems together. The discussion here is whether that emergent property is determined by the microscopic behaviour of those systems, or not.
I thought that the prevailing view was that this emergent property IS a consequence of the microscopic behaviour of the components, and not something that has nothing to do with the microscopic behaviour of the components, as ZapperZ seems to claim.
I think everybody also agrees that it is not necessarily an easy job to derive that emergent property from the microscopic laws, but the point is whether it is *in principle* determined by those laws or not. I understood that some here claim that the microscopic laws are NOT RESPONSIBLE for the emergent properties. They just "happen" (?) This, to me, is a weird view.
But I don't see the link with tunneling of macroscopic objects. The predicted probability of tunneling of a macroscopic object is mindbogglingly small ; so what's wrong with such a small probability which means in fact, that it will never happen during the lifetime of the visible universe ? Why is this bogus, and why should you insist on having that probability exactly 0 ? And what does this have to do with those emergent properties ?
After all, in this case we derive easily the correct "macroscopic property" that stones do not tunnel through a brick wall, from the microscopic laws.

cheers,
Patrick.
 
  • #186
because we haven't got any replacement for it ! If you stick rigorously to your reasoning, then NO theory whatsoever has ANY predictive power

What I am saying is, we have no reason (logic) to believe in the axioms of QM other than experiment. So there is no reason to trust the deductions made from these axioms (other than experiment).

Other physical theories are based on logical axioms such as:

"All microstates are equally probable".

"The exist a maximum speed in the universe"

I trust these axioms without the need to do any experiment, because they are based on logic only. Therefore, I trust the predictions deduced from these axioms.

Do you see the difference?

such a small probability which means in fact, that it will never happen during the lifetime of the visible universe ? Why is this bogus

Because, by your own admission it will "in fact" never happen! If it will never happen, the probability is zero, regardless of what you can calculate using an empirical theory.
 
  • #187
Crosson said:
Because, by your own admission it will "in fact" never happen! If it will never happen, the probability is zero, regardless of what you can calculate using an empirical theory.

He said that it will never happen within the lifetime of the visible universe, not that it will never happen.

But even so, a small probability doesn't mean that it will never happen in any finite amount of time. It just means that it's unlikely. So his "own admission" should not be confused for a prediction of QM.

And by the way:

"All microstates are equally probable".

"The exist a maximum speed in the universe"

I trust these axioms without the need to do any experiment, because they are based on logic only. Therefore, I trust the predictions deduced from these axioms.

Neither of these axioms is based on "logic only". No axiom in physics is. If they were, they would have no content, because logic knows nothing of "microstates", "probability", "speed", or "the universe".
 
  • #188
Crosson said:
What I am saying is, we have no reason (logic) to believe in the axioms of QM other than experiment. So there is no reason to trust the deductions made from these axioms (other than experiment).

Other physical theories are based on logical axioms such as:

"All microstates are equally probable".

"The exist a maximum speed in the universe"

I trust these axioms without the need to do any experiment, because they are based on logic only. Therefore, I trust the predictions deduced from these axioms.

Do you see the difference?


Because, by your own admission it will "in fact" never happen! If it will never happen, the probability is zero, regardless of what you can calculate using an empirical theory.
I do not see the difference, and I do not think that your axioms are any more logical than QM's axioms. You are picking and choosing what seems more logical to you.

All microstates are not equally likely in quantum statistics though; that is a classical concept, is it not?
 
  • #189
"I do not see the difference, and I do not think that your axioms are any more logical then QM's axioms. You are picking and chossing what seems more logical to you."

The difference is that the assumptions of classical statistical mechanics and special relativity are based on logic rather than experiment. I did not say they are necessarily singled out by logical principles alone, only that they are based on logic rather than experiment (QM is based on experiment).
 
  • #190
Logic is a human invention from our observation of the world. If quantum mechanics played an important role in the everyday world, then it too would be considered logical. <- Too close to philosophy, so I will not pursue this one any further.

The classical assumptions are not logical at all, at least not more so than QM's assumptions. What follows from classical or QM assumptions is most definitely logical.

What support, logical or otherwise, for your axioms can you provide that does not come from experiment?
"All microstates are equally probable".
"The exist a maximum speed in the universe"

ZapperZ also has a new journal entry that gives some references to some macroscopic quantum effects.
 
  • #191
Tom Mattson said:
But even so, a small probability doesn't mean that it will never happen in any finite amount of time. It just means that it's unlikely. So his "own admission" should not be confused for a prediction of QM.

Ok, then, it will "never" happen for sufficiently large values of "never" :rofl:

cheers,
Patrick.
 
  • #192
Crosson said:
The difference is that the assumptions of classical statistical mechanics and special relativity are based on logic rather than experiment. I did not say they are necessarily singled out by logical principles alone, only that they are based on logic rather than experiment (QM is based on experiment).

And you are still wrong. To put a finer point on it, let me ask you this:

Precisely which rule of deduction was used to derive the assumptions of classical stat mech? Keep in mind that rules of deduction do not refer to anything in the non-analytic world.

vanesch said:
Ok, then, it will "never" happen for sufficiently large values of "never"
:rofl:

You know what I mean! :mad:

And Crosson knew what you meant. :wink:

Which is why I felt compelled to get him to stop using your quote as evidence that QM predicts something that it does not. :rolleyes:
 
  • #193
Crosson said:
What I am saying is, we have no reason (logic) to believe in the axioms of QM other than experiment. So there is no reason to trust the deductions made from these axioms (other than experiment).

This is true for the axioms of any physical theory: QM, CM, SR, GR, QFT, ...

Crosson said:
Other physical theories are based on logical axioms such as:

"All microstates are equally probable".

Nope. Even when it does hold (the equilibrium ensembles, microcanonical ensemble), it can be formulated as a theorem. The axiomatic approach to equilibrium SM derives this result...


Crosson said:
"The exist a maximum speed in the universe"

Here we don't get the direct experimental confirmation. We never will. We trust it, because it leads to verifiable results, but we'll never test it directly.

Crosson said:
I trust these axioms without the need to do any experiment, because they are based on logic only.

That's not true. The axioms of QM (for example) were formulated by Dirac after having seen the powers of Heisenberg & Schrödinger's theories in confirming the experimental results... It's not the same story as in Einstein's GR, but it's pretty much the same story with his SR...

Crosson said:
Therefore, I trust the predictions deduced from these axioms.

Nope, you trust them because experiments confirm the theoretical results...


Daniel.
 
  • #194
Hi,

I think that the problems with QM do not stem from its logic or mathematical formalism. Rather, they stem from the meanings given to the formalism apart from the math itself.

What is the meaning of superposition in terms of the physical quantities that it purports to describe? What is the meaning of the collapse of the wave function in the same terms? What does it mean in these terms to say that the wave function is localized or not? What does entanglement mean in terms of the actual material field/particle/wave scenario?

These and other questions are what cause divisions in the understanding of QM. The math works, but how is it to be mapped to an actual physical event space?

juju
 
  • #195
I probably shouldn't prolong this discussion, but against my better judgement, I will.

In this week's issue of Nature, Anderson reviews Robert Laughlin's new book "A Different Universe: Reinventing Physics from the Bottom Down". I will quote a few passages from this review:

A Different Universe is a book about what physics really is; it is not only unique, it is an almost indispensable counterbalance to the recent proliferation of books by Brian Greene, Stephen Hawking and their fellows, who promulgate the idea that physics is a science predominantly of deep, quasi-theological speculations about the ultimate nature of things...

... The central theme of the book is the triumph of emergence over reductionism: that large objects such as ourselves are the product of principles of organization and of collective behaviour that cannot in any meaningful sense be reduced to the behaviour of our elementary constituents. Large objects are often more constrained by those principles than by what the principles act upon...

... Those who devour the work of Greene, or decorate their coffee table with Hawking, will find this book a useful antidote. It should spike the interest of those who read the physics popularizers, although in its personalized coverage and opinionated style it is sui generis. My message is this: buy the book.

Now, if you think that Anderson and Laughlin are "buddy-buddy", you haven't seen the two of them in the same room at a physics conference. These two disagree with each other often, and some of the confrontations can be quite testy. One can already see this in the middle part of Anderson's review of this book. However, as far as emergent phenomena are concerned, they (and most CM physicists) hold the same school of thought.

My point here is NOT to convince you this is correct. Rather, it is to make you aware of the existence of this approach. It is obvious that many are unaware that there is such a disagreement in physics, even though CM physicists make up the largest percentage of practicing physicists.

Zz.
 
  • #196
ZapperZ said:
My point here is NOT to convince you this is correct. Rather, it is to make you aware of the existence of this approach. It is obvious that many are unaware that there is such a disagreement in physics, even though CM physicists make up the largest percentage of practicing physicists.

I'm still in doubt about what is actually claimed.
Is it claimed:
1) it is much more practical to have phenomenological models incorporating (by hand, sometimes) collective phenomena, than to start from the laws of the elementary constituents to make progress, now, today ;

or is it claimed:
2) the laws describing the elementary phenomena ARE NOT THE CAUSE of the collective phenomena ?

It is point 2) that I thought you claimed to be the essence of the expressed viewpoint here, and I strongly doubt it. However, if it is viewpoint 1), then you are apparently not aware that most "reductionists" ALSO subscribe to that view ! In that the knowledge of the standard model doesn't have much practical incidence on your ability to know what will be the failure-stress in a certain brand of stainless steel, for instance. I think that view is quite universally accepted, more than you seem to think ; also in the elementary particle world.
However, viewpoint 1) allows you in principle, with enough computing power, in simple enough cases, to really deduce the collective phenomena while viewpoint 2) says that this is not possible.

cheers,
Patrick.
 
  • #197
ZapperZ,
If I had happened to read Laughlin's and Anderson's writings without you pointing me to them, I would have interpreted their comments as being under vanesch's class #1 (I still think this about Anderson...). But you are saying that they intend #2. I'm not implying that you're wrong. I just think it is interesting where language's imprecision leads us. This kind of thing probably contributes to the communication problem that you're trying to bring out. That, and the fact that different parts of physics have so little relation to each other that we tend to stay in our own little worlds.
 
  • #198
vanesch said:
I'm still in doubt about what is actually claimed.
Is it claimed:
1) it is much more practical to have phenomenological models incorporating (by hand, sometimes) collective phenomena, than to start from the laws of the elementary constituents to make progress, now, today ;

or is it claimed:
2) the laws describing the elementary phenomena ARE NOT THE CAUSE of the collective phenomena ?

It is point 2) that I thought you claimed to be the essence of the expressed viewpoint here, and I strongly doubt it. However, if it is viewpoint 1), then you are apparently not aware that most "reductionists" ALSO subscribe to that view ! In that the knowledge of the standard model doesn't have much practical incidence on your ability to know what will be the failure-stress in a certain brand of stainless steel, for instance. I think that view is quite universally accepted, more than you seem to think ; also in the elementary particle world.
However, viewpoint 1) allows you in principle, with enough computing power, in simple enough cases, to really deduce the collective phenomena while viewpoint 2) says that this is not possible.

cheers,
Patrick.

Or what about 3): that collective phenomena cannot be deduced from knowledge at the reductionist level? This is different from your Point 2, because to be able to say that, one has to already have a series of observations at the mesoscopic scale, which we don't! Besides, collective behavior doesn't mean the Coulombic interactions between individual particles aren't part of the cause, for example. It is just that accounting for all of them, as "fully" as one can, doesn't mean you can get to the collective phenomena. Again, as I have mentioned earlier, at this point we are in a similar stage as the EPR-type experiments before Bell.

If it is really Point 1, then Anderson, Laughlin, Pines, and even S.C. Zhang[1] (who WAS trained as a high energy physicist before jumping over to CM) would not have any issues with Weinberg and Co. But they do. Anderson's review of Weinberg's latest book clearly revealed that. In fact, there are arguments that "elementary particles" may in fact be emergent, collective behavior themselves (quark fractional charges).

Zz.

[1] http://arxiv.org/PS_cache/hep-th/pdf/0210/0210162.pdf
 
  • #199
ZapperZ said:
Or what about 3): that collective phenomena cannot be deduced from knowledge at the reductionist level? This is different from your Point 2, because to be able to say that

There is something potentially misunderstandable here.
To make it clear, this knowledge is NOT practical knowledge on how to handle the formidable mathematical problem. The knowledge is just the axioms of "reductionist" physics at an appropriate level (say, non-relativistic quantum mechanics). So, in the case of condensed matter, it is the masses, charges and spins of the nuclei, the charge and spin of the electron, the EM interaction (maybe treated semi-classically, maybe some elements of QED have to be included).

In that case, I don't understand what you mean by "point 3)" because my points 1) and 2) are logically complete:
point 1) says that the fundamental laws as we know them, are responsible for the collective phenomena
and point 2) says that they aren't.
I don't see room for a point 3), except if it would mean some necessary change in our understanding of our fundamental "reductionist" laws, but that brings us back to point 1) ! We simply didn't have the correct laws to start with, then, and once we're using the correct ones, we ARE back in the reductionist scheme.

It is just that accounting for all of them, as "fully" as one can, doesn't mean you can get to the collective phenomena. Again, as I have mentioned earlier, at this point we are in a similar stage as the EPR-type experiments before Bell.

I don't see the relationship with Bell. Before Bell, people thought that there was always the possibility of an underlying classical, deterministic theory that explained their QM probabilities.
Bell cleared out the following issue: "in order to have an underlying deterministic, classical theory that produces the same probabilities as QM, your deterministic, classical theory must contain non-local interactions".
So it suddenly became interesting to look at those specific cases of QM predictions where Bell's theorem explicitly entered in action. Well, it became interesting (just as a matter of principle) to verify QM also in those predictions. But in fact, if you thought QM was a correct theory, you didn't even have to do those experiments. Bell's result only made it a bit harder to deny superposition on a macroscopic scale, as a modification of QM.

I don't see the relationship with our "state of ignorance" concerning collective phenomena ?

If it is really Point 1, then Anderson, Laughlin, Pines, and even S.C. Zhang[1] (who WAS trained as a high energy physicist before jumping over to CM) would not have any issues with Weinberg and Co. But they do. Anderson's review of Weinberg's latest book clearly revealed that. In fact, there are arguments that "elementary particles" may in fact be emergent, collective behavior themselves (quark fractional charges).

Yes, and that's very interesting. It is even generally believed that what we think are elementary particles are just the effective dynamics of something deeper. You even have discussions of phase transformations of the vacuum and so on. But that is the other way around ! It means that what we think are elementary phenomena are in fact collective phenomena of a still deeper dynamics. This doesn't mean that building UP from a deeper dynamics is impossible ! The holy grail of particle physics is in fact just to reveal that deeper dynamics. QCD is already full of it.
 
  • #200
ZapperZ said:
[1] http://arxiv.org/PS_cache/hep-th/pdf/0210/0210162.pdf

I skimmed through this great paper. But it says exactly the opposite of what you seem to be claiming: they *derive* the emergent properties from the underlying "reductionist" models ! And that was claimed to be impossible... ?

cheers,
Patrick.
 
  • #201
From everything I read, they derive reductionist models from emergent properties, not the other way around. And that's the trick, isn't it?
 
  • #202
vanesch said:
There is something potentially misunderstandable here.
To make it clear, this knowledge is NOT practical knowledge on how to handle the formidable mathematical problem. The knowledge is just the axioms of "reductionist" physics at an appropriate level (say, non-relativistic quantum mechanics). So, in the case of condensed matter, it is the masses, charges and spins of the nuclei, the charge and spin of the electron, the EM interaction (maybe treated semi-classically, maybe some elements of QED have to be included).

In that case, I don't understand what you mean by "point 3)" because my points 1) and 2) are logically complete:
point 1) says that the fundamental laws as we know them, are responsible for the collective phenomena
and point 2) says that they aren't.
I don't see room for a point 3), except if it would mean some necessary change in our understanding of our fundamental "reductionist" laws, but that brings us back to point 1) ! We simply didn't have the correct laws to start with, then, and once we're using the correct ones, we ARE back in the reductionist scheme.

OK, let's try this with a possibly-bad analogy (meaning if this doesn't work, I have wiggle room to sneak out of it.) :)

Look at the behavior of a crowd at a sporting game. I don't know about you, but being near the Chicago Cubs baseball field (Wrigley Field), I've seen some "interesting" fan behavior from people when they're in a large group. Yet, if you take one such person out and analyze his behavior, you could get a mild-mannered, law-abiding citizen. Put him in a group of people at a baseball game, though, and he's a foul-mouthed maniac. The individual behavior cannot explain the "collective" behavior.

In condensed matter, there are "higher order" collective behavior that simply do not emerge out of looking at all the interactions at the individual particle level. Does that mean the interactions at the individual particle level are completely irrelevant (your Point 2)? No. Without those, you don't have the material or the fabric. But the fabric does not explain the shape of the clothing, or the pattern of the collar, or the shape of the sleeves, etc. (Your Point 1). There is an additional "hand" at work here.

For CM physicists, what additionally indicates that this is the case is the so-called "quantum protectorate"[1], in which the "unevenness" and disorder at the individual particle scale do NOT play any role in various collective behaviors such as superconductivity. These emergent phenomena are immune to such details.

Zz.

[1] P.W. Anderson, Science v.288, p.480 (2000); http://arxiv.org/abs/cond-mat/0007287
 
  • #203
ZapperZ said:
In condensed matter, there are "higher order" collective behavior that simply do not emerge out of looking at all the interactions at the individual particle level. Does that mean the interactions at the individual particle level are completely irrelevant (your Point 2)? No. Without those, you don't have the material or the fabric. But the fabric does not explain the shape of the clothing, or the pattern of the collar, or the shape of the sleeves, etc. (Your Point 1). There is an additional "hand" at work here.

For CM physicists, what additionally indicates that this is the case is the so-called "quantum protectorate"[1], in which the "unevenness" and disorder at the individual particle scale do NOT play any role in various collective behaviors such as superconductivity. These emergent phenomena are immune to such details.

Zz.

It seems that the problem always stays with the interpretation of the words (causality, etc...).
Probability theory has given the main result: the weak law of large numbers. From a collective set of independent objects (choosing a good representation), we have a deterministic global result that does not depend on the individual values (we can even mix different sets of independent objects). In other words, there is no functional relation (a cause, one possible interpretation of causality) between the global average value (for example, the form of the collection of objects) and the values of the individual objects.
Therefore, we may say that this collective set of objects has properties that do not depend on the individual object values.
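
A quick numerical illustration of that statement (the three "kinds of object" below are arbitrary toy distributions): mix independent draws from very different distributions, and the overall average still settles down to a fixed number that the individual values tell you almost nothing about.

[code]
# Mix independent draws from three deliberately different distributions; the overall
# sample mean still converges to a deterministic value, (2 + 1 + 3) / 3 = 2.
import numpy as np

rng = np.random.default_rng(1)

def mixed_sample(k):
    a = rng.exponential(2.0, k)          # mean 2
    b = rng.uniform(0.0, 2.0, k)         # mean 1
    c = rng.normal(3.0, 5.0, k)          # mean 3, large spread
    return np.concatenate([a, b, c])

for k in (10, 1_000, 100_000):
    print(f"n = {3 * k:6d} draws   sample mean = {mixed_sample(k).mean():.4f}")
[/code]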

This is a very formal statement (more strict versions can be found in probability texts).
Therefore, what ZapperZ says about condensed matter seems to be very reasonable.


Seratend.
 
  • #204
If the Law of Large Numbers, in any form, describes some sort of emergent property, then this property is virtually universal, almost as much as is the use of algebra in the sciences. That is, the LLN can apply to clinical trials of new pharmaceutical drugs, survey research, compilation of the "best" particle data from individual laboratories, calibration of lab equipment, advertising evaluations,...

Remarkably free of assumptions as the LLN is, it does require a few restrictions on the sample. The classic formulation requires a sample space of independent events all governed by the same probability distribution. And, most importantly, this distribution must have a finite variance. So, ultimately, the big errors are less probable and they average out -- and this happens for smaller and smaller "big" errors, until the distribution of the mean becomes a very sharp Gaussian as the sample size goes to infinity.
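
A quick illustration of why some such restriction is needed: running means of standard normal draws settle down, while running means of Cauchy draws (which have no finite mean or variance at all) keep jumping around no matter how large the sample.

[code]
# Running means: a finite-variance distribution (normal) versus the Cauchy distribution,
# for which no law of large numbers applies because it has no finite mean or variance.
import numpy as np

rng = np.random.default_rng(2)
normal = rng.normal(size=1_000_000)
cauchy = rng.standard_cauchy(size=1_000_000)

for k in (1_000, 10_000, 100_000, 1_000_000):
    print(f"n = {k:7d}   mean(normal) = {normal[:k].mean():+.4f}"
          f"   mean(cauchy) = {cauchy[:k].mean():+.2f}")
[/code]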

My sense is that the LLN is a powerful property of our language, and perhaps the idea is more reflective of "emergence" in human thought than in nature.

Regards,
Reilly Atkinson
 
  • #205
Zz, your sports analogy seems to imply that "reductionists" don't look at interactions. I know that's not what you really meant, but the analogy breaks down without it.

ZapperZ said:
For CM physicists, what additionally indicates that this is the case is the so-called "quantum protectorate"[1], in which the "unevenness" and disorder at the individual particle scale do NOT play any role in various collective behaviors such as superconductivity. These emergent phenomena are immune to such details.

What is the difference between this and what happens in the regimes where classical physics is (nearly) correct? Isn't that also a "quantum protectorate?"
 
  • #206
reilly said:
If the Law of Large Numbers, in any form, describes some sort of emergent property, then this property is virtually universal, almost as much as is the use of algebra in the sciences.

In addition, it is the way to reconcile the determinist with the probabilist view. In other words, each deterministic value may be viewed, formally, as the value of a collection of random variables (while we are used to viewing probability results as unknown deterministic variables).

reilly said:
Remarkably free of assumptions as the LLN is, it does require a few restrictions on the sample. The classic formulation requires a sample space of independent events all governed by the same probability distribution. And, most importantly, this distribution must have a finite variance.
Note (for the users of PF) this is the weak LLN. The strong LLN does not require a finite variance, just a finite mean value. There are other formulations (mainly with the finite variance restriction) that allow different mean values for each random variable and we still get a "deterministic" result.

Seratend.
 
  • #207
Stingray said:
Zz, your sports analogy seems to imply that "reductionists" don't look at interactions. I know that's not what you really meant, but the analogy breaks down without it.

Humm... does the fact that you didn't comment on my "fabric" analogy mean that it is ok? :)

I'll be the first person to point out that analogies are a very, very weak technique for illustrating physics ideas (and I have!). I would have chopped up that analogy even more vigorously than you did. :)

What is the difference between this and what happens in the regimes where classical physics is (nearly) correct? Isn't that also a "quantum protectorate?"

I dunno. I've never thought about that other than to point out that phenomena falling under the quantum protectorate regime are not classical phenomena. Whether they share similarities in "principle", that's something I do not know.

Zz.
 
  • #208
ZapperZ said:
Humm... does the fact that you didn't comment on my "fabric" analogy mean that it is ok? :)

The fabric analogy did help, but I'm still a little confused. Once you know the fabric, you could in principle find all possible types of clothing that could be constructed out of it. Of course you're right that knowing the fabric won't uniquely fix the end result, but the class of all possible results can be known beforehand (again, in principle).

Going back to physics, it seems like different types of laboratory materials could in principle be "predicted" as different subsets of the class of all possible "stable" states with a sufficiently large number of degrees of freedom, large mass, etc. Of course defining all of those things appropriately, and carrying out the solution from microscopic physics would be extremely difficult, but it is logically possible.

Most anything in physics can be put into the form of an initial value problem (at least formally). Initial data are specified, and then appropriate evolution equations are used to find what happens in the future (or past). The set of all possible initial data sets is usually extremely nontrivial. The laws of physics impose constraints on what is allowable even at one instant in time. Are you taking this into account?

If I'm still misunderstanding you, maybe you could point to one of these Laughlin-Anderson-Weinberg debates. Google didn't turn up anything useful.
 
  • #209
ZapperZ said:
Look at the behavior of a crowd at a sporting game. I don't know about you, but being near the Chicago Cubs baseball field (Wrigley Field), I've seen some "interesting" fan behavior from people when they're in a large group. Yet, if you take one such person out and analyze his behavior, you could get a mild-mannered, law-abiding citizen. Put him in a group of people at a baseball game, though, and he's a foul-mouthed maniac. The individual behavior cannot explain the "collective" behavior.

This would mean that if I only have one individual, I cannot obtain a behaviour that is observed in the crowd. But let us now take our single individual, and put 3D goggles and earphones on his head, and (very important !) fill his belly with 4 cans of beer. Now let us play the movie of a crowd in a stadium on his 3D headset, in such a way that our individual is convinced to be sitting on his chair in the stadium. I'm pretty sure we will now observe similar "interesting" behaviour!
But that would then mean that this behaviour is "reductionist-wise" encoded into his individual behaviour, and can be displayed given the right visual, alcoholic and auditory stimuli, even if those stimuli are normally not present in everyday situations (and hence the behaviour seems to be different: we're simply exploring a different input-output range of the model of behaviour in everyday situations and in the stadium). And I'm pretty sure that when writing out the statistical physics model of several such individuals, in a relationship as is the case in a stadium, the collective behaviour comes out as one of the solutions.

In condensed matter, there are "higher order" collective behavior that simply do not emerge out of looking at all the interactions at the individual particle level. Does that mean the interactions at the individual particle level are completely irrelevant (your Point 2)? No.

I didn't mean to say that they are completely irrelevant. I just meant that point 2 indicated that they are not the cause of the collective behaviour. In that if you could calculate, without any un-allowed for approximation, the expected collective behaviour purely based upon the "reductionist" model, but taking into account all interactions, you would find a DIFFERENT behaviour than what is the true behaviour. I have to say I find it extremely difficult to believe that there are many physicists out there holding such a view.
Of course, as it has been pointed out, there are often different cases possible, depending on initial conditions.

For CM physicists, what additionally indicates that this is the case is the so-called "quantum protectorate"[1], in which the "unevenness" and disorder at the individual particle scale do NOT play any role in various collective behaviors such as superconductivity. These emergent phenomena are immune to such details.

That, by itself (universality and so on), is not an argument for point 2). You can indeed have universality emerging from big classes of underlying reductionist models, as long as they satisfy some broad properties. This is indeed, as has been pointed out, similar in behaviour to the law of large numbers: many different "reductionist" distributions, when added together, result in a Gaussian distribution.
But that doesn't mean that you cannot obtain that Gaussian starting with a given reductionist distribution ! Indeed, the Gaussian is very well simulated if you do a Monte Carlo simulation.
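
For instance, in this little Monte Carlo sketch (an exponential distribution is chosen arbitrarily as the "reductionist" ingredient), the sums of many independent draws come out Gaussian to good accuracy:

[code]
# Add up n_terms independent draws from a deliberately skewed distribution, n_trials
# times over; the standardized sums have the (near-)zero skewness and excess kurtosis
# of a Gaussian, as the central limit theorem says they should.
import numpy as np

rng = np.random.default_rng(3)
n_terms, n_trials = 1000, 50_000

sums = np.zeros(n_trials)
for _ in range(n_terms):
    sums += rng.exponential(1.0, n_trials)

z = (sums - sums.mean()) / sums.std()         # standardize the sums
print(f"skewness        ~ {np.mean(z ** 3):+.3f}   (0 for a Gaussian)")
print(f"excess kurtosis ~ {np.mean(z ** 4) - 3.0:+.3f}   (0 for a Gaussian)")
[/code]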

However, it is a clear indication of the opposite claim: in certain cases, the collective behaviour is so universal, that it doesn't contain much information anymore of the underlying reductionist model. So you cannot use these data to deduce the reductionist model of individual behaviour out of the collective behaviour. This is what puts "barriers" between different scales of observation, and it is at the same time a curse and a blessing. It is a blessing, because it allows you to forget about the individual behaviour, and start from scratch from the collective behaviour, and it is a curse because the information of the individual behaviour is largely lost, and you can only do "elementary particle" experiments to find out precisely the individual behaviour.
But again, this is NOT point 2).
Point 2 says: "reductionist behaviour" NOT -> "collective behaviour"
Universality says: "collective behaviour" NOT -> "reductionist behaviour" because MANY reductionist behaviours lead to the same collective behaviour.

cheers,
Patrick.
 
  • #210
In the examples in the paper, it seemed to me that the emergent symmetries were already present in the high-energy short scale primary phenomena, but hidden or broken by the intense short range interactions. It was only by "integrating out" those short range effects that the symmetries became visible. But when you integrate out the individual behaviors, what you get are the collective behaviors, and so it is these that exhibit the symmetries.

Information has not been gained by this integration, rather the contrary, but something that was obscured has been made plain, like squinting your eyes to see a shape in the bushes.
 
