Murray Gell-Mann on Entanglement

  • Thread starter: Thecla
  • Tags: Entanglement
  • #301
stevendaryl said:
But that theory isn't standard QM, it's a proposed alternative theory.
I don't claim anything for GRW. It has a passing similarity to what I was thinking.
 
  • #302
secur said:
Anyway Demystifier's statement is justified. A traditional classical physicist - such as Einstein - considers it "cheating" for QM to simply refuse to predict (one single) experimental result. If we ever come up with a new, deeper, theory that can do that, Demystifier's (and Einstein's) point would become obvious and accepted by all. Until then, it remains rather subtle and requires some cogitation to appreciate.

I think one has to realize that this point is absolutely standard, and that one can take Bohr's or Einstein's view coherently. What is being debated here is whether the claims by vanhees71 (following Ballentine), arnold neumaier etc. are correct - their views fall into neither Bohr's class nor Einstein's. Certainly, they are not textbook views - one would have to believe that Bohr, Einstein, Dirac, Landau & Lifshitz, Cohen-Tannoudji, Diu, Laloe, Bell, Weinberg etc. all failed to understand quantum mechanics.
 
  • #303
secur said:
I certainly thought that in classical relativistic physics the past light cone(s) of the objects in question (including the space, of course, with its curvature; and the stress-energy tensor) contain all info that could possibly affect the physics. And, theoretically perfect prediction is possible. (In fact given that the theory is local all you really need is "here-and-now" information - anything in contact - but that's not relevant at the moment). Can you please explain further?

[EDIT] assume there's only one inertial frame used for both observations and predictions ... I can't think of any other loopholes I might be missing

Maybe he's referring to "cosmic censorship" scenarios in general relativity; it's the only example I know of where that stops being true.
 
  • #304
stevendaryl said:
Do they? That would seem to mean that if you are trying to measure the spin of an electron, then initial conditions in the measuring device determine the final measurement result. That's a kind of hidden-variable theory, except that the variable is not in the thing being measured, but in the thing doing the measurement.
You misunderstand. The initial conditions happen after preparation and before measurement.
 
  • #305
stevendaryl said:
But my claim is that there is nothing in quantum mechanics that would then select a single alternative out of the set of possibilities described by that mixed state.

Yes there is: the so-called collapse, when a measurement is made. Of course you mean, apart from that.

Mentz114 said:
If you allow dissipative sub-systems in QT then it is the initial conditions that decide the outcome.

GRW posits spontaneous collapse. Presumably that has a "passing similarity" to your "dissipative subsystems"? But stevendaryl's response applies equally well to your idea, as to GRW:

stevendaryl said:
But that theory isn't standard QM, it's a proposed alternative theory.
 
  • #306
Mentz114 said:
You misunderstand. The initial conditions happen after preparation and before measurement.

I don't see how that could work. If an electron being spin-up causes a detector to enter state UP, and an electron being spin-down causes a detector to enter state DOWN, then by the linearity of the evolution equations of quantum mechanics, an electron in a superposition of spin-up and spin-down would cause a detector to enter into a superposition of states UP and DOWN, if there is nothing going on in the detectors other than ordinary quantum mechanics.

Of course, something macroscopic like a detector will interact with the environment, which enormously complicates things. But the same thing applies to electron + detector + environment: linearity of quantum mechanics would imply that the composite system will enter into a superposition. So, we would end up with a superposition of two different composite states: ##|\psi_{up}\rangle##, where the electron is spin-up, the detector detects spin-up, and the environment is in whatever condition is appropriate for the environment interacting with a detector that detected spin-up, and ##|\psi_{down}\rangle##, where all three components are in states appropriate for the electron being spin-down.

It doesn't make sense to say that details of the detector, or the environment, will cause it to shift to just one "branch". That would violate the linearity of the evolution equations. You could propose new, nonlinear corrections to quantum mechanics that might accomplish the kind of objective collapse that you're talking about, but it isn't possible in standard quantum mechanics. (Unless you consider wave function collapse to be part of standard quantum mechanics, which some people do.)
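
To make the linearity point concrete, here is a minimal toy model (a sketch of my own, assuming numpy; the CNOT-type von Neumann coupling stands in for the spin-pointer interaction, it is not anyone's actual apparatus):

```python
import numpy as np

# Basis of the spin (x) pointer space, ordered |u,0>, |u,1>, |d,0>, |d,1>.
# The "measurement" unitary is a CNOT with the spin as control: it maps
# |up, ready> -> |up, UP> and |down, ready> -> |down, DOWN>,
# where ready = UP = |0> and DOWN = |1>.
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=complex)

up    = np.array([1, 0], dtype=complex)
down  = np.array([0, 1], dtype=complex)
ready = np.array([1, 0], dtype=complex)

a, b = 1/np.sqrt(2), 1/np.sqrt(2)        # electron in a superposition
psi_in = np.kron(a*up + b*down, ready)   # electron (x) detector, unentangled

psi_out = U @ psi_in                     # linear evolution
# By linearity the result is a|up,UP> + b|down,DOWN>: an entangled
# superposition of pointer readings, not a single definite outcome.
expected = a*np.kron(up, ready) + b*np.kron(down, np.array([0, 1], dtype=complex))
print(np.allclose(psi_out, expected))    # True
```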
 
  • #307
secur said:
Yes there is: the so-called collapse, when a measurement is made. Of course you mean, apart from that.

Right, the issue is whether a separate "collapse" hypothesis is needed, or whether the effect of collapse is derivable from just unitary quantum evolution.
 
  • #308
atyy said:
I think one has to realize that this point is absolutely standard, and that one can take Bohr's or Einstein's view coherently. What is being debated here is whether the claims by vanhees71 (following Ballentine), arnold neumaier etc. are correct - their views fall into neither Bohr's class nor Einstein's. Certainly, they are not textbook views - one would have to believe that Bohr, Einstein, Dirac, Landau & Lifshitz, Cohen-Tannoudji, Diu, Laloe, Bell, Weinberg etc. all failed to understand quantum mechanics.

I'm sorry, your comment seems orthogonal to my post. Please make the connections (which, no doubt, exist) explicit, if you like.

ddd123 said:
Maybe he's referring to "cosmic censorship" scenarios in general relativity; it's the only example I know of where that stops being true.

This brings up an interesting point. "Cosmic Censorship" - which of course is only a conjecture - proposes that a naked singularity never happens. Therefore if the evolution of some system would lead to that, it won't - instead it will do something else. On the face of it that sounds like Nature must "look ahead" to see the result of some process, and if Nature sees that it will be "censored", then it changes the (other) laws of physics in this one instance, to avoid that "illegal" outcome. That's teleology.

Ignore QM entirely for this discussion and stick to purely classical physics, because QM can confuse the following points I want to make.

For perspective consider the conservation laws of energy and momentum, applied to a couple of (perfectly elastic) billiard balls. As we all know you can determine how they'll bounce off each other most easily by applying those conservation laws. The two resulting simultaneous equations are easily solved. But certainly we don't normally think that Nature does such a look-ahead computation. Rather the billiard ball trajectories evolve via differential equations, "contact transformations", according to Newton's laws of motion and the law of elastic collision. Nature never "looks ahead" during this process. But it so happens that, when the collision is done and the balls are heading off to infinity, energy and momentum have been conserved.
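
Concretely (a minimal sketch in Python, my own illustration rather than part of the original argument): the two conservation equations have a familiar closed-form solution, and the conserved quantities check out after the fact without any "look-ahead" built into the dynamics:

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Closed-form solution of the two conservation equations
    m1*v1 + m2*v2 = m1*v1p + m2*v2p              (momentum)
    m1*v1**2 + m2*v2**2 = m1*v1p**2 + m2*v2p**2  (kinetic energy, elastic)."""
    v1p = ((m1 - m2)*v1 + 2*m2*v2) / (m1 + m2)
    v2p = ((m2 - m1)*v2 + 2*m1*v1) / (m1 + m2)
    return v1p, v2p

m1, v1, m2, v2 = 1.0, 2.0, 1.0, 0.0
v1p, v2p = elastic_collision_1d(m1, v1, m2, v2)
print(v1p, v2p)   # 0.0 2.0 -- equal masses simply exchange velocities

# Conservation holds at the end, with no look-ahead during the process:
assert abs(m1*v1 + m2*v2 - (m1*v1p + m2*v2p)) < 1e-12
assert abs(m1*v1**2 + m2*v2**2 - (m1*v1p**2 + m2*v2p**2)) < 1e-12
```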

There are many similar examples, e.g. various forms of the Action Principle: many places where, by solving the original dynamical differential equations, we come up with (very useful) global constraints, expressed as integral equations. Loosely we say that Nature "must obey" these. But - in the normal ontology of classical physics - we don't imagine Nature is looking ahead, beyond the past light cone, to decide what to do. The instant-by-instant diff EQ's are all Nature knows about.

It's the same for Cosmic Censorship. If it's true that Nature never "allows" a naked singularity, it must happen due to ordinary physical laws (including, perhaps, currently-unknown ones) which operate only on the currently available info (past light cone) in such a way that, it turns out, a naked singularity never happens.

You may be right that A. Neumaier is thinking of something like this; of course, we don't know. I was planning to give the above answer if he did respond as you suggest.

This general issue of "teleology in physics" is wandering off-topic; there's a lot more one could say about it. Bottom line, I think it should always be viewed as merely a convenient heuristic - sometimes very convenient - but Nature never really does "look ahead". Ignoring QM, where it's not so clear.

stevendaryl said:
Right, the issue is whether a separate "collapse" hypothesis is needed, or whether the effect of collapse is derivable from just unitary quantum evolution.

For what my opinion's worth, it seems very clear that mere unitary quantum evolution can't do it. You need an extra hypothesis to explain the collapse. Every alternative interpretation has one - including MWI, despite their claim that they don't.
 
  • #309
stevendaryl said:
There is nothing in quantum mechanics that bounds the standard deviation of a variable such as position. A single electron can be in a superposition of being here, and being 1000 miles away. A single atom can be in such a superposition. A single molecule can be in such a superposition. There is nothing in quantum mechanics that says that a macroscopic object can't be in such a superposition.
See the new thread https://www.physicsforums.com/threads/the-typical-and-the-exceptional-in-physics.885480/
 
  • #310
ddd123 said:
it's unclear where the pure superposition is supposed to end.
It is nowhere there in the first place. It is an artifact of the initial idealization.
 
  • #311
secur said:
Can you please explain further?
One needs the information on a Cauchy surface, not on the past light cone, to make predictions. More precisely, to predict classically what happens at a point in the future of a given observer, the latter's present defines (at least in sufficiently nice spacetimes) a Cauchy surface on which all the information needed to infer the desired result must be available.
It is no different in quantum mechanics when one makes (probabilistic) predictions. The apex of the light cone is the point in space-time at which all information needed to do the statistics is available. See https://www.physicsforums.com/posts/5370260 and the discussion there between post #187 and #230.
 
  • Likes: vanhees71
  • #312
stevendaryl said:
Under what circumstances does an electron measure its own spin? Never, right? So it doesn't make any sense at all to say that an isolated electron has a 50% probability of being spin-up in the z-direction. What about a pair of electrons? When does one electron measure the spin of another electron? Never, right? So for a pair of electrons, probability doesn't make any sense.

Probability only makes sense for an interaction in which one of the subsystems is a macroscopic measuring device.
You measure the spin, e.g., with a Stern-Gerlach apparatus, which leads to an entanglement between the spin component and the position of the particle, which then can be detected. All you know about the outcome of such a measurement is that with 50% probability you find the one or the other possible value of this quantity. Of course, this doesn't tell you much (in fact as little as possible in the sense of information theory) about a single spin. Probabilities in practice are relative frequencies of the occurrence of the property when you perform measurements on an ensemble of independently prepared spins in this state. I don't know why we have to repeat this all the time in our discussions. It's common practice with all experiments in all labs around the globe!
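
As a toy illustration of "probabilities as relative frequencies on an ensemble" (a sketch assuming numpy; the 50/50 weights are the ones from the post, and the random draws just stand in for independently prepared spins):

```python
import numpy as np

# Simulate an ensemble of independently prepared spins, each measured
# once; "up" occurs with Born probability 0.5. The relative frequency
# of "up" converges to the probability as the ensemble grows.
rng = np.random.default_rng(0)
for n in (10, 1_000, 100_000):
    ups = rng.random(n) < 0.5
    print(n, ups.mean())   # relative frequency -> 0.5
```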
 
  • #313
stevendaryl said:
To me, if you and your equipment are all described by the same physics as electrons and photons, etc., then to say that "I prepared things in such-and-such a way" means "Me and my equipment were put into such and such a macroscopic state". So there is a notion of "state" for macroscopic objects that does not depend on yet another system to prepare them in that state. They can put themselves into a particular state. But you're saying that for an electron, or a photon, or any microscopic system, the only notion of state is a preparation procedure by a macroscopic system. That seems incoherent to me. At best, it's a heuristic, but it can't possibly be an accurate description of what's going on. If macroscopic systems have properties without being observed, then why can't microscopic systems?
Common practice today disproves you. It has become more and more possible in recent decades to handle single particles and photons and prepare them in many kinds of pure and mixed states, everything in accordance with standard QT.
 
  • #314
vanhees71 said:
Common practice today disproves you.
So are you saying that a particle (microscopic system) can acquire a definite quantum state spontaneously?
 
  • #315
No, they acquire a definite quantum state by being prepared in it. I don't know what you mean by "spontaneously".
 
  • #316
atyy said:
I think one has to realize that this point is absolutely standard, and that one can take Bohr's or Einstein's view coherently. What is being debated here is whether the claims by vanhees71 (following Ballentine), arnold neumaier etc. are correct - their views fall into neither Bohr's class nor Einstein's. Certainly, they are not textbook views - one would have to believe that Bohr, Einstein, Dirac, Landau & Lifshitz, Cohen-Tannoudji, Diu, Laloe, Bell, Weinberg etc. all failed to understand quantum mechanics.
According to Ballentine, it is really the case that all these mentioned men failed to understand quantum mechanics properly.

Each of them (including Ballentine) has a slightly different view of QM. Personally I like Bell's view the most, but I see some merits in all of them.
 
  • #317
vanhees71 said:
No, they acquire a definite quantum state by being prepared in it. I don't know what you mean by "spontaneously".
With "spontaneously" I mean property of macroscopic systems to prepare themselves in definite state as suggested by stevendaryl:
stevendaryl said:
So there is a notion of "state" for macroscopic objects that does not depend on yet another system to prepare them in that state.
 
  • #318
Demystifier said:
According to Ballentine, it is really the case that all these mentioned men failed to understand quantum mechanics properly.

Each of them (including Ballentine) has a slightly different view of QM. Personally I like Bell's view the most, but I see some merits in all of them.

That's definitely not true. My view is very conservative and minimal. Weinberg's point of view is, according to his newest textbook on QM, that the interpretation problem is unsolved. Landau & Lifshitz and Dirac are very close to my view. I've never understood Bohr, who used to write very enigmatic papers. Einstein's view is, in my opinion, ruled out by the outcome of Bell experiments. I don't know the other books mentioned well enough to say anything concerning their view on interpretation.
 
Last edited:
  • #319
zonde said:
With "spontaneously" I mean property of macroscopic systems to prepare themselves in definite state as suggested by stevendaryl:
Think about it. A macroscopic system tends to "prepare itself" in a state of (local) thermal equilibrium (let's not consider systems with long-ranged forces for the moment) but that takes time. So I still don't know, what you mean by "spontaneously".
 
  • #320
A. Neumaier said:
It is nowhere there in the first place. It is an artifact of the initial idealization.

That's a very interesting statement. I'm not sure I understand your view on things so is it possible to clarify what you mean here?

Are you suggesting that the 'textbook' axioms are incorrect (I think you called them 'ridiculous' in another post)?

Or are you suggesting that superposition (in a quantum sense) is not a physical phenomenon?

I agree that it may be extremely practically difficult to devise an experiment that is capable of testing an 'idealized' quantum system - so I can see why it might be possible to say that the idealized axioms don't apply FAPP - but is it your view that the idealized axioms are actually wrong?

Or is your view that the states and wavefunctions and mathematical machinery of QM are nothing more than a collection of mathematical devices, divorced from 'reality', that allows us to calculate probabilities in experiments? So the maths gets us the right answers but tells us absolutely nothing about what might be 'going on'?
 
  • #321
vanhees71 said:
That's definitely not true. My view is very conservative and minimal. Weinberg's point of view is, according to his newest textbook on QM, that the interpretation problem is unsolved. Landau & Lifshitz and Dirac are very close to my view. I've never understood Bohr, who used to write very enigmatic papers. Einstein's view is, in my opinion, ruled out by the outcome of Bell experiments. I don't know the other books mentioned well enough to say anything concerning their view on interpretation.
Is it so hard to press the Quote button? :cool:
 
  • #322
Demystifier said:
Is it so hard to press the Quote button? :cool:
I usually only quote a message, if my answer is not directly after the message I refer to. That's not working with this thread, because the frequency of answers is too high. zonde was quicker with his posting than I could write mine. For clarity I copied the quote into my message. Sorry for the confusion.
 
  • #323
vanhees71 said:
That's definitely not true.
What exactly is definitely not true? I really think that Ballentine thinks that most of the others have not understood QM properly.
 
  • #324
Demystifier said:
What exactly is definitely not true? I really think that Ballentine thinks that most of the others have not understood QM properly.
It's definitely not true that I think that all the "founding fathers" of QT are wrong or haven't understood their own theory. Ballentine, in my opinion, also follows just standard QT. He's even emphasizing the bare physics content of it, and there's no contradiction between the "minimal interpretation" and the Copenhagen flavor without collapse. As I said, I never understood Bohr completely, but as far as I can see he had a pretty similar view, taking the quantum states as epistemic.
 
  • #325
Simon Phoenix said:
Are you suggesting that the 'textbook' axioms are incorrect
They are appropriate for an introductory course where emphasis is on simple, paradigmatic systems. But already a simple position measurement is not covered, since it cannot collapse to an eigenstate - position has no normalizable eigenstates. Realistic measurement is a highly complex subject, not something appropriate for foundations.
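
To spell the point out (standard textbook reasoning, in my wording): the formal eigenfunction of ##\hat x## with eigenvalue ##x_0## is a delta function,
$$\hat x\,\psi_{x_0}(x)=x_0\,\psi_{x_0}(x)\;\Rightarrow\;\psi_{x_0}(x)=\delta(x-x_0),\qquad \|\psi_{x_0}\|^2=\int|\delta(x-x_0)|^2\,dx=\delta(0)=\infty,$$
so ##\psi_{x_0}## is not in ##L^2(\mathbb{R})##, and no measurement can collapse the system onto it.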
 
  • Likes: dextercioby
  • #326
vanhees71 said:
Ballentine, in my opinion, also follows just standard QT. He's even emphasizing the bare physics content of it, and there's no contradiction between the "minimal interpretation" and the Copenhagen flavor without collapse.
But why then did Ballentine make a wrong prediction about the quantum Zeno effect? Is it just a little mistake that can happen to everyone? Or is it a deep disagreement with the others?
 
  • #327
vanhees71 said:
He's even emphasizing the bare physics content of it, and there's no contradiction between the "minimal interpretation" and the Copenhagen flavor without collapse.
Can you clarify what you mean by "without collapse"? Is it just a matter of words, calling it update instead of collapse, as from the earlier discussion with atyy? Or is there a difference in the approach to calculations? How do you do without update in all experimental scenarios?
 
  • #328
vanhees71 said:
Think about it. A macroscopic system tends to "prepare itself" in a state of (local) thermal equilibrium (let's not consider systems with long-ranged forces for the moment) but that takes time. So I still don't know, what you mean by "spontaneously".

The point is that a measuring device does not have yet another measuring device measuring it. So the idea that the state of a system is only meaningful in predicting probabilities for what a measuring device would measure is not true for macroscopic systems.
 
  • #329
vanhees71 said:
Common practice today disproves you. It has become more and more possible in recent decades to handle single particles and photons and prepare them in many kinds of pure and mixed states, everything in accordance with standard QT.

I don't understand how your remarks address what I said. For a microscopic system, the "state" is meaningful in two ways: (1) the preparation procedure needed to put the system in that state, and (2) the probabilities that state gives for future measurements. But for a macroscopic system, there is a notion of state that doesn't have either of those features. A macroscopic system simply is in some definite state or another.
 
  • #330
ddd123 said:
Can you clarify what you mean by "without collapse"? Is it just a matter of words, calling it update instead of collapse, as from the earlier discussion with atyy? Or is there a difference in the approach to calculations? How do you do without update in all experimental scenarios?
There is anyway no difference in calculations when it comes to the physical content of quantum theory. The minimal interpretation is just saying that the state has probabilistic information about the outcome of future measurements and nothing else.
 
  • #331
Demystifier said:
But why then did Ballentine make a wrong prediction about the quantum Zeno effect? Is it just a little mistake that can happen to everyone? Or is it a deep disagreement with the others?
Yes, I think that's simply a mistake. It's not clear to me why the minimal interpretation should lead one to deny the quantum Zeno effect.
 
  • Likes: Demystifier
  • #332
Did Ballentine actually change his mind though? From the last time I read wiki, it seemed he was still arguing against the Zeno effect, so there's an impression something is at stake.
 
  • #333
stevendaryl said:
I don't understand how your remarks address what I said. For a microscopic system, the "state" is meaningful in two ways: (1) the preparation procedure needed to put the system in that state, and (2) the probabilities that state gives for future measurements. But for a macroscopic system, there is a notion of state that doesn't have either of those features. A macroscopic system simply is in some definite state or another.
The macroscopic observables, which are an average over a vast amount of microscopic observables ##^*##, appear classical. Of course, on the microscopic level a macroscopic system is described by QT. There's no contradiction between these two levels of description.

##^*## corrected due to the hint in #340
 
Last edited:
  • #334
ddd123 said:
Did Ballentine actually change his mind though? From the last time I read wiki, it seemed he was still arguing against the Zeno effect, so there's an impression something is at stake.
Can you share the link to that wiki?
 
  • #336
vanhees71 said:
The macroscopic observables, which are an average over a vast amount of microscopic states, appear classical. Of course, on the microscopic level a macroscopic system is described by QT. There's no contradiction between these two levels of description.

Claiming it doesn't make it so. If each of the microscopic states is only meaningful in that it makes predictions for future measurements, then how does a macroscopic state have meaning that doesn't involve future measurements?
 
  • #337
stevendaryl said:
Claiming it doesn't make it so. If each of the microscopic states is only meaningful in that it makes predictions for future measurements, then how does a macroscopic state have meaning that doesn't involve future measurements?

More specifically: Why does an average over a vast number of microscopic states, each of which has meaning only in terms of future measurements, produce a macroscopic value that has meaning independent of measurements? That seems like an outlandishly improbable claim. That doesn't make it false, but it shouldn't be a default assumption without further argument supporting it.
 
  • #338
ddd123 said:
Ok, I cannot see a problem with [46]. Indeed there's no "collapse", but just the interaction between the atom (simplified to a three-level toy model) and the RF field that causes the "quantum Zeno effect". So, of course, Ballentine is not denying the measured facts.
 
  • #339
stevendaryl said:
the state of a system is only meaningful in predicting probabilities for what a measuring device would measure is not true for macroscopic systems.
The (always mixed) state of a measurement device indeed fully determines the measurement reading to a very high accuracy, not only probabilistically. That's how measurement devices are made.
 
  • #340
vanhees71 said:
The macroscopic observables, which are an average over a vast amount of microscopic states
over a vast amount of microscopic observables, not states!
 
  • Likes: vanhees71
  • #341
A. Neumaier said:
The (always mixed) state of a measurement device indeed fully determines the measurement reading to a very high accuracy, not only probabilistically. That's how measurement devices are made.

I think you misunderstood what I said. There are two states involved here: the state of the measuring device, and the state of the system being measured. The first has a meaning that does not depend on measurements by yet other measuring devices.
 
  • #342
Macroscopic state variables such as the position of the center of mass of a macroscopic object have two features that are different from microscopic state variables: (1) There are no observed interference effects between different states, and (2) they have a small standard deviation (relative to the appropriate scale for the variable; for example, the standard deviation for the position of a brick is typically small compared to the size of the brick). Decoherence explains the first effect, but not the second. Pure quantum mechanics in the minimal interpretation cannot explain why macroscopic state variables have definite (up to a small standard deviation) values.

Bohmian mechanics halfway explains it. According to that interpretation, all objects have definite positions at all times. However, in Bohmian mechanics, the state, or wave function, evolves smoothly at all times, so in those cases where quantum mechanics would predict a large standard deviation, Bohmian mechanics gives (or seems to--maybe I'm misunderstanding something) schizophrenic results: the macroscopic object such as a brick is well-localized, since each of its constituent particles is well-localized. On the other hand, the standard deviation, as computed using the wave function, may still be quite large.

Many-worlds attempts (and I'm not sure how successful it is) to say that even though a macroscopic object can have a large standard deviation for its position, that is unobservable. Rather than "seeing" a brick with a large standard deviation, the state of the world splits into different branches, each of which sees the brick as localized.
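
A two-site caricature of the point about features (1) and (2) (my own sketch, assuming numpy; the numbers are illustrative): decoherence removes the interference terms of the density matrix but leaves the diagonal, and hence the position spread, untouched:

```python
import numpy as np

# A "brick" restricted to two positions, in an equal superposition.
x = np.array([0.0, 1000.0])              # "here" and "1000 miles away"
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())          # pure-state density matrix

rho_dec = np.diag(np.diag(rho))          # decoherence: off-diagonals -> 0

def position_stats(rho, x):
    p = np.real(np.diag(rho))            # position probabilities
    mean = (p * x).sum()
    return mean, np.sqrt((p * x**2).sum() - mean**2)

print(position_stats(rho, x))      # (500.0, 500.0)
print(position_stats(rho_dec, x))  # (500.0, 500.0): the spread survives
```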
 
  • #343
stevendaryl said:
I think you misunderstood what I said. There are two states involved here: the state of the measuring device, and the state of the system being measured. The first has a meaning that does not depend on measurements by yet other measuring devices.
Well, your formulation invited the misunderstanding. Anyway, whether the state of a single electron has a meaning at all is one of the controversial points in the foundations. Generally agreed is only that an ensemble of many equally prepared electrons has a state. And this automatically leads to a probabilistic framework.
 
  • Likes: vanhees71
  • #345
A. Neumaier said:
over a vast amount of microscopic observables, not states!
true! I've corrected it.
 
  • #346
  • #347
stevendaryl said:
That's just incorrect. The law of large numbers is not sufficient to explain this effect. You are mistaken.

I think that this might be an insurmountable obstacle to reaching a conclusion, because to me, your [A. Neumaier's] efforts to prove that macroscopic objects have definite positions (give or take a small standard deviation) assume your conclusion. It's circular reasoning. You want to launch into the use of density matrices of a particular form that only makes sense under the assumption that you're trying to prove.

On the other side, I think I could demonstrate definitively that you are wrong by considering the pure state of an isolated system that includes macroscopic objects. You would refuse to even look at such an argument, because you insist that macroscopic systems can't have pure states.

So that's an impasse. You reject out of hand the reasoning that would prove you wrong, and I find your reasoning to be circular.
 
  • #348
But could we not consider a variant of the cat paradox where a brick sits on a trap door, and falls its full height if a nucleus decays? Then decoherence would make it such that we never observe the brick in a superposition, but the two possibilities do still occur in experiments, so we do get a large standard deviation in the brick's location.
 
  • #349
stevendaryl said:
You want to launch into the use of density matrices of a particular form that only makes sense under the assumption that you're trying to prove.
It is legitimate to start with different basic assumptions on which to erect the edifice of quantum mechanics. The only condition is that the basic assumptions are consistent with experiment. Everything else is a matter of choice, and the quality of the choice is measured by the conclusions one can draw from it and how well they fit the real world.

You start with the traditional textbook assumptions and get into all the trouble with meaningless superpositions of macroscopic objects, for which nobody has been able to give a meaning in reality. Note that the superposition principle is already known to be inconsistent with physics, as it leads to an immediate contradiction with rotations when you superimpose a spin-0 and a spin-1/2 state. (Try to rotate by ##2\pi## and observe what happens to inner products of two arbitrary such superpositions.)
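
To unpack the parenthetical (my expansion, using the standard transformation properties): under a rotation by ##2\pi## a spin-0 component is unchanged while a spin-1/2 component picks up a factor ##-1##, so ##R(2\pi)|\psi\rangle = a|0\rangle - b|\tfrac{1}{2}\rangle## for ##|\psi\rangle = a|0\rangle + b|\tfrac{1}{2}\rangle##. Then
$$\langle\psi|R(2\pi)|\psi\rangle = |a|^2 - |b|^2 \neq 1 \quad\text{whenever } ab \neq 0,$$
so the rotated state is not even a phase multiple of the original, although a full ##2\pi## rotation should change nothing physical. The usual resolution is a superselection rule forbidding superpositions of integral and half-integral spin.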

stevendaryl said:
I find your reasoning to be circular.

I start with the algebraic approach to quantum mechanics, where quantities are functions of elements of a ##C^*## algebra (e.g. the algebra of linear operators on a Schwartz space, which encodes Dirac's bra-ket setting) and states are positive linear functionals - the natural analogue of what one has in classical stochastic physics. This is a far better starting point than the unrealistic axioms used in introductory textbooks. Nothing is circular in this approach.

In the algebraic approach there is no superposition principle, and it naturally accounts for superselection sectors such as that for integral/half-integral spin. Moreover, it gives a far simpler route to statistical mechanics than the standard approach. Finally, and most importantly, it leads to exactly the same predictions as the shut-up-and-calculate part of quantum mechanics and hence is a fully trustworthy foundation.

So my approach cannot be proved wrong, while the superposition principle is proved wrong by the existence of spin 1/2.
 
Last edited:
  • Likes: vanhees71
  • #350
stevendaryl said:
The law of large numbers is not sufficient to explain this effect.
If ##A_1,\ldots,A_N## are uncorrelated operators with the same standard deviation ##\sigma##, then ##X:=N^{-1}(A_1+\ldots+A_N)## has standard deviation ##N^{-1/2}\sigma##, as a simple calculation reveals. The arguments in statistical mechanics are similar, except that they account (in many important instances) for the typical correlations between the ##A_k##.

Please point out where the argument is faulty. If you succeed, all books on statistical mechanics will have to be rewritten to account for your revolutionary insight.
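
For what it's worth, a quick numerical check of the ##N^{-1/2}## scaling (my own sketch, assuming numpy; the uncorrelated operators ##A_k## are replaced by uncorrelated classical random variables, which suffices for the standard-deviation bookkeeping):

```python
import numpy as np

# Empirical standard deviation of X = (A_1 + ... + A_N)/N for
# uncorrelated A_k with common standard deviation sigma = 1,
# compared with the predicted sigma/sqrt(N).
rng = np.random.default_rng(1)
sigma, trials = 1.0, 100
for N in (10, 1_000, 100_000):
    A = rng.normal(0.0, sigma, size=(trials, N))
    X = A.mean(axis=1)
    print(N, round(X.std(), 5), round(sigma/np.sqrt(N), 5))
```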
 
Last edited:
  • Likes: vanhees71