When Did Einstein First Encounter Quantum Entanglement?

thenewmans
To me, the concept of entanglement sounds like an epiphany. I’m sure I can’t find one specific moment for it but I’d like to get closer. And I’d like your help. So far, I see that Einstein had issues with Born’s matrix mechanics (1925) among other things, which heated up the Bohr-Einstein debates. A decade passed before the EPR paradox. So somewhere in there, I assume that Einstein noticed that QM does not allow the spin to be set at the emitter. I’m not sure how he recognized that.

Even that might not go back far enough for me. Somebody figured out 2 things. 1 – Given the right conditions, two particles must have opposite spin. I assume that comes from the conservation of angular momentum. QM contains the correspondence principle, which means that any QM prediction must average out at the macro scale and match the principles of classical physics. But I don’t quite get how that means two particles must have opposite spin.

2 – These two particles can only be described by a single function (wave function). My guess is that separate formulas would violate Heisenberg’s non-commutativity rule. But I don’t know how to make that connection.
 
I think it is a pity that no one has replied to this thread, because I too think it is interesting to know who first predicted that particles could be split into sub-particles that have quantum entanglement. It seems as if, at the time the EPR paradox was being formulated, quantum entanglement was a "well-known fact", even though as far as I am aware there were no experiments carried out before that time that proved it was indeed a fact. Does anyone know of any such experiments? Shouldn't whoever predicted quantum entanglement have some historical recognition for this amazingly correct prediction?
 
1 - This is related to the conservation of angular momentum. Given a specific material that is able to emit two photons at once, it is clear that the initial state |atom*> and the final state |atom, 2 photons> must have the same angular momentum. So if L|atom*> = 0, then L|atom, 2 photons> = 0; here * means an excited state.

Using spin is not essential for the EPR argument. It is convenient because both the theoretical explanation and the experimental setup are most transparent if one uses spin, but entanglement can be formulated based on other observables as well.

2 - This is the QM-specific input to EPR: Two indistinguishable bosons / fermions must be symmetrized / anti-symmetrized. That is one main difference between classical and quantum statistics. In classical statistics you count the results for flipping two coins as follows (using H and T for "heads" and "tails"):
|HH>
|HT>
|TH>
|TT>
But in QM you can't say that the first particle has spin up and the second particle has spin down, as you can't distinguish them. There is no additional property telling you which particle is the first one and which is the second one. So instead of four classical states, counting spin up and down (using U and D) results in three states:
|UU>
|DD>
|UD> + |DU> (for bosons)
The last state describes an entangled state where one particle has spin up whereas the other one has spin down - without being able to tell which particle is in which state. This becomes clear only after the measurement of one particle, which fixes the spin of the other particle.

Entanglement for bosons (fermions) is related to the symmetrization (anti-symmetrization) postulate, which says that, given N identical particles, the state does not change (apart from an overall sign in the fermionic case) if one interchanges any two of them. What are the reasons for this symmetrization postulate?
1) you can't distinguish identical particles (so the postulate makes sense theoretically)
2) classical counting would result in wrong predictions (demonstrated empirically in many-particle theory and statistical mechanics)
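
To make the counting above concrete, here is a minimal Python/numpy sketch (my own illustration, not part of the original argument) of the state |UD> + |DU> written in the two-spin product basis: measuring both spins along the same axis never yields UU or DD, yet each particle taken by itself is a fair coin flip:

import numpy as np

# Basis ordering for two spin-1/2 particles: |UU>, |UD>, |DU>, |DD>.
# The symmetrized state from the post above: (|UD> + |DU>) / sqrt(2).
psi = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)

# Joint probabilities for measuring both spins along the same (z) axis.
probs = np.abs(psi) ** 2
for label, p in zip(["UU", "UD", "DU", "DD"], probs):
    print(label, round(float(p), 3))          # UU 0.0, UD 0.5, DU 0.5, DD 0.0

# Marginal probability that particle 1 on its own is found Up: a plain coin flip.
print("P(particle 1 Up) =", round(float(probs[0] + probs[1]), 3))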
 
thenewmans said:
To me, the concept of entanglement sounds like an epiphany. I’m sure I can’t find one specific moment for it but I’d like to get closer.
What tom.stoer said. Plus, since I share your curiosity about entanglement, here's my two cents.

Quantum entanglement has to do with the relationship between two or more quanta that have interacted, or had a common origin, or have had an identical torque imparted to them, etc. The concept that knowledge (or at least the assumption via preparation) of a thusly related quantum system allows more to be predicted wrt the system (via the relationship and applicable conservation laws) than can be predicted via separate consideration of its subsystems was, I would guess, well enough known to physicists prior to EPR -- though it was apparently Schrödinger who coined the term quantum entanglement (or at least included the term entanglement in the quantum physics lexicon via a rough association with its ordinary-language meaning).

Quantum Entanglement and Information (Stanford Encyclopedia of Philosophy)

The interesting thing about entanglement is that it isn't necessary to be able to make accurate predictions about, or to have a complete physical description of, the individual subsystems in order to make accurate statistical predictions regarding the system as a whole. It's only necessary to have a certain knowledge (or set of working assumptions) regarding the nature of the relationship between the subsystems -- and that knowledge is based on the specific preparations involved in a particular setup.

Are entanglement relationships determined locally via interactions, common origins, etc.? As far as I can tell, that seems to be the most rational assumption.

Since a detailed physical account of the evolutionary mechanics involved is lacking, some might take that as an indication that it's possible that the relationships are determined in some way other than via the local interactions described in the preparations and accompanying models, but I can't imagine (at least not rationally, given my current knowledge of this stuff) what that might entail.

For example, consider the 1982 Aspect et al. setup and the physics of the atomic calcium cascades that produced the polarization-entangled photons. Pairing detection attributes that are presumably associated with counter-propagating photons emitted by the same atom (a few nanoseconds apart and at different frequencies, but during the same atomic transition), and applying the law of conservation of angular momentum, seems sufficient to account for the observed correlation between the thusly (i.e., via time of detection) paired joint detection attributes and the associated angular difference of the polarizer settings.

Insofar as quantum nonlocality is synonymous with quantum entanglement, then, wrt quantum experimental phenomena and the orthodox interpretation of standard qm formalism, the term nonlocality doesn't connote ftl propagation or action at a distance (at least that's my current understanding).

Let us know what your historical research on the origin of the concept of quantum entanglement reveals.
 
tom.stoer said:
1 - This is related to the conservation of angular momentum. Given a specific material that is able to emit two photons at once, it is clear that the initial state |atom*> and the final state |atom, 2 photons> must have the same angular momentum. So if L|atom*> = 0, then L|atom, 2 photons> = 0; here * means an excited state.

Using spin is not essential for the EPR argument. It is convenient because both the theoretical explanation and the experimental setup are most transparent if one uses spin, but entanglement can be formulated based on other observables as well.

2 - This is the QM-specific input to EPR: Two indistinguishable bosons / fermions must be symmetrized / anti-symmetrized. That is one main difference between classical and quantum statistics. In classical statistics you count the results for flipping two coins as follows (using H and T for "heads" and "tails"):
|HH>
|HT>
|TH>
|TT>
But in QM you can't say that the first particle has spin up and the second particle has spin down, as you can't distinguish them. There is no additional property telling you which particle is the first one and which is the second one. So instead of four classical states, counting spin up and down (using U and D) results in three states:
|UU>
|DD>
|UD> + |DU> (for bosons)
The last state describes an entangled state where one particle has spin up whereas the other one has spin down - without being able to tell which particle is in which state. This becomes clear only after the measurement of one particle, which fixes the spin of the other particle.

Entanglement for bosons (fermions) is related to the symmetrization (anti-symmetrization) postulate, which says that, given N identical particles, the state does not change (apart from an overall sign in the fermionic case) if one interchanges any two of them. What are the reasons for this symmetrization postulate?
1) you can't distinguish identical particles (so the postulate makes sense theoretically)
2) classical counting would result in wrong predictions (demonstrated empirically in many-particle theory and statistical mechanics)

Thank you! I never understood that it's primarily a matter of probability calculus. Now the idea makes much more sense to me (and differently from before). :smile:
 
In addition to the excellent answers above, I would add an additional point-- one can also imagine entanglement in systems of non-identical particles, like a positron and electron created by two photons. It is really the conservation law, not the statistics, that would matter in that situation. So I would say that even more fundamentally than statistics, we have the concept of information, and the constraints that information must follow. Indeed, I think it would be fair to say that physics was never the laws of behavior of matter, it was always the laws of the behavior of information.

We might even notice that perhaps the most amazing and insightful QM experiment you can do is one that is about as simple as you could possibly imagine-- put a single particle in the ground state of a box (infinite square well), and shine a very bright light on one half of the box. Then separate out all the instances where the light does not scatter off the particle. You have a class of situations where there was no interaction, yet the wave function of the particle is not the ground state any more (it is restricted to half the box). So the particle has acquired energy, or at least, the information we have about the particle must now accommodate a higher energy expectation value than it did before. How did that energy get into the particle, if it didn't interact with the light?
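
If you want to see rough numbers behind this, here is a minimal Python/numpy sketch (my own construction, not part of the post): it takes the ground state of an infinite square well with hbar = m = L = 1, suppresses the right half with a smooth window (a stand-in for the idealized "no scatter" selection; a perfectly sharp cut would make the energy diverge), renormalizes, and re-expands the result in the well's energy eigenbasis. The energy expectation value comes out well above the ground-state energy even though nothing ever hit the particle:

import numpy as np

# Infinite square well on [0, L] with hbar = m = L = 1.  Take the ground state,
# suppress the right half with a smooth window, renormalize, and expand in the
# energy eigenbasis to get the new energy expectation value.
hbar = m = L = 1.0
x = np.linspace(0.0, L, 4001)
dx = x[1] - x[0]

def phi(n):                                    # energy eigenfunctions of the well
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def E(n):                                      # corresponding eigenenergies
    return (n * np.pi * hbar) ** 2 / (2.0 * m * L ** 2)

window = 1.0 / (1.0 + np.exp((x - L / 2) / (0.02 * L)))    # ~1 on the left half, ~0 on the right
psi = phi(1) * window
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)              # renormalize

nmax = 200
c = np.array([np.sum(phi(n) * psi) * dx for n in range(1, nmax + 1)])
E_new = np.sum(np.abs(c) ** 2 * np.array([E(n) for n in range(1, nmax + 1)]))

print("ground-state energy E_1 =", round(float(E(1)), 2))
print("<E> after the restriction =", round(float(E_new), 2))   # noticeably larger than E_1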

I would say the answer to that is that energy is not so ruled by the logic of interactions as we might think, instead it is ruled by the logic of consistency with information. There is energy associated with changes of information, and it is not always necessary to be able to identify a specific interaction. That isn't directly related to entanglement, but we can bring in entanglement if we look at two identical electrons in the same atom, and consider the "exchange energy" associated with their indistinguishability. What is the "interaction" that is associated with that energy? There isn't one (you can think of "exchange" as an interaction, but only in the sense of an interaction of information, not matter), the energy is associated with information about the system, not forces within it.

So I think that's the general principle here, we should not start with forces and interactions and try to understand why they give the outcomes they do (entangled or otherwise), we should start with the information, the constraints the system must satisfy, and expect all the physics to emerge from that. Along the way, we may find it useful to organize certain information-affecting processes in terms of "interactions", but we should also not get bothered when we cannot do that-- it was always just a shorthand for information-in-motion, if you will, and the latter is a more general notion than can be framed strictly in terms of interactions, or even interactions plus counting statistics plus conservation laws. We don't build it up from parts like that, the system is a bundle of information, and those didactic pieces we use to talk about it are just conveniences to guide our thinking, and can sometimes lead our intuition astray (as it did to EPR).

Then the next question is, of course, is there some "true set" of information about any system that "nature itself knows", or is that just a convenient construct as well, and in fact there is never anything there but the information that the various different information processors present have access to? In other words, is the very concept of "state" just an effective notion? I believe Bohr held that the answer to that is yes.
 
It is misleading to say that "entanglement is only probability calculus". Entanglement is deeper than probability calculus. It is not directly visible, but one can prove that sufficiently complex quantum systems cannot be modeled via classical probability calculus and standard logic (Boolean algebra).

"We often think that when we have completed our study of one we know all about two, because 'two' is 'one and one.' We forget that we still have to make a study of 'and'."
Arthur Eddington
 
tom.stoer said:
It is misleading to say that "entanglement is only probability calculus". Entanglement is deeper than probability calculus. It is not directly visible, but one can prove that sufficiently complex quantum systems cannot be modeled via classical probability calculus and standard logic (Boolean algebra).
[..]

I did not state that; however, I am now studying papers (one soon to be published) that appear to claim just the contrary of what you claim! I have in mind to bring it up for discussion in the new papers section when it comes out. :smile:
 
harrylin said:
I am now studying papers ... that appear to claim just the contrary of what you claim!
What do you mean by that? It would read something like

"Probability calculus is deeper than entanglement. One can prove that all quantum systems can be modeled via classical probabilty calculus and standard logic."
 
  • #10
tom.stoer said:
What do you mean by that? It would read something like

"Probability calculus is deeper than entanglement. One can prove that all quantum systems can be modeled via classical probabilty calculus and standard logic."
I agree with your position on this. And even if quantum systems could, in principle, be modeled that way, then quantum entanglement, presumably being rooted in the deep reality of nature, and quite beyond our direct apprehension, would still be deeper than any sort of probability calculus that might be devised to account for observed instrumental statistical results.

I think where harrylin is coming from is that certain recent papers have presented some formidable arguments regarding the difference between what experimental violations of Bell inequalities have been historically interpreted to be telling us about nature and what they are actually telling us about nature (e.g., the more or less recent Hess, Michielsen, De Raedt papers). The point he might be missing is that even if Bell's theorem (or GHZ, or Hardy, etc. theorems) aren't telling us anything about nature, it doesn't necessarily follow that local realistic models of quantum entanglement are possible. Afaik, all extant LR models of quantum entanglement that might be taken seriously are questionable wrt their claims to being realistic, or explicitly local, or both.

The interesting thing about the recent criticisms of Bell's theorem is that they seem to be approaching a definitive interpretation of the meaning of Bell's (and other no-LR) theorems. That is, they point to the actual reasons for BI violations and GHZ inconsistencies, etc. And those reasons don't have anything to do with locality or predetermined properties, but are, rather, rooted in the unwarranted associations of the formalisms with an underlying reality. Which is to say that what the theorems are said to assume is not what their formalisms actually assume.
 
  • #11
I would like to add my 2 cents, after the interesting things that have been said already here.

Quantum entanglement is simply a special case of quantum superposition, where the superposition now includes combinations of spatially or otherwise separated systems. Its mystery is not "deeper" than "normal" superposition: it is actually simply a dramatic illustration of the weirdness of superposition all by itself, but which may not be grasped immediately. Quantum entanglement is nothing else but a dramatic confirmation of the "mystery" or the weirdness of the two-slit experiment.

The fundamental idea of quantum theory is "superposition": the given that if |A> is a thinkable state of the system, and |B> is a thinkable state of the system, that |A> + |B> (with complex weights) is ALSO a thinkable state of the system. That idea is so terribly strange and weird, that it is not immediately obvious HOW strange it is. So when people talk about the "wavefunction of an electron" they don't feel dazzled, even though they should. They think of something wavy like an electromagnetic wave or something, while it is already totally bizarre: it is a combination of "the electron is in this point" and "the electron is in that point" AT THE SAME TIME with COMPLEX WEIGHTS in a way which is not a probability ensemble. But you can still calm your fears, and temper your amazement, by erroneously making yourself believe that it is some kind of "wavy field" or some kind of "statistical stuff" (even though if you would continue the logic behind that, you would see that it doesn't work).

And so people "do quantum mechanics" with their minds eased, until they hit ANOTHER application of the superposition principle, the very same superposition principle that gave us wave functions for electrons in the hydrogen atom in QM 101:

|spin at Joe up and spin at Alice down> + |spin at Joe down and spin at Alice up>

with complex coefficients.

And now suddenly, people realize that their "wavy" or "probability ensemble" view on things which didn't work already with the hydrogen atom, STILL fails, but much more obviously now.

So they seem to get the delayed shock of the weirdness of quantum superposition, although they have been using quantum superposition all the time since they looked at the hydrogen atom, like M. Jourdan who was talking prose all his life without knowing it.
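
As a small numerical footnote to the "complex weights are not a probability ensemble" point (my own sketch, not part of vanesch's post): two superpositions that differ only by a relative phase, and the 50/50 statistical mixture, are indistinguishable if you only ever measure in the up/down basis, but they come apart as soon as you measure along another axis:

import numpy as np

up, down = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)

def rho_pure(psi):                 # density matrix of a normalized pure state
    psi = psi / np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

states = {
    "|U> + |D>":     rho_pure(up + down),
    "|U> + i|D>":    rho_pure(up + 1j * down),
    "50/50 mixture": 0.5 * rho_pure(up) + 0.5 * rho_pure(down),
}

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

for name, rho in states.items():
    expectations = [round(float(np.real(np.trace(rho @ s))), 2) for s in (sx, sy, sz)]
    print(f"{name:14s} <sx>, <sy>, <sz> = {expectations}")
# All three give <sz> = 0 (an even up/down split), but only the superpositions have a
# definite value along x or y -- the relative phase is physically real, unlike in a
# mere probability ensemble.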
 
  • #12
ThomasT said:
even if BI violations (or GHZ, or Hardy, etc. theorems) aren't telling us anything about nature, it doesn't necessarily follow that local realistic models of quantum entanglement are possible[..]

Obviously - that's again a different matter! The claim was that BI violations are telling us something about nature.
 
  • #13
vanesch said:
I would like to add my 2 cents, after the interesting things that have been said already here.

Quantum entanglement is simply a special case of quantum superposition, where the superposition now includes combinations of spatially or otherwise separated systems. Its mystery is not "deeper" than "normal" superposition: it is actually simply a dramatic illustration of the weirdness of superposition all by itself, but which may not be grasped immediately. Quantum entanglement is nothing else but a dramatic confirmation of the "mystery" or the weirdness of the two-slit experiment.

The fundamental idea of quantum theory is "superposition": the given that if |A> is a thinkable state of the system, and |B> is a thinkable state of the system, that |A> + |B> (with complex weights) is ALSO a thinkable state of the system. That idea is so terribly strange and weird, that it is not immediately obvious HOW strange it is. So when people talk about the "wavefunction of an electron" they don't feel dazzled, even though they should. They think of something wavy like an electromagnetic wave or something, while it is already totally bizarre: it is a combination of "the electron is in this point" and "the electron is in that point" AT THE SAME TIME with COMPLEX WEIGHTS in a way which is not a probability ensemble. But you can still calm your fears, and temper your amazement, by erroneously making yourself believe that it is some kind of "wavy field" or some kind of "statistical stuff" (even though if you would continue the logic behind that, you would see that it doesn't work).

And so people "do quantum mechanics" with their minds eased, until they hit ANOTHER application of the superposition principle, the very same superposition principle that gave us wave functions for electrons in the hydrogen atom in QM 101:

|spin at Joe up and spin at Alice down> + |spin at Joe down and spin at Alice up>

with complex coefficients.

And now suddenly, people realize that their "wavy" or "probability ensemble" view on things which didn't work already with the hydrogen atom, STILL fails, but much more obviously now.

So they seem to get the delayed shock of the weirdness of quantum superposition, although they have been using quantum superposition all the time since they looked at the hydrogen atom, like M. Jourdan who was talking prose all his life without knowing it.
Hi vanesch, nice to read your words again. Ok, so quantum superposition doesn't refer to quite the same thing as wave superposition. Is that what you're saying? No problem.

But what then is the essence of entanglement? In my current understanding it has to do with relationships that are identifiable (i.e., will presumably be produced) via preparation procedures.
 
  • #14
harrylin said:
Obviously - that's again a different matter! The claim was that BI violations are telling us something about nature.
Ok, then if you think that they aren't telling us anything about nature, then that's my current understanding also.
 
  • #15
vanesch said:
Its mystery is not "deeper" than "normal" superposition: it is actually simply a dramatic illustration of the weirdness of superposition all by itself, but which may not be grasped immediately.
But "normal" superposition isn't weird. Is it? It seems readily, easily 'grasped'. That is, if the amplitude of wave A produces X and the amplitude of wave B produces Y, then the combined amplitudes, A + B, produce X + Y.

vanesch said:
Quantum entanglement is nothing else but a dramatic confirmation of the "mystery" or the weirdness of the two-slit experiment.
Ok, you lost me here. Please elaborate.

vanesch said:
The fundamental idea of quantum theory is "superposition": the given that if |A> is a thinkable state of the system, and |B> is a thinkable state of the system, that |A> + |B> (with complex weights) is ALSO a thinkable state of the system. That idea is so terribly strange and weird, that it is not immediately obvious HOW strange it is.
You're right, it isn't obvious to me how/why this is a "terribly strange and weird" idea. In fact, I remember from my quantum theory text (long since deceased), Bohm 1950, that a necessary part of any wave theory is that if A is a solution and B is a solution, then A+B is also a solution. So, insofar as qm is a wave theory (à la Schrödinger), then it doesn't seem that strange.

vanesch said:
So when people talk about the "wavefunction of an electron" they don't feel dazzled, even though they should. They think of something wavy like an electromagnetic wave or something, while it is already totally bizarre: it is a combination of "the electron is in this point" and "the electron is in that point" AT THE SAME TIME with COMPLEX WEIGHTS in a way which is not a probability ensemble.
Not sure what you're saying. The thing is, I believe in the wave nature (as well as the particle nature) of ... reality. Quanta, including electrons and photons, ARE waves. And I would venture that their wave nature is better understood than their particle nature. Doesn't everything you know about physics point to a fundamental wave reality?

Anyway, are you going to give us something that we can sink our teeth into wrt an understanding of entanglement or what? If not, then I'll just stick with my current understanding of its nature or essence as written in my first post in this thread.

Who's M. Jourdan?
 
  • #16
Entanglement is how a priori information translates in quantum mechanics.
 
  • #17
ThomasT said:
I think where harrylin is coming from is that certain recent papers have presented some formidable arguments regarding the difference between what experimental violations of Bell inequalities have been historically interpreted to be telling us about nature and what they are actually telling us about nature (e.g., the more or less recent Hess, Michielsen, De Raedt papers).
Can you please summarize in a few sentences?
 
  • #18
Hi !

Of course, it all depends on how you "introduce" quantum mechanics, and what I wanted to point out is that many introductions of quantum theory seem to hide the essential idea of quantum superposition (and its inherent strangeness) by doing things like talking about waves. They're in good company, as many of the historical founders of quantum mechanics did the same. But my personal opinion is that this is just delaying the surprise, and you can't hide it anymore with entanglement - although entanglement is nothing else but a specific application of quantum superposition. In fact Einstein SAW this immediately and used entanglement to *illustrate* the (for him unacceptable) weirdness of quantum superposition in general.

Usually, when you are first exposed to quantum mechanics, you've had courses in classical mechanics, and in electromagnetism. So you know about fields, and you know about particles. And quantum mechanics is often introduced by "associating" a wave to a particle. Since the superposition principle holds for linear wave dynamics, the superposition principle for the "wave equation" in quantum mechanics is not so very surprising. It looks like the linearity of the Maxwell equations. And then you are drawn into the technicalities of the calculations, and you end up doing quantum mechanics, solving the hydrogen atom and so on, while thinking in "classical waves". And at the end of the calculation, you take the Born rule, and produce probabilities. And you feel at ease. More or less. Yes, there is the funny two-slit experiment, but it is "interference", like in optics.

I like Feynman's lectures (3rd volume) a lot, because Feynman explicitly does NOT go that route and confronts people immediately with the fundamental weirdness of QM.

The superposition principle does NOT say that you associate a wave to a particle. The superposition principle says this, and it is profoundly shocking:

if a system can be in a state A, where certain observables take on specific values;
and that system can also be in a state B, where certain observables take on other specific values,

then that system can be considered to be in ANY state c1 |A> + c2 |B>, with c1 and c2 complex weights.

Note that it DOES NOT MEAN that those observables now take on the value c1 a + c2 b or something, no. It means that the system "takes on SIMULTANEOUSLY the values a and b with complex weights", but if you measure them, you get a, or you get b, with probabilities given by |c1|^2 and |c2|^2 respectively.
This by itself would give you a statistical ensemble where the phase of c1 and c2 doesn't matter, but things go further:

certain superpositions yield precise values for (other) observables, which can only have those precise values when we have those superpositions.

And the "ensemble" is dead, we have genuine quantum weirdness.

The big difference between this quantum superposition and "field superposition" is that the *values* of the observables are not simply c1 a + c2 b.

Take an electron. State A is: the electron is on my desk. State B is: the electron is in the dustbin.

The state c1 A + c2 B is NOT: the electron is somewhere in between my desk and the dustbin. No. It means that the electron is "with amplitude c1" on my desk, and "with amplitude c2" in the dustbin. But maybe it has now a well-defined energy which it didn't have in the state "is on my desk" or "is in the dustbin".

That's terribly weird. The electron is SIMULTANEOUSLY and with complex weights "on my desk" and "in the dustbin". If I try TO FIND OUT where the electron is, I will never find the answer "somewhere mixed on my desk and in the dustbin". No, I will get a straight answer: "on my desk" or "in the dustbin". With each a certain probability. But if I don't find out, it is in two places at once.

Now, for a single particle, a general state corresponds to a complex number for each of the potential position states it can be in, which corresponds to a complex number for each point in space, and we think that is a "field". We forget that it is a superposition of individual position states. The calculations proceed as if these were fields, as long as we have one single particle.

You are only hit again by the strangeness of superposition when you consider 2-particle states. They are not "two fields". They are superpositions of all possible position COUPLES, because a priori, the description of a single state of two particles consists of a pair of positions: particle 1 is here, and particle 2 is there.
Possible states of an apple and an orange:

state 1: apple is on my desk, orange is in the dustbin
state 2: apple and orange are on my desk
state 3: apple is in the dustbin, orange is on my desk
state 4: apple and orange are in the dustbin

Applying the superposition principle, we have that our apple and our orange can be in a state given by 4 complex numbers, c1, c2, c3, c4:

c1 | apple is on my desk and orange is in the dustbin> + c2 |state2> + c3 |state 3> + c4 |state 4>

Now, it can be that the system has a well-defined energy only in, say, the states:

|state1> + |state3>

|state1> - |state3>

|state2> + |state4>

|state2> - |state4>

these are entangled states. There's nothing particular here: we simply applied the superposition principle, as we did also for single-particle states: we listed all "observable" states with a well-defined value for a certain set of observables (positions of apple and orange), and then we applied the superposition principle to find ALL allowed states of this system.
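
A quick numerical check of this (my own sketch, treating |desk> and |dustbin> as a two-state basis for each object): write each two-object state as a 2x2 array of coefficients and count its nonzero singular values, i.e. its Schmidt rank. A product ("unentangled") state has rank 1; the four combinations listed above all come out with rank 2:

import numpy as np

d = np.array([1, 0], dtype=complex)     # "on my desk"
b = np.array([0, 1], dtype=complex)     # "in the dustbin"

state1 = np.kron(d, b)                  # apple on desk, orange in dustbin
state2 = np.kron(d, d)                  # both on desk
state3 = np.kron(b, d)                  # apple in dustbin, orange on desk
state4 = np.kron(b, b)                  # both in dustbin

def schmidt_rank(psi, tol=1e-12):
    coeffs = psi.reshape(2, 2)          # rows: apple, columns: orange
    return int(np.sum(np.linalg.svd(coeffs, compute_uv=False) > tol))

tests = {
    "|1> + |3>": state1 + state3,
    "|1> - |3>": state1 - state3,
    "|2> + |4>": state2 + state4,
    "|2> - |4>": state2 - state4,
    "|1> alone": state1,                # a product state, for contrast
}
for name, psi in tests.items():
    print(name, "-> Schmidt rank", schmidt_rank(psi / np.linalg.norm(psi)))
# The four superpositions come out with rank 2 (entangled); |1> alone has rank 1.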

ThomasT said:
But "normal" superposition isn't weird. Is it? It seems readily, easily 'grasped'. That is, if the amplitude of wave A produces X and the amplitude of wave B produces Y, then the combined amplitudes, A + B, produce X + Y.

Yes, for a real field.


Ok, you lost me here. Please elaborate.

The essential point in the two-slit experiment is that the number of electrons arriving at a certain point on the screen is NOT the sum of the number of electrons that went through slit 1 and arrived at that point PLUS the number of electrons that went through slit 2 and arrived at that same point, while on the other hand, if you go and measure right behind the slits, each electron arrives either at slit 1 or at slit 2.

So this means that there is a (superposition) state "goes through slit 1 and goes through slit 2" which is not the same as "goes through slit 1 OR goes through slit 2 with probabilities 50% 50%", because then we would have simply 2 populations of electrons, and the number of electrons at a point on the screen would be the number of electrons at that point from population 1 plus the number of electrons at that point from population 2, which it isn't.

In other words, the two-slit experiment proves that the quantum state |slit 1> + |slit 2> genuinely exists, and is NOT simply an ensemble of electrons "slit 1" and electrons "slit 2".

The point is that with the two-slit experiment, because it is a one-particle system, you can still get away with it by thinking you actually have "waves".
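
A toy calculation of exactly that difference (my own sketch, with each slit idealized as a point source and arbitrary units): the superposition |slit 1> + |slit 2> produces fringes on the screen, while the "50% population 1 plus 50% population 2" model just adds the two one-slit patterns and stays smooth:

import numpy as np

wavelength = 1.0
k = 2 * np.pi / wavelength
slit_sep = 5.0                      # slits sit at x = +/- slit_sep / 2
D = 100.0                           # slit-to-screen distance
x = np.linspace(-40, 40, 9)         # a few sample points on the screen

def amplitude(x_screen, slit_x):    # outgoing wave from one slit
    r = np.sqrt(D ** 2 + (x_screen - slit_x) ** 2)
    return np.exp(1j * k * r) / r

a1 = amplitude(x, -slit_sep / 2)
a2 = amplitude(x, +slit_sep / 2)

superposition = np.abs(a1 + a2) ** 2 / 2            # |slit 1> + |slit 2>
mixture = (np.abs(a1) ** 2 + np.abs(a2) ** 2) / 2   # "slit 1 OR slit 2" populations

for xi, s, m in zip(x, superposition, mixture):
    print(f"x = {xi:6.1f}   superposition = {s:.2e}   mixture = {m:.2e}")
# The mixture column varies smoothly; the superposition column swings above and
# below it from point to point -- those swings are the interference fringes.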

Not sure what you're saying. The thing is, I believe in the wave nature (as well as the particle nature) of ... reality. Quanta, including electrons and photons, ARE waves. And I would venture that their wave nature is better understood than their particle nature. Doesn't everything you know about physics point to a fundamental wave reality?

The point is that if you associate a single wave to each particle, you STILL do not get agreement with quantum mechanics, because of 2-particle states. And then you only see the difference with "entangled states". Entangled states simply illustrate, on the 2 or more particle level, that the TRICK of getting away with superposition on the 1-particle level, namely associating waves with particles, was of limited utility, and that you hit the same difficulty now again.

Who's M. Jourdan?

http://en.wikipedia.org/wiki/Le_Bourgeois_gentilhomme

His philosophy lesson becomes a basic lesson on language in which he is surprised and delighted to learn that he has been speaking "prose" all his life without knowing it.
 
  • #19
Quite a long post so I'll just comment on one part, since I'm not sure if I can make it fit with my own view.

vanesch said:
Take an electron. State A is: the electron is on my desk. State B is: the electron is in the dustbin.

The state c1 A + c2 B is NOT: the electron is somewhere in between my desk and the dustbin. No. It means that the electron is "with amplitude c1" on my desk, and "with amplitude c2" in the dustbin. But maybe it has now a well-defined energy which it didn't have in the state "is on my desk" or "is in the dustbin".

That's terribly weird. The electron is SIMULTANEOUSLY and with complex weights "on my desk" and "in the dustbin". If I try TO FIND OUT where the electron is, I will never find the answer "somewhere mixed on my desk and in the dustbin". No, I will get a straight answer: "on my desk" or "in the dustbin". With each a certain probability. But if I don't find out, it is in two places at once.

I would say that it is possible to view the electron as being in between the two states.

In your example you say that you are measuring in the basis (|bin>, |desk>), and by doing so find that the electron was only ever in one of those two states and never in between. However, we can instead choose to measure an electron in a superposition state, by using a different basis:

|+> = |bin> + |desk> and
|-> = |bin> - |desk>

respectively. In this case we might find the electron in the state, say, |+> 100% of the time, indicating that the electron is really in this state, which is actually in between the two original states!

I think the strangeness of QM that you describe only comes in when you are forcing a measurement in a basis where your measurement basis does not contain the state the electron is actually in. Attempting to put a simple analog to this picture would be to ask whether a grey square is black or white. You're bound to get a strange (and very crudely approximating) answer, which does not give you all information about the square's true color, because it is in fact grey, regardless of what you measure. Instead, asking whether it is grey or anti-grey would give you a more correct answer because your measurement basis now actually contains the particle's true state.


Though I must admit, I haven't heard anyone else describe it like this, so I'm interested to see if people here agree or if I'm missing something in this picture.
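
A tiny Python/numpy sketch of the point above (my own illustration): prepare the electron in |+> = (|bin> + |desk>)/sqrt(2); asking "bin or desk?" gives a random answer, while asking "+ or -?" gives |+> every single time, because that basis contains the state the electron is actually in:

import numpy as np

# The electron really is in |+>; only the choice of question makes it look random.
bin_, desk = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
plus = (bin_ + desk) / np.sqrt(2)
minus = (bin_ - desk) / np.sqrt(2)

psi = plus                                    # the prepared state

def prob(outcome, state):                     # Born-rule probability |<outcome|state>|^2
    return round(float(np.abs(np.vdot(outcome, state)) ** 2), 3)

print("ask 'bin or desk?':", prob(bin_, psi), prob(desk, psi))   # 0.5, 0.5
print("ask '+ or -?':     ", prob(plus, psi), prob(minus, psi))  # 1.0, 0.0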
 
  • #20
vanesch said:
That's terribly weird. The electron is SIMULTANEOUSLY and with complex weights "on my desk" and "in the dustbin". If I try TO FIND OUT where the electron is, I will never find the answer "somewhere mixed on my desk and in the dustbin". No, I will get a straight answer: "on my desk" or "in the dustbin". With each a certain probability. But if I don't find out, it is in two places at once.
Maybe you would be interested in this quote of Feynman if you like him:
"In this example, arrows were multiplied and then added to produce a final arrow (the amplitude for the event), whose square is the probability of the event. It is to be emphasized that no matter how many arrows we draw, add, or multiply, our objective is to calculate a single final arrow for the event. Mistakes are often made by physics students at first because they do not keep this important point in mind. They work for so long analyzing events involving a single photon that they begin to think that the arrow is somehow associated with the photon. But these arrows are probability amplitudes, that give, when squared, the probability of a complete event.
Keeping this principle in mind should help the student avoid being confused by things such as the “reduction of a wave packet” and similar magic."
 
  • #21
Zarqon said:
Quite a long post so I'll just comment on one part, since I'm not sure if I can make it fit with my own view.



I would say that it is possible to view the electron as being in between the two states.

In your example you say that you are measuring in the basis (|bin>, |desk>), and by doing so find that the electron was only ever in one of those two states and never in between. However, we can instead choose to measure an electron in a superposition state, by using a different basis:

|+> = |bin> + |desk> and
|-> = |bin> - |desk>

respectively. In this case we might find the electron in the state, say, |+> 100% of the time, indicating that the electron is really in this state, which is actually in between the two original states!

I mostly agree with what you write. Maybe it is semantics: "in between" or "both at once". I prefer to say "both at once" and even that doesn't capture everything, because it doesn't capture the relative phase.


I think the strangeness of QM that you describe only comes in when you are forcing a measurement in a basis where your measurement basis does not contain the state the electron is actually in. Attempting to put a simple analog to this picture would be to ask whether a grey square is black or white. You're bound to get a strange (and very crudely approximating) answer, which does not give you all information about the square's true color, because it is in fact grey, regardless of what you measure. Instead, asking whether it is grey or anti-grey would give you a more correct answer because your measurement basis now actually contains the particle's true state.

Yes, but that doesn't do away with the strangeness. Maybe I didn't get my point across. I fully agree with what you write, but what I wanted to say is that the weirdness of quantum superposition is often hidden behind the concept of a "wave" so as to make you (erroneously, in my opinion) think that particles have 'waves' associated to them or something, and we then think of that wave as a classical field (like classical electromagnetism). The so-called "wave-particle duality". But this trick only works for systems consisting of one single point particle. Yes, the quantum state of a *single* point particle LOOKS LIKE a wave, that is to say, a complex function over space. So you're "saved from weirdness" but only for a moment. You're saved from weirdness only as long as you're considering single particles.
But you're back again into weirdness from the moment you consider 2 or more particles, because then "entanglement" sets in. You can't save yourself now with "each particle has its wave". If you try that, you now have "superpositions of wave couples", and this time there is no classical concept to save you. You have "entanglement" from the moment that a 2-particle state cannot be written as f1(x1,y1,z1) x f2(x2,y2,z2), that is to say, from the moment that you do not have a "wave for particle 1" and a "wave for particle 2"...

Your example with "grey written as black and white" doesn't quite cut it, because "grey" is an observable state just as well as black or white; it is a state "in between", perfectly observable and (hence) conceivable. However, it doesn't work with something like positions: the superposition between a state "here" and "2 meters further" is NOT the same as the state "1 meter from here", which is an entirely different (and orthogonal) state.
In fact, what comes closer (but isn't the same, we'll come to that) is "50% chance that it is white, and 50% chance that it is black". That STILL isn't "grey".

The point is that for a (limited) set of observables, we were (or we thought we had been) exhaustive. If we talk about positions, every thinkable point in space seems to be exhaustive concerning positions. Superpositions of positions don't seem to make sense, and CERTAINLY aren't "positions in between". Superpositions of positions are VERY WEIRD position states. They are "several positions at once".

Now, you touch upon something very interesting, which is indeed that the state space is spanned by states corresponding to precise values of only a LIMITED set of observables. In our case: positions. If ALL observables had precise values and determined some states, then the superposition principle wouldn't bring in anything worthwhile: it would just make up statistical ensembles. If quantum states were (position-momentum) couples, then superpositions of such states would yield nothing else but a statistical ensemble of classical states. We would never be able to reveal the phase relationships between the terms in a superposition, and the preferred basis would be the basis of classical states. No observable would correspond to another basis.
It is because our basis states are only founded on a limited number of observables, and because there ARE other, incompatible observables with different bases, that the superposition principle actually makes sense. Indeed, the only way to discriminate between a "genuine pure state as a superposition" and a "statistical mixture" is to look at the final state in a different basis from the one the initial state was prepared in.

In the two slit experiment, the different bases are the one generated by the individual slits on one hand, and the one generated by different positions on the screen on the other hand. A position on the screen is a superposition of "pure slit" states, and a "pure slit" state is a superposition of pure screen states.

In spin experiments, we prepare spins along one axis, to measure them along another axis. As long as you don't change basis, you won't notice the difference between a superposition and a statistical mixture.

So you're right that the weirdness comes about from superposition of states of one set of observables which yield well-determined values for other observables.
 
  • #22
This is all good stuff, and I completely agree. What I would add is that I don't think the "wave" concept is really so awful, but one must certainly bear in mind that it is not the whole story, or else one misses why quantum mechanics is not classical waves. And that is the "quantum" in quantum mechanics, which opens up the possibility of higher-level, more holistic, information-- there is a kind of internal information connectedness present in those quanta. I would say the problem with using the classical wave concept is not just that it doesn't work in quantum mechanics, it is that it was never really right in classical fields either.

My problem with how QM is generally taught mirrors what vanesch said, but I might express it slightly differently. We are usually introduced to waves first as a kind of collective oscillation in a medium, so that we think wave attributes are aggregate attributes. We get that so drilled in we begin to equate them. Then when we learn QM, we are told "this is just like waves, except it applies to single quanta also," and that seems real weird-- but only because we already think that waves are aggregate properties of a system!

Had we instead learned that classical waves are a "single thing" that is repeated over and over and woven together in a potentially very subtle way in, say, a sound wave, and that the measurements we are trying to predict using that "single thing" do not actually access the single thing but rather a statistical sample of a vast number of those single things, then we'd be ready to handle quantum mechanics. Granted, there is no need to know this statistical property of, say, a local pressure measurement, when we do classical physics-- we are so deeply invested in treating statistical properties as if they were the "single things" that we are tracking, we forget that we are only dealing in statistical outcomes in the first place.

So what I'm saying is, when we measure the pressure in a sound wave at some place and time, we are not actually measuring the pressure in a single sound wave, because what we should mean by a single sound wave (i.e., not a Fourier mode) does not actually have a pressure-- pressure is an aggregate property from the get-go. Instead, a pressure measurement is a statistical aggregate of the influences that a vast number of single "sound waves" (which we could call phonons) have on our averaging apparatus. If we understand sound waves this way, which is actually much closer to the physical truth, we would be ready for waves in quantum mechanics. We just didn't realize that we had married our notions of aggregate averages to our concept of waves when we first learned waves-- that's the problem, not the wave concept itself.

In other words, we are taught that QM waves are analogous to classical waves, but we should have been taught that classical waves are themselves the analogy-- they are the aggregate versions of the actual wave concept, and the meaning of a "single wave" is never an aggregate concept, it is always an elementary concept. That elementary concept is combined via entanglement, not superposition, but when the correlations are random, the entanglement has no bearing on aggregate averages, becoming instead a superposition (if the phases that survive the projection onto single-particle states are correlated) or a statistical ensemble (if the phases that survive the projection onto single-particle states are random). What gets forgotten in all this is the entanglement-- that was just left out of classical waves because classical waves are thought of simply as aggregates rather than what they really are: entanglements prepared in such a way that the entanglements never matter. In short, we are taught that waves can superpose, but we are not taught that they can entangle, and that is the "weirdness" vanesch is properly bringing out. Entanglement is not an aggregate-average property, it is something you lose when you do aggregate averages because it stores higher-level, more interconnected, information than an aggregate average can extract.
 
  • #23
thenewmans said:
To me, the concept of entanglement sounds like an epiphany. I’m sure I can’t find one specific moment for it but I’d like to get closer. And I’d like your help. So far, I see that Einstein had issues with Born’s matrix mechanics (1925) among other things, which heated up the Bohr-Einstein debates. A decade passed before the EPR paradox. So somewhere in there, I assume that Einstein noticed that QM does not allow the spin to be set at the emitter. I’m not sure how he recognized that.
At that time, discussions revolved around position and momentum measurements, or around when exactly the radioactive decay of a nucleus will happen.
And the EPR paradox was meant to illustrate the weirdness of the uncertainty principle, and for that purpose the entangled state was found.

thenewmans said:
Even that might not go back far enough for me. Somebody figured out 2 things. 1 – Given the right conditions, two particles must have opposite spin. I assume that comes from the conservation of angular momentum. QM contains the correspondence principle, which means that any QM prediction must average out at the macro scale and match the principles of classical physics. But I don’t quite get how that means two particles must have opposite spin.

2 – These two particles can only be described by a single function (wave function). My guess is that separate formulas would violate Heisenberg’s non-commutativity rule. But I don’t know how to make that connection.
These two particles can be described by separate wave functions, but those descriptions are a bit dull: each just gives a 0.5 probability of measuring spin UP and a 0.5 probability of measuring spin DOWN.
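
That "dull" separate description can be made explicit with a short sketch (my own illustration in Python/numpy): take the entangled two-spin state (|UD> + |DU>)/sqrt(2) and trace out the second particle; what is left for the first particle on its own is just the maximally mixed 50/50 state:

import numpy as np

up, down = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
psi = (np.kron(up, down) + np.kron(down, up)) / np.sqrt(2)    # the entangled pair

# Full density matrix, reshaped so the indices read (i1, i2, j1, j2).
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)

# Partial trace over particle 2: the description of particle 1 on its own.
rho1 = np.einsum('ikjk->ij', rho)
print(np.round(rho1.real, 3))    # [[0.5 0. ] [0. 0.5]] -- a plain 50/50 coin, nothing more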
 
  • #24
ThomasT said:
Extended Boole-Bell inequalities applicable to quantum theory
http://arxiv.org/PS_cache/arxiv/pdf/0901/0901.2546v2.pdf

Please do not hijack the thread. This paper should be discussed elsewhere. This thread is about the origins of entanglement concepts, and is not a debate about Bell and whether local realism should be considered viable.
 
  • #25
In "Sneaking a Look at God's cards" Giancarlo Ghirardi says:
... the assertion "the photon is in the superposition |O> + |E>" is logically different from all of the following: "it propagates itself along path O or along path E" or "it follows both O and E" or "it follows other paths."
 
  • #26
ThomasT said:
Yeah that was a mistake. I should have sent it via PM. Can you just delete that post?

I'll delete mine if you delete yours... :smile:
 
  • #27
Thanks vanesch, Ken G, et al. for insightful comments.

So, is the physical essence of the entanglement produced in, say, Aspect 1982 the relationship between counter-propagating photons emitted by the same atom during the same transition interval, and are they related (i.e., entangled) because they were emitted in opposite directions by the same atom during the same transition interval?
 
  • #28
I would say they are entangled by the process that created them, which then forces subsequent information about them to satisfy certain constraints such as counterpropagation. The bottom line is, entanglement is not something you can piece together out of other things that you know about nature, it is something new about nature, that you have to take on its own terms.
 
  • #29
Ken G said:
I would say they are entangled by the process that created them, which then forces subsequent information about them to satisfy certain constraints such as counterpropagation.
So, should I take it that you agree or disagree that the physical (not the qm formal) essence of the entanglement produced in, say, Aspect 1982 is that there is a relationship between counterpropagating photons emitted by the same atom during the same transition interval, and that the reason they're related (i.e., entangled) is because they were emitted in opposite directions by the same atom during the same transition interval?

Ken G said:
The bottom line is, entanglement is not something you can piece together out of other things that you know about nature, it is something new about nature, that you have to take on its own terms.
I'm not sure whether I should agree with this, or many of the other more cryptic and vague pronouncements about entanglement being floated. I'll agree with vanesch (and you?) and others that the qm formal nature of entanglement is somewhat weird. But, from that, it doesn't necessarily follow that nature is weird or that we're prohibited from an understanding of the physical nature of quantum entanglement via inference from what's known (or, at least from, so far, solid assumptions). I think that the physical nature of what's happening in Aspect 1982, and the deep physical meaning of formal quantum entanglement, has to do with relationships between quantum entities such as wrt the paired counterpropagating optical disturbances (apparently emitted by the same atom during the same transition interval, and therefore related via the law of conservation of angular momentum) incident on polarizers during certain intervals in Aspect 1982, because what's known points to and allows that characterization. Are you (and vanesch, et al.) saying that this way of thinking about it is wrong?
 
  • #30
ThomasT said:
So, should I take it that you agree or disagree that the physical (not the qm formal) essence of the entanglement produced in, say, Aspect 1982 is that there is a relationship between counterpropagating photons emitted by the same atom during the same transition interval, and that the reason they're related (i.e., entangled) is because they were emitted in opposite directions by the same atom during the same transition interval?
Almost-- I would say that they are entangled because they were emitted in the same quantum event, and they were emitted in opposite directions because they are entangled. I would not say they are entangled because they were emitted in opposite directions, that seems to reverse the logic, but I don't know if that distinction carries any importance to what you are saying.
ThomasT said:
I think that the physical nature of what's happening in Aspect 1982, and the deep physical meaning of formal quantum entanglement, has to do with relationships between quantum entities such as wrt the paired counterpropagating optical disturbances (apparently emitted by the same atom during the same transition interval, and therefore related via the law of conservation of angular momentum) incident on polarizers during certain intervals in Aspect 1982, because what's known points to and allows that characterization. Are you (and vanesch, et al.) saying that this way of thinking about it is wrong?
No, there's nothing wrong about thinking about entanglement as a relationship-- but that's quite a vague characterization. There's nothing weird about relationships writ large-- the point about entanglement is that it is a weird type of relationship, a type that shows up nowhere else in our experience. To say it is weird is not to say that it is unusual or of secondary importance, it just means we never anticipated it from anything we observe in our daily lives. That's because we never observe quanta-- everything we observe is an aggregate property of very many quanta, which loses any sense of correlation between individual events. The guts of entanglement is a very high-level information of correlations that we never even knew existed when all we saw was averaged over ensemble aggregates.

Perhaps the key point not coming across is the need to contrast quantum entanglement with simply saying that information about one object can give us information about a correlated object, which is like saying that if I'm playing poker and have three kings I know my opponent cannot have two. That doesn't count as "entanglement", though it is certainly a type of "information coupling." We don't see that as weird, it is the type of information coupling that survives classical logic. But there is a deeper type, never dreamt of in classical logic, where information can exist in a way that cannot be "stored locally" in the objects themselves, but instead must include a description of the objects' history in order to be properly specified. This type of information coupling violates the Bell inequality, so that's why we know it goes outside of classical logic about how physical information works. That's all we're saying-- it's not just that QM stretches classical logic, it breaks it.
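
To put a number on "it breaks it" (my own illustrative sketch, not part of the post above): for the singlet state the quantum correlation between spin measurements along directions a and b works out to E(a, b) = -cos(a - b), and the standard CHSH choice of angles gives |S| about 2.83, beyond the bound of 2 that any locally stored assignment of outcomes has to respect:

import numpy as np

up, down = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_along(theta):              # spin component along an axis in the x-z plane
    return np.cos(theta) * sz + np.sin(theta) * sx

def E(theta_a, theta_b):            # correlation <A(a) B(b)> in the singlet state
    op = np.kron(spin_along(theta_a), spin_along(theta_b))
    return float(np.real(np.vdot(singlet, op @ singlet)))

a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4    # the usual CHSH angles
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print("E(a, b) =", round(E(a, b), 3))                # -0.707 = -cos(45 degrees)
print("CHSH S  =", round(S, 3), " (any 'locally stored' model: |S| <= 2)")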
 
  • #31
The origin of the term entanglement is due to Schrödinger. If I recall correctly, the German term Verschränkung conveys the notion of connectedness, but Schrödinger was fluent in English (due to his English grandmother), so I'm sure he carefully chose the English translation.

Erwin Schrödinger. “Die gegenwärtige Situation in der Quantenmechanik.” Die Naturwissenschaften 23 (1935), 807-812, 823-828, 844-849.

Erwin Schrödinger. “Discussion of Probability Relations Between Separated Systems.” Proceedings of the Cambridge Philosophical Society 31 (1935), 555-563.

Erwin Schrödinger. “Probability Relations Between Separated Systems.” Proceedings of the Cambridge Philosophical Society 32 (1936), 446-452.

For a non-technical history see Louisa Gilder's https://www.amazon.com/dp/1400044170/?tag=pfamazon01-20

I agree with vanesch that you cannot avoid the remarkable implications of quantum superpositions. The various interpretations of QM each have unavoidable non-classical features, some really quite bizarre imho; modern approaches invoke even more novel mechanisms, like hiding quantum weirdness behind event horizons or deriving non-locality from a holographic reconstruction. I prefer to just accept that nature is fundamentally probabilistic and that the wave function describes probabilistic information.

btw, nice to see vanesch here; he has posted many clear and instructive explanations over the years, such as in this old thread: "is EVERYTHING entangled?" :smile:
 
  • #32
vanesch said:
Hi !
Of course, it all depends on ...

OP here. I don’t want y’all to think I’m not paying attention. I have a few conclusions.
1 – I really need to learn more about QM.
2 – I should dig into superposition a bit more.
3 – Wow! That is one impressive post by vanesch! (And nearly down to my level! :wink:)
 
  • #33
Ken G said:
Almost-- I would say that they are entangled because they were emitted in the same quantum event, and they were emitted in opposite directions because they are entangled.
Ok, we agree that they're entangled because they were emitted in the same quantum event.

Ken G said:
I would not say they are entangled because they were emitted in opposite directions, that seems to reverse the logic ...
Counterpropagating photons are a subset of all the photon pairs that are emitted via the cascade process, as in Aspect 1982. Is it that they're emitted in opposite directions during the same quantum transition that makes the law of conservation of angular momentum applicable, and thus renders them entangled in polarization, or are all emitted pairs, no matter what the relative propagation directions of the individual photons that comprise each pair, entangled in polarization as well?

Ken G said:
... but I don't know if that distinction carries any importance to what you are saying.
The import, or point, of what I'm saying is that quantum entanglement is ultimately traceable to some sort of common cause -- whether via mutual interaction of, or common origin of, or the application of an identical torque to, the entangled entities.

The important point (wrt Aspect 1982 anyway) is that the entangled photons are entangled because they were emitted in the same quantum event, and therefore that the physical production of entanglement is a process that conforms to the principle of local causation -- the goal here being to make some progress wrt understanding the physical nature of quantum entanglement, and not be put off by the apparent weirdness of the formal qm account.

Ken G said:
No, there's nothing wrong about thinking about entanglement as a relationship-- but that's quite a vague characterization.
Ok, but it's a start. If I think in terms of the joint polarizer setting as measuring a relationship between the entangled photons, and that this relationship is produced via the emission process, then the observed correlation between joint settings and joint detections isn't surprising or strange, and I don't need to assume that the entangled photons are communicating with each other via some unknown FTL or nonlocal process.

Ken G said:
There's nothing weird about relationships writ large-- the point about entanglement is that it is a weird type of relationship, a type that shows up nowhere else in our experience. To say it is weird is not to say that it is unusual or of secondary importance, it just means we never anticipated it from anything we observe in our daily lives.
Wrt my current understanding, the point about entanglement, ie. its salient feature, is that more can be said about a system, given knowledge (or good assumptions) about the relationship between its subsystems, than can be said about it via considering its subsystems separately. I don't see anything at all weird (in any sense) about this.

Ken G said:
That's because we never observe quanta-- everything we observe is an aggregate property of very many quanta, which loses any sense of correlation between individual events. The guts of entanglement is a very high-level kind of information about correlations that we never even knew existed when all we saw was averaged over ensemble aggregates.
In Aspect 1982, are we observing correlation between individual events, or are we observing correlation between the relationship between joint polarizer settings and the relationship between quanta emitted by the same atom during the same transitional process -- and isn't this correlation observed via ensemble aggregates?

Ken G said:
Perhaps the key point not coming across is the need to contrast quantum entanglement with simply saying that information about one object can give us information about a correlated object, which is like saying that if I'm playing poker and have three kings I know my opponent cannot have two. That doesn't count as "entanglement", though it is certainly a type of "information coupling." We don't see that as weird, it is the type of information coupling that survives classical logic.
We agree on this.

Ken G said:
But there is a deeper type, never dreamt of in classical logic, where information can exist in a way that cannot be "stored locally" in the objects themselves, but instead must include a description of the objects' history in order to be described properly.
But there's nothing weird, or surprising, or new about the notion that a relationship between the motions of two objects that have interacted is the sort of global information that isn't stored by the separate motions of the individual objects considered separately, and can't be extracted via separate observations of the individual objects. It requires a global observational setup (eg., time correlated joint detections) and a global measurement or filtering parameter (eg., the angular difference between polarizer settings, time correlated to the joint detection attributes). Aspect 1982 is measuring a relationship between two relationships. What is it about this that defies understanding via classical logic?

Ken G said:
This type of information coupling violates the Bell inequality ...
Something in the rationale underlying the formulation of BIs is at odds with formal qm and experimental results. Exactly what that involves is an open question still being debated afaik.

Ken G said:
That's all we're saying-- it's not just that QM stretches classical logic, it breaks it.
Why does qm correctly model entanglement? I would say that it's because qm takes into account all of the relevant relationships that produce the joint correlations.
 
  • #34
ThomasT said:
The import, or point, of what I'm saying is that quantum entanglement is ultimately traceable to some sort of common cause -- whether via mutual interaction of, or common origin of, or the application of an identical torque to, the entangled entities...

The problem with this statement is that it is false. Experiments show that particles can be entangled that have never interacted. QM predicts this, but your ideas wouldn't. Also, particles can become entangled after they are detected. Hardly the kind of thing that would happen if there was a common event responsible for entanglement.

Also, I really think you should drop the "counter-propagating" lingo as it really has no place in the discussion. :smile:
 
  • #35
DrChinese said:
[...] Experiments show that particles can be entangled that have never interacted. [..]

That's really surprising... example please! :cool:
 
  • #36
DrChinese said:
Experiments show that particles can be entangled that have never interacted. ...
harrylin said:
That's really surprising... example please! :cool:

I find it surprising and fascinating too. To avoid derailing this thread, which is mostly about the history of entanglement, I have started a new thread on this question here: https://www.physicsforums.com/showthread.php?t=473822
 
  • #37
DrChinese said:
The problem with this statement is that it is false. Experiments show that particles can be entangled that have never interacted. ... Also, particles can become entangled after they are detected.
This is interesting, but whether it precludes a common cause understanding depends on how the entanglements were produced. Any readily available references you can post?

DrChinese said:
Also, I really think you should drop the "counter-propagating" lingo as it really has no place in the discussion. :smile:
Should I stop referring to the Aspect 1982 experiment(s)?

Edit: I just located your Entangled "Frankenstein" Photons paper, which includes references to the experiments you mentioned, and posted the links to them in yuiop's new thread on this.
 
  • #38
ThomasT said:
Counterpropagating photons are a subset of all the photon pairs that are emitted via the cascade process, as in Aspect 1982. Is it that they're emitted in opposite directions during the same quantum transition that makes the law of conservation of angular momentum applicable, and thus renders them entangled in polarization, or are all emitted pairs, no matter what the relative propagation directions of the individual photons that comprise each pair, entangled in polarization as well?
I would imagine the key issue is the conservation laws-- entanglement just means that there is a constraint on the system as a whole that impacts upon the possible outcomes of its various parts. So the entanglement "comes from" the same place as whatever is imposing the conservation law, and exists whenever the conservation law does. In Newtonian physics we can show where the conservation laws come from (usually some form of action/reaction forces), but in quantum mechanics, the conservation laws come from the algebra of the matrix elements that tell us what things can happen. There's probably an even more fundamental source than that, but I don't know if quantum mechanics identifies it, beyond the usual Noether's theorem (conservation laws come from symmetries). So I guess we could say that entanglements also come from symmetries, and the difficulty in breaking them.
ThomasT said:
The import, or point, of what I'm saying is that quantum entanglement is ultimately traceable to some sort of common cause -- whether via mutual interaction of, or common origin of, or the application of an identical torque to, the entangled entities.
I wouldn't say that is or is not useful, because it's the kind of thing that works for each person or not. I would just caution that the language of causation is tricky in quantum mechanics-- things that happen tend to happen because of constructive interference among all the ways they can happen, but shall we say that the happening is caused by constructive interference, or is the constructive interference just our mathematical test that it will in fact happen? I don't really know what a fundamental cause is at the elementary level.
ThomasT said:
Ok, but it's a start. If I think in terms of the joint polarizer setting as measuring a relationship between the entangled photons, and that this relationship is produced via the emission process, then the observed correlation between joint settings and joint detections isn't surprising or strange, and I don't need to assume that the entangled photons are communicating with each other via some unknown FTL or nonlocal process.
I agree there; I have always rejected language suggesting that entanglement involves "communication between the parts"-- instead I would tend to simply say that entanglement is an example of the breakdown of the entire concept that a system is "made of parts." A system is a system, period-- the concept of parts is an approximate notion, largely due to classical experience.
ThomasT said:
Why does qm correctly model entanglement? I would say that it's because qm takes into account all of the relevant relationships that produce the joint correlations.
I'm fine with that.
 
  • #39
Ken G said:
No, there's nothing wrong about thinking about entanglement as a relationship-- but that's quite a vague characterization. There's nothing weird about relationships writ large-- the point about entanglement is that it is a weird type of relationship, a type that shows up nowhere else in our experience. To say it is weird is not to say that it is unusual or of secondary importance, it just means we never anticipated it from anything we observe in our daily lives. That's because we never observe quanta-- everything we observe is an aggregate property of very many quanta, which loses any sense of correlation between individual events. The guts of entanglement is a very high-level kind of information about correlations that we never even knew existed when all we saw was averaged over ensemble aggregates.

Indeed. If you allow me, I will elaborate a bit on this (and I will come back to my favourite statement that the "weirdness of entanglement" is simply the weirdness of superposition, in a dramatic setting where it is harder to sneak out from).

The weirdness of superposition comes about from the DIFFERENCE between "superposition of states" and "statistical mixture of states". If I say: "All of quantum mechanics' bizarreness comes from this single aspect" I think I'm not exaggerating. It is why I find any "information approach" to quantum mechanics pedagogically dangerous, because it is again hiding the essential part.

There is a fundamental difference between:
our system is in the quantum superposition |A> + |B>
and
our system has 50% chance to be in state A, and 50% chance to be in state B.

Very, very often, both concepts are confused, sometimes on purpose, sometimes inadvertently, and this is a pity because you are then missing the essential part.

The reason this confusion arises so often is that *IF YOU ARE GOING TO LOOK AT THE SYSTEM* and you are going to try to find out whether it is in state A or in state B, then the behaviour, the outcomes, of the two statements are identical.
*IF* you are limiting yourself to the "measurement basis" containing A and B states, then there is no observable difference between:
"our system is in quantum superposition |A> + |B>" and "our system has 50% chance to be in state A and 50% chance to be in state B".
All observations will be identical... as long as we remain in the basis (A,B...), and quantum mechanics then reduces to a fancy way of dealing with statistical ensembles of systems.
Whether we consider those probabilities to be "physical" or just due to our "ignorance" doesn't matter.

But.

The superposition |A> + |B> behaves dramatically differently from the mixture 50% A and 50% B when we go to another, incompatible, observable basis. There is NO WAY in which a mixture of 50% A and 50% B can explain the statistics of observation on a superposition |A> + |B> in another basis.

And that is what the 2-slit experiment demonstrates: you cannot consider the particles to be a mixture of 2 populations, one that went through slit 1, and one that went through slit 2, when you look at the interference pattern on the screen. When you only measure directly behind the slits, you are still in the "slit basis" and you can still pretend that you have the same results as if we actually had a statistical mixture of 2 populations: 50% "slit 1" and 50% "slit 2". But when you "change basis" and you go looking at the screen, that doesn't work any more.

In other words, as long as we work in one basis, we can still confuse "superposition" with "statistical mixture". From the moment we change basis, we can't any more, and the weird properties of superposition set in. They are weird exactly because they do NOT correspond to what we would have with a statistical mixture.
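To make that difference concrete, here is a minimal sketch, assuming Python with NumPy (the vectors and variable names are just illustrative choices): in the A/B basis the superposition and the 50/50 mixture give identical statistics, but in a rotated basis they come apart completely.

```python
import numpy as np

# Basis states |A> and |B> as column vectors
A = np.array([1, 0], dtype=complex)
B = np.array([0, 1], dtype=complex)

# The superposition (|A> + |B>)/sqrt(2), written as a density matrix
psi = (A + B) / np.sqrt(2)
rho_superposition = np.outer(psi, psi.conj())

# The 50/50 statistical mixture of |A> and |B>
rho_mixture = 0.5 * np.outer(A, A.conj()) + 0.5 * np.outer(B, B.conj())

# A "rotated" basis: |plus> = (|A>+|B>)/sqrt(2), |minus> = (|A>-|B>)/sqrt(2)
plus = (A + B) / np.sqrt(2)
minus = (A - B) / np.sqrt(2)

def prob(rho, state):
    """Born rule: probability of finding rho in |state>."""
    return round(float(np.real(state.conj() @ rho @ state)), 3)

# In the A/B basis the two descriptions are indistinguishable:
print(prob(rho_superposition, A), prob(rho_mixture, A))         # 0.5 0.5
# In the rotated basis they are not:
print(prob(rho_superposition, plus), prob(rho_mixture, plus))   # 1.0 0.5
print(prob(rho_superposition, minus), prob(rho_mixture, minus)) # 0.0 0.5
```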

And now we come to entanglement, and the difference with statistical correlations.

The funny thing about entanglement is NOT that there are correlations between particles. There's nothing strange with having correlations between particles. Yes, interaction (classical interaction) CAN provide for correlations. If we have balls of different colours, and we cut them in 2, and send the halves to two different places, we won't be surprised that there is a correlation between the colours. When there is half a red ball at Alice's place, there is also half a red ball at Bob's place. We are used to statistical correlations of distant events if they have a common origin.

So the fact that the spins are opposite is nothing special.

If we consider the entangled state:

|spin z up> | spin z down> - |spin z down> |spin z up>

then there's nothing surprising that the spin at Alice is the opposite as the spin at Bob's.

The above superposition (an entanglement, because it is a 2-particle system) is, as long as we keep measuring spins along z, indistinguishable from the normal, classical CORRELATED event set:

50% chance to have the couple (up down) and 50% chance to have the couple (down up).

It is only when we are going to CHANGE BASIS and when we are going to look at the spin correlations with axis in different directions (between them) that the outcomes are NOT compatible any more with a statistical ensemble. (in essence, that's Bell's theorem). Just as in the 2-slit experiment.

We are now again confronted with the fact that a superposition of states is NOT the same as a statistical ensemble of states, but that this difference is only revealed when we change the observation basis away from the one the superposition was written in.

Any process that could make classically a correlation between quantities could eventually also give rise to an entangled state. It is not the correlation of variables by itself that is surprising. We are used to having statistical correlations due to interactions. What is surprising (again) is that we have a superposition of states, which doesn't behave as a statistical ensemble, if we can measure it in a "rotated" basis.

And now the point is that the more complicated your system is, the more involved the entanglement, the harder it is to do an observation in a rotated basis. In fact, from a certain amount of complexity onwards, you do not really practically have access any more to a rotated basis. You are forced to work in a compatible basis with the original one. And when that happens, there IS no observational difference any more between a superposition (a complicated entanglement) and a statistical mixture. You can pretend, from that point onward, for all practical purposes, that your system is now in a statistical mixture. It will lead observationally to the correct results. You won't be able, practically, to do an experiment that contradicts thinking of your system as a classical statistical mixture of basis states. That's the essence of decoherence, and the reason why we are macroscopically only observing "genuine statistical mixtures" and no complicated quantum entanglements.

And it is why entanglement experiments that demonstrate genuine entanglement, by SHOWING that the outcomes differ from what can be explained by a statistical mixture, are difficult, and usually limited to a very small number of system components.

So again: the weird thing is superposition, and its difference with potential statistical mixtures. (the fact that stochastic outcomes of measurements on superpositions cannot be explained by statistical mixtures).

Entanglement is a special kind of superposition, which involves 2 or more ("distant" for more drama) systems, and entanglement's strangeness comes about because of the difference between its results and normal statistical correlations in a statistical mixture, a difference which can only be shown when we measure in a different basis than the one we set up the entanglement in.
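A sketch of the two-particle version of the same point, again assuming Python with NumPy (the axis convention and the names are just illustrative choices): the entangled superposition |up down> - |down up> and the classically correlated 50/50 mixture of (up, down) and (down, up) make identical predictions when both spins are measured along the original z axis, but they disagree as soon as both analyzers are rotated.

```python
import numpy as np

up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

# Entangled superposition (|up down> - |down up>)/sqrt(2)
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)
rho_entangled = np.outer(singlet, singlet.conj())

# Classically correlated mixture: 50% (up, down) and 50% (down, up)
ud, du = np.kron(up, down), np.kron(down, up)
rho_mixture = 0.5 * np.outer(ud, ud.conj()) + 0.5 * np.outer(du, du.conj())

def spin_up_along(theta):
    """'Up' eigenstate along an axis tilted by theta from z, in the x-z plane."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def prob_both_up(rho, theta_a, theta_b):
    """Joint probability that both sides register 'up' along their respective axes."""
    proj = np.kron(spin_up_along(theta_a), spin_up_along(theta_b))
    return round(float(np.real(proj.conj() @ rho @ proj)), 3)

# Both analyzers along the original z axis: the two descriptions agree (both give 0.0)
print(prob_both_up(rho_entangled, 0, 0), prob_both_up(rho_mixture, 0, 0))

# Both analyzers rotated by 90 degrees: they no longer agree
t = np.pi / 2
print(prob_both_up(rho_entangled, t, t))  # 0.0  -- the anti-correlation survives the rotation
print(prob_both_up(rho_mixture, t, t))    # 0.25 -- the mixture is simply 50/50 on each side
```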
 
  • #40
vanesch said:
[..]
The weirdness of superposition comes about from the DIFFERENCE between "superposition of states" and "statistical mixture of states". If I say: "All of quantum mechanics' bizarreness comes from this single aspect" I think I'm not exaggerating. It is why I find any "information approach" to quantum mechanics pedagogically dangerous, because it is again hiding the essential part.

There is a fundamental difference between:
our system is in the quantum superposition |A> + |B>
and
our system has 50% chance to be in state A, and 50% chance to be in state B.
[..]
And that is what the 2-slit experiment demonstrates: you cannot consider the particles to be a mixture of 2 populations, one that went through slit 1, and one that went through slit 2, when you look at the interference pattern on the screen. When you only measure directly behind the slits, you are still in the "slit basis" and you can still pretend that you have the same results as if we actually had a statistical mixture of 2 populations: 50% "slit 1" and 50% "slit 2". But when you "change basis" and you go looking at the screen, that doesn't work any more. [..]

And now we come to entanglement, and the difference with statistical correlations.

The funny thing about entanglement is NOT that there are correlations between particles. There's nothing strange with having correlations between particles. Yes, interaction (classical interaction) CAN provide for correlations. If we have balls of different colours, and we cut them in 2, and send the halves to two different places, we won't be surprised that there is a correlation between the colours. When there is half a red ball at Alice's place, there is also half a red ball at Bob's place. We are used to statistical correlations of distant events if they have a common origin.
[..]
It is only when we are going to CHANGE BASIS and when we are going to look at the spin correlations with axis in different directions (between them) that the outcomes are NOT compatible any more with a statistical ensemble. (in essence, that's Bell's theorem). Just as in the 2-slit experiment.
[..]
Entanglement is a special kind of superposition, which involves 2 or more ("distant" for more drama) systems, and entanglement's strangeness comes about because of the difference between its results and normal statistical correlations in a statistical mixture, a difference which can only be shown when we measure in a different basis than the one we set up the entanglement in.

Thanks for this clear summary, I will ponder it! :smile:
 
  • #41
vanesch said:
Any process that could make classically a correlation between quantities could eventually also give rise to an entangled state. It is not the correlation of variables by itself that is surprising. We are used to having statistical correlations due to interactions. What is surprising (again) is that we have a superposition of states, which doesn't behave as a statistical ensemble, if we can measure it in a "rotated" basis.
Yes, I think that is very profoundly correct. It could be summarized with the remark that the weirdness of dealing with individual quanta, not present when dealing with large aggregates of quanta, is the very concept of a "rotated" basis, or a "complementary" observable. Classically, we can observe everything at once, because the aggregate averages we form don't contradict each other. But individual quanta don't contain that much information projected onto each particle-- the whole point of an elementary wave concept is not that it tells us more about the particle, it's that it tells us no more about the particle than the particle seems to know about itself, when it is not being looked at. And when it is being looked at, we are not looking in some "god's eye" sense, we are doing a very particular kind of looking-- we are applying a measurement basis, as you say, and all the rotated bases we could imagine mean nothing at that point. We never dreamed that choosing an observation basis to obtain detailed information precluded a whole other class of detailed information, because information about aggregates doesn't work that way-- the aggregate average has already truncated the information present so drastically that we completely miss this little complementarity limitation.
vanesch said:
And now the point is that the more complicated your system is, the more involved the entanglement, the harder it is to do an observation in a rotated basis.
Bang on. In another thread, we are discussing the cat paradox, and I claimed that the way the cat paradox is normally expressed is just wrong quantum mechanics, and now you have given me better words to say why: because it pretends that such a rotation of observation basis makes sense on a cat.
 
  • #42
vanesch said:
The superposition |A> + |B> behaves dramatically differently from the mixture 50% A and 50% B when we go to another, incompatible, observable basis. There is NO WAY in which a mixture of 50% A and 50% B can explain the statistics of observation on a superposition |A> + |B> in another basis.
Well, there is.

You just have to allow for the possibility that a second measurement takes place - an interference measurement.
Say you measure the interference of a single H-polarized photon, rotated to a new H' polarization, with respect to a subensemble of V-polarized photons rotated to the same H' polarization. And you filter out only the photons with a certain level of constructive interference, discarding the rest. And of course the photons have to have a phase property for it to make sense to speak about interference.
 
  • #43
zonde said:
Well, there is.

You just have to allow for the possibility that a second measurement takes place - an interference measurement.
Say you measure the interference of a single H-polarized photon, rotated to a new H' polarization, with respect to a subensemble of V-polarized photons rotated to the same H' polarization. And you filter out only the photons with a certain level of constructive interference, discarding the rest. And of course the photons have to have a phase property for it to make sense to speak about interference.

?

You measure photon polarisation twice, is that what you are saying ?
 
  • #44
vanesch said:
?

You measure photon polarisation twice, is that what you are saying ?
No.

You create entangled photons in the H/V basis. Then you measure them in the +45°/-45° basis. This measurement of polarization is completely undetermined, as you have a 50/50 chance that a photon will go down the +45° or the -45° path. But correlations appear in interference measurements between H rotated to +45° and V rotated to +45° at the two sites.
 
  • #45
zonde said:
No.

You create entangled photons in the H/V basis. Then you measure them in the +45°/-45° basis. This measurement of polarization is completely undetermined, as you have a 50/50 chance that a photon will go down the +45° or the -45° path. But correlations appear in interference measurements between H rotated to +45° and V rotated to +45° at the two sites.

Yes, that's correct. In what way is that contradicting my claim that you cannot describe this as a statistical ensemble ?

If you were to consider that your original population of pairs of photons was a statistical ensemble, 50% (H,H) and 50% (V,V), then such a statistical ensemble will NOT give you what you actually measure in the 45/-45 basis, because such a statistical ensemble would give you a totally UNCORRELATED 45/-45 result, while according to QM (as you say), you find perfect correlation in the 45/-45 measurement.

Indeed, the statistical ensemble approach would give you the following:

50% chance that you have a (H,H) pair. The first H impinging on its 45° polarizer has a 50% chance to pass, and the second H impinging on its own 45° polarizer also has a 50% chance to pass, independently.
So here we get a 25% chance each of (pass, pass), (pass, no pass), (no pass, pass), and (no pass, no pass).

Same for the 50% chance that you have a (V,V) pair.

So in total you get:

25% (pass, pass), 25% (no pass, pass), 25% (pass, no pass) and 25% (no pass, no pass).


The quantum superposition approach gives you:

50% chance to have (pass pass), and 50% chance to have (no pass, no pass).

So the statistical ensemble approach, 50% (H,H) and 50% (V,V), does not explain the QM result.
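For what it's worth, here is a minimal numerical check of exactly these numbers, assuming Python with NumPy (the state preparation and the function names are just illustrative choices): the 50% (H,H) / 50% (V,V) ensemble gives the flat 25/25/25/25 breakdown, while the superposition (|HH> + |VV>)/sqrt(2) gives 50% (pass, pass) and 50% (no pass, no pass) with both polarizers at +45°.

```python
import numpy as np

H = np.array([1, 0], dtype=complex)
V = np.array([0, 1], dtype=complex)

def pol(angle_deg):
    """Linear polarization state at the given angle (degrees) from H."""
    t = np.radians(angle_deg)
    return np.cos(t) * H + np.sin(t) * V

# Entangled superposition (|HH> + |VV>)/sqrt(2)
phi = (np.kron(H, H) + np.kron(V, V)) / np.sqrt(2)
rho_entangled = np.outer(phi, phi.conj())

# Statistical ensemble: 50% (H,H) pairs and 50% (V,V) pairs
HH, VV = np.kron(H, H), np.kron(V, V)
rho_ensemble = 0.5 * np.outer(HH, HH.conj()) + 0.5 * np.outer(VV, VV.conj())

def joint_prob(rho, a_deg, b_deg):
    """Probability that photon 1 is found along a_deg and photon 2 along b_deg."""
    proj = np.kron(pol(a_deg), pol(b_deg))
    return round(float(np.real(proj.conj() @ rho @ proj)), 2)

# Both polarizers at +45 degrees; "no pass" corresponds to the -45 outcome
for label, rho in [("superposition", rho_entangled), ("ensemble", rho_ensemble)]:
    print(label,
          "P(pass, pass) =", joint_prob(rho, 45, 45),
          "P(pass, no pass) =", joint_prob(rho, 45, -45))
# superposition: P(pass, pass) = 0.5,  P(pass, no pass) = 0.0
# ensemble:      P(pass, pass) = 0.25, P(pass, no pass) = 0.25
```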
 
  • #46
thenewmans said:
To me, the concept of entanglement sounds like an epiphany. I’m sure I can’t find one specific moment for it but I’d like to get closer. And I’d like your help. So far, I see that Einstein had issues with Born’s matrix mechanics (1925) among other things, which heated up the Bohr Einstein debates. A decade passed before the EPR paradox.


Einstein around 1931.
 
  • #47
vanesch said:
Yes, that's correct. In what way is that contradicting my claim that you cannot describe this as a statistical ensemble?
You are right that I am not talking about a statistical ensemble.
I am talking about, let's say, a "physical ensemble".

But in this particular part of your post that I actually quoted in my post:
vanesch said:
The superposition |A> + |B> behaves dramatically differently from the mixture 50% A and 50% B when we go to another, incompatible, observable basis. There is NO WAY in which a mixture of 50% A and 50% B can explain the statistics of observation on a superposition |A> + |B> in another basis.
you do not speak about a statistical ensemble but rather about a mixture of A and B (I suppose we can say a mixture of (H,H) and (V,V) pairs). So it sounds like you include my "physical ensemble" case too. You have to admit that the word "mixture" does sound physical rather than statistical.
 
  • #48
zonde said:
You are right that I am not talking about a statistical ensemble.
I am talking about, let's say, a "physical ensemble".

I wouldn't know what the difference is. If you have a "statistical" ensemble or mixture of 50% black balls and 50% white balls, how is that different from a "physical ensemble" which contains a well-mixed 5 million white balls and 5 million black balls ?

zonde said:
you do not speak about a statistical ensemble but rather about a mixture of A and B (I suppose we can say a mixture of (H,H) and (V,V) pairs). So it sounds like you include my "physical ensemble" case too. You have to admit that the word "mixture" does sound physical rather than statistical.

Again, what's the difference ?

If we do this experiment with 10 million "events", and we say that they come from about 5 million (H,H) pairs and about 5 million (V,V) pairs (sent out randomly by the source) ; or we say that we have 10 million events drawn from a statistical mixture of (H,H) pairs and (V,V) pairs in 50% - 50% ratio, what's the difference ?
 
  • #49
In this series of Quantum Mechanics videos, the lecturer discusses entanglement and how it is an exclusive relationship. For example, for 'entangled total spin zero particles', one particle must be pointing up, and the other particle down. These particles cannot be entangled with any other particles, according to the lecturer.
But what about three-particle entanglement? Or is that different, seeing that the particles in that case need not have total spin zero? And what about particles that have never interacted - aren't they really entangled with every other particle in the world too?

So, can one particle be entangled with every other particle in the universe, or not?
 
  • #50
I think the lecturer must have meant "cannot be entangled with another particle if we are to use this analysis," rather than "it is impossible for further entanglements to exist." In principle, particles are vastly mutually entangled, including the fact that many are indistinguishable in the first place (like all electrons, etc.). But physics is not about what is, it is about how we can treat what is and get the right answers, to within some desired precision. In practice, we can find situations where entanglements are vastly unimportant, or we can find situations where simple entanglements matter but more complicated ones don't. Physics is very much about building up to the complex from the simple, and that it works at all says something about what a tiny fraction of the information the universe encodes is actually "active" in determining the outcomes of our experiments.
 