Clarifying causes of correlations

  • Thread starter vanhees71

vanhees71

Science Advisor
Insights Author
Gold Member
12,220
4,581
[Moderator's note: This thread is spun off from a previous thread since it deals with a separate topic.]

Yes, but the point is that for many people the current account in QM is missing the "heated nitroglycerine explodes and the explosion collapses the wall" part.

This is even formally the case, where the QM correlations violate the Reichenbach principle of a common cause. There's no event you can condition on that removes the correlations, which is taken as typical of an "explanation" in statistics.
Hm, I'm not sure whether there's an ab-initio calculation leading to the chemical properties of nitroglycerine, but it may well be doable if needed (although it's tough; accurately describing the phases of water took decades!).

I don't understand what you mean by: "There's no event you can condition on that removes the correlations". On the contrary, it's not so easy to keep the correlations, i.e., the entanglement. Quite small disturbances "from the environment" destroy the entanglement. It's hard to prevent decoherence, to the dismay of all who take up the challenge of constructing multi-qubit quantum computers.

I've no clue about Reichenbach's ideas. The little I've read from him has rather led to a further strengthening of my prejudices against the value of "philosophy of physics" ;-)). The cause for the correlations described by entanglement is the preparation of a system in the entangled state. As the many very accurate Bell tests show, it is entirely possible to prepare various kinds of entangled states, like biphotons from parametric downconversion, and the preparation procedure is the cause of the entanglement and of the observable stronger-than-classical correlations described by it.
 

DarMM

Science Advisor
Gold Member
1,433
622
I don't understand what you mean by: "There's no event you can condition on that removes the correlations". On the contrary, it's not so easy to keep the correlations, i.e., the entanglement. Quite small disturbances "from the environment" destroy the entanglement. It's hard to prevent decoherence, to the dismay of all who take up the challenge of constructing multi-qubit quantum computers.
Certainly, but this is not what this removal refers to. It's a notion in probability theory, not a physical process. If there is an event ##E## such that conditioning on it removes the correlation between ##A## and ##B##, then ##E## can be taken as a cause of the correlation between ##A## and ##B##. If you read expositions of it, it is simple enough and not really a "philosophical" notion.
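This conditioning notion is easy to see in a toy simulation (a made-up probabilistic sketch, nothing from Reichenbach's own writings): two binary variables that each noisily copy a common cause ##E## are strongly correlated, but the correlation vanishes once we condition on ##E##.

```python
# Toy illustration of the common-cause criterion: A and B are each
# noisy copies of a shared cause E, so they are correlated; within
# each fixed value of E they are independent, so conditioning on E
# removes the correlation.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
E = rng.integers(0, 2, n)                     # common cause: fair coin
A = np.where(rng.random(n) < 0.9, E, 1 - E)   # A copies E 90% of the time
B = np.where(rng.random(n) < 0.9, E, 1 - E)   # B independently copies E

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

print(corr(A, B))                   # clearly nonzero, about 0.64
print(corr(A[E == 0], B[E == 0]))   # approximately 0
print(corr(A[E == 1], B[E == 1]))   # approximately 0
```

Here ##E## qualifies as a cause of the ##A##-##B## correlation in exactly the sense described above: the correlation disappears on conditioning.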

Let me make this simpler. What does the violation of the CHSH inequalities mean to you? What's the physical reason for their violation?
 

vanhees71

Certainly, but this is not what this removal refers to. It's a notion in probability theory, not a physical process. If there is an event ##E## such that conditioning on it removes the correlation between ##A## and ##B##, then ##E## can be taken as a cause of the correlation between ##A## and ##B##. If you read expositions of it, it is simple enough and not really a "philosophical" notion.

Let me make this simpler. What does the violation of the CHSH inequalities mean to you? What's the physical reason for their violation?
The entire Bell analysis, of which the violation of the CHSH inequality is one specific manifestation, tells me that QT (in the minimal statistical interpretation) is correct and that local deterministic theories do not describe nature correctly. The deterministic world view is simply a prejudice of our so-called "common sense", which is trained on the apparently deterministic behavior of macroscopic systems, which is just due to our not being able to observe fine enough microscopic details of such systems.
 

DarMM

Okay that makes sense. Just to push it a little further.

The deterministic world view is simply a prejudice of our so-called "common sense", which is trained on the apparently deterministic behavior of macroscopic systems, which is just due to our not being able to observe fine enough microscopic details of such systems.
However, indeterminism alone isn't quite enough. Two random particles, i.e. ones whose spin is stochastically generated with no cause, still won't violate the CHSH inequalities. What I mean is: imagine spin vectors ##\overrightarrow{S}_1## and ##\overrightarrow{S}_2## that are generated upon measurement. No matter what kind of distribution you have over the space ##(\overrightarrow{S}_1,\overrightarrow{S}_2)##, you won't violate the CHSH inequalities.

Of course this is because in QM, if we measure ##S_z##, only ##S_z## is randomly generated (given your statistical view), not the whole spin vector ##\overrightarrow{S}##. So we require not just randomness, but contextuality. The CHSH violations come from the fact that ##S_x## and ##S_z## can't be considered to come from even a random ##\overrightarrow{S}##; they live in two separate probability spaces. Only the quantity I measure is randomly generated, with no requirement to be compatible with a whole spin vector.
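As a numerical aside (a toy local hidden-variable model, not anything from the QM formalism; the model and angles are made up for illustration): however the shared random variable is distributed, the CHSH combination of correlators can at best saturate the local bound of 2.

```python
# Toy local model: a shared random angle lam fixes both +/-1 outcomes.
# Whatever distribution the hidden variable has, the CHSH combination
# stays within |S| <= 2 (the quantum value is 2*sqrt(2)).
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
lam = rng.uniform(0.0, 2 * np.pi, n)   # shared hidden variable

def A(angle):   # Alice's +/-1 outcome at setting `angle`
    return np.sign(np.cos(angle - lam))

def B(angle):   # Bob's +/-1 outcome (anticorrelated, singlet-like)
    return -np.sign(np.cos(angle - lam))

def E(a, b):    # correlator estimated from the shared sample
    return np.mean(A(a) * B(b))

a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # 2.0: saturates, but never exceeds, the local bound
```

With these (otherwise optimal) angles the model exactly saturates the bound; no choice of distribution over the hidden variable pushes it past 2.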

What do you think of this? What is the reason for it?

Would it be our "common sense" prejudice of assuming counterfactuals, i.e. that there are values for the things I didn't measure?
 

vanhees71

This is not precise enough language. What do you mean by "only ##S_z## is randomly generated"?

There are two distinct things you can do to a system with regard to an observable: (a) you can prepare it in a state such that the observable has a determined value. The possible determined values are the eigenvalues of the associated self-adjoint operator. If ##|\alpha,s_z \rangle## are a complete set of eigenvectors of ##\hat{S}_z## with eigenvalues ##s_z## (##s_z \in \{-s,-s+1,\ldots,s \}## for a system with spin ##s \in \{0,1/2,1,\ldots \}##), then such a state is represented by a statistical operator of the form
$$\hat{\rho}_{s_z}=\sum_{\alpha,\alpha'} \rho_{\alpha,\alpha'} |\alpha,s_z \rangle \langle \alpha',s_z|.$$

(b) you can measure the system's spin component ##S_z##. If it is prepared in an arbitrary state, represented by a statistical operator ##\hat{\rho}##, the probability to find each of the possible values ##s_z## is given by the (generalized) Born rule
$$P(s_z)=\sum_{\alpha} \langle \alpha,s_z|\hat{\rho}|\alpha,s_z \rangle.$$

The value of ##S_z## is determined to have the value ##s_z## if and only if ##P(s_z)=1## which necessarily implies that the statistical operator is of the specific form ##\hat{\rho}_{s_z}##.

Observables which are not determined do not have a determined value (that's why they are called not determined), and all you know about the outcomes of measurements are the probabilities according to the Born rule quoted above.
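A minimal numerical check of this Born-rule statement for spin 1/2 (the state choices here are just illustrative):

```python
# Born rule P(s_z) = <s_z| rho |s_z> for a spin-1/2 particle.
import numpy as np

up = np.array([1, 0], dtype=complex)      # |s_z = +1/2>
down = np.array([0, 1], dtype=complex)    # |s_z = -1/2>
plus_x = (up + down) / np.sqrt(2)         # S_x = +1/2 eigenstate

rho = np.outer(plus_x, plus_x.conj())     # statistical operator

P_up = np.real(up.conj() @ rho @ up)
P_down = np.real(down.conj() @ rho @ down)
print(P_up, P_down)   # both approximately 0.5: S_z is not determined

# preparing instead in the S_z eigenstate gives probability 1
rho_det = np.outer(up, up.conj())
print(np.real(up.conj() @ rho_det @ up))  # 1.0: S_z is determined
```

In the second case ##\hat{\rho}## has exactly the form ##\hat{\rho}_{s_z}## above, and ##P(s_z)=1## as stated.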
 

DarMM

This is not precise enough language. What do you mean by "only ##S_z## is randomly generated"?
I'm trying to understand your view and I know how the Born rule, eigenstates and the basic operations of QM work.

What I'm saying is that fundamental randomness or lack of determinism alone is not enough to break the CHSH inequality. No probability distribution over ##(\overrightarrow{S}_1,\overrightarrow{S}_2)## can. You have to consider the distribution for ##S_z## to be separate from that for ##S_x##, so that they don't occur in the same sample space. Then each of the measurement pairs ##S_a S_b, S_b S_c, S_c S_d, S_d S_a## occurs in a separate sample space. (With ##a,b,c,d## being spin angles.)

This is what makes quantum theory able to break the CHSH inequalities, as explicated in the classic papers of Landau and Tsirelson: the lack of a common sample space for ##S_a S_b## and ##S_b S_c##, etc. What does it mean in your view?
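For comparison, the quantum prediction itself is a short computation (a standard singlet-state calculation, with spin directions in the x-z plane and the usual optimal angles):

```python
# Singlet-state CHSH correlators computed directly from the formalism:
# the result is the Tsirelson value 2*sqrt(2), beyond the bound of 2
# obeyed by any single-sample-space model.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def spin(theta):   # spin observable along angle theta in the x-z plane
    return np.cos(theta) * sz + np.sin(theta) * sx

def E(a, b):       # correlator <psi| S(a) (x) S(b) |psi> = -cos(a - b)
    op = np.kron(spin(a), spin(b))
    return np.real(singlet.conj() @ op @ singlet)

a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))      # 2*sqrt(2), approximately 2.828
```

The four correlators are estimated from four mutually incompatible measurement arrangements, which is exactly the "no common sample space" point.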
 

vanhees71

I'm trying to understand your view and I know how the Born rule, eigenstates and the basic operations of QM work.

What I'm saying is that fundamental randomness or lack of determinism alone is not enough to break the CHSH inequality. No probability distribution over ##(\overrightarrow{S}_1,\overrightarrow{S}_2)## can. You have to consider the distribution for ##S_z## to be separate from that for ##S_x##, so that they don't occur in the same sample space. Then each of the measurement pairs ##S_a S_b, S_b S_c, S_c S_d, S_d S_a## occurs in a separate sample space. (With ##a,b,c,d## being spin angles.)

This is what makes quantum theory able to break the CHSH inequalities, as explicated in the classic papers of Landau and Tsirelson: the lack of a common sample space for ##S_a S_b## and ##S_b S_c##, etc. What does it mean in your view?
I'm pretty traditional in my views. I think the minimal statistical interpretation comes closest to my own views, which means QT gives probabilistic descriptions about objective observations of nature.

I guess what you mean by "lack of a common sample space" is the basic fact that within QT there are incompatible observables like the spin components ##S_x## and ##S_z## (or any other non-collinear directions). This simply implies that in general there are no common eigenstates of the associated representing operators and thus in general you cannot prepare a system such that both observables take determined values. This is what's behind the Heisenberg uncertainty relation.
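This incompatibility can be checked directly in the spin-1/2 matrix representation (a routine computation with the Pauli matrices):

```python
# S_x and S_z do not commute, so no basis diagonalizes both, and in an
# S_z eigenstate the S_x statistics are maximally uncertain.
import numpy as np

Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

comm = Sx @ Sz - Sz @ Sx
print(comm)        # equals -i*S_y, not the zero matrix

# every S_z eigenvector gives 50/50 statistics for S_x:
up = np.array([1, 0], dtype=complex)            # |s_z = +1/2>
vals, vecs = np.linalg.eigh(Sx)                 # S_x eigenbasis
probs = np.abs(vecs.conj().T @ up) ** 2
print(probs)       # approximately [0.5, 0.5]: S_x undetermined
```

Since the commutator is nonzero, no preparation gives both ##S_x## and ##S_z## determined values, which is the content of the uncertainty relation mentioned above.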
 

DarMM

Just to be clear, vanhees71, I'm not trying to find an error in your views, just to find out their exact content on subtler issues, to make these kinds of discussions simpler in future.

I'm pretty traditional in my views. I think the minimal statistical interpretation comes closest to my own views, which means QT gives probabilistic descriptions about objective observations of nature.
I think that would be pretty traditional. From reading your posts, I think your view is a more coherent form of the typical view; e.g., you don't say things like "the particle is in both places at once".

I guess what you mean by "lack of a common sample space" is the basic fact that within QT there are incompatible observables like the spin components ##S_x## and ##S_z## (or any other non-collinear directions).
That's precisely it, it's what permits the CHSH inequalities to be broken.

This simply implies that in general there are no common eigenstates of the associated representing operators and thus in general you cannot prepare a system such that both observables take determined values. This is what's behind the Heisenberg uncertainty relation.
Okay that's clear enough. However, would you say that, because a system is in a state for which
$$P\left(S_z = \frac{1}{2}\right) = 1$$
the particle in fact already has the spin value ##S_z = \frac{1}{2}##, or does it only mean that if you set up an ##S_z## measurement it's guaranteed to produce that specific result?

The answer to this would let me know whether you're closer to Brukner-Zeilinger or to others like Haag.
 

vanhees71

Just to be clear, vanhees71, I'm not trying to find an error in your views, just to find out their exact content on subtler issues, to make these kinds of discussions simpler in future.


I think that would be pretty traditional. From reading your posts, I think your view is a more coherent form of the typical view; e.g., you don't say things like "the particle is in both places at once".


That's precisely it, it's what permits the CHSH inequalities to be broken.


Okay that's clear enough. However, would you say that, because a system is in a state for which
$$P\left(S_z = \frac{1}{2}\right) = 1$$
the particle in fact already has the spin value ##S_z = \frac{1}{2}##, or does it only mean that if you set up an ##S_z## measurement it's guaranteed to produce that specific result?

The answer to this would let me know whether you're closer to Brukner-Zeilinger or to others like Haag.
I think in the scientific realm nobody would make a stupid statement like "a particle is in more than one place at once". That's gibberish from bad popular-science books. Unfortunately, there are only very few really good popular-science books.

To answer your last question: for me your two alternatives are in fact no alternatives. For me an observable is determined by the preparation procedure if there's probability 1 to measure the corresponding value (in your example, ##S_z=1/2##).
 

DarMM

I think in the scientific realm nobody would make a stupid statement like "a particle is in more than one place at once". That's gibberish from bad popular-science books. Unfortunately, there are only very few really good popular-science books.
I've met plenty of people who have.

To answer your last question: for me your two alternatives are in fact no alternatives. For me an observable is determined by the preparation procedure if there's probability 1 to measure the corresponding value (in your example, ##S_z=1/2##).
This doesn't answer the question. Let me try a different phrasing. When you prepare an electron in a state of definite ##L_z## (I'm picking orbital angular momentum for a reason), does that mean the electron really has that property, i.e., that before anybody measures it, it is out there floating in space, spinning about the ##z##-axis with that much angular momentum?
 

vanhees71

This doesn't answer the question. Let me try a different phrasing. When you prepare an electron in a state of definite ##L_z## (I'm picking orbital angular momentum for a reason), does that mean the electron really has that property, i.e., that before anybody measures it, it is out there floating in space, spinning about the ##z##-axis with that much angular momentum?
You are obviously within non-relativistic QT, since otherwise you cannot split orbital and spin angular momentum, but that doesn't change the very simple interpretation within the minimal statistical interpretation:

An observable is by definition determined, i.e., it takes a determined value (necessarily an eigenvalue of the self-adjoint operator describing this observable) if it is prepared such that an accurate measurement of this observable leads with 100% probability to this value as a result of the measurement. Let ##|\alpha,l_z \rangle## be a complete orthonormal system (VONS) of eigenvectors of ##\hat{L}_z## with eigenvalue ##l_z##. If the observable ##L_z## is then determined, the prepared state is of the form
$$\hat{\rho}_{L_z=l_z} = \sum_{\alpha_1,\alpha_2} \rho_{\alpha_1 \alpha_2} |\alpha_1,l_z \rangle \langle \alpha_2,l_z|,$$
where ##\hat{\rho}## fulfills the usual definitions of a statistical operator, i.e., it's a positive semidefinite self-adjoint operator with ##\mathrm{Tr} \hat{\rho}=1##.
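This structure is easy to verify in a small toy example (the dimension and weights below are made up for illustration): any statistical operator supported entirely on the ##l_z## eigenspace has unit trace and gives ##P(l_z)=1##.

```python
# Toy check: rho built only from |alpha_i, l_z><alpha_j, l_z| terms
# (a 2-fold degenerate eigenspace inside a 4-dimensional toy space)
# satisfies Tr(rho) = 1 and P(l_z) = sum_alpha <alpha,l_z|rho|alpha,l_z> = 1.
import numpy as np

dim = 4
basis = np.eye(dim, dtype=complex)
eigenspace = basis[:, :2]          # columns |alpha_1,l_z>, |alpha_2,l_z>

# positive-semidefinite weights rho_{alpha_1 alpha_2}, trace-normalized
w = np.array([[0.7, 0.1], [0.1, 0.3]], dtype=complex)
rho = eigenspace @ w @ eigenspace.conj().T
rho /= np.trace(rho).real

P = np.trace(eigenspace.conj().T @ rho @ eigenspace).real
print(np.trace(rho).real, P)       # 1.0 1.0: L_z is determined
```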
 

DarMM

An observable is by definition determined, i.e., it takes a determined value (necessarily an eigenvalue of the self-adjoint operator describing this observable) if it is prepared such that an accurate measurement of this observable leads with 100% probability to this value as a result of the measurement.
Okay, I'm in no disagreement with this (it's just basic QM), but in your view what is physically going on?

When you prepare something in a state of definite ##L_z = 1##, is the particle actually out there spinning around with that much angular momentum?

I get what the formalism says about the statistics.
 

vanhees71

I'd answer "yes" to your question. If something is observed with 100% probability, then this something is determined. That's what physics is, or what do you mean by "what is physically going on"?

Physics is about objective quantitative observations of nature. Over a long time we have gained some knowledge about some regularities in such observations, i.e., we can, with some certainty, predict what we'll observe next after having observed the current state of some object, because there are some causal laws which seem to hold true, and we are evidently even able to find mathematical models and even quite general theories bringing some order to the plethora of such observations. It's even amazing how few "fundamental laws" are necessary to understand a huge amount of phenomenology. That's what I'd call "physics", and to explain "what's physically going on" means to apply the "fundamental laws" to the phenomenon at hand.
 

DarMM

I'd answer "yes" to your question
Perfect, I think your view is basically that of Brukner and Zeilinger. Thanks for the discussion.
 
