How "spooky action..." may work?

Thread starter: Suppaman
  • #51
rubi said:
No, your construction won't recover the statistics of three non-commuting spin observables. It doesn't even work out mathematically, since there are uncountably many pure states, so your sum will diverge.
Clearly wrong. Learn how to take integrals over probability distributions, say ##\int \rho(x)\, dx##, over uncountably many real numbers (hint: the result is, in this case, 1).
rubi said:
Furthermore, a density matrix doesn't constitute a simplicial state space.
As if I had claimed this. It is an element of the quantum state space. If it is a pure density matrix ##|\psi\rangle\langle\psi|##, then we take it and use it as a vertex of some other, simplicial state space. Ok, let's use a different notation for this element, ##S_{|\psi\rangle\langle\psi|}##. Better?
Then you construct the simplicial state space created by these density matrices as basic states. Each element is a formal linear combination ##\sum_i p_i S_{|\psi\rangle\langle\psi|}##. Feel free to replace the sum (with ##\sum_i p_i = 1##, ##p_i\ge 0##) by an integral (with ##\int_\lambda \rho(\lambda) d\lambda = 1##, ##\rho(\lambda)\ge 0##).

Each element of this simplex then defines a density matrix by a straightforward projection ##\sum_i p_i S_{|\psi\rangle\langle\psi|} \to \sum_i p_i |\psi\rangle\langle\psi|##. So for every element of this simplex there exists a corresponding density matrix.
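A minimal numerical sketch of this projection (Python/NumPy; the two vertex states and weights are my own example):

```python
import numpy as np

# Two pure qubit states used as vertices S_{|psi><psi|} of the simplex
psi1 = np.array([1, 0], dtype=complex)               # |z+>
psi2 = np.array([1, 1], dtype=complex) / np.sqrt(2)  # |x+>

# Their projectors |psi><psi|
P1 = np.outer(psi1, psi1.conj())
P2 = np.outer(psi2, psi2.conj())

# Image of the formal combination sum_i p_i S_{|psi><psi|} under the
# projection: the density matrix sum_i p_i |psi><psi|
p = [0.3, 0.7]
rho = p[0] * P1 + p[1] * P2

# rho is a well-defined density matrix: Hermitian, unit trace, positive
assert np.allclose(rho, rho.conj().T)
assert np.isclose(np.trace(rho).real, 1.0)
assert np.all(np.linalg.eigvalsh(rho) >= -1e-12)
```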

rubi said:
The theorem doesn't make any additional assumptions. The only thing you need is the probability distributions that are predicted by QT for non-commuting spin observables. No ordinary probability theory can recover them. It is a fact, which you will have to accept.
No, I don't have to accept claims which I can easily prove to be wrong, and exist only because you claim it. Give a link to a paper in a peer-reviewed journal where this proof has been made, then we will see what are the additional assumptions.

Self-imposed immaturity would be to believe your claims of existence of some Theorem.
 
  • #52
ddd123 said:
I thought it was always due to a conservation law, not just often, are there exceptions?

Well, entanglement doesn't have to involve conservation laws. Any time you have a pair of systems that are in a superposition of states of the form:

$$|\Psi\rangle = \alpha |\phi_A\rangle |\chi_A\rangle + \beta |\phi_B\rangle |\chi_B\rangle$$

you have entanglement, and measurement of the first system instantly tells you about the second system, no matter how far away. So the idea of entanglement doesn't have anything to do with conservation laws. However, it might be that in practice, the only way to get such an entangled state is through conservation laws: either the total energy, or the total momentum, or the total angular momentum, etc., is fixed, but the partitioning of the quantity between the two systems is different in the two possible elements of the superposition.
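This can be made concrete with a two-qubit sketch (Python/NumPy; using ##|0\rangle, |1\rangle## in place of the ##|\phi\rangle, |\chi\rangle## states, with ##\alpha,\beta## values of my own choosing):

```python
import numpy as np

# |Psi> = alpha |0>|0> + beta |1>|1>
alpha, beta = np.sqrt(0.3), np.sqrt(0.7)
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
Psi = alpha * np.kron(ket0, ket0) + beta * np.kron(ket1, ket1)

# Born-rule probabilities of the four joint outcomes 00, 01, 10, 11
probs = np.abs(Psi) ** 2
assert np.allclose(probs, [0.3, 0.0, 0.0, 0.7])

# The mixed outcomes 01 and 10 never occur: once the first system is
# measured, the second system's outcome is fixed, however far away it is.
```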
 
  • #53
Ilja said:
Clearly wrong. Learn how to take integrals over probability distributions, say ##\int \rho(x)\, dx##, over uncountably many real numbers (hint: the result is, in this case, 1).
No such integral is defined on the space of operators that you want to sum up.

No, I don't have to accept claims which I can easily prove to be wrong, and exist only because you claim it. Give a link to a paper in a peer-reviewed journal where this proof has been made, then we will see what are the additional assumptions.

Self-imposed immaturity would be to believe your claims of existence of some Theorem.
The proof is really trivial high school mathematics and well-known to every researcher in quantum theory, so there is really no point to doubt it. But here you go:
We try to reproduce the statistics of a quantum state ##\left|\Psi\right>=\left|z+\right>##. As everybody knows, the statistics is given by ##P(z+)=1##, ##P(z-)=0##, ##P(x+)=\frac{1}{2}## and ##P(x-)=\frac{1}{2}##.

We want to know whether there is a joint probability distribution ##P(s_x,s_z)## such that ##P(x_\pm)=P(\pm,+)+P(\pm,-)## and ##P(z_\pm) = P(+,\pm)+P(-,\pm)##. In particular, we would have ##0=P(z-)=P(+,-)+P(-,-)##, i.e. ##P(+,-) = - P(-,-)##. Hence, either ##P(+,+)## is negative or ##P(-,-)## is negative or ##P(+,+)=P(-,-)=0##.

In the first two cases, we wouldn't have a probability distribution, since probabilities must be positive. In the third case, the system of equations reduces to ##1=P(-,+)##, ##0=P(+,-)##, ##\frac{1}{2}=P(+,-)## and ##\frac{1}{2}=P(-,+)##. Obviously, this system has no solutions (##\frac{1}{2} \neq 1##!), hence no such joint probability distribution exists and hence no observables modeled as random variables on a probability space can reproduce the statistics of the quantum state ##\left|\Psi\right>##, since if there were such observables, they would have a well-defined joint probability distribution.
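The single-observable statistics quoted at the start of the proof follow directly from the Born rule; a quick numerical check (Python/NumPy, with my own encoding of the spin eigenstates):

```python
import numpy as np

# Spin eigenstates of S_z and S_x for a qubit
z_plus  = np.array([1, 0], dtype=complex)
z_minus = np.array([0, 1], dtype=complex)
x_plus  = np.array([1, 1], dtype=complex) / np.sqrt(2)
x_minus = np.array([1, -1], dtype=complex) / np.sqrt(2)

Psi = z_plus  # the state |Psi> = |z+>

def P(ket):
    """Born rule: probability of outcome `ket` in state Psi."""
    return abs(np.vdot(ket, Psi)) ** 2

assert np.isclose(P(z_plus), 1.0)
assert np.isclose(P(z_minus), 0.0)
assert np.isclose(P(x_plus), 0.5)
assert np.isclose(P(x_minus), 0.5)
```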
 
  • #54
Ilja said:
If this would be a valid causal explanation, in agreement with Reichenbach's common cause and Einstein causality, we could prove Bell's theorem.

I disagree. You could prove Bell's theorem only if both realism and Einstein locality hold simultaneously at the time of measurement ( or at least in the period between first interaction and measurement ), implying that there are some form of local hidden variables. Clearly, this is not the case, as evidenced by experiment and observation. The point is that this does not make any reference to, nor does it require, Reichenbach's principle.

On the other hand, the interaction between the two particles in the past demonstrably is the causal explanation of entanglement, because after the interaction has taken place the total information contained in the composite system is greater than the sum of the information carried by the two subsystems considered in isolation ( I believe the term for this is "subadditivity" ). This would not be the case without the interaction in the past, so causality seems quite obvious to me. The "extra" information contained in the composite system is precisely the correlation implied by entanglement. All of this is of course purely statistical.

Making this mathematically precise isn't so easy for me ( I'm not a scientist ), but my first impulse here would be to write down the reduced density matrices for the two subsystems, and then calculate the entropy from it. The total entropy of the composite system should be less than the sum of the entropies for each particle in isolation, which is a direct causal consequence of their past interaction ( without which we would get an equation, instead of an inequality ).
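The calculation sketched here can be done explicitly. A Python/NumPy sketch, assuming a maximally entangled Bell state as the post-interaction composite (my own choice of example):

```python
import numpy as np

# Maximally entangled state |Psi> = (|00> + |11>)/sqrt(2)
Psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho_AB = np.outer(Psi, Psi.conj())

# Reduced density matrices via partial trace (indices a, b, a', b')
rho4 = rho_AB.reshape(2, 2, 2, 2)
rho_A = np.einsum('abcb->ac', rho4)  # trace out B
rho_B = np.einsum('abad->bd', rho4)  # trace out A

def entropy(rho):
    """von Neumann entropy S = -Tr(rho log2 rho) in bits."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

# The composite is pure (entropy 0), yet each subsystem alone looks
# maximally mixed (1 bit each): S(AB) < S(A) + S(B), a strict
# subadditivity gap that encodes the correlations.
assert np.isclose(entropy(rho_AB), 0.0, atol=1e-9)
assert np.isclose(entropy(rho_A), 1.0)
assert np.isclose(entropy(rho_B), 1.0)
```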
 
  • #55
Markus Hanke said:
On the other hand, the interaction between the two particles in the past demonstrably is the causal explanation of entanglement, because after the interaction has taken place the total information contained in the composite system is greater than the sum of the information carried by the two subsystems considered in isolation ( I believe the term for this is "subadditivity" ). This would not be the case without the interaction in the past, so causality seems quite obvious to me. The "extra" information contained in the composite system is precisely the correlation implied by entanglement. All of this is of course purely statistical.

I'm not sure that entanglement necessarily requires interaction in the past, but I don't know of a counter-example.

But the issue with causality is not really about the creation of the entanglement, but about its consequences.

Alice measures an electron to have spin-up in the z-direction. Because she knows that their particles are entangled, she immediately knows something about Bob's situation--that Bob will not measure spin-up in the z-direction. So think about that statement: "Bob will not measure spin-up in the z-direction". It seems that if the statement is true (in Many-Worlds, it's not true, or no more true than its negation), then we have two possibilities:
  1. It became true at some moment.
  2. It has always been true.
It's hard to make sense of the first statement without nonlocal effects, and it's hard to make sense of the second statement (in light of Bell's inequality) without superdeterminism.
 
  • #56
rubi said:
No such integral is defined on the space of operators that you want to sum up.
You want to tell me that such a space of operators does not allow the definition of a probability measure? Really?
rubi said:
The proof is really trivial high school mathematics and well-known to every researcher in quantum theory, so there is really no point to doubt it. But here you go:
We try to reproduce the statistics of a quantum state ##\left|\Psi\right>=\left|z+\right>##. As everybody knows, the statistics is given by ##P(z+)=1##, ##P(z-)=0##, ##P(x+)=\frac{1}{2}## and ##P(x-)=\frac{1}{2}##.
Fine. That means, this state defines a pure state ##\left|z+\right>\left<z+\right|##, thus, a vertex of the construction of the simplex.
rubi said:
We want to know whether there is a joint probability distribution ##P(s_x,s_z)## such that ...
And where do you think my construction needs any such "joint probability distribution"? You obviously have not understood the construction at all, and invent some meaning for it which requires some "joint probability distribution". Sorry, no. Read the construction again. And recognize that for every state, I repeat every state of the construction, there exists an image defined by the map ##\sum_i p_i S_{|\psi\rangle\langle\psi|} \to \sum_i p_i |\psi\rangle\langle\psi|## which is a well-defined density operator of standard QM. I do not have to construct some "joint probability distributions".

Recommended reading: Holevo, Probabilistic and statistical aspects of quantum theory, North Holland, Amsterdam 1982
Holevo p.34 said:
Theorem 7.1. Any separated statistical model ... is a reduction of a classical model with a restricted class of measurements.
 
  • #57
Ilja said:
And where do you think my construction needs any such "joint probability distribution"?
No, your construction would have to imply the existence of a joint probability distribution, since this existence is automatically guaranteed by probability theory. If I can prove that this joint probability distribution cannot exist, I have also proven that no underlying probability space can exist. You have obviously not understood basic probability theory.

Theorem 7.1. Any separated statistical model ... is a reduction of a classical model with a restricted class of measurements.
This has exactly nothing to do with it.
 
  • #58
Markus Hanke said:
I disagree. You could prove Bell's theorem only if both realism and Einstein locality hold simultaneously at the time of measurement ( or at least in the period between first interaction and measurement ), implying that there are some form of local hidden variables. Clearly, this is not the case, as evidenced by experiment and observation. The point is that this does not make any reference to, nor does it require, Reichenbach's principle.
Maybe you know of a variant which needs what you describe but does not need Reichenbach's common cause. So what? I have already described here a variant which uses, instead, Reichenbach's common cause together with Einstein causality.

And once such a variant of the proof exists, it means that once the result - Bell's inequality - is found to be empirically false, either Reichenbach's common cause principle or Einstein causality must be false.

Markus Hanke said:
On the other hand, the interaction between the two particles in the past demonstrably is the causal explanation of entanglement, because after the interaction has taken place the total information contained in the composite system is greater than the the sum of the information carried by the two subsystems considered in isolation ( I believe the term for this is "subadditivity" ). This would not be the case without the interaction in the past, so causality seems quite obvious to me. The "extra" information contained in the composite system is precisely the correlation implied by entanglement. All of this is of course purely statistical.
I do not doubt that it is possible to redefine "explanation" so that it becomes possible to interpret this text as an explanation. It is not a common cause explanation in agreement with Reichenbach's common cause principle, because I cannot see any evidence of type ##P(AB|C) = P(A|C) P(B|C)##.
 
  • #59
rubi said:
No, your construction would have to imply the existence of a joint probability distribution, since this existence is automatically guaranteed by probability theory. If I can prove that this joint probability distribution cannot exist, I have also proven that no underlying probability space can exist. You have obviously not understood basic probability theory.
Given that my construction is a trivial construction equivalent to quantum theory, you have proven that quantum theory does not exist. Congratulations.
rubi said:
This has exactly nothing to do with it.
It has nothing to do with your "no joint probability distribution" argument, indeed. My construction is his, I simply take all the extremal points of the statistical model, in the case of quantum theory the pure states ##|\psi\rangle\langle\psi|##, and use them as points in the classical phase space. And it has, in the same way as Holevo's construction, nothing to do with your "no joint probability distribution" argument.
 
  • #60
Ilja said:
Given that my construction is a trivial construction equivalent to quantum theory, you have proven that quantum theory does not exist. Congratulations.
Wrong. I have proven that the statistics of the quantum observables ##S_x## and ##S_z## in the state ##\left|\Psi\right>## cannot be modeled by random variables on a probability space. The argument is watertight; otherwise, you would be able to point out an error, rather than refer to your construction, of which you haven't even attempted to prove that it reproduces said statistics.

It has nothing to do with your "no joint probability distribution" argument, indeed. My construction is his, I simply take all the extremal points of the statistical model, in the case of quantum theory the pure states ##|\psi\rangle\langle\psi|##, and use them as points in the classical phase space. And it has, in the same way as Holevo's construction, nothing to do with your "no joint probability distribution" argument.
If you believe that you can reproduce the statistics of the quantum system from my post using a classical probability space, then please just define the objects that are needed, i.e. a probability space ##(\Omega, \Sigma)## with a probability measure ##\mu:\Sigma\rightarrow\mathbb R_+## and the random variables ##S_x, S_z :\Omega\rightarrow\left\{-1,1\right\}## and prove that their probability distributions reproduce the statistics. So far you haven't done this. Just carry out the calculations, please, instead of telling us that it will obviously work out.

Hint: You will not succeed, since I have proven it to be impossible. (If it were possible, I would be able to compute the joint probability distribution of ##S_x## and ##S_z##, which I have shown to not exist.)
 
  • #61
stevendaryl said:
Because she knows that their particles are entangled, she immediately knows something about Bob's situation

Ok, I think I understand your point ( at least I hope so ). However, it seems to me that the knowledge of the particles being entangled is something that has been added into the mix from the "outside". If we assume that the Alice-Bob system ( with their respective particles ) is isolated in space and time, how would Alice by herself know by performing a measurement on her particle whether it is entangled with a distant particle or not ? Only by either having been present during the initial interaction between them ( classical exchange of information across time ), or by subsequently comparing her results with those of Bob - which is a classical information exchange across space. Without either information exchange or prior interaction ( at some point along Alice's world line ), the outcome of both measurements would appear completely random to both Alice and Bob in isolation. In that sense, it is either the initial interaction that caused the correlation, or the act of comparing the measurement outcomes ( which is always a classical channel ). Without either, the concept of entanglement becomes meaningless. Both cases involve some form of non-locality - either non-locality in space, or non-locality in time, so either way Bell's inequalities will be violated, just as we empirically observe.

Or am I seeing this wrong / missing something ? I am still actively learning about this whole subject matter.
 
  • #62
rubi said:
If you believe that you can reproduce the statistics of the quantum system from my post using a classical probability space, then please just define the objects that are needed, i.e. a probability space ##(\Omega, \Sigma)## with a probability measure ##\mu:\Sigma\rightarrow\mathbb R_+## and the random variables ##S_x, S_z :\Omega\rightarrow\left\{-1,1\right\}## and prove that their probability distributions reproduce the statistics. So far you haven't done this. Just carry out the calculations, please, instead of telling us that it will obviously work out.
Ok, let's define the space ##\Omega## as the space consisting of all states ##\omega = |\psi\rangle\langle\psi|##. Then, for a probability measure ##\mu## defined on this space, the expectation value of your ##S_x## or whatever will be ##\int d \mu(|\psi\rangle\langle\psi|) \langle\psi| S_x |\psi\rangle##. As I said, a triviality.
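For the simplest case, a point measure concentrated on the single pure state ##|z+\rangle\langle z+|##, the integral reduces to one term; a sketch (my own minimal example, and note it only checks single-observable expectation values):

```python
import numpy as np

# Spin observables (Pauli matrices, in units of hbar/2)
Sx = np.array([[0, 1], [1, 0]], dtype=complex)
Sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Point measure mu concentrated on |z+><z+|: the integral
#   int d mu(|psi><psi|) <psi| S |psi>
# collapses to a single expectation value
z_plus = np.array([1, 0], dtype=complex)

def E(S):
    return np.vdot(z_plus, S @ z_plus).real

assert np.isclose(E(Sz), 1.0)   # <S_z> = 1 in |z+>
assert np.isclose(E(Sx), 0.0)   # <S_x> = 0 in |z+>
```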

But, I see, you want something more than the space of states being a simplex. You want some ##S_x, S_z :\Omega\rightarrow\left\{-1,1\right\}##. But what does it have to do with the probability space being or not being a simplex?
 
  • #63
Ilja said:
Ok, let's define the space ##\Omega## as the space consisting of all states ##\omega = |\psi\rangle\langle\psi|##. Then, for a probability measure ##\mu## I defined on this space the expectation value of your ##S_x## or whatever will be ##\int d \mu(|\psi\rangle\langle\psi|) \langle\psi| S_x |\psi\rangle##. As I said, a triviality.
In order to have a probability space, you need to give me not an expectation value functional, but the measure ##\mu## itself. You need to tell me how to assign a number ##0\leq\mu(A)\leq 1## to any (measurable) subset ##A\subseteq\Omega## (in such a way that the axioms of probability theory are satisfied). What you have given me is just the expectation value of a mixed state ##Tr(\rho S_x)##.

Given a probability space ##(\Omega,\Sigma,\mu)## and a set of random variables, I can define an expectation value functional on the set of random variables ##E(S)=\int_\Omega S(x)\mathrm d\mu(x)##. However, given an expectation value functional on some set of observables, there isn't in general a probability space, such that the expectation value functional is given by taking the expectation values with respect to the probability space. In particular, in the case of quantum spin 1/2 particles, no expectation value functional can possibly be the expectation value functional of a classical probability theory.

The question is: Do quantum expectation values arise as expectation values of random variables on a classical probability space, i.e. is there a ##\mu_\rho## for each ##\rho## such that ##Tr(\rho X) =\int_\Omega \hat X \mathrm d\mu_\rho##, where ##\hat X## is the corresponding random variable? And the answer is negative. Quantum theory is more general than classical probability theory. It can't be reformulated as a classical probability theory. Hence, concepts from classical probability theory don't generally apply to it.

But, I see, you want something more than the space of states being a simplex. You want some ##S_x, S_z :\Omega\rightarrow\left\{-1,1\right\}##. But what does it have to do with the probability space being or not being a simplex?
You don't have a probability space in the first place! You have a state space together with an expectation value functional. A (classical) probability theory is by definition a probability space ##(\Omega,\Sigma,\mu)## together with random variables defined on it. You want to define something, which isn't a classical probability theory. If you do this, the usual laws of probabilities won't hold anymore, since they are derived from the concept of a probability space (Kolmogorov's axioms). The very formulation of Reichenbach's principle (and also Bell's theorem) crucially depends on all these concepts from probability theory, such as conditional probabilities and the rules how to combine them. And what I'm saying is that no classical probability theory can reproduce the statistics of quantum spin 1/2 particles, hence it is unreasonable to expect concepts from classical probability theory to apply to it.

Since the statistics of quantum theory (which is consistent with experiments) isn't compatible with classical probability theory, classical probability as a foundational concept for physics is dead forever. It is a theorem that it can't be saved. Thus all physical concepts that depend on classical probability theory need to be modified.
 
  • #64
rubi said:
You don't have a probability space in the first place! You have a state space together with an expectation value functional.
Indeed. So let's look what we are talking about, what I have objected to, your quote from #41:
Reichenbach's principle is based on ordinary probability theory, which needs a simplicial state space for its formulation. However, we know that the laws of physics are governed by a theory with a non-simplicial state space (quantum theory), so it would be unreasonable to apply concepts that only make sense in the context of ordinary probability theory to it.
My point is that the non-simplicial state space is not a problem at all. Because of the quite trivial construction from Holevo.

And then I have objected that you started to argue about something completely different.

If you accept that the remark at #41 was misguided, fine. If not, let's forget about the probability spaces and talk about the state space.

Except you start with a completely different argument about the applicability of probability theory. In this case, I would not use the Holevo construction, but, instead, an established hidden variable theory like de Broglie-Bohm, which nicely recovers quantum probabilities and has no problem with Reichenbach's common cause principle. Feel free to tell me that dBB theory uses something different than usual probability theory. Alternatively, we can start to read together Bohm 1952 to see that the physical predictions are equivalent to those of quantum theory, despite its use of classical probability theory.
 
  • #65
Ilja said:
Indeed. So let's look what we are talking about, what I have objected to, your quote from #41:

My point is that the non-simplicial state space is not a problem at all. Because of the quite trivial construction from Holevo.
First of all, the space of density matrices is not at all a simplicial state space, but rather more like a sphere. But the point is that you need a space of states of classical probability distributions, which constitutes a simplex. Do you agree that Reichenbach's principle depends crucially on the concept of conditional probability? The concept of conditional probability is only well-defined in the context of classical probability theory.

Except you start with a completely different argument about the applicability of probability theory. In this case, I would not use the Holevo construction, but, instead, an established hidden variable theory like de Broglie-Bohm, which nicely recovers quantum probabilities and has no problem with Reichenbach's common cause principle. Feel free to tell me that dBB theory uses something different than usual probability theory. Alternatively, we can start to read together Bohm 1952 to see that the physical predictions are equivalent.
No, Bohmian mechanics cannot formulate the spin observables of a particle as random variables on a classical probability space in such a way that it is consistent with the predictions of quantum mechanics. This is a mathematical theorem. If you object to this, then find the error in my argument in post #53. No theory of classical probabilities can reproduce the statistics of quantum spin 1/2 particles. Reichenbach's principle depends on concepts from classical probability theory, hence it depends on concepts that are not generally valid in any theory that describes observed phenomena of nature.
 
  • #66
Rubi, you say that no classical theory can reproduce the statistics of spin. Would that also apply to a four dimensional model of spin projected onto a three dimensional space?
 
  • #67
rubi said:
First of all, the space of density matrices is not at all a simplicial state space, but rather more like a sphere.
Of course, I have never questioned this, the point was that this can be easily modified with Holevo's construction.
rubi said:
But the point is that you need a space of states of classical probability distributions, which constitutes a simplex. Do you agree that Reichenbach's principle depends crucially on the concept of conditional probability? The concept of conditional probability is only well-defined in the context of classical probability theory.
I do not see any problem with conditional probabilities. I see probability as the logic of plausible reasoning, which I can always apply. See Jaynes. And conditional probability is part of this logic. Simplices are quite irrelevant for this.
rubi said:
No, Bohmian mechanics cannot formulate the spin observables of a particle as random variables on a classical probability space in such a way that it is consistent with the predictions of quantum mechanics. This is a mathematical theorem. If you object to this, then find the error in my argument in post #53. No theory of classical probabilities can reproduce the statistics of quantum spin 1/2 particles. Reichenbach's principle depends on concepts from classical probability theory, hence it depends on concepts that are not generally valid in any theory that describes observed phenomena of nature.
It can and does. Not in a non-contextual way, of course. But this is not obligatory.

In #53 you want some joint probability distribution - for things for which we have no joint experiments. What is this? Some metaphysical idea of what these evil "hidden variables" have to look like? A hidden variable theory has to recover only the results of quantum experiments, not the ideas of opponents of hidden variable theories.

So, simply an explanation how dBB works: If you have a "measurement", you have to describe the interaction between the system and the "measurement instrument". This interaction depends, in dBB theory, in general also on the hidden variables of the measurement device. So, there is no "measurement" of some inherent "property" of the system, but a result of an interaction. And this result has nothing to do with another result of a completely different interaction where something different is "measured".
 
  • #68
Jilang said:
Rubi, you say that no classical theory can reproduce the statistics of spin. Would that also apply to a four dimensional model of spin projected onto a three dimensional space?
Yes, that's right. It's completely independent of how the statistics came about. It needn't even be derived from quantum theory. Whenever a model predicts the probabilities ##0## and ##1## for one two-valued observable and ##\frac{1}{2}## and ##\frac{1}{2}## for the other, no classical probability theory is compatible with this prediction. (Of course, this also applies to other numerical values for the probabilities. I just chose one specific example in order to turn the proof into high school mathematics, which I hope is accessible to anyone.)

Ilja said:
I do not see any problem with conditional probabilities. I see probability as the logic of plausible reasoning, which I can always apply. See Jaynes. And conditional probability is part of this logic. Simplices are quite irrelevant for this.
The problem is that you can't find a concept of conditional probabilities in a non-simplicial state space. You will always violate some basic axiom of classical probability theory, like probabilities adding up to ##1##. For instance in quantum theory, the concept only makes sense for commuting observables, and this is exactly the case, where quantum probabilities are consistent with classical probabilities. If you include non-commuting observables, the concept ceases to make sense.

It can and does. Not in a non-contextual way, of course. But this is not obligatory.
It is obligatory if you want to be consistent with classical probability theory.

In #53 you want some joint probability distribution - for things where we have no joint experiments. What is this? Some metaphysical idea how these evil "hidden variables" have to look like? A hidden variable theory has to recover only the results of quantum experiments, not the ideas of opponents of hidden variable theories.
You still misunderstand the proof. I don't want joint probabilities. I get them for free by classical probability theory. You cannot possibly have a classical probability theory without joint probabilities. Hence, if I can show that no joint probability distribution can exist, I have automatically proven that the statistics is incompatible with classical probability theory. It's just basic logic. If ##A\Rightarrow B## and I can prove ##\neg B##, then I also know ##\neg A##. I don't assume anything about hidden variables except that they can be modeled on a classical probability space. The joint probabilities are just an intermediate tool, which I can assume, since their existence is guaranteed by probability theory.

So, simply an explanation how dBB works: If you have a "measurement", you have to describe the interaction between the system and the "measurement instrument". This interaction depends, in dBB theory, in general also on the hidden variables of the measurement device. So, there is no "measurement" of some inherent "property" of the system, but a result of an interaction. And this result has nothing to do with another result of a completely different interaction where something different is "measured".
It doesn't matter whether BM can recover the predictions of QM. The thing that matters is whether they are compatible with classical probability theory. We don't even need quantum theory at all. It can already be proven from the observed statistics. No theory that attempts to predict the observed statistics, be it ordinary QM, Bohmian mechanics, or something completely different, can predict the probabilities in such a way that they are compatible with classical probability theory. Hence, it is an experimental fact that classical probability cannot serve as the basis for the foundations of physics. Thus, all concepts that require it for their formulation must be modified.
 
  • #69
rubi said:
Yes, that's right. It's completely independent of how the statistics came about. It needn't even be derived from quantum theory. Whenever a model predicts the probabilities ##0## and ##1## for one two-valued observable and ##\frac{1}{2}## and ##\frac{1}{2}## for the other, no classical probability theory is compatible with this prediction. (Of course, this also applies to other numerical values for the probabilities. I just chose one specific example in order to turn the proof into high school mathematics, which I hope is accessible to anyone.)

Huh? You can set ##P(s_{\mathrm{x}}, s_{\mathrm{z}}) = P(s_{\mathrm{x}}) P(s_{\mathrm{z}})## to trivially construct the sort of joint probability distribution you describe. For the example from your post #53 this would get you $$\begin{aligned}
P(+_{\mathrm{x}}, +_{\mathrm{z}}) &= 1/2 \,, &\qquad P(+_{\mathrm{x}}, -_{\mathrm{z}}) &= 0 \,, \\
P(-_{\mathrm{x}}, +_{\mathrm{z}}) &= 1/2 \,, &\qquad P(-_{\mathrm{x}}, -_{\mathrm{z}}) &= 0 \,.
\end{aligned}$$ You can easily check that this reproduces the marginals ##P(+_{\mathrm{z}}) = 1##, ##P(-_{\mathrm{z}}) = 0##, and ##P(+_{\mathrm{x}}) = P(-_{\mathrm{x}}) = 1/2## from your post #53.
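As a side note, the product construction is easy to verify numerically. Here is a quick Python sketch of my own (not part of the original argument), just checking normalization and the marginals:

```python
# Sanity check (my own sketch, not from the original post): the product
# distribution P(s_x, s_z) = P(s_x) * P(s_z) is a valid joint probability
# distribution that reproduces the marginals from post #53.

p_x = {'+': 0.5, '-': 0.5}   # P(+x) = P(-x) = 1/2
p_z = {'+': 1.0, '-': 0.0}   # P(+z) = 1, P(-z) = 0

# Build the product joint distribution over the four outcome pairs.
joint = {(sx, sz): p_x[sx] * p_z[sz] for sx in '+-' for sz in '+-'}

# It is normalized and non-negative ...
assert abs(sum(joint.values()) - 1.0) < 1e-12
assert all(p >= 0 for p in joint.values())

# ... and summing out either variable recovers the original marginals.
assert {sx: sum(joint[sx, sz] for sz in '+-') for sx in '+-'} == p_x
assert {sz: sum(joint[sx, sz] for sx in '+-') for sz in '+-'} == p_z
```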

The problem in your proof seems to be here:
rubi said:
In particular, we would have ##0=P(z-)=P(+,-)+P(-,-)##, i.e. ##P(+,-) = - P(-,-)##. Hence, either ##P(+,+)## is negative or ##P(-,-)## is negative or ##P(+,+)=P(-,-)=0##.
It looks like ##P(+, -)## accidentally got changed to ##P(+, +)## in the second sentence.
 
  • #70
vanhees71 said:
That's right: to demonstrate the violation of Bell's inequality you need angles between A's and B's polarizers other than ##0## or ##\pi/2##. But within the minimal interpretation this doesn't need any "spooky action at a distance" to explain the results either, because it just says that the state is prepared before any measurement is done, and local microcausal QFT tells you that there are only local interactions between the photons and the polarizers at A's and B's positions. So the correlations leading to the violation of Bell's inequality are there from the very beginning, when the two photons were prepared, and are not caused by A's or B's measurement at the respective far-distant other place.
I'm not sure I get it. To make sure we are on the same page, let's refer to my post #39 on the thread https://www.physicsforums.com/account/posts/5494960/ .
It is true that QM predicts the correlations for the entangled state when it is created, but to check their validity we must step into reality and perform experiments. The results (denying local realism, i.e. the inequality of #39) are what cause people to speak of "spooky action at a distance". In spite of being used to it, I still find it mysterious/spooky.

What am I missing? And BTW, is there a reference for Einstein's regret about the EPR paper?
 
  • #71
wle said:
Huh? You can set ##P(s_{\mathrm{x}}, s_{\mathrm{z}}) = P(s_{\mathrm{x}}) P(s_{\mathrm{z}})## to trivially construct the sort of joint probability distribution you describe. For the example from your post #53 this would get you $$\begin{aligned}
P(+_{\mathrm{x}}, +_{\mathrm{z}}) &= 1/2 \,, &\qquad P(+_{\mathrm{x}}, -_{\mathrm{z}}) &= 0 \,, \\
P(-_{\mathrm{x}}, +_{\mathrm{z}}) &= 1/2 \,, &\qquad P(-_{\mathrm{x}}, -_{\mathrm{z}}) &= 0 \,.
\end{aligned}$$ You can easily check that this reproduces the marginals ##P(+_{\mathrm{z}}) = 1##, ##P(-_{\mathrm{z}}) = 0##, and ##P(+_{\mathrm{x}}) = P(-_{\mathrm{x}}) = 1/2## from your post #53.

The problem in your proof seems to be here:

It looks like ##P(+, -)## accidentally got changed to ##P(+, +)## in the second sentence.
You are right, I made a mistake. I simplified too much, and it is not that easy to construct a counterexample. However, it is well known that counterexamples exist; even the very book Ilja quoted contains some of these no-go theorems in the appendix. That just means that one needs to put more effort into the construction of a counterexample. It remains true that some of the predictions of quantum theory are incompatible with classical probability theory, and hence the rest of my argument is untouched.
 
  • #72
Zafa Pi said:
I'm not sure I get it. To make sure we are on the same page, let's refer to my post #39 on the thread https://www.physicsforums.com/account/posts/5494960/ .
It is true that QM predicts the correlations for the entangled state when it is created, but to check their validity we must step into reality and perform experiments. The results (denying local realism, i.e. the inequality of #39) are what cause people to speak of "spooky action at a distance". In spite of being used to it, I still find it mysterious/spooky.

What am I missing? And BTW, is there a reference for Einstein's regret about the EPR paper?
In which particular experiment has any "action at a distance" been demonstrated? That would violate the very foundations of QED, the best-tested theory ever, and would clearly be physics beyond the standard model. To my knowledge, everything that has been observed is in excellent agreement with standard QED, and the violation of Bell's inequality is also as expected (with very high precision in some experiments). So I don't see where I should be forced to the conclusion that there are spooky actions at a distance, contradicting local microcausal QFT.
 
  • #73
rubi said:
The problem is that you can't find a concept of conditional probabilities in a non-simplicial state space. You will always violate some basic axiom of classical probability theory, like probabilities adding up to ##1##.
If there were such a problem (I don't see any), then this is what we have already clarified with Holevo's construction of a simplicial state space.

But it makes no sense. The rules of classical probability theory, as you can read in Jaynes' Probability Theory: The Logic of Science, are nothing but the rules of consistent plausible reasoning. Consistent reasoning is always possible. The only way consistent reasoning leads to contradictions is if you somewhere make wrong assumptions. For example, by assuming that the results of the experiments are measurement results, and thus depend not on the state of the measurement device but only on the measured system.
rubi said:
For instance in quantum theory, the concept only makes sense for commuting observables, and this is exactly the case, where quantum probabilities are consistent with classical probabilities. If you include non-commuting observables, the concept ceases to make sense.
Some nonsensical applications of the rules may not make sense.
rubi said:
You still misunderstand the proof. I don't want joint probabilities. I get them for free by classical probability theory. You cannot possibly have a classical probability theory without joint probabilities.
If there is no joint reality, why do you think there should be some joint probability distribution?

This is the situation in quantum theory, where "measurement results" are the results of complex interactions which depend on both parts, so that once one "measurement" is made, reasoning about others which have not been made makes no sense.
rubi said:
It doesn't matter whether BM can recover the predictions of QM. The thing that matters is whether they are compatible with classical probability theory.
dBB is a deterministic theory, and in no way in conflict with classical probability theory.
rubi said:
We don't even need quantum theory at all. It can already be proven from the observed statistics.
No. All your considerations show is that you make some wrong assumptions.
 
  • #74
Markus Hanke said:
Ok, I think I understand your point (at least I hope so). However, it seems to me that the knowledge of the particles being entangled is something that has been added into the mix from the "outside". If we assume that the Alice-Bob system (with their respective particles) is isolated in space and time, how would Alice by herself know, by performing a measurement on her particle, whether it is entangled with a distant particle or not? Only by either having been present during the initial interaction between them (a classical exchange of information across time), or by subsequently comparing her results with those of Bob, which is a classical information exchange across space. Without either information exchange or prior interaction (at some point along Alice's world line), the outcome of both measurements would appear completely random to both Alice and Bob in isolation. In that sense, it is either the initial interaction that caused the correlation, or the act of comparing the measurement outcomes (which is always a classical channel). Without either, the concept of entanglement becomes meaningless. Both cases involve some form of non-locality, either non-locality in space or non-locality in time, so either way Bell's inequalities will be violated, just as we empirically observe.

Or am I seeing this wrong / missing something ? I am still actively learning about this whole subject matter.

Yes, if Alice and Bob do not know that their particles are entangled, then their results will appear random. But I'm not sure what point is supposed to follow from that.
 
  • #75
It's a very good point! If A and B don't know about the entanglement, they just see a stream of unpolarized particles. In fact, that's also true when they do know it: whether A or B or both know about the preparation in entangled pairs doesn't do anything to the particles themselves. Only if they accurately record the arrival times of the measured particles can they later compare their measurement protocols and see the correlations between the outcomes of their measurements, always looking at the entangled pairs, which is possible due to the accurate time stamps ("coincidence measurement"). Only then are the correlations revealed. They have to communicate their results afterwards, i.e., there's no way for FTL communication through such entangled particle pairs.
 
  • #76
stevendaryl said:
Yes, if Alice and Bob do not know that their particles are entangled, then their results will appear random. But I'm not sure what point is supposed to follow from that.

I don't really think I have a point to make just yet. The thing is this: the more I learn, the more I find aspects of quantum theory that remind me of the situation in classical relativity, in that it is meaningless to talk about relativistic effects at a single event. Likewise, it seems meaningless to me to talk about entanglement for a single observer. When Alice performs a measurement, the outcome is probabilistic for her; also, with respect to Bob performing an experiment, Alice knows only that he got a definite result, but she doesn't know which one. So where is the entanglement? It is meaningless to speak of entanglement until they are brought together and their records are compared (as vanhees71 has said), just as it is meaningless in relativity to speak of time dilation without some convention about how to compare clocks. This leads me to wonder whether entanglement could be understood as a relationship between observers (or events?) in spacetime in some way, shape, or form.

Again, I'm not trying to make any specific point, I am merely trying to look at things from a slightly different angle.
 
  • #77
Ilja said:
If there would be such a problem (I don't see any), then this is what we have already clarified with Holevo's construction how to get a simplicial state space
Holevo doesn't construct a classical probability theory. It is proven in almost every quantum mechanics textbook that this is not possible. Holevo only constructs an expectation value functional.

Some nonsensical applications of the rules may not make sense.
If it doesn't make sense to compute conditional probabilities, then the theory can't be a classical probability theory, since you can always compute conditional probabilities in classical probability theory.

If there is no joint reality, why do you think there should be some joint probability distribution?
I don't think there should be one. You think so, but you aren't aware of it. If you claim that quantum mechanics can be described by a classical probability theory, then you must also accept that joint probabilities must exist. Probability theory guarantees their existence.

dBB is a deterministic theory, and in no way in conflict with classical probability theory.
Well, dBB cannot have random variables representing spin. Hence, it cannot have probability distributions for spin and always needs to model a measurement device. Quantum mechanics can compute probability distributions for spin, even without a model of measurement. If dBB were to include probability distributions for spin, as quantum mechanics does, then it would necessarily fail to be a classical probability theory.

No. All your considerations show is that you make some wrong assumptions.
Here is a true theorem: No classical probability theory can reproduce the observed statistics of spin particles.
 
  • #78
rubi said:
You are right, I made a mistake. I simplified too much and it is not that easy to construct a counterexample. However, it is well known that counterexamples exist and even the very book Ilja quoted contains some of these no-go theorems in the appendix.

Which no-go theorems and counterexamples to what? The no-go theorems I know of are about whether or not certain types of physical model can reproduce the statistics of quantum physics, not about what type of probability theory quantum physics uses (which, as far as I'm concerned, falls under what I'd consider "ordinary probability theory").
 
  • #79
wle said:
Which no-go theorems and counterexamples to what? The no-go theorems I know of are about whether or not certain types of physical model can reproduce the statistics of quantum physics, not about what type of probability theory quantum physics uses (which, as far as I'm concerned, falls under what I'd consider "ordinary probability theory").
The most widely known one is the Kochen-Specker theorem. However, in order to see that QM can't be an ordinary probability theory, you just need to notice that it has a probability distribution for both ##S_x## and ##S_z##. If these observables were random variables on a probability space, then you would be able to compute the probability ##P(S_x = +\wedge S_z = +)##. However, QM can't compute this number and hence can't be an ordinary probability theory (at least if what you'd consider an "ordinary probability theory" would satisfy Kolmogorov's axioms, which is the standard definition).
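To make the two marginal distributions in question concrete, here is a small numpy sketch (my own, not part of the original posts): for a spin-up-along-z state, the Born rule yields a probability distribution for ##S_z## and a separate one for ##S_x##, but no joint distribution over both.

```python
# Illustration (my own sketch): the Born rule yields separate probability
# distributions for S_x and S_z, but provides no joint P(S_x, S_z).
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

psi = np.array([1, 0], dtype=complex)  # spin-up along z, |+z>

def born_probs(observable, state):
    """Born-rule probabilities for each eigenvalue of a Hermitian observable."""
    vals, vecs = np.linalg.eigh(observable)  # eigenvectors are the columns
    return {int(round(v.real)): abs(np.vdot(vecs[:, i], state)) ** 2
            for i, v in enumerate(vals)}

p_z = born_probs(sigma_z, psi)  # approximately {-1: 0.0, 1: 1.0}
p_x = born_probs(sigma_x, psi)  # approximately {-1: 0.5, 1: 0.5}
```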
 
  • #80
rubi said:
Holevo doesn't construct a classical probability theory.
There is no need for this. The purpose of using Holevo's construction is only to clarify that the non-simplicial character of the state space is irrelevant.

rubi said:
If it doesn't make sense to compute conditional probabilities, then the theory can't be a classical probability theory, since you can always compute conditional probabilities in classical probability theory.
Of course, it does not make sense to compute conditional probabilities for events which are simply incompatible with each other, or to describe probabilities for different things (like measurement results for spin in one direction before and after a measurement in another direction) as if they were the same event. Of course, by making such errors you can prove 2+2=5 too, but this does not mean that you have proven 2+2=5.

rubi said:
I don't think there should be one. You think so, but you aren't aware of it. If you claim that quantum mechanics can be described by a classical probability theory, then you must also accept that joint probabilities must exist. Probability theory guarantees their existence.
No. Probability theory is simply the logic of plausible reasoning. Its rules are consistency rules for reasoning about reality when you have incomplete information about it. It is not at all about deriving nontrivial information about this reality. If you are misled by the use of the word "measurement" into thinking that these "measurements" really "measure" some preexisting properties, instead of describing the results of an interaction with "measurement instruments", you can, of course, end up with contradictions. This is what has to be expected once your theory about reality is wrong. And, without doubt, if one derives the contradiction, one will use logic, including the logic of plausible reasoning known as probability theory. This does not mean the error is a contradiction in logic, or that our real world is incompatible with logic.

rubi said:
Well, dBB cannot have random variables representing spin. Hence, it cannot have probability distributions for spin and always needs to model a measurement device.
And it has no need for them. What is the problem with having to model a device used in an experiment (misleadingly named a "measurement device") if one wants to describe an experiment?
rubi said:
Quantum mechanics can compute probability distributions for spin, even without a model of measurement. If dBB were to include probability distributions for spin, as quantum mechanics does, then it would necessarily fail to be a classical probability theory.
Quantum theory does not aim to describe the real world; it is not a realistic interpretation. This can lead to some simplifications, such as the fact that some distributions in quantum equilibrium do not depend on the states of some devices which are part of the experiment. Fine, be happy with this. For a complete description this is no longer true. Not nice, but such is life.
rubi said:
Here is a true theorem: No classical probability theory can reproduce the observed statistics of spin particles.
You have forgotten to add: if it starts with the assumption that the results of spin-related experiments measure some inherent properties of the particle.
 
  • #81
Ilja, the whole point of my argument is to establish the fact that you cannot apply Reichenbach's principle directly to QM, since it doesn't have the necessary probabilistic structure. What you are doing is to take QM and strip off all the features that make it incompatible with the principles of probability theory, so you end up with a new theory. To this new theory, you can apply Reichenbach's principle. I agree. But you still haven't applied it to the original theory, because it is not possible! Hence, Reichenbach's principle can still not make statements about QM itself, but only about different theories derived from it by stripping off some of the information it provides! QM itself is not one of the theories to which Reichenbach's principle can be meaningfully applied.

It is a perfectly valid point of view to assume that spin is an intrinsic property of particles. You then have to give up classical probability theory. The vast majority of physicists prefer this point of view. Hence, they are using a theory, to which Reichenbach's principle cannot be applied! If you are using a different theory (dBB), then you can apply the principle, but it says nothing about the original theory, which everybody is using!
 
  • #82
And my point is that the problem is that "apply directly" is misguided, based on wrong assumptions about reality.

In some sense, indeed, dBB theory as well as other "hidden variable" interpretations are different theories. In dBB we obtain agreement with QT only for quantum equilibrium. But why do you think this is a problem? What matters is if the predictions agree with observation, and if the theory is logically consistent.

If a particular interpretation does not allow one to talk about reality, ok, it may nonetheless be useful. A religion may have really beautiful music, and a nonrealistic theory can make really good empirical predictions. Does it follow that we have to become religious to have good music, or that we have to reject reality to make accurate predictions about experiments? I don't think so.

I have no problem accepting that applying Reichenbach's common cause to Bohr's Holy Scriptures is anathema. But so what? And, no, I see no information QT provides which cannot be provided by realistic interpretations too.

If you think that it is a perfectly valid point of view to assume that spin is an intrinsic property of particles, so be it. But this is your personal theory, which makes nontrivial assumptions about reality. Once this theory is in conflict with the logic of plausible reasoning, you have to give up logic to follow it. Feel free to do so; it's your choice. But certainly not my choice. If the rules of logic cannot be applied to a theory, this theory is, in my opinion, not even a theory. It has yet to be modified to become a theory, because a theory should be reasonable, not something in conflict with logic.

Then, I do not care at all what "everybody is using"; many of the most horrible crimes have happened because everybody supported them. And our present time is in no way superior to the past in this respect. About the "original" Holy Scriptures I care even less. Anyway, neither the minimal interpretation nor Bohr's Holy Scriptures make any claims in contradiction with probability theory. Claims that they do are incorrect interpretations, which can easily be traced to particular (wrong) theories about reality.
 
  • #83
Ilja said:
And my point is that the problem is that "apply directly" is misguided, based on wrong assumptions about reality.
Nobody knows what the right assumptions about reality are. It's your personal opinion that QM doesn't describe reality.

In some sense, indeed, dBB theory as well as other "hidden variable" interpretations are different theories. In dBB we obtain agreement with QT only for quantum equilibrium. But why do you think this is a problem? What matters is if the predictions agree with observation, and if the theory is logically consistent.
I don't think it is a problem. I don't even care about dBB theory. I was just countering your claim that Reichenbach's principle can be used to deduce that there is no common cause in QM. It can't, since it can't even be applied to QM. Sure, it makes statements about dBB, but those statements won't automatically hold for QM.

If a particular interpretation does not allow one to talk about reality, ok, it may nonetheless be useful. A religion may have really beautiful music, and a nonrealistic theory can make really good empirical predictions. Does it follow that we have to become religious to have good music, or that we have to reject reality to make accurate predictions about experiments? I don't think so.
You don't have to reject reality. You just have to realize that reality can be different from what one might naively assume. Nature is the ultimate judge. She doesn't care about our philosophical preferences.

I have no problem accepting that applying Reichenbach's common cause to Bohr's Holy Scriptures is anathema. But so what? And, no, I see no information QT provides which cannot be provided by realistic interpretations too.
You were claiming that there cannot be a common cause explanation for the Bell correlations in QM. This is wrong. The true statement would be that hidden variable theories are incompatible with a common cause explanation. Theories that reject hidden variables might still allow for a common cause.

If you think that it is a perfectly valid point of view to assume that spin is an intrinsic property of particles, so be it. But this is your personal theory, which makes nontrivial assumptions about reality.
Right, it is my personal theory (and also the personal theory of many others). It is also your personal theory that the world must be described by hidden variables, which is also a non-trivial assumption about reality.

Once this theory is in conflict with the logic of plausible reasoning, you have to give up logic to follow it. Feel free to do so; it's your choice. But certainly not my choice. If the rules of logic cannot be applied to a theory, this theory is, in my opinion, not even a theory. It has yet to be modified to become a theory, because a theory should be reasonable, not something in conflict with logic.
QM is not in conflict with logic. It is built using standard mathematics, which uses nothing but classical logic. Hence, we can use classical logic to talk about QM. Also, reasonability isn't a necessity for a physical theory. A physical theory must describe nature. If nature contradicts our intuition, then we have to adjust our intuition. Millions of physicists have learned quantum theory and have acquired a good intuition for quantum phenomena.
 
  • #84
rubi said:
Nobody knows what the right assumptions about reality are. It's your personal opinion that QM doesn't describe reality.
In the minimal interpretation, QM does not pretend to describe reality. If an interpretation claims that no reality exists, I reject it as nonsensical. But the minimal interpretation does not make such claims; it simply does not give a description of reality.
rubi said:
I was just countering your claim that Reichenbach's principle can be used to deduce that there is no common cause in QM.
I never made such a claim. Reichenbach's principle claims the existence of causal explanations, like common causes. It also specifies what a common cause is.

There are rules of reasoning which cannot be proven false by any observation, because deriving any nontrivial prediction (something which could be falsified by observation) has to use them. So, claiming that these rules are wrong would simply be the end of science as we know it. If we took such a solution seriously, we would simply stop doing science, because it would be well known that the methods we use are inconsistent. (Ok, also not a decisive argument; we do a lot of inconsistent things anyway.)

Whatever; there is a hierarchy. We have rules, hypotheses and so on which make science possible; to reject them would make science meaningless. They are, of course, only human inventions too, but if they are wrong, doing science becomes meaningless. We would probably continue doing science, because humans like to continue doing things even after they have recognized that doing them is meaningless, which is what is called culture. But this culture named science would no longer really be science as it is today, an endeavor to understand reality, to find explanations; it would be like the atheist going to church as part of his living in a formerly religious culture.

But this has not happened yet. At least for me, doing science still has some of its original meaning: it is an endeavor to understand reality, to find explanations which are consistent with the rules of logic, of consistent reasoning. And this requires that some ideas, like the rules of logic, of consistent reasoning, the existence of some external reality, and the existence of explanations, have to be true.

It is not only the point that giving them up would make science meaningless. It is also that there is no imaginable evidence which would motivate it. Because, whatever the conflict with observation, it would always be only an open scientific problem. And giving up science because there are open scientific problems? Sorry, that makes no sense. Science without open scientific problems would be boring.

rubi said:
You don't have to reject reality. You just have to realize that reality can be different from what one might naively assume. Nature is the ultimate judge. She doesn't care about our philosophical preferences.
Of course, one could imagine a Nature such that some beings in this Nature would be unable in principle to invent a theory about it without logical contradictions.

rubi said:
You were claiming that there cannot be a common cause explanation for the Bell correlations in QM. This is wrong. The true statement would be that hidden variable theories are incompatible with a common cause explanation. Theories that reject hidden variables might still allow for a common cause.
Simply wrong. There are causal explanations; they are even quite simple and straightforward, but they violate Einstein causality. This is not really a big problem. Anyway, the other appearances of a similar symmetry (like for acoustic wave equations, where Lorentz transformations with the speed of sound also transform solutions into other solutions of the wave equation) are known not to be fundamental.

rubi said:
Right, it is my personal theory (and also the personal theory of many others). It is also your personal theory that the world must be described by hidden variables, which is also a non-trivial assumption about reality.
Fine.

rubi said:
QM is not in conflict with logic. It is built using standard mathematics, which uses nothing but classical logic. Hence, we can use classical logic to talk about QM. Also, reasonability isn't a necessity for a physical theory. A physical theory must describe nature. If nature contradicts our intuition, then we have to adjust our intuition. Millions of physicists have learned quantum theory and have acquired a good intuition for quantum phenomena.
It is you who claims QM is in conflict with logic, namely with the rules of probability theory, which are, following Jaynes' Probability Theory: The Logic of Science, the rules of consistent plausible reasoning. Consistent reasoning is not at all about intuition.
 
  • #85
rubi said:
The most widely known one is the Kochen-Specker theorem.

The Kochen-Specker theorem is about hidden variable models and uses assumptions beyond probability theory alone. (Also, it only applies to Hilbert spaces of dimension three or more, so it says nothing about spin 1/2.) I think Kochen and Specker themselves pointed out that you can always construct a joint probability distribution for different measurements just by taking the probabilities given by the Born rule and multiplying them, as I pointed out in my earlier post. I think their stance was that this sort of thing didn't make for a very satisfactory hidden variable model, but if the exercise is just to invent a joint probability distribution in order to express quantum physics in some axiomatic language that requires it, then it looks to me like you could do it this way.

However, in order to see that QM can't be an ordinary probability theory

I wouldn't consider QM a probability theory in the first place. It's a physics theory that uses elements from probability theory as well as other areas of mathematics in its formulation. You're acting as if you think all of QM should be seen as a special case of Kolmogorov probability theory. Why would we want to do this, independently of whether it is even possible? We don't try to express all of electromagnetism or Newtonian physics or general relativity in the language of Kolmogorov probability theory, and yet this doesn't prevent us from being able to reason about and discuss and contrast the causal structure of these theories, so why would you insist it should be done for QM?
 
  • #86
wle said:
We don't try to express all of electromagnetism or Newtonian physics or general relativity in the language of Kolmogorov probability theory, and yet this doesn't prevent us from being able to reason about and discuss and contrast the causal structure of these theories, so why would you insist it should be done for QM?

I think it is trivial to express all of electromagnetism, GR etc in the language of Kolmogorov probability theory. For example, for classical mechanics one uses Liouville time evolution.
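For what it's worth, that embedding is easy to illustrate. Here is a minimal Python sketch (my own construction, just illustrating the point): a probability measure on phase space is pushed forward along the exact harmonic-oscillator flow, it remains a probability measure, and any joint event in ##(q, p)## has a well-defined probability.

```python
# Sketch (my own, just illustrating the point): classical mechanics fits
# into Kolmogorov probability theory.  A discrete measure on phase space
# is pushed forward along the exact harmonic-oscillator flow (Liouville
# evolution); it stays a probability measure, and joint events in (q, p)
# are always well defined.
import math
import random

random.seed(0)
omega = 1.0

# Initial ensemble: 1000 equally weighted phase-space points (q, p).
points = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(1000)]
weight = 1.0 / len(points)

def flow(q, p, t):
    """Exact solution of dq/dt = p, dp/dt = -omega^2 q (volume preserving)."""
    return (q * math.cos(omega * t) + (p / omega) * math.sin(omega * t),
            p * math.cos(omega * t) - q * omega * math.sin(omega * t))

evolved = [flow(q, p, t=2.5) for q, p in points]

# The pushed-forward total probability is still 1 ...
total = weight * len(evolved)

# ... and a joint event such as {q > 0 and p > 0} has a well-defined probability.
p_joint = weight * sum(1 for q, p in evolved if q > 0 and p > 0)
```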
 
  • #87
rubi said:
[..]
Here is a true theorem: No classical probability theory can reproduce the observed statistics of spin particles.

Likely untrue. Apparently debunked in this paper, which was linked from a recent post here (I could not find the post again, but did find the paper):

http://arxiv.org/abs/1305.1280 "The Pilot-Wave Perspective on Spin" -Norsen

rubi said:
The most widely known one is the Kochen-Specker theorem. However, in order to see that QM can't be an ordinary probability theory, you just need to notice that it has a probability distribution for both ##S_x## and ##S_z##. If these observables were random variables on a probability space, then you would be able to compute the probability ##P(S_x = +\wedge S_z = +)##. However, QM can't compute this number and hence can't be an ordinary probability theory (at least if what you'd consider an "ordinary probability theory" would satisfy Kolmogorov's axioms, which is the standard definition).

See elaborate discussion in the link here above. :smile:
 
  • #88
rubi said:
The most widely known one is the Kochen-Specker theorem.
Oh, I haven't seen this. This clarifies the issue. Kochen-Specker presumes non-contextuality, while the known hidden variable theories like dBB are contextual.

This is, translated into layman's language, the point I have made many times: in a contextual theory, what is named a "measurement" is really something different, an interaction, and its result depends both on the state of the "measured system" and on the state of the "measurement device", so it is not a property of the system which is "measured".
 
  • #89
Well, a measurement of course always depends on the state of the measured object and the measurement apparatus, and to measure something implies that the measured object must interact with the measurement apparatus independent of the religion you follow in your metaphysical worldview ;-)).
 
  • #90
It's the same problem as asking what hidden variable contained in two particles tells them that they have collided with each other.
The particles contain no property like temperature, mass, etc. that gives them their location in space. So how do they know when they collide?
 
  • #91
LaserMind said:
It's the same problem as asking what hidden variable contained in two particles tells them that they have collided with each other.
The particles contain no property like temperature, mass, etc. that gives them their location in space. So how do they know when they collide?

Are you talking about entanglement of observables? Particles do not need to "directly" interact (collide) to become entangled with respect to some basis.
 
  • #92
vanhees71 said:
Well, a measurement of course always depends on the state of the measured object and the measurement apparatus, and to measure something implies that the measured object must interact with the measurement apparatus independent of the religion you follow in your metaphysical worldview ;-)).
Fine. So you accept that Kochen-Specker is a theorem about theories where the result does not depend on the state of the measurement apparatus, but has to be predefined by the measured object, and is therefore not relevant for hidden variable theories at all?
 
  • #93
Ilja said:
In the minimal interpretation, QM does not pretend to describe reality. If an interpretation claims that no reality exists, I reject it as nonsensical. But the minimal interpretation does not make such claims, it simply does not give a description of reality.
Of course, QM describes reality. It's just that our naive picture of reality needs to be modified. Open minded people without philosophical prejudices about the world have no problem with that.

I never made such a claim. Reichenbach's principle claims the existence of causal explanations, like common causes. It also specifies what a common cause is.

There are rules of reasoning, which cannot be proven to be false by any observation, because to derive some nontrivial predictions - something which could be falsified by observation - has to use them. So, claiming that these rules are wrong would be simply the end of science as we know it. If we would take such a solution seriously, we would simply stop doing science, because it would be well-known that the methods we use are inconsistent. (Ok, also not a decisive argument - we do a lot of inconsistent things anyway.)

Whatever, there is a hierarchy, we have rules, hypotheses or so which make science possible, to reject them would make science meaningless. They are, of course, only human inventions too, but if they are wrong, doing science becomes meaningless. We would probably continue doing science, because humans like to continue to do things even if they have recognized that doing them is meaningless, which is what is named culture. But this culture named science would not be really science as it is today, an endeavor to understand reality, to find explanations, but like the atheist going to Church as part of his living in a formerly religious culture.

But this has not happened yet, at least for me doing science has yet some of its original meaning, and is an endeavor to understand reality, to find explanations which are consistent with the rules of logic, of consistent reasoning. And this requires that some ideas, like the rules of logic, of consistent reasoning, the existence of some external reality, and the existence of explanations, have to be true.

It is not only the point that giving them up would make science meaningless. It is also that there is no imaginable evidence which would motivate it. Because, whatever the conflict with observation, this would be always only an open scientific problem. And giving up science because there are open scientific problems? Sorry, this makes no sense. Science without open scientific problems would be boring.
None of this makes sense. Science doesn't depend on any of this. We're making progress almost on a daily basis.

Of course, one could imagine a Nature so that some beings in this Nature would be unable in principle to invent a theory about it without logical contradictions.
There are no logical contradictions in QM. QM is fully consistent with classical logic. If you don't agree, provide a counterexample.

Simply wrong. There are causal explanations; they are even quite simple and straightforward, but they violate Einstein causality. This is not really a big problem. Anyway, the other appearances of a similar symmetry (as for acoustic wave equations, where Lorentz transformations with the speed of sound also transform solutions into other solutions of the wave equation) are known to be not fundamental.
Simply wrong. The absence of common cause explanations cannot be proven for QM.

It is you who claims QM is in conflict with logic, namely with the rules of probability theory, which are, following Jaynes' Probability Theory: The Logic of Science, the rules of consistent plausible reasoning. Consistent reasoning is not at all about intuition.
No, I claim that QM is fully consistent with logic. What you call logic isn't actually logic, but rather a formalization of classical intuition. It is unreasonable to expect nature to work according to classical intuition.

wle said:
The Kochen-Specker theorem is about hidden variable models and uses assumptions beyond only probability theory.
No, the assumptions actually formalize some concepts that must be obeyed by a classical probability theory (non-contextuality). No non-classical probability theory will violate them.

I think Kochen and Specker themselves pointed out that you can always construct a joint probability distribution for different measurements just by taking the probabilities given by the Born rule and multiplying them, like I pointed out in my earlier post. I think their stance was that this sort of thing didn't make a very satisfactory hidden variable model, but if the exercise is just to invent a joint probability distribution in order to express quantum physics in some axiomatic language that requires it then it looks to me like you could do it this way.
The probability distribution you get by taking the product measure will be inconsistent with certain functional relationships between random variables that must hold in a classical probability theory. No probability distributions of random variables can be consistent with certain QM statistics. That is the theorem.

I wouldn't consider QM a probability theory in the first place. It's a physics theory that uses elements from probability theory as well as other areas of mathematics in its formulation.
QM is a theory that assigns probabilities to certain events. It does this in a fashion which is incompatible with classical probability theory. If you don't want to call it a generalized probability theory, fine. That doesn't change the mathematical content of my statement.

You're acting as if you think all of QM should be seen as a special case of Kolmogorov probability theory. Why would we want to do this, independently of whether it is even possible?
I don't, but Ilja needs to, if he wants to apply Reichenbach's principle to QM. Reichenbach's principle requires a classical probability theory to be applied.

harrylin said:
Likely untrue. Apparently debunked in this paper, which was linked from a recent post here (could not find back the post, but did find back the paper):

http://arxiv.org/abs/1305.1280 "The Pilot-Wave Perspective on Spin" -Norsen
This paper doesn't construct a classical probability theory with spin observables modeled as random variables. (By the way, such a construction is even possible for a single isolated spin-1/2 particle, but that is pretty much the only exception.)

Ilja said:
Oh, I haven't seen this. This clarifies the issue. Kochen-Specker presumes non-contextuality, while the known hidden variable theories like dBB are contextual.
Non-contextuality is exactly the assumption that observables can be modeled by classical random variables on a probability space, hence it proves my point. Of course dBB must be contextual, since Kochen-Specker proved that it cannot be non-contextual if it wants to reproduce QM.

This is, translated into layman's language, the point I have made many times: in a contextual theory, what is named a "measurement" is really something different, an interaction, and its result depends both on the state of the "measured system" and on the state of the "measurement device", so it is not a property of the system which is "measured".
I know that hidden variables must be contextual. That is exactly my point. You cannot cook up a classical probability theory that reproduces all statistics that can be computed from quantum mechanics.
 
  • #94
rubi said:
Of course, QM describes reality. It's just that our naive picture of reality needs to be modified. Open minded people without philosophical prejudices about the world have no problem with that.
"Open minded" people without prejudices also have no problem accepting Buddhism as a description of reality. Sorry for being closed-minded on this, but for me a realistic theory has to describe everything that is supposed to exist in reality, and this should include all the things around us which nobody doubts really exist, including some equations for how they change their states.

rubi said:
None of this makes sense. Science doesn't depend on any of this. We're making progress almost on a daily basis.
Of course we make progress - because we do not give up the search for realistic causal explanations. Everywhere except in fundamental physics.
rubi said:
Simply wrong. The absence of common cause explanations cannot be proven for QM.
It can be. We observe 100% correlations if A and B measure the same direction. The common cause explanation would be that some common cause ##\lambda## defines these measurement results. This common cause exists in the past, thus with some probability distribution ##\rho(\lambda)## independent of a and b, and it defines the measurement results A and B. So we have the functions ##A(a,\lambda), B(b,\lambda)## we need to prove Bell's inequality. Once Bell's inequality is violated, the common cause explanation is excluded.
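This argument can be sketched numerically. The following toy local hidden variable model (the response function and measurement angles are my own illustrative choices, not from the post) always yields a CHSH value ##|S| \le 2##, while QM's singlet state reaches ##2\sqrt{2}## for the same settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Common cause lambda with distribution rho(lambda) independent of a and b.
lams = rng.uniform(0.0, 2.0 * np.pi, 200_000)

def outcome(setting, lam):
    # deterministic +/-1 outcome A(a, lambda): depends only on the local
    # setting and the shared lambda (a toy choice of response function)
    return np.where(np.cos(setting - lam) >= 0.0, 1.0, -1.0)

def E(a, b):
    # correlation E(a, b) = integral of rho(lambda) A(a, lambda) B(b, lambda)
    return (outcome(a, lams) * outcome(b, lams)).mean()

a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))            # <= 2 (up to sampling noise) for any model of this form
print(2 * np.sqrt(2))    # the QM singlet value for these settings
```

Any choice of ##A(a,\lambda)##, ##B(b,\lambda)## and ##\rho(\lambda)## hits the same CHSH bound; that is the content of Bell's theorem.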
rubi said:
No, I claim that QM is fully consistent with logic. What you call logic isn't actually logic, but rather a formalization of classical intuition.
No. The essential argument used by Jaynes is consistency, which is a sufficiently precise requirement, and not some diffuse intuition.

rubi said:
No, the assumptions actually formalize some concepts that must be obeyed by a classical probability theory (non-contextuality). No non-classical probability theory will violate them.
dBB violates them: it is contextual, despite the fact that it is a completely classical, consistent, realistic, causal, deterministic theory. So with non-contextuality you add to probability theory a philosophical prejudice which is not part of it.
rubi said:
QM is a theory that assigns probabilities to certain events. It does this in a fashion, which is incompatible with classical probability theory. If you don't want to call it a generalized probability theory, fine. That doesn't change the mathematical content of my statement.
It is incompatible with non-contextuality, not with classical probability theory. Don't conflate a particular confusion about what happens (naming interactions "measurements" and the results of those interactions "measurement results" even when nothing indicates that the result depends on only one part of the interaction) with the fundamental laws of consistent plausible reasoning known as probability theory.
rubi said:
I don't, but Ilja needs to, if he wants to apply Reichenbach's principle to QM. Reichenbach's principle requires a classical probability theory to be applied.
There is no problem with this. I can always use consistent reasoning. And I know that the laws of consistent plausible reasoning are those of probability theory. Read Jaynes.
rubi said:
This paper doesn't construct a classical probability theory with spin observables modeled as random variables. (By the way, it is even possible for a single isolated spin 1/2 particle, but that is pretty much the only exception.)
It does not construct what you name a "classical probability theory", and what other people name a non-contextual model, because nobody needs it and nobody thinks that it correctly describes reality.
rubi said:
Non-contextuality is exactly the assumption that observables can be modeled by classical random variables on a probability space, hence it proves my point.
Maybe I should start calling my ether theory "classical logic"? That would allow me to accuse everybody who rejects the ether of making logical errors, and to prove that ether theory follows from logic alone. It would be similar to your attempt to give non-contextuality (a very strange assumption about the results of interactions, appropriate only for the very special subclass of interactions named measurements) the status of an axiom of consistent plausible reasoning.
rubi said:
I know that hidden variables must be contextual. That is exactly my point. You cannot cook up a classical probability theory that reproduces all statistics that can be computed from quantum mechanics.
I can. Take dBB theory in quantum equilibrium. Read Bohm 1952 for the proof.
 
  • #95
Ilja said:
It can be. We observe 100% correlations if A and B measure the same direction. The common cause explanation would be that some common cause ##\lambda## defines these measurement results. This common cause exists in the past, thus with some probability distribution ##\rho(\lambda)## independent of a and b, and it defines the measurement results A and B. So we have the functions ##A(a,\lambda), B(b,\lambda)## we need to prove Bell's inequality. Once Bell's inequality is violated, the common cause explanation is excluded.
No, your probability distribution needn't exist. In a quantum world, the common cause ##\lambda## might not commute with the observables of ##A## and ##B##, hence your joint probability distribution might not exist and so none of the remaining reasoning can be carried out without a common cause principle that works for non-commuting observables. It's that simple. Reichenbach's principle can't be applied in this situation. It's just not general enough.

Ilja said:
I can. Take dBB theory in quantum equilibrium. Read Bohm 1952 for the proof.
dBB theory doesn't reproduce the statistics of spin independent of measurement contexts. QM predicts probability distributions for spin independent of a measurement context.
 
Last edited:
  • #96
rubi said:
No, the assumptions actually formalize some concepts that must be obeyed by a classical probability theory (non-contextuality). No non-classical probability theory will violate them.

How so? The Kochen-Specker theorem includes an assumption that (deterministic) values ##v## associated with quantum observables (Hermitian operators) satisfy conditions like ##v(A + B) = v(A) + v(B)## and ##v(AB) = v(A) v(B)## for all commuting ##A## and ##B##. Quantum observables are a concept specific to quantum physics that doesn't appear at all in probability theory.
No probability distributions of random variables can be consistent with certain QM statistics. That is the theorem.

I don't think you've justified that.

But let's say you're correct, and it's impossible to fully embed quantum physics in the language of Kolmogorov probability theory. In practice you may as well be correct anyway since quantum physics is not normally expressed in that language (whether there is a way to do it or not). So what? However you classify the type of probability theory quantum physics uses, we've been using it in physics since QM was first formulated back in the 1920s and 1930s and for the most part we don't think anything special of it. In particular, when we talk about quantum behaviour and correlations and we contrast this with various types of "classical" behaviour, this is not what we are talking about.
I don't, but Ilja needs to, if he wants to apply Reichenbach's principle to QM. Reichenbach's principle requires a classical probability theory to be applied.

I wouldn't agree with this. The main reason QM doesn't fit neatly into Kolmogorov probability theory is that we treat measurement choice as a free variable. I wouldn't consider this a good reason to shut down a discussion about whether certain types of causal explanation for quantum correlations are possible or not.

An example: if you allow sufficiently fast classical communication then it's possible to simulate arbitrary (even Bell-violating) quantum correlations. If you gave me a "magic" ethernet cable that could transmit data instantaneously then I could program two computers, communicating with each other using this cable and accepting measurement choices as inputs, to generate outputs in accord with correlations predicted by QM. I would in principle consider this sort of thing a valid candidate causal explanation for QM correlations.
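That claim can be sketched concretely (with the instantaneous channel modeled simply by letting the second station read the first station's setting and outcome; the singlet state and angles are my illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_singlet(a, b, n=200_000):
    """Simulate singlet correlations with a classical side channel: station B
    is allowed to see station A's setting a and outcome before answering."""
    out_a = rng.choice([-1, 1], size=n)      # A's marginal is 50/50, as in QM
    # Knowing (a, out_a), B samples from the QM conditional distribution:
    # for the singlet, P(out_b = -out_a) = cos^2((a - b) / 2).
    anti = rng.random(n) < np.cos((a - b) / 2) ** 2
    out_b = np.where(anti, -out_a, out_a)
    return (out_a * out_b).mean()            # QM predicts -cos(a - b)

print(simulate_singlet(0.0, np.pi / 3))      # ~ -0.5
```

The outputs match the quantum correlation ##E(a,b) = -\cos(a-b)## for every pair of settings, including Bell-violating ones, which is exactly why such a signaling model counts as a candidate causal explanation.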
 
  • #97
wle said:
How so? The Kochen-Specker theorem includes an assumption that (deterministic) values v associated with quantum observables (Hermitian operators) satisfy conditions like ##v(A + B) = v(A) + v(B)## and ##v(AB) = v(A) v(B)## for all commuting ##A## and ##B##. Quantum observables are a concept specific to quantum physics that doesn't appear at all in probability theory.
Because random variables satisfy these properties by definition: ##(AB)(x) = A(x)B(x)##, because this is how ##AB## is defined. The same goes for addition. It has nothing to do with quantum observables; it's just the functional relationships between the valuations. Kochen-Specker assumes that the random variables that represent the quantum observables in the classical probability theory must satisfy the usual functional relationships that classical random variables obey by definition. Kochen-Specker essentially says that you cannot embed the quantum probabilities into classical probability theory without having to readjust the very definitions of multiplication and addition of random variables.
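A minimal illustration of those pointwise definitions (the sample space and variables are toy choices of my own): any valuation obtained by evaluating at a single sample point automatically satisfies the Kochen-Specker functional constraints.

```python
# Random variables on a sample space are just real-valued functions,
# so products and sums are defined pointwise.

def A(x):
    return (-1) ** x          # a +/-1 valued random variable

def B(x):
    return x % 2

def AB(x):
    return A(x) * B(x)        # (AB)(x) = A(x)B(x), by definition

def A_plus_B(x):
    return A(x) + B(x)        # (A+B)(x) = A(x) + B(x), by definition

# A "valuation" v = evaluation at a fixed sample point x0 then satisfies
# v(AB) = v(A)v(B) and v(A+B) = v(A) + v(B) automatically:
x0 = 3
print(AB(x0) == A(x0) * B(x0))           # True
print(A_plus_B(x0) == A(x0) + B(x0))     # True
```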

I don't think you've justified that.
The Kochen-Specker theorem proves it.

But let's say you're correct, and it's impossible to fully embed quantum physics in the language of Kolmogorov probability theory. In practice you may as well be correct anyway since quantum physics is not normally expressed in that language (whether there is a way to do it or not). So what? However you classify the type of probability theory quantum physics uses, we've been using it in physics since QM was first formulated back in the 1920s and 1930s and for the most part we don't think anything special of it. In particular, when we talk about quantum behaviour and correlations and we contrast this with various types of "classical" behaviour, this is not what we are talking about.
It is not problematic that QM uses a generalized way of computing probabilities. It just means that certain concepts that used to work in classical probability theory cannot be carried over to the quantum framework without modification (such as Reichenbach's principle). This doesn't change how we use quantum theory in any way. Most physicists even understand intuitively how to correctly use quantum mechanics without understanding mathematically what is different about it.

I wouldn't agree with this. The main reason QM doesn't fit neatly into Kolmogorov probability theory is that we treat measurement choice as a free variable
No, the reason why QM doesn't fit into Kolmogorov probability theory is that the event algebra is a certain orthomodular lattice rather than a sigma algebra. The probability "measure" assigns probabilities to elements of these algebras. While in a sigma algebra there always exists a third element ##A\wedge B## (the "meet"), given the events ##A## and ##B## (namely ##A\cap B##), this is no longer true for an orthomodular lattice. However, Kolmogorov's axioms of probability theory depend on the existence of ##A\wedge B##. Hence, all theorems that are derived from Kolmogorov's axioms and all concepts that depend on them need to be adjusted to the new situation.
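The obstruction can be seen concretely with the spin projectors themselves (a standard textbook computation, sketched here in numpy):

```python
import numpy as np

# Projectors onto the S_x = + and S_z = + eigenspaces of a spin-1/2 system.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Px = 0.5 * (np.eye(2) + sx)
Pz = 0.5 * (np.eye(2) + sz)

# In a sigma algebra, the "meet" of two events is the product of their
# indicator functions. Here the operator product Px @ Pz is not even a
# projector, because Px and Pz don't commute, so it cannot play the role
# of a joint event "S_x = + and S_z = +".
prod = Px @ Pz
print(np.allclose(Px @ Pz, Pz @ Px))     # False: the projectors don't commute
print(np.allclose(prod @ prod, prod))    # False: the product isn't a projector
```

This is the operator-level counterpart of the missing joint distribution ##P(S_x = + \wedge S_z = +)## mentioned earlier in the thread.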

I wouldn't consider this a good reason to shut down a discussion about whether certain types of causal explanation for quantum correlations are possible or not.

An example: if you allow sufficiently fast classical communication then it's possible to simulate arbitrary (even Bell-violating) quantum correlations. If you gave me a "magic" ethernet cable that could transmit data instantaneously then I could program two computers, communicating with each other using this cable and accepting measurement choices as inputs, to generate outputs in accord with correlations predicted by QM. I would in principle consider this sort of thing a valid candidate causal explanation for QM correlations.
This may be a valid causal explanation. However, the point is whether there can be causal explanations that don't violate the speed of light. Ilja wants to exclude those by naively using concepts of classical probability theory, which are known to not even be well-defined in the context of quantum theory. Certainly, this is not valid reasoning.
 
Last edited:
  • #98
rubi said:
This may be a valid causal explanation. However, the point is whether there can be causal explanations that don't violate the speed of light. Ilja wants to exclude those by naively using concepts of classical probability theory, which are known to not even be well-defined in the context of quantum theory. Certainly, this is not valid reasoning.

But that is what Bell's theorem says. As we have discussed, you can escape it by using a more general notion of cause than that used in Bell's theorem, which is fine. But in that case you should simply clarify your terminology.
 
  • #99
atyy said:
But that is what Bell's theorem says.
Bell's theorem is a theorem about theories formulated in the language of classical probability theory. It's not a theorem about quantum theory. We just use it to conclude that quantum theory differs from classical probability theory, by noting that QM can violate an inequality that is not violated by certain classical theories. Bell says that there is a class of theories (local hidden variable theories) that satisfy a certain inequality. QM is not in that class. Bell's theorem can't be used to conclude anything about theories that are not both local and hidden variable theories.

As we have discussed, you can escape it by using a more general notion of cause than that used in Bell's theorem, which is fine. But in that case you should simply clarify your terminology.
No, it's not a more general notion of cause. Cause means the same as always. What changes is how we can tell what constitutes a cause just by looking at statistics. However, if the statistics are not compatible with classical probability theory anymore, then it is to be expected that one needs to adjust the method used to tell whether some statistics hint at a causal relationship or not. Specifically, using Reichenbach's principle makes no sense in the context of quantum statistics. It's just not applicable here. That just means that we have no clear criterion telling us what constitutes a common cause. The notion of cause itself is not modified.
 
Last edited:
  • #100
rubi said:
Nobody knows what the right assumptions about reality are. It's your personal opinion that QM doesn't describe reality.

Indeed.

:smile::smile::smile::smile::smile::smile:

Thanks
Bill
 