A Assumptions of the Bell theorem

  • #51
stevendaryl said:
I don't understand how we're counting the "number of measurements". The claim that, for a particular setting of the three detectors, the product of the spin results is guaranteed to be a certain value can be falsified by a single experiment. But the claim that the measurement results are due to hidden variables cannot be falsified by a single experiment. So it seems to me incorrect to say that a single measurement, or even 4 measurements, can disprove local hidden variables.

[added]
In the GHZ experiment, you have the predictions of QM. You can prove, without doing any experiments at all, that these predictions are inconsistent with a local hidden variable theory. But that still leaves two possibilities open:
  1. QM is wrong.
  2. Local hidden variables is wrong.
In case 1, a single measurement could show it. In case 2, it seems that it can only be shown statistically, which is the same situation as in the usual EPR paradox. In both cases, disproving QM can be done with a single measurement, but disproving local hidden variables requires many measurements.
Yes, the "single measurement" issue is really a side note of little importance. I just wanted to make clear that it's really more than one (at least 4) measurements that are required due to the contextual nature of the experiment.

The central issue is: What are we actually trying to arrive at a contradiction with? And my answer would be: the notion of local causality (which requires probabilities to formulate and thus statistics to falsify). So far, Demystifier has dodged that question, but if he were right, he would have to specify another notion of local causality that doesn't require probabilities for its formulation.
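
To spell out the four-settings point: with the usual sign convention for the GHZ state ##|\psi\rangle = \tfrac{1}{\sqrt{2}}(|000\rangle - |111\rangle)##, QM predicts definite products of the three outcomes for four joint settings:

##\sigma_x^1 \sigma_y^2 \sigma_y^3 = +1, \quad \sigma_y^1 \sigma_x^2 \sigma_y^3 = +1, \quad \sigma_y^1 \sigma_y^2 \sigma_x^3 = +1, \quad \sigma_x^1 \sigma_x^2 \sigma_x^3 = -1.##

If local hidden variables assigned pre-existing values ##m_x^i, m_y^i \in \{-1,+1\}##, multiplying the first three constraints would give ##m_x^1 m_x^2 m_x^3 = +1## (each ##m_y^i## appears twice and squares to ##+1##), contradicting the fourth. This is why the four settings together, rather than any single run, carry the contradiction.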
 
  • #52
Nullstein said:
It seems that you disagree that the contradiction we're looking for is the contradiction with the local causality condition (which is probabilistic). So what other notion of causality do you have in mind that the GHZ experiment could contradict?
I disagree that the local causality condition is necessarily probabilistic. In classical relativistic theory, for instance, it is deterministic. GHZ could contradict a deterministic local causality condition.
 
  • #53
Demystifier said:
I disagree that the local causality condition is necessarily probabilistic. In classical relativistic theory, for instance, it is deterministic. GHZ could contradict a deterministic local causality condition.
So what is the local causality condition in your opinion? Can you write down a formula?
 
  • #54
Demystifier said:
I don't understand this. If nonlocality is not weird, then what exactly is weird about candidate deeper theories?
You know the answer: There's nothing weird about quantum theory. It's just our classical prejudice that's weird!
 
  • #55
Nullstein said:
So what is the local causality condition in your opinion? Can you write down a formula?
I explained it qualitatively in my lectures (linked in another post) on page 13, but I have to think about how to write it down as a formula.
 
  • #56
vanhees71 said:
You know the answer: There's nothing weird about quantum theory. It's just our classical prejudice that's weird!
I know your answer, but I wanted to see his answer.
 
  • Like
Likes vanhees71
  • #57
martinbn said:
Which candidate theories do you have in mind?
Bohmian, many worlds, GRW, ...
 
  • #58
vanhees71 said:
It's just our classical prejudice that's weird!
In the first post I made a list of necessary assumptions. Which of them, in your opinion, is wrong?
 
  • #59
Demystifier said:
I explained it qualitatively in my lectures (linked in another post) on page 13, but I have to think about how to write it down as a formula.
Well, the problem is that the argument in your lecture is very informal, which is fine for an overview, but because of its informal nature it misses a lot of important details. For example, you haven't considered the possibility of superdeterminism, retrocausality, and so on. That's what Bell gave us: he turned the informal EPR argument into a falsifiable formula with clearly defined assumptions.

I think the reason why people have settled for probabilistic causality is much deeper: How can you test causality even in principle if you aren't allowed to repeat experiments and draw conclusions from the statistics? If B follows A just once, it could always be just a coincidence. In order to infer causality, you need something like: the occurrence of A increases the probability of the occurrence of B, while the occurrence of (not A) decreases it. (In fact, this is not enough; that's where Reichenbach enters the discussion.) This line of research has culminated in the causal Markov condition.
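
For concreteness, Reichenbach's common cause condition for a correlation between ##A## and ##B## requires a ##C## that screens off the correlation:

##P(A \wedge B \mid C) = P(A \mid C)\,P(B \mid C), \qquad P(A \wedge B \mid \neg C) = P(A \mid \neg C)\,P(B \mid \neg C),##

together with ##P(A \mid C) > P(A \mid \neg C)## and ##P(B \mid C) > P(B \mid \neg C)##. Conditioning on the common cause removes the correlation, and it is this screening-off structure that the causal Markov condition generalizes to whole networks of variables.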
 
  • #60
Demystifier said:
In the first post I made a list of necessary assumptions. Which of them, in your opinion, is wrong?
The problem with that list is that it excludes the classicality assumption. I think the answer by Copenhagenists would just be that classicality, in the form of Kolmogorov probability, would have to be rejected, i.e. the world is non-classical and therefore classical probability doesn't apply anymore. You can find research about this in the works of Redei and Hofer-Szabo. They generalize the common cause principle to the case of quantum probability. It reduces to the classical notion in the case of commuting observables.
 
  • #61
Demystifier said:
In the first post I made a list of necessary assumptions. Which of them, in your opinion, is wrong?
I don't know about these philosophical assumptions, but classical physics is obviously wrong outside its validity range. It's an approximation of quantum physics with a limited validity range, while for QT the validity range is so far unknown (except for the fact that it cannot satisfactorily describe gravitation). That's why QT was discovered in 1925 (after 25 years of struggling with "old quantum theory").
 
  • #62
Nullstein said:
How can you test causality even in principle, if you aren't allowed to repeat experiments and draw conclusions from the statistics? If B follows A just once, it could always be just a coincidence.
Fine, but then probability is needed even to test classical deterministic theories.
 
  • #63
Nullstein said:
The problem with that list is that it excludes the classicality assumption.
The notion of "classicality" is too ambiguous to be mentioned on the list. In fact, any assumption on the list could be interpreted as a kind of "classicality".
Nullstein said:
I think the answer by Copenhagenists would just be that classicality, in the form of Kolmogorov probability, would have to be rejected,
That assumption is on the list, at least on the unnecessary one. But I see now that it is controversial whether it should be on the necessary or unnecessary list in the GHZ case, which is what we are discussing here.
 
  • #64
vanhees71 said:
I don't know about these philosophical assumptions
So you don't know what the problem is, but you know what the solution is. Lucky you! :-p
 
  • #65
Yes, the solution is to accept that Nature behaves as she behaves and that with the natural sciences we are able to find this out with astonishing detail and accuracy. The discovery of QT simply shows that a naive macroscopic worldview doesn't apply in the microscopic realm. That's not so surprising, is it? You don't need to make physics more complicated than it is by inventing some philosophical problems which cannot be unambiguously answered by the scientific method.

Bell's great achievement was to make such an ambiguous philosophical pseudo-problem decidable by the scientific method, i.e., by experiment, and all such Bell tests are in overwhelming favor of the predictions of quantum theory. So this problem is indeed solved, and one can move on to apply the solution to 21st-century technology.
 
  • Like
Likes dextercioby
  • #66
gentzen said:
I cannot believe that the meaning of "ontological theories" can be reduced to a mere instrumentalist "capacity to have effect"

Agree fully.

an oversimplistic functionalist criterion.

 
  • #67
vanhees71 said:
You know the answer: There's nothing weird about quantum theory. It's just our classical prejudice that's weird!
I think you are wrong about that. Unless you consider it prejudice to want a consistent theory.

Now, I don’t think that QM is inconsistent. But I think QM + no FTL influences + no backwards in time influences + no superdeterminism + no classical/quantum cut together are inconsistent.
 
  • Like
Likes Demystifier
  • #68
No, QM is consistent without backwards-in-time influences (by construction there are none of this kind in the theory) and without a classical/quantum cut (there's none known; quantum effects can be observed for ever larger systems). I don't know what superdeterminism is. For sure, QM is causal but indeterministic.
 
  • #69
Demystifier said:
Fine, but then probability is needed even to test classical deterministic theories.
Yes, I would argue so, at least if you're interested in uncovering causal relationships. Even in a deterministic theory, answering the question of whether a specific circumstance is the cause of an effect in the future requires analyzing multiple alternatives and evaluating their chance to influence the future.

For example, planetary motion is (in an idealized setting) completely deterministic. Now you could ask the question: Does the presence of the moon cause Earth to orbit the sun, or would Earth also orbit the sun if the moon weren't there? You then have to analyze the deterministic equations for different initial conditions, and you might come to the conclusion that the presence of the moon doesn't significantly alter the probability of Earth orbiting the sun. (There is even a celebrated result in this direction, known as the KAM theorem.)

Demystifier said:
The notion of "classicality" is too ambiguous to be mentioned on the list. In fact, any assumption on the list could be interpreted as a kind of "classicality".
That assumption is on the list, at least on the unnecessary one. But I see now that it is controversial whether it should be on the necessary or unnecessary list in the GHZ case, which is what we are discussing here.
I was thinking about classicality in the sense of Kolmogorovian probability, in contrast to e.g. quantum probability. That means that a classical state is characterized by an element of a set of states, and probabilities arise as probability distributions on this set.
 
  • Like
Likes dextercioby
  • #70
Nullstein said:
Kolmogorovian probability in contrast to e.g. quantum probability.
What do you mean by quantum probability? The Born rule satisfies Kolmogorov axioms of probability.
 
  • Like
Likes vanhees71
  • #71
Demystifier said:
What do you mean by quantum probability? The Born rule satisfies Kolmogorov axioms of probability.
Only for commuting observables. But in QM, it is possible for observables in the past not to commute with observables in the present. And then conditional quantum probabilities, conditioned on non-commuting past observables, behave differently than conditional classical probabilities (see e.g. Isham's book on quantum theory, sec. 8.3.2). That also led some people to study the possibility of non-commuting common causes in the past, as mentioned earlier.
 
  • #72
Nullstein said:
Only for commuting observables. But in QM, it is possible for observables in the past not to commute with observables in the present. And then conditional quantum probabilities, conditioned on non-commuting past observables, behave differently than conditional classical probabilities (see e.g. Isham's book on quantum theory, sec. 8.3.2). That also led some people to study the possibility of non-commuting common causes in the past, as mentioned earlier.
Perhaps I missed something, but as far as I can see, Isham distinguishes classical from quantum probability only by assuming that the former is not contextual. I think that's misleading. First, probability in classical physics can be contextual too (it's just not as common as in QM and cannot violate locality). Second, the Kolmogorov axioms do not contain anything like a no-contextuality axiom.
 
  • #73
Demystifier said:
Perhaps I missed something, but as far as I can see, Isham distinguishes classical from quantum probability only by assuming that the former is not contextual. I think that's misleading. First, probability in classical physics can be contextual too (it's just not as common as in QM and cannot violate locality). Second, the Kolmogorov axioms do not contain anything like a no-contextuality axiom.
No, that's not the essence of that section. What I was referring to was that the inequality of eqs. (8.18) and (8.20) shows that quantum conditional probability behaves differently than classical conditional probability:

"Thus quantum-mechanical conditional probabilities behave in a very
different way from classical ones. In particular, the classical probabilistic
distribution of the values of B does not depend on the fact that A has some
value, but the quantum-mechanical predictions of results of measuring B
do depend on the fact that an ideal measurement of A was performed
immediately beforehand."
 
  • #74
Nullstein said:
"Thus quantum-mechanical conditional probabilities behave in a very
different way from classical ones. In particular, the classical probabilistic
distribution of the values of B does not depend on the fact that A has some
value, but the quantum-mechanical predictions of results of measuring B
do depend on the fact that an ideal measurement of A was performed
immediately beforehand."
But that's just contextuality, as I said. And it does have classical analogs. For instance, in psychology, the measurement of intelligence (IQ) is contextual. The result may depend on many factors, e.g. how well the subject slept that night or whether his emotional intelligence (EQ) was measured immediately before. Similar examples exist in medicine. Even in pure physics, if the position of a classical object is measured by illuminating it with strong classical light, the recoil momentum transferred by the light slightly changes the object's momentum.

You may object that those classical contextual measurements are not ideal, while in QM contextuality manifests even for ideal measurements. But what's the definition of an "ideal measurement"? I would say that such an objection uses different definitions for classical and quantum measurements. With a single reasonable definition applicable to both classical and quantum, one would probably conclude that in quantum physics no measurement is ideal.
 
  • #75
I was going to argue that QM probabilities do violate the Kolmogorov axioms, but then I convinced myself that they don't. Certainly, the "collapse" interpretation of QM, although it is weird for other reasons, doesn't have weird probabilities. For any measurement you might perform, there is a well-defined probability for each possible measurement result. Noncommuting observables don't interfere with this.

So for example, if an electron is prepared to be spin-up in the z-direction, then the following conditional probabilities are well-defined:

  • The probability of getting spin-up given that I choose to measure spin in the z-direction.
  • The probability of getting spin-down given that I choose to measure spin in the x-direction.
  • Etc.

As long as all the base events are of the form "I get result R given that I perform measurement M", there is nothing weird about quantum mechanical probabilities.
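
To make this concrete: with the state prepared as spin-up along ##z##, the Born rule gives, for each choice of measurement, ##P(\text{up} \mid M_z) = |\langle \uparrow_z|\uparrow_z\rangle|^2 = 1## and ##P(\text{down} \mid M_x) = |\langle \downarrow_x|\uparrow_z\rangle|^2 = \tfrac{1}{2}##, and for each fixed choice of ##M## these numbers form a perfectly ordinary Kolmogorov distribution over the outcomes.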

Where quantum probabilities get weird is if you take the events to be of the form "the particle has spin-up in the x-direction, given that it has spin-up in the z-direction". The collapse interpretation doesn't give a meaning to such statements.
 
  • Like
Likes Killtech, PeterDonis and vanhees71
  • #76
There is a certain position on quantum mechanics that amounts to rejecting all the weird possibilities:
  1. Rejecting nonlocality (FTL influences)
  2. Rejecting back-in-time influences.
  3. Rejecting the idea that reality is subjective.
  4. Rejecting the idea that measurement causes wave function collapse (or minds or observation or anything)
  5. Rejecting the idea of many-worlds.
  6. Rejecting the idea of a classical/quantum cut. (Rejecting the idea of laws that apply only to macroscopic or only to microscopic phenomena)
I consider this the "no-nonsense interpretation of quantum mechanics". It's admirable in many ways, but I really believe that it is inconsistent, which I can summarize as "no-nonsense is nonsense".
 
Last edited:
  • Like
Likes Demystifier
  • #77
stevendaryl said:
Where quantum probabilities get weird is if you take the events to be of the form "The probability that the particle has spin-up in the x-direction, given that it has spin-up in the z-direction". The collapse interpretation doesn't give a meaning to such statements.
But isn't this exactly what the quantum probabilities are about? You have states and observables. The operational definition of a state is a preparation procedure (or an equivalence class of preparation procedures). E.g., you use an accelerator to accelerate a bunch of protons to 7 TeV energy (within some range of accuracy). This determines, within the range of accuracy, the momentum of these protons, and you let them collide with a bunch of other such prepared protons running in the opposite direction (aka the LHC @ CERN). The observables are operationally defined by measurement devices; at CERN these are the detectors which collect data on all the many particles produced in the pp collisions. This you repeat very many times, and you get statistics about the outcomes of measuring observables of the produced particles. So indeed what you do is measure probabilities for a given preparation of your measured system, and that's what a well-defined probability description should do.

Of course this holds for your Stern-Gerlach experiment (SGE) too: what you describe is that you prepare with one SG magnet a particle beam with spin-up wrt. the ##z##-component of the spin and then measure the probability to get spin up or down wrt. the ##x##-component of the spin. There's indeed no need for a collapse or anything else outside the physical core of the theory. You just prepare particles (defining their pure or mixed state; usually it's the latter) and measure observables on an ensemble of such prepared particles, in order to compare the predicted probabilities for finding the possible values of the measured observables by the usual statistical means (applying, of course, standard probability theory à la Kolmogorov).
 
  • #78
I have said this before, but the reason I say that the "no-nonsense interpretation" is inconsistent is really about measurements and probability. On the one hand, according to the no-nonsense interpretation, there is nothing special about measurements. They are just physical interactions that are in principle describable by quantum mechanics. On the other hand, according to the no-nonsense interpretation, the only meaning to the wave function is to give probabilities for measurement results. To me, those two statements are just contradictory. The first denies a special role for measurement, and the second affirms it.
 
  • #79
stevendaryl said:
There is a certain position on quantum mechanics that amounts to rejecting all the weird possibilities:
  1. Rejecting nonlocality (FTL influences)
  2. Rejecting back-in-time influences.
  3. Rejecting the idea that reality is subjective.
  4. Rejecting the idea that measurement causes wave function collapse (or minds or observation or anything)
  5. Rejecting the idea of many-worlds.
  6. Rejecting the idea of a classical/quantum cut. (Rejecting the idea of laws that apply only to macroscopic or only to microscopic phenomena)
I consider this the "no-nonsense interpretation of quantum mechanics". It's admirable in many ways, but I really believe that it is inconsistent, which I can summarize as "no-nonsense is nonsense".
I agree it's inconsistent, but I think @vanhees71 thinks it isn't. How does he avoid the inconsistency? By adding one more principle to the list:
7. Rejecting the idea that philosophical arguments (which are needed to derive a contradiction) are relevant to science.
 
  • #80
Demystifier said:
I agree it's inconsistent, but I think @vanhees71 thinks it isn't. How does he avoid the inconsistency? By adding one more principle to the list:
7. Rejecting the idea that philosophical arguments (which are needed to derive a contradiction) are relevant to science.
Philosophers treat contradictions as these terrible things: "Oh no! Your system is inconsistent! From an inconsistency, you can derive anything! That makes your system worthless!"

But no-nonsense people are not really bothered by contradictions. That's because they don't just apply what they hold to be true willy-nilly. They have informally demarcated domains of applicability for anything that they claim to be true. The contradictions that philosophers worry about are only relevant when you apply something that is true in one domain to an inappropriate domain.

The big contradiction in the no-nonsense view is the attitude toward measurements. On the one hand, measurements are considered ordinary interactions, described by quantum mechanics. On the other hand, measurements play a special role in quantum mechanics, in that quantum mechanics is taken to only give probabilities for measurement results. I think these two beliefs are contradictory. But they don't cause any problems for the practical-minded physicist. Depending on the problem at hand, you either treat measurements as special, or you treat them as ordinary interaction. You don't do both at the same time.

Bohr understood this, I think.
 
  • Like
Likes eloheim, Fra and Demystifier
  • #81
Demystifier said:
I agree it's inconsistent, but I think @vanhees71 thinks it isn't. How does he avoid the inconsistency? By adding one more principle to the list:
7. Rejecting the idea that philosophical arguments (which are needed to derive a contradiction) are relevant to science.
What's concretely inconsistent? I don't see anything inconsistent when one just ignores all the metaphysical ballast mentioned in the list. Indeed, science works best when taking out all the "weirdness" and concentrating on what's observed in the real world/lab (operational definitions of preparation and measurement procedures) and on how this is described within the theory (mathematical formulation), and not mixing up the one side with the other.
 
  • #82
Demystifier said:
But that's just contextuality, as I said. And it does have classical analogs. For instance, in psychology, the measurement of intelligence (IQ) is contextual. The result may depend on many factors, e.g. how well the subject slept that night or whether his emotional intelligence (EQ) was measured immediately before. Similar examples exist in medicine. Even in pure physics, if the position of a classical object is measured by illuminating it with strong classical light, the recoil momentum transferred by the light slightly changes the object's momentum.

You may object that those classical contextual measurements are not ideal, while in QM contextuality manifests even for ideal measurements. But what's the definition of an "ideal measurement"? I would say that such an objection uses different definitions for classical and quantum measurements. With a single reasonable definition applicable to both classical and quantum, one would probably conclude that in quantum physics no measurement is ideal.
Well, it's a related but different thing here. Usually, when you have two non-commuting observables, you can't measure both of them, so the fact that they cannot both have well-defined values due to their non-commutativity doesn't cause any problems. Think of the SG experiment, where you can align the detector only along one axis, which makes it impossible to measure the spin along another axis at the same time. QM also doesn't assign probabilities to such joint observations of incompatible observables.

However, Isham discusses a situation where you make measurements of incompatible observables at different times. Since they are incompatible, they can't in general both have well-defined values. However, it's perfectly possible to measure incompatible observables one after another and obtain these values. In this case, QM does assign probabilities to the joint observation, and this is where a counterexample to the classical behavior of conditional probabilities can be constructed.
 
  • #83
Nullstein said:
Well, it's a related but different thing here. Usually, when you have two non-commuting observables, you can't measure both of them, so the fact that they cannot both have well-defined values due to their non-commutativity doesn't cause any problems. Think of the SG experiment, where you can align the detector only along one axis, which makes it impossible to measure the spin along another axis at the same time. QM also doesn't assign probabilities to such joint observations of incompatible observables.

However, Isham discusses a situation where you make measurements of incompatible observables at different times. Since they are incompatible, they can't in general both have well-defined values. However, it's perfectly possible to measure incompatible observables one after another and obtain these values. In this case, QM does assign probabilities to the joint observation, and this is where a counterexample to the classical behavior of conditional probabilities can be constructed.
Can you describe Isham's situation? Or post a link?
 
  • #84
stevendaryl said:
Can you describe Isham's situation? Or post a link?
It's in Isham's book "Lectures on Quantum Theory", section 8.3.2. It's a few pages long, but I can try to condense it:

He discusses the situation where you perform a measurement of the observable ##A## and later the measurement of the observable ##B##. He computes the probabilities ##P(B=b_n|A=a_m)##, ##P(A=a_m)## and ##P(B=b_n)## using quantum mechanics. Classically, you would expect that ##P(B=b_n)=\sum_m P(B=b_n|A=a_m)P(A=a_m)##, but in quantum mechanics, this relation is violated if ##A## and ##B## don't commute.

Normally, it's not a problem if ##A## and ##B## don't commute, because you can't measure them simultaneously anyway, so a probability like ##P(B=b_n|A=a_m)## is meaningless. However, we don't have that excuse if ##A## and ##B## are observables at different times.

As an example, consider some Hamiltonian ##H## with time evolution ##U(t)##. Then take ##\hat A=\hat x## and ##\hat B=\hat x(t)=U^\dagger(t) \hat x U(t)##.
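
Not from Isham's book, but here is a minimal numpy sketch of the same structure (my own toy example: ##\hat A=\sigma_x## measured first as an ideal projective measurement, ##\hat B=\sigma_z## measured afterwards, initial state spin-up along ##z##), showing the classical sum rule failing:

```python
import numpy as np

# Pauli observables: A = sigma_x (measured first), B = sigma_z (measured later)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def projectors(op):
    """Return {eigenvalue: projector} for a Hermitian 2x2 operator."""
    vals, vecs = np.linalg.eigh(op)
    return {int(round(v.real)): np.outer(vecs[:, i], vecs[:, i].conj())
            for i, v in enumerate(vals)}

psi = np.array([1, 0], dtype=complex)  # spin-up along z
PA, PB = projectors(sx), projectors(sz)

# P(B = +1) computed directly, with no intervening A measurement:
p_direct = np.vdot(psi, PB[1] @ psi).real  # = 1.0

# Law of total probability over an ideal (projective) A measurement:
# sum_a P(B = +1 | A = a) P(A = a), using collapse after outcome a.
p_total = 0.0
for a, Pa in PA.items():
    p_a = np.vdot(psi, Pa @ psi).real        # P(A = a) = 1/2
    psi_a = Pa @ psi / np.sqrt(p_a)          # state after outcome a
    p_total += p_a * np.vdot(psi_a, PB[1] @ psi_a).real

print(p_direct, p_total)  # 1.0 vs 0.5: the classical relation fails
```

The direct probability is 1, while the sum over intermediate ##A## outcomes gives 1/2, precisely because ##\sigma_x## and ##\sigma_z## don't commute.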
 
  • #85
Nullstein said:
Classically, you would expect that ##P(B=b_n)=\sum_m P(B=b_n|A=a_m)P(A=a_m)##, but in quantum mechanics, this relation is violated if ##A## and ##B## don't commute.

There is a subtlety here, which is that the formula ##P(B=b_n)=\sum_m P(B=b_n|A=a_m)P(A=a_m)## implicitly assumes that it is possible to measure ##A## without disturbing the system. If you include the possibility that measuring ##A## disturbs the system, then you really have to make a distinction between the system having value ##a_m## for observable ##A## and an ##A## measurement producing the result ##a_m##.

So we don't really have ##P(A = a_m)## but something like ##P(A = a_m | M_A)##. The probability that the measurement result is ##a_m## given that the experimenter performed measurement ##M_A##. Similarly for observable ##B##. Then we have two different probabilities:

##P(B=b_n | M_B \wedge M_A)## (if there was a previous measurement of ##A##)
##P(B=b_n | M_B \wedge \neg M_A)## (if there was not).

The classical probability rules would tell you that:

##P(B=b_n | M_B \wedge M_A) = \sum_m P(B=b_n | M_B \wedge M_A \wedge A = a_m) P(A = a_m | M_A)##

But you would not necessarily have ##P(B=b_n | M_B \wedge M_A) = P(B=b_n | M_B \wedge \neg M_A)##

Of course, the "disturbance" interpretation of noncommuting observables is itself problematic, because it implies nonlocality, but it doesn't by itself violate the rules of probability.
 
  • Like
Likes Demystifier
  • #86
stevendaryl said:
There is a subtlety here, which is that the formula ##P(B=b_n)=\sum_m P(B=b_n|A=a_m)P(A=a_m)## implicitly assumes that it is possible to measure ##A## without disturbing the system. [...]

Of course, the "disturbance" interpretation of noncommuting observables is itself problematic, because it implies nonlocality, but it doesn't by itself violate the rules of probability.
Yes, I agree that disturbances are one possibility to interpret it. But then you need to keep track of the measurements by hand; it doesn't come out of the formalism or the Born rule. Moreover, one can't cure it by describing the disturbances quantum mechanically, because the more detailed description will suffer from the same problem, just at a different level. Another way to interpret it is that quantum probabilities are generalizations of classical probabilities that don't always respect the classical rules.

Basically, you are faced with the following choice:
  1. Either you accept that measurements disturb the system. But disturbance should be a physical process, so it should be described by some physical theory. This theory cannot be quantum mechanics itself, because a more detailed quantum theory will not fix the problem once and for all. It will reappear at a different level.
  2. Or you want to stick to the idea that quantum theory describes everything. But then you must give up classical probabilities.
 
Last edited:
  • Like
Likes akvadrako and Demystifier
  • #87
Nullstein said:
Yes, I agree that disturbances are one possibility to interpret it. But then you need to keep track of the measurements by hand; it doesn't come out of the formalism or the Born rule. Moreover, one can't cure it by describing the disturbances quantum mechanically, because the more detailed description will suffer from the same problem, just at a different level. Another way to interpret it is that quantum probabilities are generalizations of classical probabilities that don't always respect the classical rules.

Basically, you are faced with the following choice:
  1. Either you accept that measurements disturb the system. But disturbance should be a physical process, so it should be described by some physical theory. This theory cannot be quantum mechanics itself, because a more detailed quantum theory will not fix the problem once and for all. It will reappear at a different level.
  2. Or you want to stick to the idea that quantum theory describes everything. But then you must give up classical probabilities.
Then I choose 1. Quantum theory in its minimal form is incomplete; one should add something, like objective collapse (à la GRW) or additional variables (à la Bohm).

But for this thread, option 2 is more interesting.
 
  • #88
stevendaryl said:
Philosophers treat contradictions as these terrible things: "Oh no! Your system is inconsistent! From an inconsistency, you can derive anything! That makes your system worthless!"
Would you then count mathematical logicians as philosophers?
 
  • #89
Demystifier said:
Would you then count mathematical logicians as philosophers?
Sure. My point is that physicists, unlike mathematicians, don’t consider inconsistencies fatal. Probably our best theories are inconsistent, but work well enough in a limited domain.
 
  • Like
Likes Demystifier
  • #90
stevendaryl said:
The big contradiction in the no-nonsense view is the attitude toward measurements. On the one hand, measurements are considered ordinary interactions, described by quantum mechanics. On the other hand, measurements play a special role in quantum mechanics, in that quantum mechanics is taken to only give probabilities for measurement results. I think these two beliefs are contradictory.
I agree with putting this in focus.

stevendaryl said:
But they don't cause any problems for the practical-minded physicist. Depending on the problem at hand, you either treat measurements as special, or you treat them as ordinary interaction. You don't do both at the same time.

Bohr understood this, I think.
I suspect a lot of theoretical physicists can't be accused of being practical-minded, though. So it remains a problem.

I fully agree with the idea that inconsistencies are not fatal.

But I see hope for using this to make progress in the context of ideas about the evolution of law. For any learning agent, an inconsistency can be understood as a challenge to its own inference system: current observations are not consistent with the a priori expectations, and this challenges the agent. It must revise, or risk destabilisation. One can probably define a measure of this inconsistency, and as long as it's small, progress is made.

This also connects to the issue Popper faced when he tried to make science clean and purely deductive, with no fuzzy induction etc. The inconsistency in that idea is that while falsification is deductive, hypothesis generation is not. Applied to the foundations of QM, I like to ask, for example: how does nature itself resolve "inconsistencies" in the evolutionary perspective? We like to think that inconsistencies are not stable and inconsistent agents are not likely to dominate, so they are removed by Darwinian selection. The result is an approximately mutually consistent system.

I think the "inconsistencies" we partly agree on are not something a responsible theorist should let pass. That's not to suggest, however, that one can't simultaneously acknowledge that inconsistencies in themselves cannot be avoided and are not terminal failures in any way. They are rather just food for those of us who are less practically minded.

/Fredrik
 
  • #91
stevendaryl said:
Sure. My point is that physicists, unlike mathematicians, don’t consider inconsistencies fatal. Probably our best theories are inconsistent, but work well enough in a limited domain.

For a mathematician, his subject is defined by the theory. So if that theory is inconsistent, what are they talking about? But for a physicist, the ultimate subject matter is defined by observations. Theories are just tools. If the tools are flawed (inconsistent), then you just learn to work with their flaws.
 
  • Like
Likes Fra and Demystifier
  • #92
stevendaryl said:
Theories are just tools. If the tools are flawed (inconsistent), then you just learn to work with their flaws.
And I hope, work to improve them?
/Fredrik
 
  • #93
stevendaryl said:
Theories are just tools. If the tools are flawed (inconsistent), then you just learn to work with their flaws.
But tools for what? If they are tools for making predictions, then it's OK. But physicists also use theories for conceptual understanding. So if the theory is inconsistent, then conceptual understanding can also be inconsistent. And if conceptual understanding is inconsistent and the physicist is aware of it, then the tool does not fulfill its purpose.
 
  • Like
Likes Fra
  • #94
stevendaryl said:
I'm not sure I understand the point you're making, but I think I agree that the weirdness of quantum mechanics is not its nonlocality. The only relevance of nonlocality is that it shows that that weirdness can't easily be explained in terms of a nonweird "deeper theory".
I realize that was kind of redundant; I agree with what you write.

I just meant to say that it's the application of Reichenbach's Common Cause Principle that seems out of place. It does not follow that the probability spaces that Alice and Bob, alone and together, can represent via ensembles constitute the common probability space that is presumed in the RCC. Here lies the old pathological expectation of how causation "must work", which apparently is not how it works in nature.

/Fredrik
 
  • #95
lugita15 said:
Anyway, what would you say about counterfactual definiteness?
In the meantime, I have changed my mind on that. I think it is a vague term with at least 3 different meanings: micro realism, macro realism and determinism. Since all 3 are already included in my two lists, counterfactual definiteness should not be added to the list.
 
  • #96
Demystifier said:
Then I choose 1. Quantum theory in its minimal form is incomplete; one should add something, like objective collapse (à la GRW) or additional variables (à la Bohm).

But for this thread, option 2 is more interesting.
So the relevance for this thread would then be that the law of total probability should be among the necessary assumptions.
 
  • Like
Likes Demystifier
  • #97
The law of total probability assumes that the summation index/integration variable (the hidden-variable parameter space) constitutes a proper partition of the event space. If the hidden variables don't form a pairwise disjoint partition, the law of total probability does not hold. As the law of total probability is a key construct in a "sum over paths" approach, assumptions about this partition are equivalent to assumptions about the intermediate interactions and thus, by conceptual extension, about the Hamiltonian. And it is this assumption/ansatz of Bell's theorem that seems to me hard to defend, even a priori. The only vague defense is the case where the hidden variable is not strictly isolated from the environment but rather known, and merely hidden from the experimenter; then this form makes more sense. I.e., nothing prevents a "hidden variable" mechanism that does not constitute a partition and thus doesn't obey the premise of Bell's theorem. This is because transitions in QM seem to make use of "several paths at once", not just one at a time, simply weighted.
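
As a standard illustration of a non-partition (interfering) set of intermediate alternatives, think of the two-slit setup. Classically one would expect ##P(x) = P(x\mid 1)P(1) + P(x\mid 2)P(2)##, but with ##\psi_i## the amplitude through slit ##i##, QM gives

##P(x) = |\psi_1(x) + \psi_2(x)|^2 = |\psi_1(x)|^2 + |\psi_2(x)|^2 + 2\,\mathrm{Re}\!\left[\psi_1^*(x)\,\psi_2(x)\right],##

and the cross term is precisely the failure of the law of total probability over the "which slit" alternatives.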

This has been my point in previous posts as well: to distinguish between a really HIDDEN variable and mere ignorance on the part of the experimenter.

I think that Demystifier's solipsistic HV is one example of such a possibility. As I think of it, the hidden variable is hidden simply because it's subjective to the agent. It nevertheless rules the actions of that agent, but in a way that looks "weird" to an outside observer, and this contributes to the total "interaction mechanism" in QM.

/Fredrik
 
  • Like
Likes Demystifier
  • #98
Fifty years of Bell’s theorem | CERN (home.cern)

Fifty years of Bell’s theorem

A paper by John Bell published on 4 November 1964 laid the foundations for the modern field of quantum-information science

4 NOVEMBER, 2014

By Christine Sutton

On 4 November 1964, a journal called Physics received a paper written by John Bell, a theoretician from CERN. The journal was short-lived, but the paper became famous, laying the foundations for the modern field of quantum-information science.

The background to Bell’s paper goes back to the late 1930s and Albert Einstein’s dislike of quantum mechanics, which he argued required “spooky actions at a distance”. In other words, according to quantum theory, a measurement in one location can influence the state of a system in another location. Einstein believed this appeared to occur only because the quantum description was not complete.

The argument as to whether quantum mechanics is indeed a complete description of reality continued for several decades until Bell, who was on leave from CERN at the time, wrote down what has become known as Bell’s theorem. Importantly, he provided an experimentally testable relationship between measurements that would confirm or refute the disliked “action at a distance”.

In the 1970s, experiments began to test Bell’s theorem, confirming quantum theory and at the same time establishing the base for what has now become a major area of science and technology, with important applications, for example, in quantum cryptography.

Bell was born in Belfast, where he attended Queen’s University, eventually coming to CERN in 1960. For much of November the university is hosting a series of events in celebration of his work, including an exhibition, Action at a Distance: The Life and Legacy of John Stewart Bell, with photographs, objects and papers relating to Bell’s work alongside videos exploring his science and legacy. He sadly died suddenly in 1990. Now the Royal Irish Academy is calling for 4 November to be named "John Bell Day" (http://www.ria.ie/john-bell-day.aspx).

For more

The original paper: “On the Einstein Podolsky Rosen Paradox” by J S Bell [PDF]

 