QM Uncertainty: Uncovering Hidden Variables

In summary, the thread discusses a thought experiment involving a Geiger counter and a computer that generates a binary output from random decay events. The experiment is well known and produces a binomial distribution of 0s and 1s. Some participants ask why this happens instead of a completely unstructured output, and one suggests that the Lorentz invariance of quantum electrodynamics may play a role in the result. Others suggest that the sample size may also influence the outcome. The possibility of hidden variables is raised as well, but it is argued that they would not affect the outcome of this experiment.
  • #1
DMuitW
To date, no underlying hidden variables that determine what is called quantum uncertainty have been demonstrated.

Consider:

We have a setup that includes a Geiger counter and a computer.
The computer has a button which, when pressed, starts a clock. The computer is attached to the Geiger counter, which, if it detects a (random) electron decay, sends a signal to the computer to stop the clock. Let's say that if the clock stops on an even 1/1000th-second count it generates a 0, and if it stops on an odd count it generates a 1.

This experiment is well known and, if repeated long enough, will in the long run generate almost as many 0s as 1s (a binomial distribution). There is room for deviations, but they usually stay within a narrow range.

My question now is: WHY does this experiment, which should depend on truly RANDOM quantum mechanical events (QM as total chaos), generate a quite clear binomial distribution and, in theory, almost as many 0s as 1s in the long run, instead of a totally random output?
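
A minimal simulation sketch of the setup described above may make the question concrete. It assumes the standard exponential model for decay waiting times in a Poisson process; the rate and the number of trials are arbitrary illustrative choices, not values from the thread:

```python
import random
from collections import Counter

RATE = 1.0  # assumed mean decay rate (events per second); arbitrary choice

def one_bit():
    # Wait for one decay; waiting times in a Poisson process (such as
    # radioactive decay) are exponentially distributed.
    t = random.expovariate(RATE)
    ms_bin = int(t * 1000)   # which 1/1000th-second bin the clock stopped in
    return ms_bin % 2        # even bin -> 0, odd bin -> 1

bits = [one_bit() for _ in range(100_000)]
print(Counter(bits))  # close to 50,000 each, with binomial-sized fluctuations
```

The split comes out near 50/50 because the mean waiting time spans many 1/1000th-second bins, so the parity of the stopping bin is almost exactly unbiased; the residual scatter is exactly the binomial deviation described above.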
 
  • #2
I think that to best appreciate this phenomenon, you need to do a brute-force analysis of a small version of it.

Each decay has an equal probability of producing a 0 or a 1. Therefore, if you consider six consecutive decays, you get a binary number ranging from 000000 to 111111, with an equal probability for each number within that range (000000, 000001, 000010, etc.).

Write down all 64 six-bit binary numbers, and count how many of them have one 1, two 1's, three 1's, etc. How many 1's are most likely?

Note that there's nothing particularly quantum-mechanical about this phenomenon at all! We could just as well toss an evenly-balanced coin to decide between 0 and 1.
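
A sketch of that brute-force tabulation, for anyone who prefers to let a script do the counting (any language works; this is just one way):

```python
from collections import Counter

# Tally how many of the 64 equally likely six-bit numbers contain
# zero 1's, one 1, two 1's, and so on.
counts = Counter(bin(n).count("1") for n in range(64))
for ones in sorted(counts):
    print(ones, "ones:", counts[ones], "numbers")
# Output: 0:1, 1:6, 2:15, 3:20, 4:15, 5:6, 6:1.
# Three 1's is the most likely count, and the tallies are the
# binomial coefficients C(6, k) -- the peaked shape asked about above.
```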
 
  • #3
I am not sure that this has anything to do with hidden variables -- the statistics quoted are just normal statistics. Hidden-variable theories say that something is already predetermined even if YOU do not know the outcome.
What quantum mechanics says is that only when you make your choice is the outcome THEN determined -- until then there is NO determined outcome.
I do not know if the Bell test is totally accepted, but the contention is this -- the past does not affect the future; it says that causality is at risk, whereas the normal interpretation is 'you may not know', but nevertheless effect follows cause.
The result is that effects do not follow causes but are indeterminate, with some probability. If you think about this you will see that this is the only possible answer to free will -- if not, your choices are predetermined, even if you do not know it.
How about this scenario -- your ideas are random (totally), but they get fed into a machine which has learned from past experience and is capable of weighing the odds of following the new idea versus the old idea -- that is, with all sorts of weighted emotions. I have a feeling that this would quite closely mimic human response (BUT with a billion years of experience).
Ray (I have no real clue -- but I believe in measurements, especially when repeated by independent groups). There is no reason to think that normal human experience has educated us as to what to expect of physics -- that's a few hundred years compared with 14 billion.
Yours Ray .
 
  • #4
I would think that the result that is found (that there are very nearly as many 0s as 1s if the experiment is run long enough) is just a consequence of the Lorentz invariance of quantum electrodynamics!

In other words, all other variables being the same (detector type, sample type, etc.), it should not make any difference WHEN the experiment is conducted; you would have to expect nearly the same result, indeed perhaps exactly the same number of 1s and 0s if the sample were ALL decayed. But I would not know how to prove this; this is just my guess.

Again, my guess is that if there were not relatively the same number of 0s and 1s found in conducting such an experiment, then the Lorentz invariance of quantum electrodynamics would somehow have to fail. But this is just my "hunch"!
What does anybody think of this?
love and peace,
and,
peace and love,
(kirk) kirk gregory czuhai
http://www.altelco.net/~lovekgc/kl.htm [Broken]
 
  • #5
Of course, there's more to beta decay than that, is there not?
Oh, I know so little physics! The spontaneous symmetry breaking involved in beta decay.
How to use this, QED, and quantum statistics to possibly prove that one would get the same number of 1s and 0s in this type of experiment IS WAY BEYOND ME!
peace and love,
and,
love and peace,
kirk
 
  • #6
Maybe what I have said so far has nothing to do with the situation at all, except in the case where one has many radioactive atoms of an isotope, as one generally has.
I wonder if the experiment has ever been done for a really small sample size, say of several thousand atoms? Or a hundred? Is this possible, and does one then still get the same number of 1s and 0s?
peace and love,
and,
love and peace,
kirk
http://www.cosmicfingerprints.com/audio/newevidence.htm
 
  • #7
A totally random output with only two possibilities will give you equal amounts of each possibility if it is truly random and continued long enough.

Each event does not have the same probability set, but the probability supersets for each outcome are the same over the long term.

The possibility of hidden variables arises in the exact temporal structure of each superset. In this case, they will not affect the outcome.

juju
 
  • #8
DMuitW said:
My question now is: WHY does this experiment, which should depend on truly RANDOM quantum mechanical events (QM as total chaos), generate a quite clear binomial distribution and, in theory, almost as many 0s as 1s in the long run, instead of a totally random output?

Not sure I understand the paradox you are trying to identify. You take randomly occurring phenomena and map them to numbers, giving a distribution consistent with random output?

Where is the difference between the expected values and the observed values?
 
  • #9
Let's see what I remember about nuclear decay!
Imagine the following experiment: we have a radioactive isotope with a half-life of 4 days. If we have a lot of atoms of it, very close to half of them will have decayed after exactly four days.

But if we have only one atom, we cannot say that! According to quantum physics, a single atom could decay a tenth of a second, or four years, or some other time after the beginning of the experiment.
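
A quick numerical sketch of both statements, assuming only the exponential-decay law with the 4-day half-life from the example; the sample sizes are arbitrary:

```python
import math
import random

HALF_LIFE = 4.0                   # days, as in the example above
LAM = math.log(2) / HALF_LIFE     # decay constant

# Many atoms: the fraction surviving to t = 4 days should be ~1/2.
N = 1_000_000
survive_p = math.exp(-LAM * 4.0)  # per-atom survival probability = 0.5
survivors = sum(random.random() < survive_p for _ in range(N))
print(survivors / N)              # very close to 0.5

# A single atom: its decay time is exponentially distributed and can
# land almost anywhere -- well before or well after the half-life.
print([round(random.expovariate(LAM), 3) for _ in range(5)])  # times in days
```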

IT IS FROM THIS FACT OF QUANTUM MECHANICS, I THINK -- that even though the half-life of a radioisotope can be known, the time of an individual atom's decay is NOT -- that an equal number of 0s and 1s will appear if the experiment is performed for a long enough time in the case where one has many atoms of the radioactive isotope.

Plus, I think there may be some effect from the experimental apparatus placing the data in histogram bins of 1/1000th of a second.

All the stuff I wrote before about QED, etc., I do not feel is necessary to explain the results found in experiments; just what is stated here in this reply. I ALMOST think I could have nearly shown this at one time, using what I knew about the half-life formulae and considering an experiment with a large number of radioactive atoms of an isotope, run for a time period much longer than the isotope's half-life.

Well! I am probably just proving to everybody how little I know! I thought maybe something I have written on this topic might help in a tiny way; sorry if it has not.
peace and love,
and,
love and peace,
kirk
http://www.altelco.net/~lovekgc/PrincessLittleHoney.htm [Broken]
 
  • #10
rayjohn01 said:
What quantum mechanics says is that only when you make your choice is the outcome THEN determined -- until then there is NO determined outcome.
I do not know if the Bell test is totally accepted, but the contention is this -- the past does not affect the future; it says that causality is at risk, whereas the normal interpretation is 'you may not know', but nevertheless effect follows cause.
Incorrect. This is a common fallacy in interpreting the results of QM. What (the results of) QM “says” is that the outcomes of experiments are epistemically indeterminable. Only certain strange interpretations of QM (notably the Copenhagen interpretation) equate this with ontic indeterminism. It is important to understand that epistemic indeterminability does not necessarily imply ontic indeterminism.

rayjohn01 said:
If you think about this you will see that this is the only possible answer to free will -- if not, your choices are predetermined, even if you do not know it.
(1) As explained above, QM is not necessarily indeterministic, hence this is incorrect.
(2) Even if QM were ontically indeterministic, this would be a source of randomness, but how could this be a source of “naïve” free will? Think about it (and see below).

rayjohn01 said:
How about this scenario -- your ideas are random (totally), but they get fed into a machine which has learned from past experience and is capable of weighing the odds of following the new idea versus the old idea -- that is, with all sorts of weighted emotions. I have a feeling that this would quite closely mimic human response.
You are incorrect in thinking this somehow generates “naïve free will”.
Think about it. Why do the “ideas” in this model need to be random? Feed the same machine with a selection of “non-random” ideas, and the same machine (which has learned from past experience and is capable of weighing odds etc) can still decide which “ideas” to follow-up. The introduction of a random element into the generation of the “ideas” in the first place adds nothing to the “free will” of the machine (whatever "free will" might be).

MF
:smile:
 
  • #11
Thanks for the replies; I'll try to make my question a little less vague.

Let's say we use a quantum effect, i.e. the decay of an electron, which according to the uncertainty principle should be totally random. Map this effect to a number generator, so that the random effect generates 0s or 1s. WHY is it that in the long run the chances will be distributed 0.5 for 0 and 0.5 for 1? Something that occurs totally at random thus produces an output that is still random (the actual experiment fluctuates around the theoretical distribution), but with far less freedom to deviate?

Also, in answer to rayjohn01's response: if it is as you say, and everything is undetermined (due to quantum superposition, which says that the outcome only comes into existence when the experiment is conducted), WHY and HOW does something so utterly uncertain generate everything we know in such a consistent way (from matter to...)?

Applying this to the experiment given: WHAT determines, under the superposition quantum uncertainty principle, that what is produced will always follow a consistent distribution??

Hope I cleared some things up about my question,

Thanks!
 
  • #12
DMuitW said:
Let's say we use a quantum effect, i.e. the decay of an electron, which according to the uncertainty principle should be totally random. Map this effect to a number generator, so that the random effect generates 0s or 1s. WHY is it that in the long run the chances will be distributed 0.5 for 0 and 0.5 for 1? Something that occurs totally at random thus produces an output that is still random (the actual experiment fluctuates around the theoretical distribution), but with far less freedom to deviate?

It is a little difficult to understand what you want to say/know. First of all, I think you need to say whether you understand classical probabilities (the frequentist view) and the experimental results they give (such as the binomial law for independent trials of a random variable).
If you understand this topic, then you can view, formally, the statistical results of a quantum measurement experiment (the statistics of one observable, e.g. the energy of the electron, P(E=e)) as the statistical results of a classical experiment with outcome probabilities (P = P(E=e), 1-P), and therefore recover binomial laws (for sequences of n trials) or simply the law P (as n -> +infinity).
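
As an illustration of the binomial concentration described here, a sketch with p = 1/2 and arbitrary trial counts: the probability that the observed frequency of 1s falls within 5% of 1/2 grows toward certainty as n increases.

```python
import math

def binom_pmf(n, k, p=0.5):
    # Probability of exactly k successes in n independent trials.
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

for n in (10, 100, 1000):
    # Probability that the observed frequency of 1's falls in [0.45, 0.55].
    lo, hi = math.ceil(0.45 * n), math.floor(0.55 * n)
    mass = sum(binom_pmf(n, k) for k in range(lo, hi + 1))
    print(n, round(mass, 4))
# Prints roughly 0.246, 0.729, 0.999: the frequentist limit,
# the law P (n -> infinity) referred to above.
```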

Seratend.
 
  • #13
moving finger said:
Incorrect. This is a common fallacy in interpreting the results of QM. What (the results of) QM “says” is that the outcomes of experiments are epistemically indeterminable. Only certain strange interpretations of QM (notably the Copenhagen interpretation) equate this with ontic indeterminism. It is important to understand that epistemic indeterminability does not necessarily imply ontic indeterminism.

Well, you can say this... a lot of philosopher-types do...

But the experiments speak strongly against this view. As a general rule, a single counter-example is sufficient to disprove any theory. The results of EPR/Bell tests are a counter-example to the hypothesis that particle attributes have determinate (ontic) values independent of their (epistemic) observation.

There are "some" interpretations of these results in which ontic determinism is still viable, such as Bohmian mechanics. However, such interpretations are not generally accepted at this time.
 
  • #14
DrChinese said:
Well, you can say this... a lot of philosopher-types do...

But the experiments speak strongly against this view. As a general rule, a single counter-example is sufficient to disprove any theory. The results of EPR/Bell tests are a counter-example to the hypothesis that particle attributes have determinate (ontic) values independent of their (epistemic) observation.

There are "some" interpretations of these results in which ontic determinism is still viable, such as Bohmian mechanics. However, such interpretations are not generally accepted at this time.


Oh, the irony. If "a single counter-example is sufficient to disprove any theory", then wouldn't the example of Bohmian mechanics -- which you are obviously aware of -- be a counter-example to the "theory" that "experiments speak strongly against" the failure of determinism?

Oh, right, Bohmian mechanics isn't "generally accepted at this time" so I guess we can just ignore this counterexample. Everyone else is doing it...

I think it would be much more accurate to simply state the truth: for some bizarre philosophical or historical reasons, the founding fathers of quantum mechanics loved and latched onto the idea that determinism failed. And many, many people have followed them. But there is an explicit counter-example to the claim that this was necessitated by the evidence. This proves one thing and one thing only -- whatever reasons people had for believing in the failure of determinism were not based on conclusive physical evidence, but something else.

On this point, I would highly recommend the book "Quantum Mechanics: Historical Contingency and the Copenhagen Hegemony" by (the late) Jim Cushing.

ttn
 
  • #15
ttn said:
Oh, the irony. If "a single counter-example is sufficient to disprove any theory", then wouldn't the example of Bohmian mechanics -- which you are obviously aware of -- be a counter-example to the "theory" that "experiments speak strongly against" the failure of determinism?

Oh, right, Bohmian mechanics isn't "generally accepted at this time" so I guess we can just ignore this counterexample. Everyone else is doing it...

Bohmian Mechanics is not a counter-example to QM/CI. It is an alternative in which causality "may" be restored at the cost of locality. Just as Bell discovered that local reality and QM are incompatible in some respects, perhaps in the future someone will figure out how to distinguish between BM and CI.

But I appreciate your point. *Perhaps* if things had been discovered in a different order, we would consider CI to be fringe and BM to be mainstream.
 
  • #16
DrChinese said:
Bohmian Mechanics is not a counter-example to QM/CI. It is an alternative in which causality "may" be restored at the cost of locality.

That's not correct. The price paid for determinism is not locality, but... nothing. Orthodox Copenhagen QM is non-local too, in precisely the same way that Bohmian mechanics is -- namely, it violates Bell's locality ("factorizability") condition.

No local theory is consistent with the experimentally observed EPR type correlations. That's just a fact. You can't have a local theory. There is no choice about that. But there is a choice about whether to have a theory that is deterministic and clear or a theory that is non-deterministic, fuzzy, subjective, "unprofessionally vague and ambiguous."

Other than sheer, unthinking inertia, is there actually any reason to believe in Copenhagen and not take the Bohmian option? I don't know of any.
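
The impossibility claim for local theories can be checked by brute force in the CHSH form of Bell's argument. A minimal sketch, under the standard setup: a Bell-local deterministic strategy pre-assigns an outcome of +1 or -1 to each of the two measurement settings on each side, and no such strategy exceeds a CHSH value of 2:

```python
from itertools import product

# Every local deterministic strategy pre-assigns outcomes A0, A1 (Alice's
# two settings) and B0, B1 (Bob's two settings), each +1 or -1.
best = max(
    A0 * B0 + A0 * B1 + A1 * B0 - A1 * B1
    for A0, A1, B0, B1 in product([1, -1], repeat=4)
)
print(best)  # 2: the CHSH bound that every Bell-local theory must obey.
# Mixtures over hidden variables lambda can't beat the best deterministic
# strategy, since the CHSH value is linear in the probabilities.
```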
 
  • #17
DMuitW, omg, have I been out of the mainstream for so long that I have missed the "news" about lepton, i.e. electron, decay?
What is the half-life determined for the "buggers"?
I will have to go to work now quantizing my KingFranklin theory, which I thought could remain a CLASSICAL theory.
http://www.altelco.net/~lovekgc/KingFranklin.htm [Broken]
Any help from you all would be appreciated in answering these questions or this endeavour! THANKS! (:-O) !
love and peace,
and,
peace and love,
(kirk) kirk gregory czuhai
 
  • #18
Is it not true, Dr. Chinese, that non-local theories run into causality problems?

Quantum electrodynamics, for example, at least in the domain to which it applies, is about the BEST, MOST ACCURATE verified theory there is, at least for a number of experimental parameters, implying strict Lorentz invariance and thus locality of its type.

Although some esoteric experiments have been performed somewhat recently, such as "freezing" photons, or "trapping" them, or slowing them down, and various "tunneling" experiments, I know of no experiment performed yet in which actual information has been shown to be transmitted at a velocity faster than the speed of light in a vacuum.

Please inform me if I am incorrect!

love and peace,
and,
peace and love,
(kirk) kirk gregory czuhai
http://www.altelco.net/~lovekgc/brainwash.htm [Broken]
 
  • #19
ttn said:
Other than sheer, unthinking inertia, is there actually any reason to believe in Copenhagen and not take the Bohmian option? I don't know of any.

Some points can be made, but in essence, I agree with you.
Copenhagen, with explicit collapse, is just as ugly as Bohm, and moreover must introduce a paradigm shift concerning "reality". Both commit - in my opinion - the same sin: while very powerful symmetries led to the right dynamics of the wavefunction (which is shared by both theories), they introduce a blunt violation of those symmetries in another part: in Copenhagen, it is the "collapse", and in Bohmian mechanics, it is the guiding equation.

MWI-like views trade something else: they trade "intuitive ontology" for "respect of symmetry" (of which locality is a part). That's why I like them: they fully respect the same symmetries (and locality) that led us to the theory in the first place (Lorentz invariance, gauge invariance...).

So Bohmians and Copenhagians must solve the following puzzle: how come the symmetries which led us to the right theory for the wave function are swept under the carpet in the guiding equation/collapse?
Copenhagians moreover introduce a paradigm shift from determinism to complementarity.

MWI-ers have to solve the issue: why is the ontology of the world so very different from what we intuitively observe? Personally, I think this is a difficulty of lower order, because intuition is something psychologically rooted in humans and need not be related to any ontology.

And working scientists have to solve the issue: what is the simplest formalism that allows me to compare my experimental results to my calculations? That is where Copenhagen usually wins by several lengths. And it is, in fact, the most important point.

cheers,
Patrick.
 
  • #20
ttn said:
No local theory is consistent with the experimentally observed EPR type correlations. That's just a fact. You can't have a local theory. There is no choice about that.

This, however, is not true, as MWI shows you. So don't fall in the same trap as CI proponents: do not deny a possibility of which there exists an example :-)

Some CI proponents deny that a deterministic model can make the same predictions as QM/CI, while Bohmian mechanics does exactly that. In the same way, MWI fully respects Lorentz invariance (= "locality"), while you claim that this cannot be done.

Apparently, what doesn't go together is the following set:
{Lorentz invariance, deterministic ontology, experimentally confirmed QM EPR predictions}

QM/CI blows the first two ;
Bohm blows the first one ;
MWI blows the second one ;
Original Thinkers blow the third one.

cheers,
Patrick.
 
  • #21
DrChinese said:
Well, you can say this... a lot of philosopher-types do...
you and I can say anything we like; this is not the point. The point is what is true and what is untrue. (BTW, I am a scientist before I am a philosopher.)

DrChinese said:
But the experiments speak strongly against this view.
No, they do not, and this is your error. Show me an experiment which proves the world is ontically indeterministic. There is no such experiment.

DrChinese said:
The results of EPR/Bell tests are a counter-example to the hypothesis that particle attributes have determinate (ontic) values independent of their (epistemic) observation.
This is a common fallacy that some myopic scientists continue to believe is true (and what is even worse, that teachers continue to teach as being true). The EPR/Bell tests you refer to show that the quantum world is non-local. They do NOT show that the quantum world is indeterministic. I challenge any rational open-minded scientist to show that the results of these tests prove that the quantum world is indeterministic.

DrChinese said:
There are "some" interpretations of these results in which ontic determinism is still viable, such as Bohmian mechanics. However, such interpretations are not generally accepted at this time.
to be "generally accepted" is not a prerequisite for truth, only for popularity. Any free-thinking scientist should be interested in truth, not in being "generally accepted".

"Niels Bohr brainwashed a whole generation of physicists into believing that the problem had been solved."
-- Murray Gell-Mann


MF
:smile:
 
  • #22
DrChinese said:
Bohmian Mechanics is not a counter-example to QM/CI. It is an alternative in which causality "may" be restored at the cost of locality. Just as Bell discovered that local reality and QM are incompatible in some respects, perhaps in the future someone will figure out how to distinguish between BM and CI.
The quantum world IS non-local, no matter what "interpretation" you favour! You cannot get rid of that fact.

DrChinese said:
But I appreciate your point. *Perhaps* if things had been discovered in a different order, we would consider CI to be fringe and BM to be mainstream.
it matters not what is "fringe" nor what is "mainstream".

What matters is only the truth. And a good teacher of science should not be promoting only the mainstream; they should be promoting the truth.

The simple truth is that the world is epistemically indeterminable, but nobody has shown that it is also ontically indeterministic.

MF

:smile:
 
  • #23
ttn said:
No local theory is consistent with the experimentally observed EPR type correlations. That's just a fact. You can't have a local theory. There is no choice about that. But there is a choice about whether to have a theory that is deterministic and clear or a theory that is non-deterministic, fuzzy, subjective, "unprofessionally vague and ambiguous."
I agree 100%!

ttn said:
Other than sheer, unthinking inertia, is there actually any reason to believe in Copenhagen and not take the Bohmian option? I don't know of any.
There is the (poor) reason of wanting to be "mainstream" or "popular" (which should not matter to a free-thinking, objective scientist)

MF
:smile:
 
  • #24
moving finger said:
The quantum world IS non-local, no matter what "interpretation" you favour! You cannot get rid of that fact.

This is not true, but discussions about this usually show that people mean different things by "local".
It is only in a deterministic ontology that "local" has an unambiguous meaning, namely that there is no causal influence beyond the light cone.
In a probabilistic setting, things are more complicated. There is "information locality" and "Bell locality". Information locality somehow needs "free will" and "observation" in order to be able to set up a communication channel between two points. Bell locality is a definition of a requirement on correlations. I have shown here that a probabilistic theory that obeys Bell locality is always equivalent to an underlying deterministic theory that obeys information locality, and vice versa, so we have:

Bell locality <=> determinism AND information locality

Bell locality is not a requirement by anybody per se; it is not a requirement of relativity. It is just a written-down definition. Lorentz invariance + "free will" needs information locality in order to avoid a logical paradox.

"free will" just stands for the proposition that an experimenter can make choices in doing certain experiments or not, which are somehow not part of the physics of the set up (eg. Alice has the free choice of setting up her analyser according to an angle of her choice, and this is not somehow forced upon her by the set up). Without this "free will", most of experimental science doesn't have any meaning because all observations are the result of conspiracies.

QM predictions violate Bell locality and respect information locality.

So the only thing we can conclude from this is that QM cannot respect at the same time Lorentz invariance, underlying determinism, and "free will".

Taking the third part for granted, QM cannot at the same time respect Lorentz invariance and underlying determinism. That's all EPR says.
Now, if you stick to underlying determinism, then Lorentz invariance has to go (that's what Bohm does); if you stick to Lorentz invariance, determinism has to go (that's what MWI does, and it is also what you obtain when you consider QM just as a generator of probability functions for observations, without underlying ontology: "shut up and calculate"); you can also give up on both (that's what Copenhagen does).
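
A numerical companion to "QM predictions violate Bell locality and respect information locality", assuming the standard singlet-state prediction E(a, b) = -cos(a - b) and the usual CHSH settings:

```python
import math

def E(alpha, beta):
    # Singlet-state spin correlation predicted by QM.
    return -math.cos(alpha - beta)

a, a2 = 0.0, math.pi / 2                 # Alice's two settings
b, b2 = math.pi / 4, 3 * math.pi / 4     # Bob's two settings

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828: above the Bell-local bound of 2.
# Information locality still holds: Alice's outcomes are 50/50 for every
# setting, whatever Bob chooses, so no signal can be sent this way.
```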

cheers,
patrick.
 
  • #25
' "free will" just stands for the proposition that an experimenter can make choices in doing certain experiments or not, which are somehow not part of the physics of the set up (eg. Alice has the free choice of setting up her analyser according to an angle of her choice, and this is not somehow forced upon her by the set up). Without this "free will", most of experimental science doesn't have any meaning because all observations are the result of conspiracies '

I would rather say that most of experimental science doesn't have the DEFINITE last word, because of man's STATISTICAL interpretation, AND statistics is axiomatically, FUNDAMENTALLY FLAWED, to say nothing of always leaving some experimental and statistical error!

I for one am not holding my breath waiting for the finding of "hidden variables" in some deterministic quantum field theory, or ever expecting that one could be found by MAN, or that it could be useful as an aid for calculations! On the other hand, it is my belief that the universe IS deterministic but man has been given free will -- but this last statement, "shudder", will probably get me another warning and probably belongs elsewhere in Physics Forums.

With this I give up on some of you guys in your arguments; as far as some are concerned, some still believe in a flat Earth, do they not? NOT everyone will agree to that, RIGHT?
love and peace,
and,
peace and love,
(kirk) kirk gregory czuhai
 
  • #26
Kirk Gregory Czuhai said:
Is it not true, Dr. Chinese, that non-local theories run into causality problems?

Quantum electrodynamics, for example, at least in the domain to which it applies, is about the BEST, MOST ACCURATE verified theory there is, at least for a number of experimental parameters, implying strict Lorentz invariance and thus locality of its type.

Although some esoteric experiments have been performed somewhat recently, such as "freezing" photons, or "trapping" them, or slowing them down, and various "tunneling" experiments, I know of no experiment performed yet in which actual information has been shown to be transmitted at a velocity faster than the speed of light in a vacuum.

Please inform me if I am incorrect!

Locality can be considered preserved in some sense in QM/CI (Copenhagen Interpretation) as you describe. ttn argued exactly the opposite in his post above, that even CI is non-local. So we can see that there is some ambiguity to the meaning of non-local.

For causality to be preserved, I would presume that the future cannot influence the past. It seems to me that the Aspect experiments demonstrate that the future does influence the past; and by my definition, causality fails.

Although you and ttn disagree on what locality is, you both agree that causality stands. Hmmm...
 
  • #27
DrChinese said:
ttn argued exactly the opposite in his post above, that even CI is non-local.

As far as you assign any ontology to the wave function, it is clear that CI is non-local, no? I mean, you collapse the wavefunction instantaneously and everywhere!
Of course, if you consider the wavefunction just as a mathematical tool to calculate probabilities (but that's not Copenhagen, no?), the only things you can talk about are probabilities for outcomes of experiments. These probabilities satisfy information locality, but they do not satisfy Bell locality. The only way to talk about *causality* in a probabilistic environment is to talk about information flow, and as we know, QM respects this.
It is very difficult to give up on causality without compromising the entire scientific undertaking.

For causality to be preserved, I would presume that the future cannot influence the past. It seems to me that the Aspect experiments demonstrate that the future does influence the past; and by my definition, causality fails.

I would say that the only meaningful way to say that the future (or whatever) influences the past, is that you make a choice in the future, and that you can find out that choice in the past. This, to me, is the meaning of causality (which, as I said, could be named "information causality" in a probabilistic setting). This leads to a paradox, because, informed by your choice you will make in the future, in your past, you can now decide NOT to make that choice. So it is almost impossible to violate causality and not run into paradoxes ; the only way out is to avoid "free will": namely not to allow you somehow to decide to make that other choice.
But if there is no "free will" then all our experiments, in the past and in the future have been erroneously analysed. All "correlations" we found, ever, didn't need to be correlations, but could have been the result of conspiracies. So in essence, science falls on its face if you deny some form of "free will".
And if you accept it, and you want to avoid paradoxes, then (information) causality is a logical must. So I wouldn't give up on it :-)

cheers,
Patrick.
 
  • #28
vanesch said:
This, however, is not true, as MWI shows you. So don't fall in the same trap as CI proponents: do not deny a possibility of which there exists an example :-)

Touché. But what I said originally is true. No local theory can account for the observed EPR/Bell correlations. Even granting that MWI is a "local theory", it simply isn't true that it accounts for the observed correlations. Rather, it denies that what we *took* to be the outcomes of those experiments are the *real* outcomes of the experiments. That is, MWI holds that our beliefs about what those experiments showed are *delusions*.

It's only in a very stretched, dubious sense that this counts as "accounting for the observed correlations." If someone publishes a paper showing that drug X cures disease Y, one way of "accounting for that data" is to say that the author simply faked the data. But that's not normally what we mean by explaining the observed facts. This is explaining them away, not explaining them.


Apparently, what doesn't go together is the following set:
{Lorentz invariance, deterministic ontology, experimentally confirmed QM EPR predictions}

QM/CI blows the first two ;
Bohm blows the first one ;
MWI blows the second one ;
Original Thinkers blow the third one.

I don't follow you here. You're saying MWI rejects determinism? I thought the whole point of MWI was to get rid of the pesky collapse postulate, leaving only the deterministic evolution? Or maybe you're making the higher-level point that this doesn't really work and MWI needs to "smuggle in" some notion of probability.

But whatever you meant, I don't think it's the fundamental objection to MWI. MWI forces us to hold that we have been deluded about the outcomes of all the science experiments that have ever been done -- including the ones that led to QM in the first place. That is a very uncomfortable position for a theory to be in... totally unprecedented in the history of science (or, more broadly, rational thought).
 
  • #29
vanesch said:
I would say that the only meaningful way to say that the future (or whatever) influences the past, is that you make a choice in the future, and that you can find out that choice in the past. This, to me, is the meaning of causality (which, as I said, could be named "information causality" in a probabilistic setting). This leads to a paradox, because, informed by your choice you will make in the future, in your past, you can now decide NOT to make that choice. So it is almost impossible to violate causality and not run into paradoxes ; the only way out is to avoid "free will": namely not to allow you somehow to decide to make that other choice.

No, I wouldn't agree with that picture at all. That is but one possibility! Why should the future be any more modifiable than the past? In other words, even the future is in the past to some even further off future. The following scenario is just as matched to the facts as anything (times are arbitrary and exaggerated):

a. I measure one of an entangled pair at an angle I choose at 3:30pm.
b. That causes it to have a definite spin orientation at the time of emission, 3:00pm. (However, note that there is still a random element present.)
c. That forces the other of the pair to have a definite spin orientation when it is created slightly later, at 3:01pm, so that total spin is conserved. Recall that in many experiments, the particle pair is not created exactly simultaneously (for example, in the Aspect experiments).
d. Bob measures that second particle at 3:45 and finds its spin orientation relative to the polarizer angle he chooses freely.

Regardless of whether you have a. before d. or not, the above description is perfectly valid and fits with all observables! There is no paradox in which a choice at one point in time interferes with a choice at another point in time. And yet, no information transfer occurs faster than c.

If you really believe in 4-space, this makes perfect sense.
 
  • #30
Adding to my post above after some reflection... What I am trying to say: we generally *think* we know what causality should be, how it should work, etc. But we don't quite know yet, do we?

After all, the laws of physics are built on symmetry, and here we are denying one of the most fundamental symmetries of all: time reversibility. Perhaps the time dimension is fully symmetric, in which case c (+ or -) could still be respected. If most matter is moving in a particular and similar direction due to initial conditions, that could explain the world we see today.

This is speculation, of course. The point is that we preserve causality at all costs, and I ask why? Maybe causality exists but abides by different rules.
 
  • #31
vanesch said:
the only way out is to avoid "free will": namely not to allow you somehow to decide to make that other choice.
what exactly do you mean by "free will"?
How can I make a choice "other than" the choice that I did make?
MF
:smile:
 
  • #32
DrChinese said:
a. I measure one of an entangled pair at an angle I choose at 3:30pm.
b. That causes it to have a definite spin orientation at the time of emission, 3:00pm. (However, note that there is still a random element present.)
[...]

This is indeed a possibility, but as I outlined somewhere, it simply means that experimental science has no meaning. Indeed, if what I'm going to measure in the future influences what is happening in the past, I cannot conclude much.

Imagine the following "experiment": I have two wires, and on a panel there's a light bulb. I want to find out whether the light bulb is connected to the two wires, so I take a battery and connect the wires to it: the bulb lights up. I disconnect the battery: it goes out. I reconnect it: it goes on again.
Conclusion of my experiment: yes, these wires somehow are connected to (or pilot) the lightbulb.

But maybe not at all! Something else is making this lightbulb light up and go out, and that something influences me, so that each time, a few nanoseconds earlier, I connect the wires to the battery or not.

So I cannot even determine from my experiment that the wires have anything to do with the lightbulb! You have to set aside this possibility if you are going to consider the experimental scientific method at all, no?

cheers,
Patrick.
 
  • #33
ttn said:
I don't follow you here. You're saying MWI rejects determinism? I thought the whole point of MWI was to get rid of the pesky collapse postulate, leaving only the deterministic evolution? Or maybe you're making the higher-level point that this doesn't really work and MWI needs to "smuggle in" some notion of probability.

Absolutely. I think "pure MWI" doesn't work for the probabilistic aspect, and something extra is needed.

But whatever you meant, I don't think it's the fundamental objection to MWI. MWI forces us to hold that we have been deluded about the outcomes of all the science experiments that have ever been done -- including the ones that led to QM in the first place. That is a very uncomfortable position for a theory to be in... totally unprecedented in the history of science (or, more broadly, rational thought).

No, we are not completely deluded, only partially :biggrin:

As I've been advocating a few times here already, you can have a view in which there is indeed a quantum world out there, evolving deterministically and respecting all symmetries (unitary evolution), and an observer, evolving through all those "branches", each time making his choice of the next branch LOCALLY, based upon his local "observations" (= entanglement with other stuff, and the "choice" to be in one of those branches).

This last part is indeed an extra postulate, outside of strict MWI (as said above); it is probabilistic, and it is local (to the observer). It also fully explains his measurement history. Now, you can think of it what you want, but it IS a working explanation. There's no delusion in it, not any more than in the wave function in Bohm. It is the same wave function.

In Bohm, there is a *physical token* which obeys non-local laws; in my version, there is a "mental token" which obeys local laws...

cheers,
Patrick.
 
  • #34
vanesch said:
This is indeed a possibility, but as I outlined somewhere, it simply means that experimental science has no meaning. Indeed, if what I'm going to measure in the future influences what is happening in the past, I cannot conclude much.
If the world is completely deterministic then it is just as correct to say the future influences the past as it is to say the past influences the future. In fact both past and future are fully determined.

It is possible that the universe operates this way.

Are you suggesting that experimental science would have no meaning in such a deterministic world?

I think not.

MF
:smile:
 
  • #35
moving finger said:
Are you suggesting that experimental science would have no meaning in such a deterministic world?

Yes, that's what I'm saying. In a fully deterministic world, experimental science indeed doesn't make sense, because an essential ingredient in experimental science is the detection of a correlation between an effect and a free choice, where the choice is assumed to have some "statistical independence" from the causal relationship under study.
Let us look at the example of a new drug that has to be tested. The idea is that the mapping by which certain patients receive the drug, and others receive a placebo or an old drug, is somehow determined "independently". If, however, this mapping is deterministically fixed, nothing stops you from thinking that the order in which the patients come in is ALSO deterministically fixed, so that the patients with a serious disease happen to get the placebo, and those with lighter problems the new drug. The wrong conclusion will then be that the new drug works very well. There is nothing "improbable" about it, because, by definition, in a deterministic view probabilities don't make sense.
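
A toy sketch of that drug-trial worry, with made-up numbers: the drug below does nothing at all, yet a "conspiratorial" deterministic assignment that routes the sickest patients to the placebo makes it look highly effective:

```python
import random

random.seed(0)
# Severity scores for 1000 patients (higher = sicker); made-up distribution.
patients = sorted(random.gauss(50, 10) for _ in range(1000))

# Conspiratorial deterministic assignment: the healthiest half happens
# to receive the new drug, the sickest half the placebo.
drug, placebo = patients[:500], patients[500:]

# Outcomes just mirror severity -- there is no treatment effect at all.
print(sum(drug) / 500, sum(placebo) / 500)   # roughly 42 vs 58
# The drug group looks much better off. Randomized assignment, i.e.
# statistical independence from severity, is what rules this story out.
```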

I see determinism as a structure in time which resembles a structure in space, like, say, a pen. Let's say the pen is lying along the z-axis, our "time" axis, and our "current time" is a slice at 2 cm from the back of the pen. Moving along the z-axis, we have a circular section of plastic and ink, which remains there for a while; suddenly the radius of the circular section gets narrower, then we have a small metal disk which grows and then shrinks, and then there's nothing anymore. We can go back and forward in z; we will have a "deterministic" section. You cannot conclude anything from these observations. Any "law" you could find based upon those observations is completely arbitrary.
You need to be able to assume "statistical independence" between your choices and the mechanism under study; and this "statistical independence" is impossible in a strictly deterministic universe, because all possible conspiracies are a possibility.

Of course, you could have a deterministic universe in which everything behaves AS IF you had statistical independence between your choices and the process under study. But that, by itself, would be an enormous conspiracy!
A bit like the mice guiding human science in The Hitchhiker's Guide to the Galaxy :-)

cheers,
Patrick.
 

1. What is the concept of uncertainty in quantum mechanics?

In quantum mechanics, uncertainty refers to the idea that certain properties of a particle, such as its position and momentum, cannot be measured simultaneously with complete accuracy. This is known as the Heisenberg uncertainty principle.

2. How does the uncertainty principle relate to hidden variables?

The uncertainty principle suggests that there are inherent limitations to our ability to measure certain properties of a particle. Hidden variables are hypothetical properties that could potentially explain the behavior of particles in a deterministic way, but they are not observable and therefore cannot be used to overcome the uncertainty principle.

3. What are some examples of hidden variables in quantum mechanics?

Some examples of hidden variables that have been proposed include the spin of a particle, its exact position and momentum, and its wave function. However, these variables cannot be observed directly and their existence is still a topic of debate in the scientific community.

4. How do scientists study and uncover hidden variables in quantum mechanics?

Scientists use various experimental techniques, such as quantum entanglement and Bell tests, to study the behavior of particles and try to uncover any hidden variables that may be influencing their behavior. These experiments aim to test the predictions of different theories and determine which one best explains the behavior of particles.

5. What are the implications of uncovering hidden variables in quantum mechanics?

If hidden variables were to be discovered and proven to exist, it would have significant implications for our understanding of the fundamental laws of nature. It could potentially lead to a more deterministic view of the universe and challenge the current probabilistic interpretation of quantum mechanics. However, the existence of hidden variables is still a topic of ongoing research and debate.
