QM hidden variables

  1. Apr 10, 2005 #1
    Up to today, no underlying hidden variables determining what is called quantum uncertainty have been demonstrated.

    Consider :

    We have a setup that includes a Geiger counter and a computer.
    The computer has a button which, when we press it, starts a clock. The computer is attached to a Geiger counter which, if it detects a (random) electron decay, sends a signal to the computer to stop the clock. Let's say that if the clock stops on an even 1/1000th-of-a-second count it generates a 0, and if it stops on an odd count it generates a 1.

    This experiment is well known and, if repeated long enough, will in the long run generate almost as many 0's as 1's (binomial distribution). There is room for deviations, but they are usually small.

    My question now is: WHY does this experiment, which should depend on truly RANDOM quantum mechanical occurrences (QM as total chaos), generate a quite clear binomial distribution -- and in the end, in theory, almost as many 0's as 1's -- instead of totally random output???
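    For concreteness, here is a minimal simulation of the setup described above (the detection rate is an arbitrary assumption chosen for illustration):

    [code]
    # Simulate the Geiger-counter clock: exponential waiting times between
    # detections, clock reading binned in 1/1000 s, parity of the bin -> bit.
    import random

    RATE = 1.0  # assumed mean detections per second

    def one_bit():
        wait = random.expovariate(RATE)  # seconds until the next detection
        tick = int(wait * 1000)          # clock reading in 1/1000 s bins
        return tick % 2                  # even bin -> 0, odd bin -> 1

    bits = [one_bit() for _ in range(100_000)]
    print("fraction of 1's:", sum(bits) / len(bits))  # comes out close to 0.5
    [/code]

    Strictly, the even/odd split is only approximately 50/50; it approaches exactly 1/2 as the 1/1000 s bin becomes short compared with the mean waiting time between detections.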
     
  3. Apr 10, 2005 #2

    jtbell


    Staff: Mentor

    I think that to best appreciate this phenomenon, you need to do a brute-force analysis of a small version of it.

    Each decay has an equal probability of producing a 0 or a 1. Therefore, if you consider six consecutive decays, you get a binary number ranging from 000000 to 111111, with an equal probability for each number within that range (000000, 000001, 000010, etc.).

    Write down all 64 six-bit binary numbers, and count how many of them have one 1, two 1's, three 1's, etc. How many 1's are most likely?

    Note that there's nothing particularly quantum-mechanical about this phenomenon at all! We could just as well toss an evenly-balanced coin to decide between 0 and 1.
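    A quick brute-force check of this (a sketch in Python, though any language or pencil and paper would do):

    [code]
    # Tally all 64 six-bit numbers by how many 1's they contain.
    from collections import Counter

    tally = Counter(bin(n).count("1") for n in range(64))
    for ones in range(7):
        print(ones, "ones:", tally[ones], "of 64")  # 1, 6, 15, 20, 15, 6, 1
    [/code]

    Every individual string is equally likely, but 20 of the 64 strings have exactly three 1's while only 1 has six, so near-even splits are simply the most common kind of outcome.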
     
  4. Apr 10, 2005 #3
    I am not sure that this has anything to do with hidden variables -- the statistics quoted are just normal statistics. Hidden variables says that something is already predetermined even if YOU do not know the outcome.
    What quantum mechanics says is that only when you make your measurement is the outcome THEN determined -- until then there is NO determined outcome.
    I do not know if the Bell test is totally accepted, but the contention is serious -- it says that causality is at risk, that the past does not determine the future, whereas the normal interpretation is 'you may not know the outcome', but nevertheless effect follows cause.
    The result is that effects do not follow causes but are indeterminate, with some probability. If you think about this, you will see that this is the only possible answer to free will -- if not, your choices are predetermined, even if you do not know it.
    How about this scenario: your ideas are random (totally), but they get fed into a machine which has learned from past experience and is capable of weighing the odds of following the new idea versus the old idea -- that is, with all sorts of weighted emotions. I have a feeling that this would quite closely mimic human response (BUT with a billion years of experience).
    Ray (I have no real clue -- but I believe in measurements, especially when repeated by independent groups). There is no reason to think that normal human experience has educated us as to what to expect of physics -- that's a few hundred years compared with 14 billion.
    Yours, Ray.
     
  5. Apr 10, 2005 #4
    i would think that the result that is found (that there are very nearly the same numbers of 0's as 1's if the experiment is run long enough) is just a result of the Lorentz invariance of Quantum Electro-Dynamics!

    In other words, all other variables being the same -- detector type, sample type, etc. -- it should not make any difference WHEN the experiment is conducted; you would have to expect nearly the same result, indeed perhaps the same number of 1's and 0's if the sample were ALL decayed. But I would not know how to prove this; this is just my guess.

    Again, my guess is that if there were not relatively the same numbers of 0's and 1's found in conducting such an experiment, then the Lorentz invariance of Quantum Electro-Dynamics would somehow fail. But this is just my "Hunch!"
    what does anybody think of this?
    love and peace,
    and,
    peace and love,
    (kirk) kirk gregory czuhai
    http://www.altelco.net/~lovekgc/kl.htm
     
  6. Apr 10, 2005 #5
    of course, there's more to beta decay than that, is there not?
    oh i know so little physics! the spontaneous symmetry breaking involved in beta decay.
    how to use this, QED, and quantum statistics to possibly prove that one would get the same number of 1's and 0's in this type of experiment IS WAY BEYOND ME!!!
    peace and love,
    and,
    love and peace,
    kirk
     
  7. Apr 10, 2005 #6
    maybe what i have said so far has nothing to do with the situation at all, except in the case where one has many radioactive atoms of an isotope, as one generally has.
    i wonder if the experiment has ever been done for a really small sample size, say several thousand atoms? or a hundred? is this possible, and does one then still get the same result of equal numbers of 1's and 0's?
    peace and love,
    and,
    love and peace,
    kirk
    http://www.cosmicfingerprints.com/audio/newevidence.htm
     
  8. Apr 10, 2005 #7
    A totally random output with only 2 possibilities will give you roughly equal amounts of each possibility if it is totally random and continued long enough.

    Each event does not have the same probability set, but the probability supersets for each outcome are the same over the long term.

    The possibility of hidden variables lies in the exact temporal structure of each superset. In this case they will not affect the outcome.

    juju
     
    Last edited: Apr 10, 2005
  9. Apr 10, 2005 #8

    DrChinese

    Science Advisor
    Gold Member

    Not sure I understand the paradox you are trying to identify. You take randomly occurring phenomena and map them to numbers, giving a distribution consistent with random output?

    Where is the difference between the expected values and the observed values?
     
  10. Apr 11, 2005 #9
    let's see what i remember about nuclear decay!
    imagine the following experiment. we have a radioactive isotope that has a half-life of 4 days. if we have a lot of atoms of it, very close to 1/2 of them will have decayed in exactly four days.

    but if we have only one, we cannot say that! from quantum physics, a single atom could decay in a tenth of a second, or four years, or some other time from the beginning of the experiment.

    IT IS FROM THIS FACT OF QUANTUM MECHANICS, I THINK -- that even though the half-life of a radio-isotope can be known, the time of an individual atom's decay is NOT -- that an equal number of 0's and 1's will appear if the experiment is performed for long enough, in the case where one has many atoms of the radioactive isotope.
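    A sketch of that fact, assuming the 4-day half-life from the example: each atom's decay time is exponentially distributed, so a single atom is wildly unpredictable, yet about half of a large sample has decayed at t = 4 days.

    [code]
    # Single atoms vs. a large sample, for an assumed half-life of 4 days.
    import math
    import random

    HALF_LIFE = 4.0                      # days (assumed, per the example)
    MEAN_LIFE = HALF_LIFE / math.log(2)  # mean lifetime of one atom

    def decay_time():
        return random.expovariate(1.0 / MEAN_LIFE)  # one atom's decay time, days

    print("five single atoms:", [round(decay_time(), 2) for _ in range(5)])
    n = 1_000_000
    decayed = sum(decay_time() <= HALF_LIFE for _ in range(n))
    print("fraction decayed by day 4:", decayed / n)  # very close to 1/2
    [/code]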

    plus i think there may also be some effect from the experimental apparatus placing the data in histogram bins of 1/1000 of a second.

    all the stuff i wrote before about QED, etc. i do not feel is necessary to explain the results found in experiments -- just what is stated here in this reply. i almost think i could have worked it out at one time, using what i knew about the half-life formulae and considering an experiment with a large number of radioactive atoms of an isotope, run for a time period much longer than the isotope's half-life.

    well! i am probably just proving to everybody how little i know! i thought maybe something i have written on this topic might help in a tiny way -- sorry if it has not.
    peace and love,
    and,
    love and peace,
    kirk
    http://www.altelco.net/~lovekgc/PrincessLittleHoney.htm
     
  11. Apr 11, 2005 #10
    Incorrect. This is a common fallacy in interpreting the results of QM. What (the results of) QM “says” is that the outcomes of experiments are epistemically indeterminable. Only certain strange interpretations of QM (notably the Copenhagen interpretation) equate this with ontic indeterminism. It is important to understand that epistemic indeterminability does not necessarily imply ontic indeterminism.

    (1) As explained above, QM is not necessarily indeterministic, hence this is incorrect.
    (2) Even if QM were ontically indeterministic, this would be a source of randomness, but how could this be a source of “naïve” free will? Think about it (and see below).

    You are incorrect in thinking this somehow generates “naïve free will”.
    Think about it. Why do the "ideas" in this model need to be random? Feed the same machine with a selection of "non-random" ideas, and the same machine (which has learned from past experience and is capable of weighing odds etc.) can still decide which "ideas" to follow up. The introduction of a random element into the generation of the "ideas" in the first place adds nothing to the "free will" of the machine (whatever "free will" might be).

    MF
    :smile:
     
    Last edited: Apr 11, 2005
  12. Apr 11, 2005 #11
    Thanks for the replies; i'll try to put my question a little less vaguely.

    Let's say we use a quantum effect, i.e. the decay of an electron, which according to the uncertainty principle should be totally random. Use this effect to drive a number generator, so that with this random effect you can generate 0's or 1's. WHY is it that in the long run the chances will be distributed 0.5 for 0 and 0.5 for 1? Something that occurs totally at random thus produces an output that is still random (the actual experiment has fluctuations according to the theoretical distribution) but with a far smaller degree of freedom?

    Also, in answer to rayjohn01's response: if it is like you say, and everything is undetermined (due to quantum superposition, which says that the outcome only comes into existence when the experiment is conducted), WHY and HOW is it that something so utterly uncertain generates all of this in such a way that we know everything (from matter to...)?

    Applying this to the experiment given: WHAT determines, if we use the superposition/quantum uncertainty principle, that what is produced will always generate a consistent distribution??

    Hope i cleared some things up about my question,

    Thanks!
     
  13. Apr 11, 2005 #12
    It is a little bit difficult to understand what you want to say/know. First of all, I think you need to say whether you understand classical probabilities (the frequentist view) and the experimental results they give (such as the binomial law for independent trials of a random variable).
    If you understand this topic, then you can view, formally, the statistical results of a quantum measurement experiment (the statistics of one observable, e.g. the energy of the electron, P(E=e)) as the statistical results of a classical experiment (P=P(E=e), 1-P), and therefore recover binomial laws (of n-trial sequences) or simply the law P (as n → +∞).
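    For instance, a minimal sketch of this convergence (P = 0.5 matches the decay example, but the value is an arbitrary assumption here):

    [code]
    # Each measurement is a Bernoulli trial with probability P of giving a 1;
    # the observed frequency converges to P, with scatter shrinking ~ 1/sqrt(n).
    import random

    P = 0.5  # assumed single-trial probability of a 1
    for n in (100, 10_000, 1_000_000):
        ones = sum(random.random() < P for _ in range(n))
        print(n, "trials -> frequency of 1's:", ones / n)
    [/code]

    Individual outcomes remain completely unpredictable; only the long-run frequency is constrained.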

    Seratend.
     
  14. Apr 11, 2005 #13

    DrChinese

    Science Advisor
    Gold Member

    Well, you can say this... a lot of philosopher-types do...

    But the experiments speak strongly against this view. As a general rule, a single counter-example is sufficient to disprove any theory. The results of EPR/Bell tests are a counter-example to the hypothesis that particle attributes have determinate (ontic) values independent of their (epistemic) observation.

    There are "some" interpretations of these results in which ontic determinism is still viable, such as Bohmian mechanics. However, such interpretations are not generally accepted at this time.
     
  15. Apr 11, 2005 #14

    ttn



    Oh, the irony. If "a single counter-example is sufficient to disprove any theory", then wouldn't the example of Bohmian mechanics -- which you are obviously aware of -- be a counter-example to the "theory" that "experiments speak strongly against" the failure of determinism?

    Oh, right, Bohmian mechanics isn't "generally accepted at this time" so I guess we can just ignore this counterexample. Everyone else is doing it...

    I think it would be much more accurate to simply state the truth: for some bizarre philosophical or historical reasons, the founding fathers of quantum mechanics loved and latched onto the idea that determinism failed. And many, many people have followed them. But there is an explicit counter-example to the claim that this was necessitated by the evidence. This proves one thing and one thing only -- whatever reasons people had for believing in the failure of determinism were not based on conclusive physical evidence, but something else.

    On this point, I would highly recommend the book "Quantum Mechanics: Historical Contingency and the Copenhagen Hegemony" by (the late) Jim Cushing.

    ttn
     
  16. Apr 11, 2005 #15

    DrChinese

    Science Advisor
    Gold Member

    Bohmian Mechanics is not a counter-example to QM/CI. It is an alternative in which causality "may" be restored at the cost of locality. Just as Bell discovered that local reality and QM are incompatible in some respects, perhaps in the future someone will figure out how to distinguish between BM and CI.

    But I appreciate your point. *Perhaps* if things had been discovered in a different order, we would consider CI to be fringe and BM to be mainstream.
     
  17. Apr 11, 2005 #16

    ttn


    That's not correct. The price paid for determinism is not locality, but... nothing. Orthodox Copenhagen QM is non-local too, in precisely the same way that Bohmian mechanics is -- namely, it violates Bell's locality ("factorizability") condition.

    No local theory is consistent with the experimentally observed EPR type correlations. That's just a fact. You can't have a local theory. There is no choice about that. But there is a choice about whether to have a theory that is deterministic and clear or a theory that is non-deterministic, fuzzy, subjective, "unprofessionally vague and ambiguous."

    Other than sheer, unthinking inertia, is there actually any reason to believe in Copenhagen and not take the Bohmian option? I don't know of any.
     
  18. Apr 11, 2005 #17
    DmuitW, omg, have i been out of the mainstream for so long that i have missed the "news" about lepton (i.e. electron) decay?
    what is the half-life determined for the "buggers"?
    i will have to go to work now quantizing my KingFranklin theory, which i thought could remain a CLASSICAL theory.
    http://www.altelco.net/~lovekgc/KingFranklin.htm
    any help from you all would be appreciated in answering these questions or in this endeavour! THANKS!!! (:-O) !!!
    love and peace,
    and,
    peace and love,
    (kirk) kirk gregory czuhai
     
  19. Apr 11, 2005 #18
    Is it not true, Dr. Chinese, that non-local theories run into causality problems?

    Quantum Electro-Dynamics, for example, at least in the domain to which it applies, is about the BEST, MOST ACCURATELY verified theory there is, at least for a number of experimental parameters, implying strict Lorentz invariance and thus locality of its type.

    Although some esoteric experiments have been performed somewhat recently, such as "freezing" photons, or "trapping" them, or slowing them down, and various "tunneling" experiments, I know of no experiment performed yet in which it has been shown that actual information can be transmitted at a velocity faster than the speed of light in a vacuum.

    please inform me if I am incorrect!

    love and peace,
    and,
    peace and love,
    (kirk) kirk gregory czuhai
    http://www.altelco.net/~lovekgc/brainwash.htm
     
  20. Apr 12, 2005 #19

    vanesch

    Staff Emeritus
    Science Advisor
    Gold Member

    Some points can be made, but in essence, I agree with you.
    Copenhagen, with explicit collapse, is just as ugly as Bohm, and moreover must introduce a paradigm shift concerning "reality". Both commit -- in my opinion -- the same sin: while very powerful symmetries led to the right dynamics of the wavefunction (which is shared by both theories), they introduce a blunt violation of those symmetries in another part: in Copenhagen it is the "collapse", and in Bohmian mechanics it is the guiding equation.

    MWI-like views trade something else: they trade "intuitive ontology" for "respect of symmetry" (of which locality is a part). That's why I like them: they fully respect the same symmetries (and locality) as those that led us to the theory in the first place (Lorentz invariance, gauge invariance...).

    So Bohmians and Copenhagians must solve the following puzzle: how come the symmetries which led us to the right theory concerning the wave function are swept under the carpet in the guiding equation/collapse?
    Copenhagians moreover introduce a paradigm shift from determinism to complementarity.

    MWI-ers have to solve this issue: why is the ontology of the world so very different from what we intuitively observe? Personally, I think that this is a difficulty of lower order, because intuition is something psychologically rooted in humans and does not need to be related to any ontology.

    And working scientists have to solve this issue: what is the simplest formalism that allows me to compare my experimental results to my calculations? And that's the place where Copenhagen usually wins by several lengths. And it is, in fact, the most important point.

    cheers,
    Patrick.
     
  21. Apr 12, 2005 #20

    vanesch

    Staff Emeritus
    Science Advisor
    Gold Member

    This, however, is not true, as MWI shows you. So don't fall into the same trap as CI proponents: do not deny a possibility of which there exists an example :-)

    Some CI proponents deny that a deterministic model can make the same predictions as QM/CI, while Bohmian mechanics does exactly that. In the same way, MWI fully respects Lorentz invariance (= "locality"), while you claim that this cannot be done.

    Apparently, what doesn't go together is the following set:
    {Lorentz invariance, deterministic ontology, experimentally confirmed QM EPR predictions}

    QM/CI blows the first two ;
    Bohm blows the first one ;
    MWI blows the second one ;
    Original Thinkers blow the third one.

    cheers,
    Patrick.
     