Is quantum indeterminacy a result of experiment design?

  1. Jan 18, 2006 #1
    Is "quantum indeterminacy" a result of experiment design?

    Stemming from a debate about free will, I, a total lay person, have been doing some reading on quantum indeterminacy (QI). I have come up with some questions, I hope this forum is an appropriate place for them.

    Basically the question is this: where does the idea of QI come from? The two scenarios that seem to pop up most often are Schrödinger's cat and the two-slit electron experiment. But in both cases it seems to me that the indeterminacy that shows up is exactly what was designed into the experiment.

    In the case of the cat, the design incorporates a known indeterminate element: the decay of an atom. We do not have a model that will predict the exact time when a given atom will decay, but we do have a statistical model, and that model is incorporated into the cat experiment. That model predicts that, if we open the box after one half-life, we have a 50% chance of a dead cat. Before we open the box we of course don't know the state of the cat, but that is what was designed into the experiment. How does this show any indeterminacy beyond that which was designed into the experiment?
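    To make that concrete, here is a quick numerical sketch of the statistical model I mean (the half-life, units, and sample size are made-up, purely illustrative values):

    [code]
    # Sketch: the statistical model that is designed into the cat experiment.
    # The survival probability of a single undecayed atom is exp(-lam*t);
    # after one half-life it is exactly 1/2.
    import math
    import random

    half_life = 1.0                      # arbitrary units, illustrative
    lam = math.log(2) / half_life        # decay constant of the atom

    t = half_life                        # open the box after one half-life
    p_survive = math.exp(-lam * t)       # = 0.5 by construction

    trials = 100_000
    dead = sum(random.random() > p_survive for _ in range(trials))
    print(f"P(atom has not decayed) = {p_survive:.3f}")
    print(f"fraction of runs with a dead cat: {dead / trials:.3f}")  # ~0.5
    [/code]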

    With the double slit experiment we seem to have a similar situation. The indeterminacy designed into the experiment is that we don't know exactly which slit a given electron will go through. But we do know that 50% of the electrons will go through one slit and 50% through the other. The actual probability distribution turns out to be governed by a wave function, which is perhaps a bit unusual, but no more than that. As a result the distribution of electrons on the detector screen displays an interference pattern. So far I fail to see any indeterminacy beyond what was designed into the experiment.

    We can then change the experiment by removing the indeterminacy: we add some sort of apparatus that tells us through which slit each electron passed. This apparently changes the probability distribution from a wave pattern to a classical particle yes-or-no distribution. OK, so that is maybe strange, but I still see no indeterminacy here.
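    Here is a toy calculation of the two distributions I have in mind (idealized point slits, made-up dimensions; a sketch, not a real diffraction computation). Adding the two amplitudes gives fringes; adding the two probabilities, as when the slit is known, does not:

    [code]
    # Toy two-slit model: add amplitudes (no which-path information)
    # versus add probabilities (which-path measured).
    import numpy as np

    wavelength = 1.0
    k = 2 * np.pi / wavelength
    d = 5.0 * wavelength                 # slit separation
    L = 1000.0 * wavelength              # distance to the screen

    x = np.linspace(-200, 200, 2001)     # positions along the screen
    r1 = np.hypot(L, x - d / 2)          # path length from slit 1
    r2 = np.hypot(L, x + d / 2)          # path length from slit 2

    psi1 = np.exp(1j * k * r1) / np.sqrt(r1)   # amplitude via slit 1
    psi2 = np.exp(1j * k * r2) / np.sqrt(r2)   # amplitude via slit 2

    interference = np.abs(psi1 + psi2) ** 2             # fringes
    which_path = np.abs(psi1) ** 2 + np.abs(psi2) ** 2  # no fringes

    print("max/min, amplitudes added:   ", interference.max() / interference.min())
    print("max/min, probabilities added:", which_path.max() / which_path.min())
    [/code]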

    In both cases the outcome of the experiment is completely predictable. In the first case we have a probabilistic experiment (we don't know which slit each electron goes through), hence we get a probabilistic outcome. Indeterminacy would only pop up if the interference pattern was not consistent. When we change the experiment such that we remove the indeterminacy, we get a result that is equally predictable and compatible with the new experimental set-up.

    So where does this term "quantum indeterminacy" come from? I must be missing something here.
     
    Last edited: Jan 18, 2006
  3. Jan 18, 2006 #2

    JesseM

    Science Advisor

    Your question is a little confusing--"indeterminacy" usually just means the fact that QM is probabilistic, i.e. non-deterministic, but you seem to accept this in your question. By "indeterminacy" are you talking about something more like the idea that quantum particles and systems don't have well-defined properties when they aren't being measured, such as the idea that the cat in the Schrödinger's cat thought-experiment is not in a definite state until the box is opened, or the idea that the particles in the double-slit experiment don't go through one slit or the other when their path is not measured? If that's what you mean, I don't think "indeterminacy" would really be the right word for it, although I'm not sure what term should be used, maybe "no local hidden variables".

    In any case, the difficulty with imagining particles have definite properties even when not measured is probably best illustrated by the weird correlations between measurements of different members of entangled pairs of particles, what Einstein called "spooky action at a distance". Here's something I wrote about this on another thread:
    Technically, the prediction that the photons must have opposite polarizations at least 1/3 of the time when measured along different axes here is predicted by something called "Bell's theorem", which you can read more about here or here--any system that obeys "local realism" (particles having definite preexisting states before measurement, where the state of one cannot affect the state of another faster-than-light) must obey something called the "Bell inequality", but quantum mechanics violates the Bell inequality, thus disproving local realism.
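    If you want to see the numbers, here is a minimal check of the CHSH form of the Bell inequality (a sketch; the same idea as the polarization argument above, stated for a spin singlet, where QM predicts the correlation E(a,b) = -cos(a-b)):

    [code]
    # CHSH check: local realism requires |S| <= 2, while quantum mechanics
    # predicts E(a, b) = -cos(a - b) for a spin singlet, reaching 2*sqrt(2).
    import math

    def E(a, b):
        """QM correlation for measurements along angles a, b (radians)."""
        return -math.cos(a - b)

    # Standard angle choices that maximize the violation.
    a, a2 = 0.0, math.pi / 2
    b, b2 = math.pi / 4, 3 * math.pi / 4

    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(f"S = {S:.4f}  (local realism requires |S| <= 2)")  # ~ -2.8284
    [/code]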
     
  4. Jan 18, 2006 #3
    you don't see the interference anymore because you don't have a coherent wave anymore (you get the whole range of frequencies and directions for the wave vector) -- meaning that by measuring the position to a definite value you created uncertainty in the momentum...

    that's the uncertainty principle: you can't know for sure both the position and the momentum, or the time duration of a reaction and its energy, or the spin of a particle along more than one axis simultaneously...

    when you measure one of these properties you change your system and get an uncertainty in the other property.
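    a quick way to see this numerically (a sketch, with hbar = 1 and made-up grid sizes): squeeze a Gaussian packet in position and its momentum spread grows, with the product pinned at the Gaussian minimum of 1/2:

    [code]
    # Position/momentum trade-off for a Gaussian wave packet:
    # sigma_x * sigma_k = 1/2 no matter how narrow the packet is made.
    import numpy as np

    x = np.linspace(-50, 50, 4096)
    dx = x[1] - x[0]

    for sigma in (0.5, 1.0, 4.0):                 # position spreads to try
        psi = np.exp(-x**2 / (4 * sigma**2))      # |psi|^2 has std sigma
        psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

        phi = np.fft.fftshift(np.fft.fft(psi))    # momentum-space amplitude
        kgrid = np.fft.fftshift(np.fft.fftfreq(x.size, dx)) * 2 * np.pi
        pk = np.abs(phi)**2
        pk /= np.sum(pk) * (kgrid[1] - kgrid[0])

        sigma_k = np.sqrt(np.sum(kgrid**2 * pk) * (kgrid[1] - kgrid[0]))
        print(f"sigma_x = {sigma:4.1f}   sigma_k = {sigma_k:.3f}   "
              f"product = {sigma * sigma_k:.3f}")
    [/code]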
     
  5. Jan 18, 2006 #4
    Yes, there are lots of models that have a probabilistic component, e.g. weather predictions, so a probabilistic component by itself isn't that strange. It generally does not lead us to conclude that it is "fundamentally" impossible to come up with a non-probabilistic model. Usually the conclusion is that we don't know enough and/or don't have enough computational resources.

    As I said, it was a philosophic discussion--about free will, determinism, that kind of thing--that led me to think about this. In (some) philosophic circles it is quite usual to loosely remark that QM has introduced a fundamental randomness, or non-determinism if you will, into the universe. This may be so, but if so neither the cat nor the slits seem to indicate this.

    Maybe the only part of QM that does introduce a fundamental "unknowability" is, as fargoth mentioned, Heisenberg's uncertainty principle.

    Thanks for the explanation of and the pointers to Bell's inequality. As far as I can tell from Harrison's article, this experiment throws doubt on at least one of the following three assumptions:
    1. Logic is valid (to be precise, the assumption that (P or not P) is always true).
    2. There is a reality separate from its observation (hidden variables)
    3. Locality (c is the speed limit).

    Interesting as this is (my personal favorite is that (P or not P) may not always hold :-), this again does not say a lot about predictability. As far as I can tell, the outcome of a Bell experiment is predictable: the distribution is exactly as predicted by QM (rather than by classical theory).
     
  6. Jan 18, 2006 #5

    JesseM

    Science Advisor

    Sure, in classical physics there may be situations where the exact details of the position and velocity of every particle at the microscopic level cannot be known, so you can only make statistical predictions, but it's assumed that if you could have complete information the situation would actually be completely deterministic.

    There actually is an "interpretation of QM" (not a theory, since it makes no new predictions) like this, invented by David Bohm and sometimes called "Bohmian mechanics", in which each particle has a well-defined position and trajectory at all times, and the particles are guided by a "pilot wave" which can transmit information instantaneously (thus sacrificing locality), and whose behavior is totally deterministic. In this interpretation, the perceived randomness is due to our lack of information about the precise state of each particle and the pilot wave, so you make an assumption called the quantum equilibrium hypothesis which treats every possible state that's compatible with your knowledge as equally likely for the purposes of making statistical predictions. See this article on Bohmian mechanics, or this explanation of how the pilot wave works.

    Aside from the fact that, like all interpretations, it's impossible to test, a lot of people object that Bohmian mechanics is a bit contrived and inelegant, not to mention that it must violate relativity at a fundamental level even if not at an empirical level (Bohmian mechanics was only developed to explain nonrelativistic quantum mechanics; I don't think there have been any fully successful attempts to extend it to quantum field theory, which is relativistic). This and other objections to Bohmian mechanics, along with answers from its supporters, are discussed in this paper:

    http://arxiv.org/abs/quant-ph/0412119
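    To give a flavor of how the guidance works, here is a minimal 1-D sketch (free Gaussian packet, hbar = m = 1, crude Euler integration -- an illustration of the idea, not a serious Bohmian simulation). Each trajectory starts from a position sampled from |psi|^2, per the quantum equilibrium hypothesis, and then simply rides the spreading of the packet:

    [code]
    # Bohmian trajectories for a free, spreading Gaussian packet.
    # The guidance velocity v = Im(psi'/psi) is known in closed form here.
    import numpy as np

    sigma = 1.0                                   # initial packet width

    def v(x, t):
        """Guidance velocity for the free Gaussian packet (hbar = m = 1)."""
        tau = t / (2 * sigma**2)                  # dimensionless time
        return x * (t / (4 * sigma**4)) / (1 + tau**2)

    rng = np.random.default_rng(0)
    x0 = rng.normal(0.0, sigma, size=5)           # quantum equilibrium sample
    xs = x0.copy()

    dt, steps = 0.001, 5000
    for i in range(steps):
        xs = xs + v(xs, i * dt) * dt              # crude Euler step

    T = steps * dt
    print("final/initial position ratios:", xs / x0)
    print("analytic spreading factor:", np.sqrt(1 + (T / (2 * sigma**2))**2))
    [/code]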

    There is also the "many-worlds interpretation" (see here and here), which is supposed to be deterministic at the level of the entire multiverse, with the apparent randomness due to the experimenters splitting into multiple copies who get different results; but there seem to be a lot of problems showing how to derive the correct probabilities from the many-worlds interpretation without making additional assumptions.
    But this is a limitation on our ability to measure noncommuting variables like position and momentum simultaneously, so it's only "unknowability" about hidden variables, if you believe there's some objective truth about the value of these variables at all times.
    Well, wouldn't any theory that attempts to explain the apparent randomness of QM in terms of lack of information about some underlying deterministic reality, like with the apparent "randomness" of classical systems when you don't have all the necessary information about every particle's position and velocity, necessarily involve "hidden variables" of some sort?
     
    Last edited: Jan 18, 2006
  7. Jan 18, 2006 #6

    the uncertainty principle stems from the fact that we are handling a wave packet in standard QM.
    a pulse is constructed from many frequencies; the shorter it is, the more frequencies it contains.
    if you ignore the fact that things are represented by waves of probability amplitude, and say that in the double slit experiment there is a path the particle will go through which is already determined (a hidden variable), but you just can't know it without changing the momentum,
    then you can't explain the dark spots, in which the particle can't be.
    you don't have interference if the particle behaves classically; the particle must move through both slits and interfere (even one particle will interfere with itself). once the wavefunction collapses to one of the slits (if the particle moves through just one slit), there will be no interference.
     
    Last edited: Jan 18, 2006
  8. Jan 18, 2006 #7

    JesseM

    Science Advisor

    Sure, but the analogous version of this for waves in classical mechanics doesn't mean there is any information about the wave we are lacking; it just means you have to sum pure sine waves of multiple wavelengths to get a localized wave packet, so you can't say the wave packet has a single wavelength. It's only because the wavefunction in QM represents the probability of getting different values for properties like position and momentum that we call it "uncertainty", so if you make the assumption that these properties must all have well-defined values at all times (hidden variables), then this implies a limit on your ability to know the values of noncommuting properties simultaneously.
    But in the Bohmian interpretation of QM, the particle does always travel through one slit or another, it's just that it's guided by a nonlocal "pilot wave" which in some sense is exploring both paths, and the pilot wave will guide it differently depending on whether the other slit is open or closed, and depending on whether there's a detector there. I think the Bohmian interpretation seems too inelegant to be plausible, but it nevertheless serves as a proof-of-principle that you can have a deterministic nonlocal hidden variables theory that replicates all the same predictions as ordinary QM.
     
  9. Jan 19, 2006 #8
    correct me if I'm wrong, I haven't read that interpretation yet, but if you claim a particle has a definite place in space at every time even though we can't know it, and this place is dependent on the wave function, then the route the particle goes through will not be a straight one, but rather a complicated, direction-changing one... doesn't that mean a charged particle will radiate on such a path? or does the particle just teleport itself each moment to a different place? if so, how does that help determinism?
     
  10. Jan 19, 2006 #9
    Predicting probability distributions isn't the same as predicting individual outcomes. QM predicts a limit to predictability...and provides the techniques for making the best predictions possible.
     
    Last edited: Jan 19, 2006
  11. Jan 19, 2006 #10

    reilly

    Science Advisor

    Probability deals with events. While a fundamental, rock-solid basis for probability theory does not exist, what does exist covers QM probabilities, which, of course, deal with events.

    Without QM, the two slit experiment for electrons will not have a discernible interference pattern; the electrons will go in a straight line unless blocked by a screen. But, unlike electrons, photons/radiation always interfere when involved in a two-slit experiment -- within appropriate frequency ranges and so forth. If the photon is energetic enough to melt the screen, all bets are off. You never know, so to speak, where the next photon will fall.

    You never know what stocks, if any, on the New York Stock Exchange will appreciate more than 5% during a day's trading, or..... Given enough history, statisticians can estimate the probability of such events -- theories in this area are not anywhere near as powerful as in physics, but some have had success with models. So, data analysis, data mining and so forth are key elements. But, the financial modelers and the physicist are doing exactly the same thing: they are, in one way or another, estimating and deriving probabilities.

    They have, among other things, a common allegiance to probabilistic methods, and recognize the importance of empirical data to their success. In fact, it can be argued that they both use the same fundamental, formal equation to track probabilities over time. In physics, the equation is called the Schrödinger equation for the density matrix; in finance, economics, and other social sciences, and in engineering, the equation is often called a stochastic probability equation, or many things, usually with "stochastic" tossed in. What they both do is give the values of probabilities now, based on yesterday, or the last minute, or the last year, or.....or.......
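    For concreteness, the two equations I have in mind look like this -- the density-matrix (von Neumann) equation on the physics side, and a generic master equation for occupation probabilities on the stochastic-modeling side:

    [tex]i\hbar \frac{\partial \rho}{\partial t} = [H, \rho], \qquad \frac{dp_i}{dt} = \sum_j \left( W_{ij}\, p_j - W_{ji}\, p_i \right)[/tex]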


    But of course the drivers of dynamics differ greatly between the various fields. If we use classical and quantum approaches to sending electrons through slits, then only the QM approach will explain the interference pattern, which has nothing to do with experimental design -- the sample space actually covers probably thousands of experiments, many many slit configurations, and the conclusions are invariant; QM is right.

    It's worth remembering that the experimental finding of electron interference came before modern QM. Thus, QM was, out of necessity, required to explain electron diffraction. To a substantial degree this core phenomenon, in my opinion, drives almost all of QM: the Heisenberg uncertainty principle, the superposition principle, tunneling, and so forth. Interference first, theory second.

    When talking QM, it is, I think, important to recall QM's origins, and to indicate the immense coverage and success of the theory. My conservative guess is that there are at least 100,000 pages of material on the successful testing of QM -- this covers atomic physics, solid state physics, superconductivity, superfluidity, semiconductors and transistors, and you name it. There is no indication of the influence of experimental design on QM, other than that normally expected. (I did a lot of work on this issue, in regard to electron scattering experiments, a few years ago, and never found any indication that different designs led to different conclusions. If you know about things like ANOVA, etc., you will recognize a bit of oversimplification on my part -- to save writing another paragraph.)

    It's a virtual certainty, at least to me, that eventually QM will be found wanting -- the march of history and all that. But, I'd say that there's no chance -- better, slim to none -- to find the problem in the 100,000 pages. All of this work has been steam-cleaned by the most intense scrutiny imaginable, and QM always has done the job. The QM edifice, probability and all, is huge, stable and solid -- very unlikely to come tumbling down; it will crumble at the edges instead.
    Regards,
    Reilly Atkinson
    QM is weird because nature is weird.
     
    Last edited: Jan 19, 2006
  12. Jan 19, 2006 #11
    Jesse,

    thanks for the links to the articles about Bohmian mechanics, that was very interesting. I think that may have been precisely what I was after. The article by Goldstein at one point says "...if at some time (say the initial time) the configuration Q of our system is random, with distribution given by |Ψ|² = Ψ*Ψ, this will be true at all times..." That, I think, is like what I was trying to say about the randomness in the results being the same as the randomness put into the experiment in the first place.

    Yes, except that determined non-determinists like to think that QM has somehow "proved" that there "are" (no doubt in some Clintonian sense :-) no hidden variables. As you point out elsewhere, Bohmian mechanics, whether correct or not, does away with any arguments that rely on the assumption that QM is necessarily non-deterministic: Bohmian mechanics "serves as a proof-of-principle that you can have a deterministic nonlocal hidden variables theory that replicates all the same predictions as ordinary QM," thus providing a counterexample to the assumption of necessary nondeterminism.
     
  13. Jan 19, 2006 #12
    Reilly,

    Thanks for your reply. I never intended to say that QM was wrong or didn't work, just that I didn't see any reason to assume fundamental nondeterminism. In economics, for example, randomness is introduced because we don't know enough, or don't have enough computational resources to completely describe the system (as in chaos theory).

    In QM we seem to have something similar, and following Bohm the source of the randomness there seems to be that we cannot establish anything beyond what the quantum equilibrium hypothesis allows, thus preserving e.g. the uncertainty principle.

    In both cases there is no need to posit a fundamental indeterminacy; we just run into practical limits. At least, that is what I understand from Bohmian mechanics so far.
     
  14. Jan 19, 2006 #13

    jtbell


    Staff: Mentor

    However, nonlocal theories are incompatible with relativity, as far as I know. In order for BM (or some other nonlocal hidden variables theory) to be taken more seriously as a replacement for orthodox QM, someone is going to have to come up with a replacement for relativity that both allows for nonlocality and satisfies all the experimental tests that relativity has passed so far. Rather a tall order! :bugeye:
     
  15. Jan 19, 2006 #14

    reilly

    Science Advisor

    As a pragmatist in economics and physics, I really don't care much about the origins of randomness. It's there -- let me explain: my version is that, for whatever reasons, we cannot predict the precise result of virtually any experiment -- certainly not for anything measured by a real number. There's always experimental error -- maybe not in simple counting, of oranges or change or books....... -- the sources of which we do not fully understand, but we do know how to manage the error, and estimate it. One posits the experimental error to be a random variable, Gaussian preferably but not necessarily. In fact systems, control, communication and electronics engineers use a very sophisticated approach to errors, one that goes well beyond elementary statistics. And all this effort works.


    Why should something be causal rather than random? Where is it written?
    And, reflect upon the following curiosity: it's possible to build models, based on random variables, that are fully deterministic -- this shows up in the subject of Time Series Analysis.
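    A standard example (made-up numbers): a sinusoid with random amplitude and phase is built entirely from random variables, yet once two values have been observed, every later value is predicted exactly:

    [code]
    # A "deterministic" process in the time-series sense, built from
    # random variables: x[t] = 2*cos(w)*x[t-1] - x[t-2] holds exactly.
    import math
    import random

    w = 0.7                                  # fixed frequency
    A = random.gauss(0, 1)                   # random amplitude
    phi = random.uniform(0, 2 * math.pi)     # random phase

    x = [A * math.cos(w * t + phi) for t in range(100)]

    c = 2 * math.cos(w)
    errors = [abs(x[t] - (c * x[t-1] - x[t-2])) for t in range(2, 100)]
    print("largest prediction error:", max(errors))   # ~ 1e-15
    [/code]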

    So, to avoid brain crunching contradictions, and tortured thoughts, I go with: Random is as random does. If random works, it must be random.

    Probability and statistics provide great tools for solving lots of important problems -- even if we are not sure why, these tools work. But we say that it's random stuff that is explicated by probability and statistics.

    Around these parts, Newton's mechanics and gravity work just fine. Frankly, I find this as mysterious as the accuracy of the standard probability model for a fair coin.

    By your criteria, how could you ever judge that an apparent random process was in fact causal?
    Good questions.
    Regards,
    Reilly Atkinson
     
  16. Jan 19, 2006 #15
    Well... isn't orthodox QM equally non-local? This article by Goldstein states:

    It should be emphasized that the nonlocality of Bohmian mechanics derives solely from the nonlocality built into the structure of standard quantum theory, as provided by a wave function on configuration space, an abstraction which, roughly speaking, combines -- or binds -- distant particles into a single irreducible reality.
    It seems to me that under these circumstances to complain that you can't take BM seriously as a replacement for the equally non-local orthodox QM is, to make a link with a quarkian property, strange :smile:.
     
  17. Jan 19, 2006 #16
    The reason for assuming causality is perhaps not so much physical as methodological. When you incorporate a stochastic element in a model, you are in fact saying something like "we have a rough description of how this variable works, but for the purposes of this model we are not going to bother explaining the detailed workings." In other words, you are explicitly putting the causation of a certain behaviour outside your model.

    That is just fine for pragmatic solutions. It may simply not be possible to incorporate all causalities into the model (think chaos theory), or we may not yet know enough. In that case you take what you have or what you can do. No problem.

    But once fundamental indeterminacy is posited for a "theory of life, the universe and everything" you are saying that you have given up looking for explanations for at least some observed behaviours. You just stick with the rough description, never mind "what makes it tick." And that is equivalent to saying that you have given up on the scientific endeavour, at least for those behaviours. Now it may be that at some point we will have to acknowledge that for some phenomena we have been getting nowhere for a long time and it doesn't look as if we'll be getting any further any time soon. But one shouldn't assume that as a matter of principle, which is what non-determinism amounts to.

    And as a practical matter, one probably won't raise too much grant money for proposing to not look any further into a certain phenomenon :smile:.
     
  18. Jan 19, 2006 #17
    well, as I see it the "non-locality" of orthodox QM is due to particles not having a definite size; I mean they are everywhere, it's just less probable to find them far away from a certain spot... so the particle is not local, and there's no problem of it "knowing" things that are far away from it.

    the non-locality problem only occurs when you insist the particle has a definite place at all times, and it just "knows" non-local conditions.
     
  19. Jan 19, 2006 #18

    selfAdjoint

    Staff Emeritus
    Gold Member
    Dearly Missed

    Totally incorrect. Size, as ZapperZ remarked on another thread, is simply not defined for these particles, and you have completely missed the point of entanglement.

    Unlike some people, physicists don't just make up this stuff off the top of their heads.
     
  20. Jan 20, 2006 #19
    I don't just make this stuff up off the top of my head. I can accept that you don't like the term size, and I'm not going to argue about it, because in the end it's the same as saying the particle doesn't have a defined size, or is point-sized, or whatever; what matters is where it can interact with other matter, and that is everywhere (though maybe with almost zero probability when it's far away...).

    if a particle is described as a field of probability amplitudes, it exists everywhere; that's what I meant.

    there's no need to get all sensitive about it :tongue2:

    I wasn't talking about entanglement... I was talking about the particle "knowing" where it could exist in space according to the surrounding environment, which would have been a non-locality problem if you thought of this particle as existing at a certain point at a certain time, like BM claims.

    isn't the uncertainty principle also related to indeterminacy?
    you can't know where your particle is, not because you can't measure it, but because, if QM is right, it doesn't have a well-defined position.
    you can confine it using a potential well (though as long as the well isn't infinite it could still tunnel everywhere with very small probability).
    tunneling, for example, is a non-locality problem if you think of this particle as having a definite place which you just can't know, but if you say the particle is everywhere, it's not a problem anymore.
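    to put numbers on the tunneling point (a sketch, hbar = m = 1, illustrative values): the exact transmission through a rectangular barrier with E < V0 dies off roughly like exp(-2*kappa*L) as the barrier widens:

    [code]
    # Transmission through a rectangular barrier of height V0 and width L,
    # for a particle of energy E < V0 (hbar = m = 1).
    import math

    E, V0 = 1.0, 2.0                   # particle energy, barrier height
    kappa = math.sqrt(2 * (V0 - E))    # decay constant inside the barrier

    def T(L):
        """Exact transmission coefficient for barrier width L."""
        s = math.sinh(kappa * L)
        return 1.0 / (1.0 + (V0**2 * s**2) / (4 * E * (V0 - E)))

    for L in (0.5, 1.0, 2.0, 5.0):
        print(f"width L = {L:3.1f}   T = {T(L):.3e}")
    [/code]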

    maybe I misunderstood the hidden variable stuff, but I think it should be applied to every unknowable variable, and if it is, then a problem of non-locality is present at the most simple level, that of position, and you don't have to go looking for it in entanglement.
     
    Last edited: Jan 20, 2006
  21. Jan 20, 2006 #20

    reilly

    Science Advisor

    Bohm

    Bohm did his work on his QM some 60 years ago; roughly around the same time that Feynman, Schwinger, and Tomonaga (F, S, T) figured out how to make QED work.

    Even if the Bohm approach had led to a mere 1 percent of the progress in fundamental issues of the FST approach, it would have a certain, if small, level of credibility. But, as far as I know, the Bohm approach has led only to controversy, conferences, heated and passionate arguments, and so forth -- and only in a limited part of the physics community. No Bohm-interpretation-based physics of any consequence means it's a path not likely to bring any success -- orthodox physics has left Bohmian physics way behind.

    By the way, there are sound theoretical reasons why QM and QFT are local -- it's all in the structure of the interactions. And we use, almost exclusively, so-called point interactions, guaranteed to be local.

    Regards,
    Reilly Atkinson
     