
Is quantum mechanics a complete theory of nature?

  1. Mar 6, 2012 #1
    The wave function represents all that can be known about a quantum system, but often that amounts only to knowing the energy. In the case of entanglement we know the energy but not the angular momentum (e.g. the spin) of the components. When one component of an entangled pair (one spin up, one spin down) is measured, the wave function collapses and we immediately know the spin of the other particle, apparently faster than light could carry the news. However, if we knew how the angular momentum of the system was distributed to begin with, we could describe the system without any measurement, and entanglement would not be an issue. So, based on the inability of quantum theory to specify the momentum, it seems to me that quantum theory is incomplete. And because of the uncertainty principle, a complete theory is impossible.
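    To make the perfect anticorrelation concrete, here is a minimal numerical sketch (my own illustration, assuming an ideal spin singlet and ideal measurements along a common axis):

    Code:
    import numpy as np

    # Ideal spin singlet (|01> - |10>)/sqrt(2); both spins are measured
    # along the same axis, so the outcomes are always opposite.
    psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)  # basis |00>,|01>,|10>,|11>
    probs = np.abs(psi) ** 2                            # Born rule probabilities

    rng = np.random.default_rng(0)
    outcomes = rng.choice(4, size=10_000, p=probs)
    a = outcomes // 2   # first spin: 0 = up, 1 = down
    b = outcomes % 2    # second spin
    print(np.mean(a != b))   # -> 1.0: one outcome fixes the other

    The correlation itself is not in dispute; the question is what it implies about what was determined before the measurement.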
     
  3. Mar 6, 2012 #2

    DrChinese

    Science Advisor
    Gold Member

    This question has been considered. Have you already read this?

    A. Einstein, B. Podolsky, N. Rosen: "Can quantum-mechanical description of physical reality be considered complete?" Physical Review 47, 777 (15 May 1935)

    http://www.drchinese.com/David/EPR.pdf
     
  4. Mar 6, 2012 #3
    You might find this information of use - http://www.perimeterinstitute.ca/News/In_The_Media/Fair_Dice:_new_research_shows_quantum_theory_complete/ [Broken]
     
  5. Mar 6, 2012 #4
    Thanks for that link. I had of course read about the EPR experiment but never seen the original.

    I also followed the link to the philosophical discussion of the same question, which I think offers a reasonable way to formulate an answer:
    1. quantum mechanics is the most complete theory/description of nature that we have.
    2. nature itself is the only complete description.

    IOW, our descriptions of nature will always be inadequate, and understandably so.
     
  6. Mar 6, 2012 #5
    nortonian, you may already know this, but EPR is by no means the end of the story. Long after the EPR paper, J.S. Bell proved a theorem in quantum mechanics that poses some challenges to Einstein's view. "quantumtantra.com/bell2.html" [Broken] is a good explanation of Bell's proof which is relatively easy to understand. Once you understand Bell's theorem, you can try to puzzle out its philosophical implications for quantum mechanics.
     
  7. Mar 6, 2012 #6
    N. Herbert's description is excellent. The best I've seen. Thanks.

    He concludes: "After almost a century of contact with nature's peculiar quantum way of doing business we are still lacking a quantum world view that does justice to our new knowledge of the way the world really works."
     
  8. Mar 6, 2012 #7
    I always wonder if physicists aren't repeating Lord Kelvin's "predictions" that never materialized:
     
  9. Mar 6, 2012 #8
    Kelvin gave a caveat to this statement, however: the two famous "clouds" on the horizon of physics.
    The first was the difficulty of the aether, which led to Einstein's theory of relativity. The second was the ultraviolet catastrophe, which led to quantum mechanics. We can only hope to be that prescient!
     
  10. Mar 6, 2012 #9
    Funny how those two clouds obscured a vast mountain range.
     
  11. Mar 7, 2012 #10
    Lugita, I have had time to ponder Nick Herbert's description of Bell's theorem in your link, and I have some ideas I would like to share with anyone out there who's interested, to see if they make sense. In the example he uses a calcite crystal to separate a beam of light into two oppositely polarized beams. Photodetectors are then used for two purposes: to "count" the photons in each beam and to detect the polarization of the beam. Since you are already familiar with it I won't go into detail.

    I don't think the thought experiment he uses is a good one. Photons are bosons, meaning that more than one can occupy the same state. One consequence is that photon bunching occurs in light beams, and the bunches register as coincidences when a beam is divided by a beam splitter (the Hanbury Brown-Twiss effect). On that view, when Herbert uses a calcite crystal to divide a light beam into two beams polarized at 90 degrees and measures photon coincidences, he is actually dividing bunches into smaller bunches, and he is detecting and comparing bunches, not photons. When you change the polarization angle of the detector, whether you detect a photon bunch may depend partly on the size of the bunch.

    I also question his interpretation of detection properties. How can you define a photon as a detection event without looking at the properties of the detector? The time required for a photodetector to register a single detection event is on the order of 10^-9 seconds, and single photons have periods on the order of 10^-12 seconds. By that measure there could be thousands, even hundreds of thousands, of "photons" in a single event.
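    To make the bunching picture concrete, here is a rough Monte Carlo sketch (my own toy model, with made-up parameters): thermal light is bunched and keeps producing coincidences after a 50/50 splitter, while ideal one-photon states never do.

    Code:
    import numpy as np

    rng = np.random.default_rng(1)
    trials = 200_000

    def g2_at_splitter(n):
        """Split each photon number 50/50 (binomial) and estimate
        g2(0) = <n1*n2> / (<n1><n2>) between the two output ports."""
        n1 = rng.binomial(n, 0.5)
        n2 = n - n1
        return (n1 * n2).mean() / (n1.mean() * n2.mean())

    thermal = rng.geometric(0.5, size=trials) - 1   # Bose-Einstein photon numbers, mean 1
    single = np.ones(trials, dtype=int)             # ideal one-photon states

    print(g2_at_splitter(thermal))   # ~2: bunched light gives excess coincidences
    print(g2_at_splitter(single))    # 0: a single photon goes one way or the other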
     
  12. Mar 7, 2012 #11

    zonde

    Gold Member

    There are good reasons to believe that there are exactly as many photons as we think.

    I will try to explain. Say you place two detectors right after a PDC source, one in each output. Now you measure how many single detections you get and how many of them are paired with detections in the other detector. A detector has a parameter called quantum efficiency (QE) that says (in %) what fraction of incident photons it detects. If you calculate the expected ratio between single detections and paired detections using this QE parameter, it agrees very well with the observed ratio. And second, if you increase the detector's QE, the rate of paired detections increases as well, so that at QE = 100% you would have ~100% paired detections and practically no unpaired single detections.
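    A minimal sketch of that arithmetic (made-up numbers, assuming each photon of a pair is detected independently with its detector's QE):

    Code:
    # Expected click rates for a photon-pair source with two detectors.
    def pair_rates(pairs_per_sec, q1, q2):
        singles1 = pairs_per_sec * q1        # detector 1 clicks
        singles2 = pairs_per_sec * q2        # detector 2 clicks
        coinc = pairs_per_sec * q1 * q2      # both click on the same pair
        return singles1, singles2, coinc

    # Hypothetical: 100,000 pairs/s, detectors with QE of 60% and 40%.
    s1, s2, c = pair_rates(100_000, 0.60, 0.40)
    print(c / s1)   # 0.40: fraction of detector-1 clicks that are paired = QE of detector 2
    print(c / s2)   # 0.60: fraction of detector-2 clicks that are paired = QE of detector 1
    # At QE = 100% on both sides every click would be paired.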

    You might want to look at this as well:
    Single-photon detector characterization using correlated photons: the march from feasibility to metrology
     
  13. Mar 8, 2012 #12
    Zonde, you refer to "measurement and detection" as though they can be equated with "photon", even though you don't say so. How do you know that a detection is a photon? Can anyone verify that without violating the uncertainty principle? The idea has been disputed before: "arxiv.org/pdf/quant-ph/9711046" [Broken], where they say that "The down conversion is, more accurately, a correlated amplification of certain modes of the zeropoint field." I am not sufficiently acquainted with the theory to understand all their arguments, but I have not seen an answer to their objections. It seems that everyone wants to jump on the quantum bandwagon before considering all the evidence.

    In the bunching model you can keep splitting a beam until it can't be detected, and you will still have coincidences in the beams, because you can never detect all of the bosons in an energy state. A detection event includes all the photons in an energy state, not just one.
     
  14. Mar 8, 2012 #13
    Going back to my original post, I think that quantum theory can include a description of its own incompleteness, if we will only recognize it.
     
  15. Mar 8, 2012 #14

    zonde

    Gold Member

    You mean, how do I know that a detection is caused by a photon, and that it is a single, indivisible one?
    If so, then I guess my answer is something like this: I do not know, but any viable alternative makes no difference (at this time).

    Santos says in the abstract of this paper:
    "It also requires us to recognize that there is a payoff between detector efficiency and signal-noise discrimination."
    This indeed seems to be the case for SPAD detectors. But it turns out this is not a general rule for all detectors:
    NIST Detector Counts Photons With 99 Percent Efficiency:
    “When these detectors indicate they’ve spotted a photon, they’re trustworthy. They don’t give false positives,” says Nam, a physicist with NIST’s Optoelectronics division. “Other types of detectors have really high gain so they can measure a single photon, but their noise levels are such that occasionally a noise glitch is mistakenly identified as a photon. This causes an error in the measurement. Reducing these errors is really important for those who are doing calculations or communications.”

    I am trying to consider the evidence as much as I can. And I do not want to jump anywhere.
    I am always ready to explain why I think that quantum entanglement has a local realistic explanation. :wink:

    There is something missing here. For a coherent source there is no correlation between the two outputs of a beamsplitter. As I see it, this directly contradicts your bunching model.
     
  16. Mar 9, 2012 #15
    The choice of meaning for the word "complete" sometimes seems a bit strange. In the paper you linked, it is taken to mean "no theory could have more predictive power than quantum mechanics", but that is hardly the mathematical definition of the word.

    Compare, for example, with the discussions of Gödel's theorem with respect to quantum mechanics. There, incomplete means that within the set of axioms used there are true statements that cannot be proven true. With such a definition there are strong indications (if not proofs) that any physical theory, and therefore also quantum mechanics, cannot be "complete", because completeness is not compatible with consistency, which seems to be a required property.
     
  17. Mar 9, 2012 #16
    There is another paper on detection events vs. photons, by Marshall (http://www.mendeley.com/research/myth-down-converted-photon/), which specifically addresses parametric down conversion.

    Right. The bunching model refers to partially coherent light. The purpose of bringing it up was to show that even if there were only one photon in a detection event, it would be impossible to know that for sure (this is also true of extremely low intensity light). A detection event is like Maxwell's demon: a door is opened for a fraction of a second in the hope of admitting one photon, except that the door is open a thousand times longer and many thousands of times wider than a single photon.

    What we say here is irrelevant because we don't have access to the press. I am talking about N. Herbert and all the other "experts" who choose which evidence to consider when pronouncing on the nature of reality and other questions. Maybe they are thinking about the royalties they can get from science fiction works.
     
  18. Mar 9, 2012 #17

    DrChinese

    Science Advisor
    Gold Member

    I believe zonde and others have already answered this, but the short answer is that your hypothesis is experimentally refuted. The BBO crystals that create the entangled photon pairs produce only thousands of pairs per second, which are easily resolved into individual detection events when you are looking at fast detectors. In other words, there are no bunches going into the beamsplitters, so there can be no bunches coming out. Furthermore, these experiments are sometimes done with polarizers rather than beamsplitters, with no change in outcomes. And the same entanglement is seen in properties other than polarization. The fact is that each photon of the pair (Alice's and Bob's) heralds the arrival of the other.

    Yes, it is always technically possible that there are 2 photons being detected at EXACTLY the same time at both detectors and masquerading as 1, but this is far-fetched (and meaningless) in the extreme. There is no evidence of any effect like this at all. So the idea of this occurring at the calcite splitter is not viable. Unless, of course, you want to make up some new ad hoc physics.

    See for example:

    http://people.whitman.edu/~beckmk/QM/grangier/Thorn_ajp.pdf

    Observing the quantum behavior of light in an undergraduate laboratory
    J. J. Thorn, M. S. Neel, V. W. Donato, G. S. Bergreen, R. E. Davies, and M. Beck

    While the classical, wavelike behavior of light (interference and diffraction) has been easily observed in undergraduate laboratories for many years, explicit observation of the quantum nature of light (i.e., photons) is much more difficult. For example, while well-known phenomena such as the photoelectric effect and Compton scattering strongly suggest the existence of photons, they are not definitive proof of their existence. Here we present an experiment, suitable for an undergraduate laboratory, that unequivocally demonstrates the quantum nature of light. Spontaneously downconverted light is incident on a beamsplitter and the outputs are monitored with single-photon counting detectors. We observe a near absence of coincidence counts between the two detectors—a result inconsistent with a classical wave model of light, but consistent with a quantum description in which individual photons are incident on the beamsplitter. More explicitly, we measured the degree of second-order coherence between the outputs to be g(2)(0) = 0.0177 ± 0.0026, which violates the classical inequality g(2)(0) ≥ 1 by 377 standard deviations.
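    In that three-detector (gated) arrangement the reported value comes from raw counts. A minimal sketch of the heralded-g2 estimator used in experiments of this kind (the count numbers below are made up for illustration, not the paper's data):

    Code:
    # Heralded g2(0) from raw counts: N_G gate counts, N_GT and N_GR
    # gate-transmit / gate-reflect coincidences, N_GTR triple coincidences.
    def heralded_g2(n_g, n_gt, n_gr, n_gtr):
        return (n_gtr * n_g) / (n_gt * n_gr)

    print(heralded_g2(n_g=100_000, n_gt=8_000, n_gr=8_500, n_gtr=12))
    # -> ~0.0176, far below the classical bound g2(0) >= 1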
     
  19. Mar 9, 2012 #18

    DrChinese

    Science Advisor
    Gold Member

    This comment comes out of the blue, and I don't see any connection to the subject matter. Around here, an expert is an expert. Not an "expert".
     
  20. Mar 10, 2012 #19
    I agree. :smile:

    There are 8 pages of historical development and experimental discussion in the paper you cite, but only two sentences are used to define what a "single photon" is. I don't question the accuracy of the experiments or that they are able to make good predictions. I question the assumptions they begin with and the logic behind them. Can you cite something more basic?

    I believe the Marshall and Santos papers I cited do a better job of looking at fundamentals. Although they do not offer a more accurate theory, they have the advantage that they reject non-locality. Will you comment on their argument that when the zero-point field is used to describe the photon, it is actually a classical model?
     
  21. Mar 11, 2012 #20

    zonde

    Gold Member

    I am trying to understand your objections. Do you think that all reasoning should start from something that we know for sure? And if we do not know anything for sure, then we can do no reasoning, right?

    But then I do not understand how this bunching model is better. Or maybe I do:
    you believe that the single-photon model somehow implies non-locality while the bunching model implies locality.

    Well, I do not agree. The single-photon model by itself does not conflict with local realism.
    On the other hand, Bell's theorem applies to your bunching model just as well.
     
  22. Mar 11, 2012 #21
    It seems reasonable to assume that quantum theory is an incomplete description of physical reality, and that the incompleteness of the theory, in a certain sense, can be deduced/inferred from the theory itself. But, afaik, when people speak of the completeness of quantum theory they don't mean that it's a complete description of physical reality (after all, how could anyone ascertain that, and what would it even refer to?). Rather, what they mean is that quantum theory incorporates everything that's known about reality via quantum experimental phenomena.

    So, how could your OP ever be definitively answered?
     
  23. Mar 12, 2012 #22
    For my part, I am trying to understand what a photon is, but when I look at the literature I find contradictory information. If we don't know for sure what a photon is, then it is ridiculous to use that model to reject locality. My objection to quantum mechanics is that the fundamentals are dealt with on a purely phenomenological basis: if you can't see it, it doesn't exist. To show what I mean, I have checked a well-respected source from the article you cited: R. Loudon, The Quantum Theory of Light, 3rd ed. (Clarendon, Oxford, 2000).

    “The one-photon state has the important and distinctive property that it can produce only a single current pulse in the ionization of a photodetector.”

    DrChinese, if we are talking about one-photon states, then I agree that you are right, not because it is physically impossible, but because it was defined to be impossible.

    Loudon also states the following:

    “A one-photon excitation in such a mode (a spatial mode) is distributed over the entire interferometer, including both internal paths.” (page 2)

    I understand this to mean that the one-photon state is delocalized, and because it is in both arms at the same time it is a non-local definition. It should not be surprising that a non-local model leads to non-locality.

    When quantum mechanics rejects a physical model such as bunching because it is viewed as incomplete, the insistence is that a better model must give better predictions. IOW, are better predictions more important than a local theory? We can have both locality and predictive power if we admit that it is impossible to know for sure what constitutes a detection event.

    ThomasT, do you equate reality with what we observe? IOW, is there more to reality than what we observe?
     
  24. Mar 12, 2012 #23

    DrChinese

    Science Advisor
    Gold Member

    Now you are mixing metaphors. There are a lot of ideas about what a photon is, but with none of them is there a local realistic way to explain Bell test results. So no, you will be out on your own with this objection.

    As to the bunching phenomenon you postulate, all you have to do is give me a specific scenario, and I believe we can explain why it does not apply. Please recall that there are probably hundreds of different types of Bell tests which violate local realism, many of which do not use photons at all. For example, see:

    http://www.nature.com/nature/journal/v409/n6822/full/409791a0.html

    Local realism is the idea that objects have definite properties whether or not they are measured, and that measurements of these properties are not affected by events taking place sufficiently far away [1]. Einstein, Podolsky and Rosen [2] used these reasonable assumptions to conclude that quantum mechanics is incomplete. Starting in 1965, Bell and others constructed mathematical inequalities whereby experimental tests could distinguish between quantum mechanics and local realistic theories. Many experiments have since been done that are consistent with quantum mechanics and inconsistent with local realism. But these conclusions remain the subject of considerable interest and debate, and experiments are still being refined to overcome ‘loopholes’ that might allow a local realistic interpretation. Here we have measured correlations in the classical properties of massive entangled particles (⁹Be⁺ ions): these correlations violate a form of Bell's inequality. Our measured value of the appropriate Bell's ‘signal’ is 2.25 ± 0.03, whereas a value of 2 is the maximum allowed by local realistic theories of nature. In contrast to previous measurements with massive particles, this violation of Bell's inequality was obtained by use of a complete set of measurements. Moreover, the high detection efficiency of our apparatus eliminates the so-called ‘detection’ loophole.

    So your model needs a little revving up to explain this. 'Cause there ain't no bunching of Beryllium. :smile:
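    For context: the ‘signal’ quoted above is the CHSH combination S = E(a,b) + E(a',b) + E(a',b') - E(a,b'). Local realism bounds |S| <= 2, while quantum mechanics allows up to 2*sqrt(2) ≈ 2.83; the ions' 2.25 sits between the two bounds. A minimal sketch for an ideal singlet, whose correlation at analyzer angles a, b is E(a,b) = -cos(a-b):

    Code:
    import numpy as np

    def E(a, b):
        # Singlet-state correlation between analyzers at angles a and b.
        return -np.cos(a - b)

    # Angle choices that maximize the quantum violation.
    a, ap = 0.0, np.pi / 2
    b, bp = np.pi / 4, 3 * np.pi / 4

    S = E(a, b) + E(ap, b) + E(ap, bp) - E(a, bp)
    print(abs(S))   # -> 2.828... = 2*sqrt(2), above the local-realist bound of 2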
     
  25. Mar 13, 2012 #24

    zonde

    Gold Member

    Yes, it is ridiculous to reject locality. Period. Without any "if ... then ...". Please try to understand that. It has nothing to do with different models of photons.

    Yes, I agree. They are valid objections.
     
  26. Mar 14, 2012 #25
    Quantum mechanics (from Loudon):
    Hypothetical bunching model:
    Photons are localized, with a diffuse external field and a single frequency. By themselves they do not have sufficient energy to cause a detection event, but the superposition of the fields of many photons leads to intensities sufficient to cause one. Detections are caused by the superposed fields of photons.
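    A toy version of this model, just to make it explicit (my own construction, with a made-up threshold and noise level): split a fluctuating classical intensity 50/50 and let each detector click when its share crosses a threshold.

    Code:
    import numpy as np

    rng = np.random.default_rng(2)
    trials = 100_000

    intensity = rng.exponential(1.0, size=trials)   # fluctuating (bunched) intensity
    half = intensity / 2                            # each output of a 50/50 splitter
    c1 = half + rng.normal(0, 0.05, trials) > 0.8   # threshold detector 1
    c2 = half + rng.normal(0, 0.05, trials) > 0.8   # threshold detector 2

    g2 = np.mean(c1 & c2) / (np.mean(c1) * np.mean(c2))
    print(g2)   # > 1: both outputs tend to click together when the field is intense

    In a model of this kind the two outputs are positively correlated (g2 >= 1), which is the classical bound that the Thorn et al. experiment quoted earlier tests against.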

    It is thereby assumed that realism can be defined by a mathematical analysis of experiments. Don't you think that is presumptuous? It seems more likely that realism is more fundamental than quantum mechanics. The problem with quantum mechanics is that it only accepts challenges to its interpretations that abide by its own rules. In the Nature article two loopholes for violations of the Bell inequality are considered: subluminal communication between the detectors and the failure to record all detections. It does not suggest what to me is the real cause: that the detection event is incorrectly interpreted. Bell's inequality is a commentary on the nature of detection events, not on locality or photons. Clearly we cannot look behind the phenomena to determine the truth, but as long as that possibility exists, local realism has not been disproved.

    Quantum mechanics should be able to say what part of reality cannot be observed, IOW precisely define its own limitations.

    The trouble with trying to prove quantum mechanics wrong is that its advocates insist that you come up with better predictions. All one has to do is show that the predictions are based on a superficial understanding of nature, or photons, or whatever. If Bell was using an incorrect model, then he proved something about quantum mechanics, not about reality.
     