
'Information' in Quantum Physics

  1. May 14, 2015 #1
    Reading about the concept of 'information' in physics, I have often come across the Shannon definition of information content, i.e. how compressible the description of something is. A truly random number cannot be expressed in any shorter way than itself, and thus has a high information content, while other seemingly very complex phenomena can be compressed to some initial conditions plus a simple algorithm, and therefore have a small information content.
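    As a rough, self-contained illustration of that idea of compressibility (using zlib as a crude, computable stand-in for the ideal shortest description):

    Code:
    import zlib, random, string

    # A "truly random" string: barely compressible.
    random.seed(0)
    rand_s = ''.join(random.choice(string.ascii_letters) for _ in range(10000)).encode()

    # A structured string of the same length, generated by a tiny rule.
    struct_s = ('ab' * 5000).encode()

    print(len(zlib.compress(rand_s)))    # thousands of bytes: hardly compressible
    print(len(zlib.compress(struct_s)))  # a few dozen bytes: the rule was short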

    But in quantum physics, those algorithms (the equations dictating the evolution of the system in time) do not have fixed outcomes; they are only probabilistic. The same initial conditions and the same equations can lead to completely different outcomes. I then wonder how information content can be attributed at all. It seems that knowing the initial conditions and knowing the algorithm(s) is of no use; we still have no idea what the outcome will be. A significant portion of the information required to determine the final state comes from 'chance' and is therefore forever hidden from us.

    Conversely, it seems that a particular physical situation can (in principle) have been the outcome of different initial conditions and different algorithms, merely affected by different outcomes of chance. So it seems that in the quantum realm a physical situation can never be compressed to initial conditions plus an algorithm. Chance always plays a role in the output, so the Shannon concept of compressibility breaks down: nothing should be compressible, because we can never know the contribution of chance to the outcome.

    Where am I wrong?
     
  3. May 14, 2015 #2

    Demystifier

    Science Advisor

    From an information point of view, quantum physics is not so much about outcomes, but about correlations between outcomes. Given that I have measured the outcome of one variable, can I predict the outcome of another variable? It turns out that in many cases you can predict it with certainty, and it is this conditional probability that is quite similar to the Shannon concept of information.
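    A toy numerical version of this point (a sketch with two perfectly anti-correlated binary outcomes, e.g. two spins from a singlet measured along the same axis): each outcome alone carries one bit of Shannon entropy, but given the other outcome there is nothing left to learn.

    Code:
    import math

    # Joint distribution p(x, y) for two perfectly anti-correlated binary outcomes.
    p = {(0, 1): 0.5, (1, 0): 0.5}

    def H(dist):
        """Shannon entropy in bits of a distribution {outcome: probability}."""
        return -sum(q * math.log2(q) for q in dist.values() if q > 0)

    # Marginals: each outcome individually looks like a fair coin flip.
    px = {0: 0.5, 1: 0.5}
    py = {0: 0.5, 1: 0.5}
    print(H(px))         # 1.0 bit: a single outcome is maximally unpredictable
    print(H(p) - H(py))  # 0.0 bits: H(X|Y) = H(X,Y) - H(Y); y fixes x completely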
     
  4. May 14, 2015 #3

    atyy

    Science Advisor

    You are wrong in the first line: what you describe (the shortest possible description of something) is the Kolmogorov complexity, not the Shannon information. http://en.wikipedia.org/wiki/Kolmogorov_complexity

    A deep reason you are wrong is Bohmian Mechanics :P

    The closest concept in the Shannon theory is the Shannon entropy, which is the same as the Boltzmann-Gibbs entropy of classical statistical mechanics in physics.

    The Boltzmann-Gibbs-Shannon entropy does have to be modified in quantum mechanics, and the new formula is the von Neumann entropy, which reduces to the old formula in special circumstances.
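    A quick numerical check of that reduction (a sketch using numpy; the states here are just examples):

    Code:
    import numpy as np

    def von_neumann_entropy(rho):
        """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues of rho."""
        evals = np.linalg.eigvalsh(rho)
        evals = evals[evals > 1e-12]  # drop numerical zeros
        return float(-np.sum(evals * np.log2(evals)))

    # Diagonal density matrix: the eigenvalues are classical probabilities, so the
    # von Neumann entropy equals the Shannon entropy of (0.25, 0.75).
    rho_diag = np.diag([0.25, 0.75])
    print(von_neumann_entropy(rho_diag))  # ~0.811 bits

    # A pure state |+><+| has zero entropy, unlike any nontrivial classical
    # distribution over two outcomes.
    plus = np.array([1.0, 1.0]) / np.sqrt(2)
    print(von_neumann_entropy(np.outer(plus, plus)))  # ~0.0 bits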

    A further concept in classical information theory is the Kullback-Leibler divergence, or relative entropy, of which the Shannon information is a particular case. If spacetime and matter are not discrete, then this is more fundamental than the Shannon entropy. The relative entropy also has a quantum counterpart, based on a formula similar to the von Neumann entropy.
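    Concretely, the quantum relative entropy is S(rho||sigma) = Tr rho (log rho - log sigma). A minimal sketch (numpy only; it assumes full-rank states so the logarithms exist):

    Code:
    import numpy as np

    def quantum_relative_entropy(rho, sigma):
        """S(rho||sigma) = Tr rho (log2 rho - log2 sigma), via eigendecomposition."""
        def log2m(m):
            w, v = np.linalg.eigh(m)
            return v @ np.diag(np.log2(w)) @ v.conj().T
        return float(np.real(np.trace(rho @ (log2m(rho) - log2m(sigma)))))

    # For diagonal (commuting) states this is exactly the classical
    # Kullback-Leibler divergence of (0.5, 0.5) from (0.25, 0.75).
    rho = np.diag([0.5, 0.5])
    sigma = np.diag([0.25, 0.75])
    print(quantum_relative_entropy(rho, sigma))  # ~0.2075 bits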
     
    Last edited: May 14, 2015
  5. May 14, 2015 #4

    bhobba

    Science Advisor
    Gold Member

    Well, that's one area where you are wrong.

    As Demystifier will tell you, in BM everything is deterministic. You just can't know the outcome, because QM does not let you know the initial conditions. MW (Many Worlds) is also deterministic, but in a more subtle way.

    One of the valuable insights you get from studying a number of interpretations is exactly what the formalism of QM implies, and what it doesn't. It's not always what it may seem at first sight.

    Thanks
    Bill
     
  6. May 18, 2015 #5
    There are two types of outcomes, one of which is predictable and one of which isn't.

    The distribution of particle detections (the pattern or image they form) is predictable, but the individual locations of those detections are not.

    An algorithm can therefore be created to represent (and compress) the distribution/pattern/image - but not the locations.

    To put it another way, one can write an algorithm that generates particle detections for a given experimental setup. Each time you run the algorithm for the same setup it will generate the same pattern of detections, but not the same location for each detection.
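    A toy version of such an algorithm (a sketch; the double-slit-style intensity is just an assumed example distribution):

    Code:
    import numpy as np

    def detections(n, seed):
        """Sample n detection positions from a fixed double-slit-like intensity
        pattern via rejection sampling. The pattern is fixed; the hits are not."""
        rng = np.random.default_rng(seed)
        hits = []
        while len(hits) < n:
            x = rng.uniform(-1, 1)
            # Interference-style intensity: cos^2 fringes under a Gaussian envelope.
            intensity = np.cos(8 * np.pi * x) ** 2 * np.exp(-4 * x ** 2)
            if rng.uniform(0, 1) < intensity:
                hits.append(x)
        return np.array(hits)

    # Different seeds: the individual hit locations differ from run to run...
    print(detections(5, seed=1))
    print(detections(5, seed=2))

    # ...but the histogram (the pattern) converges to the same distribution.
    h1, _ = np.histogram(detections(20000, seed=1), bins=40, range=(-1, 1))
    h2, _ = np.histogram(detections(20000, seed=2), bins=40, range=(-1, 1))
    print(np.corrcoef(h1, h2)[0, 1])  # close to 1: same pattern, different detections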

    Understanding the difference between each run of the algorithm allows one to grasp the concept of individual detections. Understanding what is the same on each run of the algorithm allows one to grasp the concept of a pattern (an image or distribution).

    The pattern constitutes information such as the density of detections in a given area. This information is non-local in the sense that characterising it requires looking at more than one location. To put it another way: at any single location such information is (obviously) not available.

    Indeed, the so-called measurement problem can be regarded as a feature instead. The information unavailable in the model (or algorithm) equates to the information unavailable at a single location.

    C
     
    Last edited: May 18, 2015