
Pure and mixed

  1. Mar 20, 2006 #1
    What is a pure state and a mixed state?
  3. Mar 20, 2006 #2

    Doc Al


    Staff: Mentor

    A pure state is one that can be represented by a vector in a Hilbert space. A mixed state is one that cannot: it must be represented by a statistical mixture of pure states.
  4. Mar 20, 2006 #3


    Staff Emeritus
    Science Advisor
    Gold Member

    As Doc Al said, a pure quantum state is, well, a quantum state (an element of Hilbert space). A mixed state is a statistical mixture of pure states. You can compare this with the situation in classical mechanics. A "pure state" would be a "point in phase space", while a "mixture" would be a statistical distribution over phase space (given by a probability density over phase space).

    However, there's an extra weirdness in the case of quantum theory. Whereas the probability density in classical phase space gives a unique probability to each individual phase-space point, and two different probability densities in classical phase space are experimentally distinguishable (in principle, given an ensemble of physical systems described by the density, you can extract that density by doing a complete measurement of the phase-space point of each system and histogramming the outcomes over phase space), the quantum description of a mixed state allows DIFFERENT ensembles, with different probabilities over different states, to give rise to IDENTICAL mixed states, which are experimentally indistinguishable.
    This finds its origin in the probabilistic aspect of quantum measurements, where two kinds of probability get mixed up: the probability of an outcome due to quantum randomness for a given pure state, and the probability within the mixture of being a certain pure state.

    As an example, consider a spin-1/2 system.

    Pure states are, for example: |z+>,
    or |z->
    or |x+>
    or |x->

    These are elements of the 2-dimensional Hilbert space describing the system.

    A mixture can be described, a priori, by, well, a mixture of pure states, such as: 30% |x+>, 60% |z-> and 10% |y+>. But this decomposition is not unique:

    The mixture:
    50% |z+> and 50% |z->

    is experimentally indistinguishable, for instance, from the mixture:

    50% |x+> and 50% |x->

    Mixtures are correctly described by a density matrix, rho.

    If a mixture is made up of a fraction p_a of state |a>, p_b of state |b>, and p_c of state |c>, then:

    rho = p_a |a><a| + p_b |b><b| + p_c |c><c|

    A measurable quantity A then has the expectation value:

    <A> = Tr(A rho)

    As such, different mixtures with identical rho are experimentally identical.
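
    Here is a minimal numerical sketch of this check (Python with numpy; the 50/50 z- and x-mixtures are the ones from above):

    Code:
    import numpy as np

    # Pauli-z eigenstates |z+>, |z-> and Pauli-x eigenstates |x+>, |x->
    zp = np.array([1, 0], dtype=complex)
    zm = np.array([0, 1], dtype=complex)
    xp = (zp + zm) / np.sqrt(2)
    xm = (zp - zm) / np.sqrt(2)

    def proj(psi):
        """Projector |psi><psi| onto a normalized state psi."""
        return np.outer(psi, psi.conj())

    # Two different ensembles: 50/50 along z, and 50/50 along x
    rho_z = 0.5 * proj(zp) + 0.5 * proj(zm)
    rho_x = 0.5 * proj(xp) + 0.5 * proj(xm)
    print(np.allclose(rho_z, rho_x))   # True: both equal I/2

    # <A> = Tr(A rho), e.g. for the Pauli-x observable
    sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
    print(np.trace(sigma_x @ rho_z).real)   # 0.0 for either mixture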

    Some therefore claim that the true quantum state of a system is given by rho, and not by an element of Hilbert space. However, this leads to other problems...
  5. Mar 21, 2006 #4
    First of all: what is a state? It's a probability algorithm. We use it to assign probabilities to possible measurement outcomes on the basis of actual measurement outcomes (usually called "preparations"). A measurement is complete if it yields the maximum possible amount of information about the system at hand. A state is pure if it assigns probabilities on the basis of the outcome of a complete measurement. Otherwise it is mixed.
  6. Mar 21, 2006 #5


    Staff Emeritus
    Science Advisor
    Gold Member

    What you write is a correct view from the "information" (epistemological) point of view. Personally, I like to see something more than just a statement about knowledge, but I agree that this is a possible viewpoint which is endorsed by some.
    In that viewpoint, the only "state" we talk about, is a state of our knowledge about nature, and not an ontological state of nature.
  7. Mar 21, 2006 #6
    Dear vanesh,

    I am with you in your expectation to see more than statements about knowledge. There is no denying, however, that the quantum formalism is a probability algorithm. Whereas this formalism is utterly incomprehensible as a one-to-one representation of the real world, it is almost self-evident if we think of it as a tool for describing the objective fuzziness of the quantum world.

    Almost the first thing people came to understand through quantum mechanics was the stability of atoms and objects composed of atoms: it rests on the objective fuzziness of their internal relative positions and momenta. (The literal meaning of Heisenberg's term "Unschärfe" is not "uncertainty" but "fuzziness".)

    What is the proper (mathematically rigorous and philosophically sound) way of dealing with a fuzzy observable? It is to assign probabilities to the possible outcomes of a measurement of this observable. But if the quantum-mechanical probability assignments serve to describe an objective fuzziness, then they are assignments of objective probabilities.

    So the fact that quantum mechanics deals with probabilities does not imply that it is an epistemic theory. If it deals with objective probabilities, then it is an ontological theory.
  8. Mar 23, 2006 #7


    Science Advisor
    Homework Helper

    A pure state: it has a simple mathematical meaning, namely a point in the projective Hilbert space of the system or, if you prefer, a one-dimensional linear subspace (a.k.a. unit ray, or simply ray, if there's no room for confusion) of the Hilbert space associated to the quantum system.

    A mixed state: well, if you read any statistical physics course under the title "Virtual statistical ensembles in quantum statistics" you'll get a very good idea on it.

    BTW, the von Neumann formulation of QM allows the most natural description of mixed states...

  9. Mar 23, 2006 #8


    Staff Emeritus
    Science Advisor
    Gold Member

    There's a hitch with this view, because it would imply that there is a set of observables (spanning a phase space) over which quantum theory generates a Kolmogorov probability distribution, as such fixing entirely the probabilities of the outcomes of all POTENTIAL measurements.
    And we know that this cannot be done: we can only generate a Kolmogorov probability distribution for a set of COMPATIBLE measurements.

    The closest one can come is something like the Wigner quasidistribution.
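
    For instance, a minimal sketch (Python with scipy; the convention hbar = 1 and dimensionless quadratures is my assumption) of the Wigner function of a Fock state, which goes negative and hence is only a QUASIdistribution, not a genuine probability density:

    Code:
    import numpy as np
    from scipy.special import eval_laguerre

    def wigner_fock(n, x, p):
        """Wigner quasidistribution of the n-th Fock state (hbar = 1)."""
        r2 = x**2 + p**2
        return ((-1)**n / np.pi) * np.exp(-r2) * eval_laguerre(n, 2 * r2)

    print(wigner_fock(0, 0.0, 0.0))   #  1/pi: vacuum, positive everywhere
    print(wigner_fock(1, 0.0, 0.0))   # -1/pi: negative at the origin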

    Last edited: Mar 23, 2006
  10. Mar 23, 2006 #9
    There is no need to read a statistical physics course. Quantum mechanics represents the possible outcomes to which its algorithms assign probabilities by the subspaces of a vector space, it represents its pure probability algorithms by 1-dimensional subspaces of the same vector space, and it represents its mixed algorithms by probability distributions over pure algorithms. Hence the name "mixed".
  11. Mar 23, 2006 #10
    Please explain how this would imply what you think it implies. State your assumptions so that I can point out either that they are wrong or that I do not share them.
  12. Mar 24, 2006 #11


    Staff Emeritus
    Science Advisor
    Gold Member

    Well, you are correct in stating that, given a wavefunction or a mixed state, AND GIVEN A CHOICE OF COMMUTING OBSERVABLES, the wavefunction/density matrix generates a probability distribution over the outcomes of these observables. As such, one might say - as you do - that these variables are "fuzzy" quantities, and that they are correctly described by the generated probability function.

    However, if I make ANOTHER choice of commuting observables, which is not compatible with the previous set, I will compute a different probability distribution for these new observables. No problem as of yet.

    But what doesn't always work is to consider the UNION of these two sets of observables, and to require that there is an overall probability distribution that describes this union. As such, one cannot say that the observable itself "has" a probability distribution, independent of whether we were going to pick it out or not in our set of commuting observables. This is what is indicated by the non-positivity of the Wigner quasi-distributions.

    The typical example is of course a Bell state |+>|-> - |->|+>, where we consider spin measurements along 3 well-chosen directions on each side of the experiment. Let us call them A, B, C, U, V and W; each of them can have a result +1 or -1. There is NO probability distribution P(A,B,C,U,V,W) for the 64 different possibilities of A,B,C,U,V,W which corresponds to the quantum predictions - that's the essence of Bell's theorem, in fact, because if this distribution existed, a common hidden variable thus distributed could generate the outcomes.
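
    A minimal sketch of the same point, in the closely related two-settings-per-side (CHSH) form rather than the three-settings form above: the singlet correlation is E(a,b) = -cos(a-b), and any joint distribution over pre-assigned +/-1 outcomes forces |S| <= 2 (the specific angles below are just one convenient choice):

    Code:
    import numpy as np

    def E(a, b):
        """Singlet-state correlation of spin results along directions a, b."""
        return -np.cos(a - b)

    # CHSH combination: any joint Kolmogorov distribution over
    # pre-assigned +/-1 outcomes would force |S| <= 2.
    a, a2 = 0.0, np.pi / 2
    b, b2 = np.pi / 4, 3 * np.pi / 4

    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(abs(S))   # 2*sqrt(2) ~ 2.83 > 2: no joint distribution reproduces it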

    So in that sense, I wanted to argue that it is not possible to claim that every POTENTIAL observable is a "fuzzy quantity" correctly described by a probability distribution which - so I assumed in that case - must exist independently of the SET of (commuting) observables that we are going to select for the experiment.
  13. Mar 24, 2006 #12


    Science Advisor
    Homework Helper

    Would you mind expanding the dots?

    An interested reader.

  14. Mar 24, 2006 #13
    Dear vanesh,

    Thank you for your detailed response. Now it's my turn to add some flesh to my earlier post about objective probabilities.

    It is indeed not possible to consistently define an overall probability distribution for a set of non-commuting observables. If one attributes a probability distribution to a set of observables, then one makes the (implicit) assumption that these observables can be measured simultaneously, and this is not possible for non-commuting observables. (In fact, every quantum-mechanical probability assignment implicitly assumes that the corresponding measurement not only can be but is made. Out of any measurement context, quantum-mechanical probabilities are simply meaningless.) I further admit that my all too brief post may have suggested the opposite: that I believe it makes sense to objectify quantum-mechanical probabilities out of measurement contexts. What could make people jump to this erroneous conclusion is the popular misconception that reference to measurements is the same as reference to observers.

    There are basically two kinds of interpretation: those that acknowledge the central role played by measurements in standard axiomatizations of quantum mechanics, and those that try to sweep it under the rug. As a referee of a philosophy-of-science journal once put it to me, "to solve [the measurement problem] means to design an interpretation in which measurement processes are not different in principle from ordinary physical interactions." To my way of thinking, this definition of "solving the measurement problem" is the reason why as yet no sensible solution has been found. Those who acknowledge the importance of measurements, on the other hand, appear to think of probabilities as inherently subjective and therefore cannot comprehend the meaning of objective probabilities. Yet it should be perfectly obvious that quantum-mechanical probabilities cannot be subjective. Subjective (that is, ignorance) probabilities disappear when all relevant facts are taken into account (which in many cases is practically impossible). The uncertainty principle, however, guarantees that quantum-mechanical probabilities cannot be made to disappear. As David Mermin put it, "in a non-deterministic world, probability has nothing to do with incomplete knowledge. Quantum mechanics is the first example in human experience where probabilities play an essential role even when there is nothing to be ignorant about." Mermin in fact believes that the mysteries of quantum mechanics can be reduced to the single puzzle posed by the existence of objective probabilities, and I think that this is correct.

    This is the assumption that I did not make and that indeed cannot be made.
    Last edited: Mar 25, 2006
  15. Mar 26, 2006 #14


    Staff Emeritus
    Science Advisor
    Gold Member

    Well, I fully subscribe to that referee's view, honestly. However, you are right that there are essentially two views on QM: one which considers a "measurement process", and another which says that there's no such thing - count me as a partisan of the latter (caveat... see further).

    I would classify these two different views differently. I'd say that those who consider quantum theory as a "partial" theory have no problem adding an extra thing, called measurement process, while those that want to take on the view that quantum theory is a *universal* physical theory, cannot accept such a process.

    The reason is the following: if quantum theory is to be universal (meaning that its axioms apply to everything in the world - necessarily a reductionist viewpoint, of course), then they also apply to the observer. And a "measurement" for the observer is nothing else but "a state" of the observer. You can only treat information as anything other than a physical state if you don't consider the "information-possessor" (= the observer) as being part of the very physics.
    In classical physics, there's no issue: the "bodystate" of the observer is a classical state, and is linked through a deterministic mechanics to the classical state of the observed system (the deterministic mechanics is the physics of the measurement apparatus). So the "state of the observer" is a kind of copy of the state of the system (possibly with errors, noise, omissions...), and this state, out of the many possible, is then the measurement result which contains the information about the system. But to convert "body state" into "information" needs an interpretation. No difficulty here, in classical physics.

    However, if you go now to quantum theory, there's a difficulty. First of all, there's a difficulty with the "bodystate" of the observer: if it is a quantum system as any other (quantum theory being universal), then it needs to be described by a state vector in Hilbert space. Now, you could still try to save the day, and introduce a kind of superselection rule, which allows only certain states ("classical states") to be associated to a body. But then there's the extra difficulty of the linearity of the time evolution operator, which follows from the physical interaction between the observer's body and the system under study, which will drive that body's state into a superposition of the different classical bodystates, hence violating that superselection rule.
    Now comes "measurement". As in the classical counterpart, a measurement is a physical link between (ultimately) the observer's body, and the physics of the system under study, such that the system state is more or less copied into the body state. That bodystate, amongst the many possible, contains then the information about the system that has been extracted. But as we see here, we can only roughly copy a quantum state (of the system under study) into a quantum state (of the body of the observer)! There's no way, if quantum theory is to be universally applied, to copy a quantum state to a *classical* state of the body - which is needed if we want to have a Copenhagen-style measurement and its associated information carrier (the observer's body).

    I don't think that there is any way out if quantum theory is taken to be *universally* valid. However, if quantum theory is put in "microscopic boxes" - the macroworld (containing the observers' bodies) is *classical*, while CERTAIN physical systems out there are quantum systems, and OTHER physical systems are classical systems that can couple to quantum systems (preparation and measurement apparatus), so that quantum theory is allowed to be "set up", "run", and "give out its answer" - then of course the information viewpoint makes sense (this is in fact the Copenhagen view). The *classical* state of the observer's body (and of the classical part of the measurement apparatus) will be one of many possible classical states, and hence corresponds to the information of the measurement result, where ONE classical outcome has to be chosen over many (the collapse of the wavefunction).

    Note that I keep open (the earlier caveat...) the possibility of quantum theory NOT being universally valid. However, I claim that, when you want to give an interpretation of a theory, you cannot start by claiming that it is NOT universally valid (without saying also, then, what IS valid).

    The ONLY probabilistic part of the usual application of quantum theory is when one has to make a transition to a classical end state (the so-called collapse). Whatever it is that generates this (apparent?) transition, it surely is an objectively random process - but one whose dynamics is NOT described by quantum theory itself (quantum theory being a DETERMINISTIC theory as far as the wavefunction evolution is concerned).

    I would even say that the "proof" of this objectivity of quantum-mechanical probabilities resides exactly in the fact that there is no universal probability distribution over all quantum-mechanical quantities (the thing we've been talking about, such as a Wigner quasidistribution) - otherwise one could take it that there are hidden variables distributed in such a way that our subjective ignorance of their precise values generates the quantum-mechanical probabilities. However, the example of Bohmian mechanics illustrates that one has to be careful with these statements.

    At the end of the day, there's no fundamental distinction between "objective probabilities" and "subjective, but in principle unknowable" probabilities (such as those given by the distribution of hidden variables, or by the quantum equilibrium condition in Bohmian mechanics).

    Personally, I think that's too simple a way out. As I said, there's no fundamental difference between "objective probabilities" and subjective probabilities of things that we are in principle forbidden to know. But we know that quantum theory cannot really be put into such a framework if we also cherish other principles such as locality (otherwise, I think it is fairly obvious that Bohmian mechanics would demystify the whole business!).
    I think that the fundamental difficulty in the measurement problem comes from our A PRIORI requirement that the observer, or the measurement apparatus, or whatever, be in a CLASSICAL state, which is in contradiction with the superposition principle on which quantum theory is built. You cannot require your observer NOT to obey the universal theory you're describing and hope you'll not run into difficulties!
  16. Mar 27, 2006 #15
    Rather, those who consider quantum theory as a universal theory (in your sense) feel the necessity of adding an extra thing: surreal particle trajectories (Bohm), nonlinear modifications of the dynamics (Ghirardi, Rimini, and Weber or Pearle), the so-called eigenstate-eigenvalue link (van Fraassen), the modal semantical rule (Dieks), and what have you.

    The only thing we are sure about is that quantum mechanics is an algorithm for assigning probabilities to possible measurement outcomes on the basis of actual outcomes. If measurements are an "extra thing", what is quantum mechanics without measurements? Nothing at all!
    I don’t know of any axiomatic formulation of quantum mechanics in which measurements do not play a fundamental role. What axioms are you talking about?

    Quoting from my earlier response to hurkyl: it is by definition impossible to find out by experiment what happened between one measurement and the next. Any story that tells you what happened between consecutive measurements is just that - a story. Bohmians believe in a story according to which particles follow mathematically exact trajectories, and the rest (apart from some laudable exceptions) believe in a story according to which the quantum-mechanical probability algorithm is an ontological state that evolves deterministically between measurements, if not at all times. (One of those laudable exceptions was the late Asher Peres, who realized that there is no interpolating wave function giving the "state of the system" between measurements.)

    Whether you believe in unitary evolution between measurements or unitary evolution always makes no difference to me. I reject the whole idea of an evolving quantum state, not just because it is unscientific by Popper's definition (since the claim that it exists is unfalsifiable) but because it prevents us from recognizing the true ontological implications of the quantum formalism (which are pointed out at my site). The dependence on time of the quantum-mechanical probability algorithms (states, wave functions) is a dependence on the times of measurements, not the time dependence of an evolving state.
    In a theory that rejects evolving quantum states the question "to collapse or not to collapse?" doesn't arise. What generates this "(apparent?) transition" is one of several pseudo-problems arising from the unwarranted and unverifiable postulate of quantum state evolution.
    So you accept an objectively random process whose dynamics quantum theory cannot describe? What happened to your claim that
    What IS valid (and universally so) is that quantum mechanics correlates measurement outcomes. The really interesting question about quantum mechanics is: how can a theory that correlates measurement outcomes be fundamental and complete? Preposterous, isn't it? If people had spent the same amount of time and energy trying to answer this question, rather than disputing whether quantum states collapse or don't collapse, we would have gotten somewhere by now.
    There is no way, if reality is an evolving ray in Hilbert space, to even define subsystems, measurements, observers, interactions, etc. Also, it has never been explained why, if reality is an evolving ray in Hilbert space, certain mathematical expressions of the quantum formalism should be interpreted as probabilities. So far every attempt to explain this has proved circular. The decoherence program in particular relies heavily on reduced density operators, and the operation by which these are obtained - partial tracing - presupposes Born's probability rule. Obviously you don't have this problem if the quantum formalism is fundamentally a probability algorithm.
  17. Mar 27, 2006 #16


    Staff Emeritus
    Science Advisor
    Gold Member

    Indeed... except for MWI :smile:; or almost so.

    This can be said about any scientific theory.

    1) the Hilbert space, spanned by the eigenvectors of "a complete set of observables" (which is nothing else but an enumeration of the degrees of freedom of the system, and the values they can take)

    2) the unitary evolution (its generator being the Hamiltonian)

    You are right of course that there is a statement that links what is "observed" with this mathematical state - but such a statement must be made in ALL physical theories. If you read that statement as: "it is subjectively experienced that..." you're home.

    That can be said about every scientific theory. You should then also reject the idea of an evolving classical state, or the existence of a classical electric field, or even the existence of other persons you're not observing. When you leave your home, your cat "disappears" and it "reappears" when you come back home. The concept of "your cat" is then nothing else but a formal device, whose ontological existence outside of its direct observation is unscientific in Popper's sense, because it is an unwarranted extrapolation of the observations of your cat when you are home... The state of your cat ("the poor Felix must be hungry, I forgot to give him his dinner this morning") outside of any observation is hence a meaningless concept. When he's a bit aggressive when I come home, then that's just the result of an algorithm which depends on the time between me leaving my cat (without a meal) and me coming home again; in between, no cat. That's what you want people to accept concerning quantum states, or any other physical state. I find that rather unsatisfying...

    As I said, this can be applied to any scientific theory. It doesn't lead to a very inspiring picture of the world; it is essentially the "information" world view, where scientific (and other) theories are nothing else but organizing schemes of successive observations and no description of an actual reality.

    No, I don't. I could accept such a theory, but quantum theory isn't such a theory. The random process, in the MWI view, is entirely subjective; it is not part of the physics, but of what you happen to subjectively experience.

    All theory "correlates" subjective experiences (also called measurements), and to go beyond that is purely hypothetical: this is established by the non-falsifiability of solipsism. Nevertheless, making these hypotheses are useful activities, because it gives us an intuitive picture of a world that can explain things. It is a matter of conceptual economy, to postulate things to exist "for real", because they have strong suggestive power. So anybody claiming that one shouldn't say that certain concepts in an explanatory scheme of observations (such as quantum theory, or any scientific theory) are "real" misses the whole point of what "reality" is for: it is for its conceptual simplification ! The unprovable hypothesis that your cat exists, even if you have no observational evidence (because you're not at home), is a simplifying hypothesis which helps organize your subjective experiences (and makes for the fact that you're not surprised to find a cat when you come home). So I fail to see the point of people insisting that quantum theory tells us that there's nothing to be postulated for real in between measurements. You're not gaining any conceptual simplification from that statement, so what good is it ?

    You should look at my little paper quant-ph/0505059 then - I KNOW that it is not possible to derive the probabilities from the unitary part. My solution is simply to STATE that your subjective experience derives from a randomly selected term according to the Born rule - as you should state, in general relativity, how your subjective experience of "now" derives from a spacelike slice of the 4-manifold, and as you should state how a physical state gives rise to a subjective experience in about ANY scientific theory.
    When the objective physics is entirely described, no matter whether it is classical, quantum-mechanical or otherwise, you should STILL say how this gives rise to a subjective experience. Well, that's the place where I prefer to put the Born rule and the "projection postulate". It's as good a place as any! And I get back my nice physical ontology, my (even deterministic, although I didn't ask for it!) physical evolution - of the system, of the apparatus, of my body and all that. I get a weird rule that links my subjective experience to physical reality, but as that is in ANY CASE something weird, it's the place to hide any extra weirdness. You don't have to do as I do, of course. Any view on quantum theory that makes you happy is good enough. As I believe more in a formalism than in intuition or common sense, I need to give an ontological status to the elements of the formalism - it gives me the satisfaction of the simplifying hypothesis of ontological reality, and it helps me develop an intuition for the formalism (which are the two main purposes of the hypothesis of an ontology). Other people have other preferences.
    However, I fail to see the advantage of insisting that one SHOULDN'T make that simplifying hypothesis of an existing physical reality.
  18. Mar 27, 2006 #17


    Staff Emeritus
    Science Advisor
    Gold Member

    From your cite:
    I have often repeated that the ONLY objection to an MWI/many-minds view is "naah, too crazy"...
  19. Mar 27, 2006 #18
    Not too crazy. Borrowing the words of Niels Bohr, crazy but not crazy enough to be true.
  20. Mar 27, 2006 #19
    What about your own emphasis that classical physics can be formulated without reference to measurements, while quantum mechanics cannot?
    Let me tell you in a few steps why we all use a complex vector space. (I can give you the details later if you are interested.) I use this approach when I teach quantum mechanics to higher-secondary and undergraduate students.
    1. "Ordinary" objects have spatial extent (they "occupy" space), are composed of a (large but) finite number of objects that lack spatial extent, and are stable - they neither collapse nor explode the moment they are formed. Thanks to quantum mechanics, we know that the stability of atoms (and hence of "ordinary" objects) rests on the fuzziness (the literal translation of Heisenberg's "Unschärfe") of their internal relative positions and momenta.
    2. The proper way of dealing with a fuzzy observable is to assign probabilities to the possible outcomes of a measurement of this observable.
    3. The classical probability algorithm is represented by a point P in a phase space; the measurement outcomes to which it assigns probabilities are represented by subsets of this space. Because this algorithm only assigns trivial probabilities (1 if P is inside the subset representing an outcome, 0 if P is outside), we may alternatively think of P as describing the state of the system in the classical sense (a collection of possessed properties), regardless of measurements.
    4. To deal with fuzzy observables, we need a probability algorithm that can accommodate probabilities in the whole range between 0 and 1. The straightforward way to do this is to replace the 0-dimensional point P by a 1-dimensional line L, and to replace the subsets by the subspaces of a vector space. (Because of the 1-1 correspondence between subspaces and projectors, we may equivalently think of outcomes as projectors.) We assign probability 1 if L is contained in the subspace representing an outcome, probability 0 if L is orthogonal to it, and a probability 0 < p < 1 otherwise. (Because this algorithm assigns nontrivial probabilities, it cannot be re-interpreted as a classical state.)
    5. We now have to incorporate a compatibility criterion. It is readily shown (later, if you are in the mood for it) that the outcomes of compatible measurements must correspond to commuting projectors.
    6. Last but not least we require: if the interval C is the union of two disjoint intervals A and B, then the probability of finding the value of an observable in C is the sum of the probabilities of finding it in A or B, respectively.
    7. We now have everything that is needed to prove Gleason's theorem, according to which the probability of an outcome represented by the projector P is the trace of WP, where W (known as the "density operator") is linear, self-adjoint, positive, has trace 1, and satisfies either WW = W (then we call it a "pure state") or WW < W (then we call it "mixed"). (We are back to the topic of this thread! A numerical sketch of this trace rule follows below.)
    8. The next step is to determine how W depends on measurement outcomes, which is also readily established.
    9. The next step is to determine how W depends on the time of measurement, which is equally straightforward to establish.
    At this point we have all the axioms of your list (you missed a few) but with one crucial difference: we know where these axioms come from. We know where quantum mechanics comes from, whereas you haven’t the slightest idea about the origin of your axioms.
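
    As announced in point 7, a minimal numerical sketch of the trace rule and of the purity criterion (Python with numpy; the particular states and outcome projector are arbitrary illustrative choices):

    Code:
    import numpy as np

    rng = np.random.default_rng(0)

    def random_state(d):
        """A random normalized vector in C^d."""
        v = rng.normal(size=d) + 1j * rng.normal(size=d)
        return v / np.linalg.norm(v)

    psi, phi = random_state(3), random_state(3)
    W_pure = np.outer(psi, psi.conj())                        # WW = W
    W_mixed = 0.5 * W_pure + 0.5 * np.outer(phi, phi.conj())  # WW < W

    basis = np.eye(3, dtype=complex)
    P = np.outer(basis[0], basis[0])    # projector for one outcome

    for name, W in [("pure", W_pure), ("mixed", W_mixed)]:
        prob = np.trace(W @ P).real     # probability = Tr(WP)
        purity = np.trace(W @ W).real   # Tr(WW) = 1 iff pure
        print(name, round(prob, 4), round(purity, 4))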
    Which is exactly what I do! Newton famously refused to make up a story purporting to explain how, by what mechanism or physical process, matter acts on matter. While the (Newtonian) gravitational action depends on the simultaneous positions of the interacting objects, the electromagnetic action of matter on matter is retarded. This made it possible to transmogrify the algorithm for calculating the electromagnetic effects of matter on matter into a physical mechanism or process by which matter acts on matter.
    Later Einstein's theory of gravity made it possible to similarly transmogrify the algorithm for calculating the gravitational effects of matter on matter into a mechanism or physical process.

    Let's separate the facts from the fictions (assuming for the moment that facts about the world of classical physics are facts rather than fictions).
    Fact is that the calculation of effects can be carried out in two steps:
    1. Given the distribution and motion of charges, we calculate six functions (the so-called "electromagnetic field"), and given these six functions, we calculate the electromagnetic effects that those charges have on other charges.
    2. Given the distribution and motion of matter, we calculate the stress-energy tensor, and given the stress-energy tensor, we calculate the gravitational effects that matter here has on matter there.
    Fiction is
    1. that the electromagnetic field is a physical entity in its own right, that it is locally generated by charges here, that it mediates electromagnetic interactions by locally acting on itself, and that it locally acts on charges there;
    2. that spacetime curvature is a physical entity in its own right, and that it mediates the gravitational action of matter on matter by a similar local process.
    Did you notice that those fictions do not explain how a charge locally acts on the electromagnetic field, how the electromagnetic field locally acts on a charge, and so on? Apparently, physicists consider the familiar experience of a well-placed kick sufficient to explain local action.
    This is what you are led to conclude because you don’t have a decent characterization of macroscopic objects.
    You find a deterministic theory of everything inspiring??? Perhaps this is because you want to believe in your omniscience-in-principle: you want to feel as if you know What Exists and how it behaves. To entertain this belief you must limit Reality to mathematically describable states and processes. This is in part a reaction to outdated religious doctrines (it is better to believe in our potential omniscience than in the omnipotence of someone capable of creating a mess like this world and thinking he did a great job) and in part the sustaining myth of the entire scientific enterprise (you had better believe that what you are trying to explain can actually be explained with the means at your disposal).

    Besides, you are wrong when you put me in the quantum-states-are-states-of-knowledge camp. Only if we reject the claptrap about evolving quantum states can we obtain a satisfactory description of the world between consecutive measurements. This description consists of the (objective) probabilities of the possible outcomes of all the measurements that could have been performed in the meantime. (I'm not in any way implying that it makes sense to simultaneously consider the probabilities of outcomes of incompatible measurements.)

    I, for one, find the ontological implications of the quantum formalism - if this is taken seriously as being fundamentally an algorithm for computing objective probabilities - greatly inspiring. Among these implications are the numerical identity of all particles, the incomplete spatiotemporal differentiation of Reality, and the top-down structure of the physical world. Besides, it is the incomplete spatiotemporal differentiation of Reality that makes a rigorous definition of "macroscopic" possible.
    How convenient. What I experience is not part of physics. How does this square with your claimed universality of the quantum theory? And what I do not experience – Hilbert space vectors, wave functions, and suchlike – is part of physics. How silly!
    As long as you mix up experiences with measurements, you are not getting anywhere.
    I have a somewhat higher regard for "reality". Like Aristotle, I refuse to have it identified with computational devices. ("The so-called Pythagoreans, who were the first to take up mathematics, not only advanced this subject, but saturated with it, they fancied that the principles of mathematics were the principles of all things." - Metaphysics 1-5.)
    Chalmers called this the "law of minimization of mystery": quantum mechanics is mysterious, consciousness is mysterious, so maybe they are the same mystery. But mysteries need to be solved, not hidden.

    Let me express, in conclusion, my appreciation for the trouble you take to explain yourself. It really helps me understand people of your ilk.
  21. Mar 28, 2006 #20


    Staff Emeritus
    Science Advisor
    Gold Member

    I will try to outline where I think there is a problem in the approach you take, if you want it to be a universal explanation. The problem, according to me, resides in the mixture of formal aspects with intuitive, common-sense concepts. In a complete world picture, there is no room for intuitive and common-sense concepts at the foundations.

    Now, I know your objection to that view: you say that it is overly pretentious to try to have a universal, complete world picture. Of course. But the exercise does not reside in giving yourself the almighty feeling of knowing it all! The exercise consists in building up, WITHOUT USING common-sense concepts at the foundations, a mental picture of the world, AND SEEING IF our common-sense and less-common-sense observations can be explained by it. If at that point you *take for granted* certain common-sense concepts, then the reasoning becomes circular. Why is it important to try to derive a complete world picture? Firstly, to see where it fails! This will indicate to us, maybe, what goes wrong with it. And secondly, to serve as an intuitive guide to help you develop a sense of problem solving.

    1. I think it is already fairly clear here that there is an appeal to a mixture of intuitive ontological concepts. But an "algorithmic" theory cannot take for granted the ontological existence of any such "ordinary" objects: their existence must be DERIVABLE from its fundamental formulation. Otherwise, you already sneak in the ontology you're going to refute later.

      Even there, there is a problem: how does a "measurement apparatus" link to an observable? Does the measurement apparatus have ontological existence? Or does only the observation of the measurement apparatus (by a person?) make sense, so that we cannot postulate (an ontological hypothesis which is to be rejected) that the measurement apparatus, as a physical construction, exists?
      So *what* defines a fuzzy or other observable in the first place, if we're not entitled to any ontology? And IF we are entitled to an intuitive ontology, then exactly what is it?


      I don't see why this procedure is "the straightforward way". I'd think that there are two ways of doing what you want to do. One is the "Kolmogorov" way: each thinkable observable is a random variable over a probability space. We already know that this doesn't work in quantum theory (the discussion we had previously). But one can go further. One can say that to each "compatible" (to be defined at will) set of observables corresponds a different probability space, and the observables are then random variables over this space. THIS is the most general random algorithm. The projection of a ray in a vector space is way more restrictive, and I don't see why this must be the case.

      To illustrate what I want to say, consider this: take two compatible observables, X1 and Y1. X1 can take on 3 possible outcomes: smaller than -1, between -1 and +1, and bigger than 1 (outcomes X1a, X1b and X1c).
      Y1 can take on 2 possible outcomes, Y1a and Y1b. For THIS SET OF OBSERVABLES, I can now define a probability space with distribution given by P(X1,Y1), with 6 different probabilities, satisfying the Kolmogorov axioms. But let us now consider that we have ANOTHER set of observables, X2 and Y2. In fact, in our naivety, we think that X2 is the "same" observable as X1, but more fine-grained. But that would commit the mistake of assigning a kind of ontological existence to a measurement apparatus and to what it is going to measure. As only observations are to be considered "real", and we of course have a DIFFERENT measurement for the observable X2 than for X1 (we have to change scale, or resolution, on the hypothetical measurement apparatus), we can have a totally DIFFERENT probability distribution. Consider that X2 has 5 possible outcomes: smaller than -2, between -2 and -1, between -1 and +1, between +1 and +2, and bigger than 2. We would be tempted to state that, X2 measuring the "same" quantity as X1, the probability to measure X2a plus the probability to measure X2b should equal the probability to have measured X1a (smaller than -2, or between -2 and -1, is equivalent to smaller than -1). But THAT SUPPOSES A KIND OF ONTOLOGICAL EXISTENCE of the "quantity to be measured" independent of the measurement act, which is of course against the spirit of our purely algorithmic approach. Hence, a priori, there's no reason not to accept that the probability distribution for X2 and Y2 is totally unrelated to the one for X1 and Y1. This situation can easily be recognized as "contextuality".
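
      For comparison, quantum theory's own trace rule does enforce the coarse-graining relation in question - which is exactly the non-contextuality at issue. A minimal sketch (Python with numpy; the qutrit state is an arbitrary choice of mine):

      Code:
      import numpy as np

      # A qutrit pure state W = |psi><psi| (any density matrix would do)
      psi = np.array([0.6, 0.64, 0.48], dtype=complex)
      psi /= np.linalg.norm(psi)
      W = np.outer(psi, psi.conj())

      # Fine-grained outcomes: projectors onto the three basis states
      basis = np.eye(3, dtype=complex)
      P = [np.outer(basis[i], basis[i]) for i in range(3)]

      # Coarse-grained outcome "first or second bin": Q = P0 + P1
      Q = P[0] + P[1]
      lhs = np.trace(W @ Q).real
      rhs = np.trace(W @ P[0]).real + np.trace(W @ P[1]).real
      print(np.isclose(lhs, rhs))   # True: the trace rule implies the sum rule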

      Yes, but we have placed ourselves already in a very restrictive class of probability algorithms for measurement outcomes. The contextual situation I sketched will not necessarily be incorporated in this more restrictive scheme. So postulating this is not staying open to "a probability algorithm in general".

      Ok, this is an explicit requirement of non-contextuality. Why?

      Indeed. However, I had the impression you wanted to show that quantum theory is nothing else but a kind of "general scheme for writing down a generator of probability algorithms for observations", but we've made quite some hypotheses along the way! Especially the non-contextuality requirement, which requires us to HAVE A RELATIONSHIP BETWEEN THE PROBABILITIES OF DIFFERENT OBSERVATIONS (say, those with high, and those with low, resolution), goes against the spirit of denying an ontological status to the "quantity to be measured outside of its measurement". If the only things that make sense are measurement outcomes, then the resolution of this measurement is an integral part of it. As such, a hypothetical measurement with another resolution is intrinsically entitled to a TOTALLY DIFFERENT and unrelated probability distribution. It is only when we say that what we measure has an independent ontological existence that we can start making assumptions about different measurements of the "same" thing: in order for it to be the "same" thing, it has to have ontological status.
      For instance, if what we measure is "position of a particle", but we say that the only things that make sense are *measurement outcomes*, then the only thing that makes sense is "ruler says position 5.4 cm" and not "particle position is 5.4 cm". Now, if we replace the ruler by a finer ruler, then the only thing that makes sense is "fine ruler says position 5.43 cm". There is a priori no relationship between the outcome "ruler says position 5.4 cm" and "fine ruler says 5.43 cm", because these are two DIFFERENT measurements. However, if there is an ontology behind it, and BOTH ARE MEASUREMENTS OF A PARTICLE POSITION, then these two things are of course related. But this REQUIRES THE POSTULATION OF SOME ONTOLOGICAL EXISTENCE OF A QUANTITY INDEPENDENT OF A MEASUREMENT - which is, according to your view, strictly forbidden.

      BTW, the above illustrates the "economy of concept" that results from postulating an ontology, and the intuitive help it provides. The unrelated statements "ruler says position 5.4cm" and "fine ruler says 5.43cm" which are hard to make any sense of, become suddenly almost trivial concepts when we say that there IS a particle, and that we have tried to find its position using two physical experiments, one with a better resolution than the other.
    As I tried to point out, I don't see where your axioms come from, either. Why this projection thing to generate probability algorithms, which restricts their choice? And why this non-contextuality?

    1. Well, these fictions are strong conceptual economies. For instance, if I have a static electrostatic field, I'm not really surprised that a charge can accelerate one way or another, but rather that the DIRECTION of its acceleration at a certain position is always the same: the electric field vector points in one and only one direction! Now, if I see this as an ALGORITHM, then I don't see, a priori, why charges couldn't suddenly decide to go a bit in all possible directions as a function of their charge. I can imagine writing myself an algorithm that could do that. But when I physically think of the electric field at a point, I find a natural explanation for this single direction.

      I'll stop here, because I'd like to watch a movie on TV :-)