What is a Pure State and Mixed State?

In summary, a pure state in quantum mechanics is a state that can be represented by a vector in a Hilbert space, while a mixed state is a statistical mixture of pure states. This parallels the distinction in classical mechanics between a point in phase space and a statistical distribution over phase space. However, in quantum mechanics, different mixtures with identical density matrices are experimentally indistinguishable, which leads to debates about the true quantum state of a system. On one view defended below, the quantum formalism is a probability algorithm that describes the objective fuzziness of the quantum world, so that quantum-mechanical probability assignments are assignments of objective probabilities. A pure state can also be described as a point in the projective Hilbert space of the system, while a mixed state is described by a density operator that is not a one-dimensional projector.
  • #1
touqra
What is a pure state and a mixed state?
 
  • #2
A pure state is one that can be represented by a vector in a Hilbert space. A mixed state is one that cannot: it must be represented by a statistical mixture of pure states.
 
  • #3
touqra said:
What is a pure state and a mixed state?

As Doc Al said, a pure quantum state is, well, a quantum state (an element of Hilbert space). A mixed state is a statistical mixture of pure states. You can compare this with the situation in classical mechanics: a "pure state" would be a point in phase space, while a "mixture" would be a statistical distribution over phase space (given by a probability density over phase space).

However, there's an extra weirdness in the case of quantum theory. Whereas a probability density on classical phase space gives a unique probability to each region of phase space, and two different probability densities are experimentally distinguishable (in principle, given an ensemble of systems described by the density, one can reconstruct it by completely measuring the phase-space point of each member and histogramming the outcomes), the quantum description allows DIFFERENT ensembles, with different probabilities over different pure states, to give rise to IDENTICAL mixed states, which are experimentally indistinguishable.
This has its origin in the probabilistic aspect of quantum measurements, where two kinds of probability get mixed up: the probability of an outcome due to quantum randomness, for a given pure state, and the probability within the mixture of being in a certain pure state.

As an example, consider a spin-1/2 system.

Pure states are, for example: |z+>,
or |z->
or |x+>
or |x->
or...

These are elements of the 2-dimensional Hilbert space describing the system.

A mixture can be described, a priori, by, well, a mixture of pure states, such as: 30% |x+>, 60% |z-> and 10% |y+>. But this decomposition is not unique:

The mixture:
50% |z+> and 50% |z->

is experimentally indistinguishable, for instance, from the mixture:

50% |x+> and 50% |x->

Mixtures are correctly described by a density matrix, rho.

If a mixture is made up of a fraction p_a of state |a>, p_b of state |b>, and p_c of state |c>, then:

rho = p_a |a><a| + p_b |b><b| + p_c |c><c|

A measurable quantity A then has the expectation value:

<A> = Tr(A rho)

As such, different mixtures with identical rho are experimentally indistinguishable.
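A quick numerical check of this indistinguishability (a minimal sketch in Python with numpy - my choice of language, since the thread contains no code - assuming the standard spin-1/2 conventions with |z+>, |z-> as the computational basis):

import numpy as np

zp = np.array([1, 0], dtype=complex)           # |z+>
zm = np.array([0, 1], dtype=complex)           # |z->
xp = (zp + zm) / np.sqrt(2)                    # |x+>
xm = (zp - zm) / np.sqrt(2)                    # |x->

def proj(psi):
    # Projector |psi><psi| onto a normalized state vector
    return np.outer(psi, psi.conj())

rho_z = 0.5 * proj(zp) + 0.5 * proj(zm)        # mixture: 50% |z+>, 50% |z->
rho_x = 0.5 * proj(xp) + 0.5 * proj(xm)        # mixture: 50% |x+>, 50% |x->

print(np.allclose(rho_z, rho_x))               # True: both equal I/2

# Expectation values <A> = Tr(A rho) therefore agree for every observable A,
# e.g. the Pauli spin operators:
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
print(np.trace(sx @ rho_z).real, np.trace(sz @ rho_x).real)   # 0.0 0.0

Both mixtures give the same rho = I/2, so by <A> = Tr(A rho) they agree on every measurable quantity.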

Some claim therefore that the true quantum state of a system is given by rho, and not by an element of Hilbert space. However, this leads to other problems...
 
  • #4
touqra said:
What is a pure state and a mixed state?

First of all: what is a state? It's a probability algorithm. We use it to assign probabilities to possible measurement outcomes on the basis of actual measurement outcomes (usually called "preparations"). A measurement is complete if it yields the maximum possible amount of information about the system at hand. A state is pure if it assigns probabilities on the basis of the outcome of a complete measurement. Otherwise it is mixed.
 
  • #5
koantum said:
First of all: what is a state? It's a probability algorithm. We use it to assign probabilities to possible measurement outcomes on the basis of actual measurement outcomes (usually called "preparations"). A measurement is complete if it yields the maximum possible amount of information about the system at hand. A state is pure if it assigns probabilities on the basis of the outcome of a complete measurement. Otherwise it is mixed.

What you write is a correct view from the "information" (epistemological) point of view. Personally, I like to see something more than just a statement about knowledge, but I agree that this is a possible viewpoint which is endorsed by some.
In that viewpoint, the only "state" we talk about, is a state of our knowledge about nature, and not an ontological state of nature.
 
  • #6
vanesch said:
What you write is a correct view from the "information" (epistemological) point of view. Personally, I like to see something more than just a statement about knowledge, but I agree that this is a possible viewpoint which is endorsed by some. In that viewpoint, the only "state" we talk about, is a state of our knowledge about nature, and not an ontological state of nature.

Dear vanesch,

I am with you in your expectation to see more than statements about knowledge. There is no denying, however, that the quantum formalism is a probability algorithm. Whereas this formalism is utterly incomprehensible as a one-to-one representation of the real world, it is almost self-evident if we think of it as a tool for describing the objective fuzziness of the quantum world.

Almost the first thing people came to understand through quantum mechanics was the stability of atoms and objects composed of atoms: it rests on the objective fuzziness of their internal relative positions and momenta. (The literal meaning of Heisenberg's term "Unschärfe" is not "uncertainty" but "fuzziness".)

What is the proper (mathematically rigorous and philosophically sound) way of dealing with a fuzzy observable? It is to assign probabilities to the possible outcomes of a measurement of this observable. But if the quantum-mechanical probability assignments serve to describe an objective fuzziness, then they are assignments of objective probabilities.

So the fact that quantum mechanics deals with probabilities does not imply that it is an epistemic theory. If it deals with objective probabilities, then it is an ontological theory.
 
  • #7
touqra said:
What is a pure state and a mixed state?

A pure state: it has a simple mathematical meaning, namely a point in the projective Hilbert space of the system, or, if you prefer, a one-dimensional linear subspace (a.k.a. unit ray, or simply ray, if there's no room for confusion) in the Hilbert space associated to any quantum system.

A mixed state: well, if you read any statistical physics course under the title "Virtual statistical ensembles in quantum statistics" you'll get a very good idea of it.

BTW, the von Neumann formulation of QM allows the most natural description of mixed states...

Daniel.
 
  • #8
koantum said:
What is the proper (mathematically rigorous and philosophically sound) way of dealing with a fuzzy observable? It is to assign probabilities to the possible outcomes of a measurement of this observable. But if the quantum-mechanical probability assignments serve to describe an objective fuzziness, then they are assignments of objective probabilities.

So the fact that quantum mechanics deals with probabilities does not imply that it is an epistemic theory. If it deals with objective probabilities, then it is an ontological theory.

There's a hitch with this view, because it would imply that there is a set of observables (spanning a phase space) over which quantum theory would generate a Kolmogorov probability distribution, thereby fixing entirely the probabilities of the outcomes of all POTENTIAL measurements.
And we know that this cannot be done: we can only generate a Kolmogorov probability distribution for a set of COMPATIBLE measurements.

The closest one can come is something like the Wigner quasidistribution:
http://en.wikipedia.org/wiki/Wigner_quasi-probability_distribution

cheers,
Patrick.
 
  • #9
dextercioby said:
A pure state: it has a simple mathematical meaning, namely a point in the projective Hilbert space of the system, or, if you prefer, a unidimensional linear subspace (a.k.a. unit ray, or simply ray, if there's no room for confusions) in the Hilbert space associated to any quantum system.
A mixed state: well, if you read any statistical physics course under the title "Virtual statistical ensembles in quantum statistics" you'll get a very good idea on it.
There is no need to read a statistical physics course. Quantum mechanics represents the possible outcomes to which its algorithms assign probabilities by the subspaces of a vector space, it represents its pure probability algorithms by 1-dimensional subspaces of the same vector space, and it represents its mixed algorithms by probability distributions over pure algorithms. Hence the name "mixed".
 
  • #10
vanesch said:
There's a hitch with this view, because it would imply that there is a set of observables (spanning a phase space) over which quantum theory would generate a Kolmogorov probability distribution, thereby fixing entirely the probabilities of the outcomes of all POTENTIAL measurements.
Please explain how this would imply what you think it implies. State your assumptions so that I can point out either that they are wrong or that I do not share them.
 
  • #11
koantum said:
Please explain how this would imply what you think it implies. State your assumptions so that I can point out either that they are wrong or that I do not share them.

Well, you are correct in stating that, given a wavefunction or a mixed state, AND GIVEN A CHOICE OF COMMUTING OBSERVABLES, the wavefunction/density matrix generates a probability distribution over the joint outcomes of these observables. As such, one might say - as you do - that these variables are "fuzzy" quantities, and that they are correctly described by the generated probability function.

However, if I make ANOTHER choice of commuting observables, which is not compatible with the previous set, I will compute a different probability distribution for these new observables. No problem as of yet.

But what doesn't always work is to consider the UNION of these two sets of observables and require that there is an overall probability distribution that describes this union. As such, one cannot say that the observable itself "has" a probability distribution, independent of whether we were going to pick it out or not in our set of commuting observables. This is what is indicated by the non-positivity of the Wigner quasi-distributions.
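To make that non-positivity concrete, here is a minimal sketch (again Python with numpy, assuming hbar = m = omega = 1) of the Wigner function of the first excited harmonic-oscillator state: it goes negative at the origin, so it cannot be a genuine joint probability density for x and p, even though its marginals are genuine probability densities.

import numpy as np

def wigner_n1(x, p):
    # Wigner function of the n = 1 harmonic-oscillator eigenstate (hbar = m = omega = 1)
    s = x**2 + p**2
    return (2*s - 1) * np.exp(-s) / np.pi

print(wigner_n1(0.0, 0.0))    # -1/pi ~ -0.32: negative, so not a probability density

# The marginal over p at fixed x IS a probability density: it recovers
# |psi_1(x)|^2 = 2 x^2 exp(-x^2) / sqrt(pi)
p = np.linspace(-8.0, 8.0, 16001)
print(np.sum(wigner_n1(0.7, p)) * (p[1] - p[0]))     # ~ 0.339
print(2 * 0.7**2 * np.exp(-0.7**2) / np.sqrt(np.pi)) # same value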

The typical example is of course the Bell state |+>|-> - |->|+> (up to normalization), where we consider spin measurements along 3 well-chosen directions on each side of the experiment. Let us call the corresponding observables A, B, C on one side and U, V, W on the other; each of them can have a result +1 or -1. There is NO probability distribution P(A,B,C,U,V,W) over the 64 different combinations of values of A,B,C,U,V,W which corresponds to the quantum predictions - that's the essence of Bell's theorem, in fact, because if this distribution existed, a common hidden variable thus distributed could generate the outcomes.
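A minimal numerical illustration of this obstruction, using the standard two-settings-per-side CHSH form of Bell's theorem rather than the three-per-side version above (my choice, for brevity): any joint distribution over pre-assigned +/-1 outcomes bounds the CHSH combination by 2, while the singlet correlations E(a,b) = -cos(a-b) exceed it.

import numpy as np

def E(a, b):
    # Quantum correlation <A(a)B(b)> for the singlet state, with spin
    # measurements along coplanar directions at angles a and b
    return -np.cos(a - b)

# For outcomes pre-assigned by ANY joint distribution, the CHSH combination
# S = E(a,b) - E(a,b') + E(a',b) + E(a',b') satisfies |S| <= 2.
a, a2 = 0.0, np.pi / 2             # two settings on one side
b, b2 = np.pi / 4, 3 * np.pi / 4   # two settings on the other
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))    # 2*sqrt(2) ~ 2.83 > 2: no joint distribution reproduces QM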

So, in that sense, I wanted to argue that it is not possible to claim that every POTENTIAL observable is a "fuzzy quantity" correctly described by a probability distribution which - I assumed in that case - must exist independently of the SET of (commuting) observables that we are going to select for the experiment.
 
  • #12
vanesch said:
Some claim therefore that the true quantum state of a system is given by rho, and not by an element of Hilbert space. However, this leads to other problems...

Would you mind expanding the dots?

An interested reader.

Carl
 
  • #13
vanesch said:
Well, you are correct in stating that, given a wavefunction or a mixed state, AND GIVEN A CHOICE OF COMMUTING OBSERVABLES, the wavefunction/density matrix generates a probability distribution over the joint outcomes of these observables. As such, one might say - as you do - that these variables are "fuzzy" quantities, and that they are correctly described by the generated probability function.

However, if I make ANOTHER choice of commuting observables, which is not compatible with the previous set, I will compute a different probability distribution for these new observables. No problem as of yet.

But what doesn't always work is to consider the UNION of these two sets of observables and require that there is an overall probability distribution that describes this union. As such, one cannot say that the observable itself "has" a probability distribution, independent of whether we were going to pick it out or not in our set of commuting observables...
Dear vanesch,

Thank you for your detailed response. Now it's my turn to add some flesh to my earlier post about objective probabilities.

It is indeed not possible to consistently define an overall probability distribution for a set of non-commuting observables. If one attributes a probability distribution to a set of observables, then one makes the (implicit) assumption that these observables can be measured simultaneously, and this is not possible for non-commuting observables. (In fact, every quantum-mechanical probability assignment implicitly assumes that the corresponding measurement not only can be but is made. Outside any measurement context, quantum-mechanical probabilities are simply meaningless.) I further admit that my all too brief post may have suggested the opposite: that I believe it makes sense to objectify quantum-mechanical probabilities outside measurement contexts. What could make people jump to this erroneous conclusion is the popular misconception that reference to measurements is the same as reference to observers.

There are basically two kinds of interpretation, those that acknowledge the central role played by measurements in standard axiomatizations of quantum mechanics, and those that try to sweep it under the rug. As a referee of a philosophy-of-science journal once put it to me, "to solve [the measurement problem] means to design an interpretation in which measurement processes are not different in principle from ordinary physical interactions." To my way of thinking, this definition of "solving the measurement problem" is the reason why as yet no sensible solution has been found. Those who acknowledge the importance of measurements, on the other hand, appear to think of probabilities as inherently subjective and therefore cannot comprehend the meaning of objective probabilities. Yet it should be perfectly obvious that quantum-mechanical probabilities cannot be subjective. Subjective (that is, ignorance) probabilities disappear when all relevant facts are taken into account (which in many cases is practically impossible). The uncertainty principle however guarantees that quantum-mechanical probabilities cannot be made to disappear. As Mermin (http://arxiv.org/abs/quant-ph/9801057) put it, "in a non-deterministic world, probability has nothing to do with incomplete knowledge. Quantum mechanics is the first example in human experience where probabilities play an essential role even when there is nothing to be ignorant about." Mermin in fact believes that the mysteries of quantum mechanics can be reduced to the single puzzle posed by the existence of objective probabilities, and I think that this is correct.

So in that sense, I wanted to argue that it is not possible to claim that every POTENTIAL observable is a "fuzzy quantity" that is correctly described by a probability distribution - which - I assumed in that case, must be existing independent of the SET of (commuting) observables that we are going to select for the experiment.

This is the assumption that I did not make and that indeed cannot be made.
 
  • #14
koantum said:
There are basically two kinds of interpretation, those that acknowledge the central role played by measurements in standard axiomatizations of quantum mechanics, and those that try to sweep it under the rug. As a referee of a philosophy-of-science journal once put it to me, "to solve [the measurement problem] means to design an interpretation in which measurement processes are not different in principle from ordinary physical interactions.''

Well, I fully subscribe to that referee's view, honestly. However, you are right that there are essentially two views on QM: one which considers a "measurement process", and another which says that there's no such thing - count me as a partisan of the latter (caveat: see further).

I would classify these two different views differently. I'd say that those who consider quantum theory as a "partial" theory have no problem adding an extra thing, called measurement process, while those that want to take on the view that quantum theory is a *universal* physical theory, cannot accept such a process.

The reason is the following: if quantum theory is to be universal (that means that its axioms apply to everything in the world - necessarily a reductionist viewpoint, of course), then they also apply to the observer. And a "measurement" for the observer is nothing else but "a state" of the observer. You can only consider information to be anything other than a physical state if you don't consider the "information-possessor" (= the observer) as being part of the very physics.
In classical physics, there's no issue: the "bodystate" of the observer is a classical state, and is linked through a deterministic mechanics to the classical state of the observed system (the deterministic mechanics is the physics of the measurement apparatus). So the "state of the observer" is a kind of copy of the state of the system (possibly with errors, noise, omissions...), and this state, out of the many possible, is then the measurement result which contains the information about the system. But to convert "body state" into "information" needs an interpretation. No difficulty here, in classical physics.

However, if you go now to quantum theory, there's a difficulty. First of all, there's a difficulty with the "bodystate" of the observer: if it is a quantum system like any other (quantum theory being universal), then it needs to be described by a state vector in Hilbert space. Now, you could still try to save the day and introduce a kind of superselection rule, which allows only certain states ("classical states") to be associated to a body. But then there's the extra difficulty of the linearity of the time evolution operator, which follows from the physical interaction between the observer's body and the system under study, and which will drive that body's state into a superposition of the different classical bodystates, hence violating that superselection rule.
Now comes "measurement". As in the classical counterpart, a measurement is a physical link between (ultimately) the observer's body and the physics of the system under study, such that the system state is more or less copied into the body state. That bodystate, amongst the many possible, then contains the information about the system that has been extracted. But as we see here, we can only roughly copy a quantum state (of the system under study) into a quantum state (of the body of the observer)! There's no way, if quantum theory is to be universally applied, to copy a quantum state to a *classical* state of the body - which is needed if we want to have a Copenhagen-style measurement and its associated information carrier (the observer's body).

I don't think that there is any way out, if quantum theory is taken to be *universally* valid. However, if quantum theory is put in "microscopic boxes", and the macroworld (containing the observers' body) is *classical*, while CERTAIN physical systems out there are quantum systems, while OTHER physical systems are classical systems that can couple to quantum systems (preparation and measurement apparatus), so that quantum theory is allowed to be "set up", "run" and "give out its answer", then of course the information viewpoint makes sense (this is in fact the Copenhagen view). The *classical* state of the observer's body (and of the classical part of the measurement apparatus) will be one of many classical states, and hence correspond to the information of the measurement result, where ONE classical outcome has to be chosen over many (the collapse of the wavefunction).

Note that I keep open (the earlier caveat...) the possibility of quantum theory NOT being universally valid. However, I claim that, when you want to give an interpretation of a theory, you cannot start by claiming that it is NOT universally valid (without saying also, then, what IS valid).

The ONLY probabilistic part of the usual application of quantum theory is when one has to make a transition to a classical end state (the so-called collapse). Whatever it is that generates this (apparent?) transition, it surely is an objectively random process - but one whose dynamics is NOT described by quantum theory itself (it being a DETERMINISTIC theory concerning the wavefunction evolution).

To my way of thinking, this definition of "solving the measurement problem" is the reason why as yet no sensible solution has been found. Those who acknowledge the importance of measurements, on the other hand, appear to think of probabilities as inherently subjective and therefore cannot comprehend the meaning of objective probabilities. Yet it should be perfectly obvious that quantum-mechanical probabilities cannot be subjective. Subjective (that is, ignorance) probabilities disappear when all relevant facts are taken into account (which in many cases is practically impossible). The uncertainty principle however guarantees that quantum-mechanical probabilities cannot be made to disappear.

I would even say that the "proof" of this objectivity of quantum-mechanical probabilities resides exactly in the fact that there is no universal probability distribution of all quantum-mechanical quantities (the thing we've been talking about, such as a Wigner quasi-distribution) - otherwise one could take it that there are hidden variables such that our subjective ignorance of their precise values generates the quantum-mechanical probabilities. However, the example of Bohmian mechanics illustrates that one has to be careful with these statements.

At the end of the day, there's no fundamental distinction between "objective probabilities" and "subjective, but in principle unknowable" probabilities (such as those given by the distribution of hidden variables, or, by the quantum equilibrium condition in Bohmian mechanics).

Mermin in fact believes that the mysteries of quantum mechanics can be reduced to the single puzzle posed by the existence of objective probabilities, and I think that this is correct.

Personally, I think that's too simple a way out. As I said, there's no fundamental difference between "objective probabilities" and subjective probabilities of things that are in principle forbidden to know. But we know that quantum theory cannot really be put into such a framework if we also cherish other principles such as locality (otherwise, I think it is fairly obvious that Bohmian mechanics would demystify the whole business!).
I think that the fundamental difficulty in the measurement problem comes from our A PRIORI requirement that the observer, or the measurement apparatus, or whatever, be in a CLASSICAL state, which is in contradiction with the superposition principle on which quantum theory is built. You cannot require of your observer NOT to obey the universal theory you're describing, and hope you'll not run into difficulties!
 
  • #15
vanesch said:
I would classify these two different views differently. I'd say that those who consider quantum theory as a "partial" theory have no problem adding an extra thing, called measurement process, while those that want to take on the view that quantum theory is a *universal* physical theory, cannot accept such a process.
Rather, those who consider quantum theory as a universal theory (in your sense) feel the necessity of adding an extra thing: surreal particle trajectories (Bohm), nonlinear modifications of the dynamics (Ghirardi, Rimini, and Weber or Pearle), the so-called eigenstate-eigenvalue link (van Fraassen), the modal semantical rule (Dieks), and what have you.

The only thing we are sure about is that quantum mechanics is an algorithm for assigning probabilities to possible measurement outcomes on the basis of actual outcomes. If measurements are an "extra thing", what is quantum mechanics without measurements? Nothing at all!
if quantum theory is to be universal (that means that its axioms apply to everything in the world - necessarily a reductionist viewpoint of course)…
I don’t know of any axiomatic formulation of quantum mechanics in which measurements do not play a fundamental role. What axioms are you talking about?

Quoting from my earlier response to Hurkyl: it is by definition impossible to find out by experiment what happened between one measurement and the next. Any story that tells you what happened between consecutive measurements is just that - a story. Bohmians believe in a story according to which particles follow mathematically exact trajectories, and the rest (apart from some laudable exceptions) believe in a story according to which the quantum-mechanical probability algorithm is an ontological state that evolves deterministically between measurements if not always. (One of those laudable exceptions was the late Asher Peres, who realized that there is no interpolating wave function giving the "state of the system" between measurements.)

Whether you believe in unitary evolution between measurements or unitary evolution always makes no difference to me. I reject the whole idea of an evolving quantum state, not just because it is unscientific by Popper's definition (since the claim that it exists is unfalsifiable) but because it prevents us from recognizing the true ontological implications of the quantum formalism (which are pointed out at http://thisquantumworld.com). The dependence on time of the quantum-mechanical probability algorithms (states, wave functions) is a dependence on the times of measurements, not the time dependence of an evolving state.
The ONLY probabilistic part of the usual application of quantum theory is when one has to make a transition to a classical end state (the so-called collapse). Whatever it is that generates this (apparent?) transition…
In a theory that rejects evolving quantum states the question "to collapse or not to collapse?" doesn't arise. What generates this "(apparent?) transition" is one of several pseudo-problems (http://thisquantumworld.com/pseudo.htm) arising from the unwarranted and unverifiable postulate of quantum state evolution.
… it surely is an objectively random process - but one whose dynamics is NOT described by quantum theory itself (it being a DETERMINISTIC theory concerning the wave function evolution).
So you accept an objectively random process whose dynamics quantum theory cannot describe? What happened to your claim that
when you want to give an interpretation of a theory, you cannot start by claiming that it is NOT universally valid (without saying also, then, what IS valid).
What IS valid (and universally so) is that quantum mechanics correlates measurement outcomes. The really interesting question about quantum mechanics is: how can a theory that correlates measurement outcomes be fundamental and complete? Preposterous, isn't it? If people had spent the same amount of time and energy trying to answer this question, rather than disputing whether quantum states collapse or don't collapse, we would have gotten somewhere by now.
There's no way, if quantum theory is to be universally applied, to copy a quantum state to a *classical* state of the body…
There is no way, if reality is an evolving ray in Hilbert space, to even define subsystems, measurements, observers, interactions, etc. Also, it has never been explained why, if reality is an evolving ray in Hilbert space, certain mathematical expressions of the quantum formalism should be interpreted as probabilities. So far every attempt to explain this has proved circular. The decoherence program in particular relies heavily on reduced density operators, and the operation by which these are obtained - partial tracing - presupposes Born's probability rule. Obviously you don't have this problem if the quantum formalism is fundamentally a probability algorithm.
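For concreteness, a minimal sketch of the partial-trace operation just mentioned (Python with numpy; the singlet state and qubit dimensions are my arbitrary choices): tracing out half of an entangled pure state yields a reduced density operator that is maximally mixed, and the purity Tr(rho^2) separates pure from mixed.

import numpy as np

# Singlet state (|+>|-> - |->|+>)/sqrt(2) of two spin-1/2 systems
psi = np.zeros(4, dtype=complex)
psi[1], psi[2] = 1 / np.sqrt(2), -1 / np.sqrt(2)
rho = np.outer(psi, psi.conj())                    # pure state of the pair

# Reduced density operator of subsystem A: partial trace over B
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(np.round(rho_A.real, 3))                     # I/2: maximally mixed

# Purity Tr(rho^2) is 1 for a pure state and < 1 for a mixed one
print(np.trace(rho @ rho).real)                    # 1.0
print(np.trace(rho_A @ rho_A).real)                # 0.5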
 
  • #16
koantum said:
Rather, those who consider quantum theory as a universal theory (in your sense) feel the necessity of adding an extra thing: surreal particle trajectories (Bohm), nonlinear modifications of the dynamics (Ghirardi, Rimini, and Weber or Pearle), the so-called eigenstate-eigenvalue link (van Fraassen), the modal semantical rule (Dieks), and what have you.

Indeed,... except for MWI :smile: ; or almost so.

The only thing we are sure about is that quantum mechanics is an algorithm for assigning probabilities to possible measurement outcomes on the basis of actual outcomes. If measurements are an "extra thing", what is quantum mechanics without measurements? Nothing at all!

This can be said about any scientific theory.

I don’t know of any axiomatic formulation of quantum mechanics in which measurements do not play a fundamental role. What axioms are you talking about?

1) the Hilbert space, spanned by the eigenvectors of "a complete set of observables" (which is nothing else but an enumeration of the degrees of freedom of the system, and the values they can take)

2) the unitary evolution (whose generator is the Hamiltonian)

You are right of course that there is a statement that links what is "observed" with this mathematical state - but such a statement must be made in ALL physical theories. If you read that statement as: "it is subjectively experienced that..." you're home.
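As a minimal sketch of axiom (2) (Python with numpy and scipy; the Hamiltonian here is a random Hermitian matrix of my choosing, with hbar = 1 assumed): the evolution operator exp(-iHt) is unitary and therefore conserves total probability.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2                  # a Hermitian "Hamiltonian"
U = expm(-1j * 0.3 * H)                   # evolution over t = 0.3, hbar = 1

print(np.allclose(U.conj().T @ U, np.eye(4)))   # True: U is unitary

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
print(np.linalg.norm(U @ psi))            # 1.0: total probability is conserved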

Whether you believe in unitary evolution between measurements or unitary evolution always makes no difference to me. I reject the whole idea of an evolving quantum state, not just because it is unscientific by Popper's definition (since the claim that it exists is unfalsifiable) but because it prevents us from recognizing the true ontological implications of the quantum formalism (which are pointed out at http://thisquantumworld.com). The dependence on time of the quantum-mechanical probability algorithms (states, wave functions) is a dependence on the times of measurements, not the time dependence of an evolving state.

That can be said about every scientific theory. You should then also reject the idea of an evolving classical state, or the existence of a classical electrical field, or even the existence of other persons you're not observing. When you leave your home, your cat "disappears" and it "reappears" when you come back home. The concept of "your cat" is then nothing else but a formal device whose ontological existence outside of direct observation is unscientific in Popper's sense, being an unwarranted extrapolation of the observations of your cat when you are home... The state of your cat ("the poor Felix must be hungry, I forgot to give him his dinner this morning") outside of any observation is hence a meaningless concept. When he's a bit aggressive when I come home, then that's just the result of an algorithm which depends on the time between me leaving my cat (without a meal) and me coming home again; in between, no cat. That's what you want people to accept concerning quantum states, or any other physical state. I find that rather unsatisfying...

In a theory that rejects evolving quantum states the question "to collapse or not to collapse?" doesn't arise. What generates this "(apparent?) transition" is one of several pseudo-problems (http://thisquantumworld.com/pseudo.htm) arising from the unwarranted and unverifiable postulate of quantum state evolution.

As I said, this can be applied to any scientific theory. It doesn't lead to a very inspiring picture of the world; it is essentially the "information" world view, where scientific (and other) theories are nothing else but organizing schemes of successive observations and no description of an actual reality.

So you accept an objectively random process whose dynamics quantum theory cannot describe? What happened to your claim that

No, I don't. I could accept such a theory, but quantum theory isn't one of them. The random process, in the MWI view, is entirely subjective; it is not part of the physics, but of what you happen to subjectively experience.

What IS valid (and universally so) is that quantum mechanics correlates measurement outcomes. The really interesting question about quantum mechanics is: how can a theory that correlates measurement outcomes be fundamental and complete? Preposterous, isn't it? If people had spent the same amount of time and energy trying to answer this question, rather than disputing whether quantum states collapse or don't collapse, we would have gotten somewhere by now.

All theory "correlates" subjective experiences (also called measurements), and to go beyond that is purely hypothetical: this is established by the non-falsifiability of solipsism. Nevertheless, making these hypotheses are useful activities, because it gives us an intuitive picture of a world that can explain things. It is a matter of conceptual economy, to postulate things to exist "for real", because they have strong suggestive power. So anybody claiming that one shouldn't say that certain concepts in an explanatory scheme of observations (such as quantum theory, or any scientific theory) are "real" misses the whole point of what "reality" is for: it is for its conceptual simplification ! The unprovable hypothesis that your cat exists, even if you have no observational evidence (because you're not at home), is a simplifying hypothesis which helps organize your subjective experiences (and makes for the fact that you're not surprised to find a cat when you come home). So I fail to see the point of people insisting that quantum theory tells us that there's nothing to be postulated for real in between measurements. You're not gaining any conceptual simplification from that statement, so what good is it ?

There is no way, if reality is an evolving ray in Hilbert space, to even define subsystems, measurements, observers, interactions, etc. Also, it has never been explained why, if reality is an evolving ray in Hilbert space, certain mathematical expressions of the quantum formalism should be interpreted as probabilities. So far every attempt to explain this has proved circular. The decoherence program in particular relies heavily on reduced density operators, and the operation by which these are obtained - partial tracing - presupposes Born's probability rule. Obviously you don't have this problem if the quantum formalism is fundamentally a probability algorithm.

You should look at my little paper quant-ph/0505059 then - I KNOW that it is not possible to derive the probabilities from the unitary part. My solution is simply to STATE that your subjective experience derives from a randomly selected term according to the Born rule - just as you should state, in general relativity, how your subjective experience of "now" derives from a spacelike slice of the 4-manifold, and as you should state how a physical state gives rise to a subjective experience in about ANY scientific theory.
When the objective physics is entirely described, no matter whether it is classical, quantum-mechanical or otherwise, you should STILL say how this gives rise to a subjective experience. Well, that's the place where I prefer to put the Born rule and the "projection postulate". It's as good a place as any! And I get back my nice physical ontology, my (even deterministic, although I didn't ask for it!) physical evolution - of the system, of the apparatus, of my body and all that. I get a weird rule that links my subjective experience to physical reality, but as that is in ANY CASE something weird, it's the place to hide any extra weirdness. You don't have to do as I do, of course. Any view on quantum theory that makes you happy is good enough. As I believe more in a formalism than in intuition or common sense, I need to give an ontological status to the elements of the formalism - it gives me the satisfaction of the simplifying hypothesis of ontological reality, and it helps me develop an intuition for the formalism (which are the two main purposes of the hypothesis of an ontology). Other people have other preferences.
However, I fail to see the advantage of insisting that one SHOULDN'T make that simplifying hypothesis of an existing physical reality.
 
  • #17
From your site:
An informed choice should weigh the absurdities spawned by the second option against the merits of the first.

I have often repeated that the ONLY objection to an MWI/many-minds view is "naah, too crazy"...
 
  • #18
vanesch said:
I have often repeated that the ONLY objection to an MWI/many-minds view is "naah, too crazy"...
Not too crazy. Borrowing the words of Niels Bohr, crazy but not crazy enough to be true.
 
  • #19
vanesch said:
The only thing we are sure about is that quantum mechanics is an algorithm for assigning probabilities to possible measurement outcomes on the basis of actual outcomes. If measurements are an "extra thing", what is quantum mechanics without measurements? Nothing at all!
This can be said about any scientific theory.
What about your own emphasis that classical physics can be formulated without reference to measurements, while quantum mechanics cannot?
1) the Hilbert space, spanned by the eigenvectors of "a complete set of observables" (which is nothing else but an enumeration of the degrees of freedom of the system, and the values they can take)
2) the unitary evolution (whose generator is the Hamiltonian)
You are right of course that there is a statement that links what is "observed" with this mathematical state - but such a statement must be made in ALL physical theories. If you read that statement as: "it is subjectively experienced that..." you're home.
Let me tell you in a few steps why we all use a complex vector space. (I can give you the details later if you are interested.) I use this approach when I teach quantum mechanics to higher secondary and undergraduate students.
  1. "Ordinary" objects have spatial extent (they "occupy" space), are composed of a (large but) finite number of objects that lack spatial extent, and are stable - they neither collapse nor explode the moment they are formed. Thanks to quantum mechanics, we know that the stability of atoms (and hence of "ordinary" objects) rests on the fuzziness (the literal translation of Heisenberg's "Unschärfe") of their internal relative positions and momenta.
  2. The proper way of dealing with a fuzzy observable is to assign probabilities to the possible outcomes of a measurement of this observable.
  3. The classical probability algorithm is represented by a point P in a phase space; the measurement outcomes to which it assigns probabilities are represented by subsets of this space. Because this algorithm only assigns trivial probabilities (1 if P is inside the subset representing an outcome, 0 if P is outside), we may alternatively think of P as describing the state of the system in the classical sense (a collection of possessed properties), regardless of measurements.
  4. To deal with fuzzy observables, we need a probability algorithm that can accommodate probabilities in the whole range between 0 and 1. The straightforward way to do this is to replace the 0-dimensional point P by a 1-dimensional line L, and to replace the subsets by the subspaces of a vector space. (Because of the 1-1 correspondence between subspaces and projectors, we may equivalently think of outcomes as projectors.) We assign probability 1 if L is contained in the subspace representing an outcome, probability 0 if L is orthogonal to it, and a probability 0 < p < 1 otherwise. (Because this algorithm assigns nontrivial probabilities, it cannot be re-interpreted as a classical state.)
  5. We now have to incorporate a compatibility criterion. It is readily shown (later, if you are in the mood for it) that the outcomes of compatible measurements must correspond to commuting projectors.
  6. Last but not least we require: if the interval C is the union of two disjoint intervals A and B, then the probability of finding the value of an observable in C is the sum of the probabilities of finding it in A or B, respectively.
  7. We now have everything that is needed to prove Gleason's theorem, according to which the probability of an outcome represented by the projector P is the trace of WP, where W (known as the "density operator") is linear, self-adjoint, positive, has trace 1, and satisfies either WW=W (then we call it a "pure state") or WW<W (then we call it "mixed"). (We are back to the topic of this thread!)
  8. The next step is to determine how W depends on measurement outcomes, which is also readily established.
  9. The next step is to determine how W depends on the time of measurement, which is equally straightforward to establish.
At this point we have all the axioms of your list (you missed a few) but with one crucial difference: we know where these axioms come from. We know where quantum mechanics comes from, whereas you haven’t the slightest idea about the origin of your axioms.
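A minimal numerical sketch of the endpoint of this construction (Python with numpy; the particular states and the outcome projector are my arbitrary choices): probabilities come out as Tr(WP), and the test WW = W separates pure from mixed W.

import numpy as np

def proj(psi):
    # Projector onto the ray spanned by psi
    psi = psi / np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

W_pure = proj(np.array([1, 1j], dtype=complex))              # WW = W
W_mixed = 0.7 * proj(np.array([1, 0], dtype=complex)) \
        + 0.3 * proj(np.array([0, 1], dtype=complex))        # WW < W

P = proj(np.array([1, 0], dtype=complex))    # projector for some outcome
for W in (W_pure, W_mixed):
    print(round(np.trace(W @ P).real, 3),    # probability Tr(WP)
          np.allclose(W @ W, W))             # pure iff WW = W
# 0.5 True   (pure state)
# 0.7 False  (mixed state)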
You should then also reject the idea of an evolving classical state, or the existence of a classical electrical field…
Which is exactly what I do! Newton famously refused to make up a story purporting to explain how, by what mechanism or physical process, matter acts on matter. While the (Newtonian) gravitational action depends on the simultaneous positions of the interacting objects, the electromagnetic action of matter on matter is retarded. This made it possible to transmogrify the algorithm for calculating the electromagnetic effects of matter on matter into a physical mechanism or process by which matter acts on matter.
Later Einstein's theory of gravity made it possible to similarly transmogrify the algorithm for calculating the gravitational effects of matter on matter into a mechanism or physical process.

Let's separate the facts from the fictions (assuming for the moment that facts about the world of classical physics are facts rather than fictions).
Fact is that the calculation of effects can be carried out in two steps:
  1. Given the distribution and motion of charges, we calculate six functions (the so-called "electromagnetic field"), and given these six functions, we calculate the electromagnetic effects that those charges have on other charges.
  2. Given the distribution and motion of matter, we calculate the stress-energy tensor, and given the stress-energy tensor, we calculate the gravitational effects that matter here has on matter there.
Fiction is
  1. that the electromagnetic field is a physical entity in its own right, that it is locally generated by charges here, that it mediates electromagnetic interactions by locally acting on itself, and that it locally acts on charges there;
  2. that spacetime curvature is a physical entity in its own right, and that it mediates the gravitational action of matter on matter by a similar local process.
Did you notice that those fictions do not explain how a charge locally acts on the electromagnetic field, how the electromagnetic field locally acts on a charge, and so on? Apparently, physicists consider the familiar experience of a well-placed kick sufficient to explain local action.
Physicists are, at bottom, a naive breed, forever trying to come to terms with the 'world out there' by methods which, however imaginative and refined, involve in essence the same element of contact as a well-placed kick. (B.S. DeWitt and R.N. Graham, Resource letter IQM-1 on the interpretation of quantum mechanics, AJP 39, pp. 724-38, 1971.)
vanesch said:
… or even the existence of other persons you're not observing.
This is what you are led to conclude because you don’t have a decent characterization of macroscopic objects.
It doesn't lead to a very inspiring picture of the world; it is essentially the "information" world view, where scientific (and other) theories are nothing else but organizing schemes of successive observations and no description of an actual reality.
You find a deterministic theory of everything inspiring? Perhaps this is because you want to believe in your omniscience-in-principle: you want to feel as if you know What Exists and how it behaves. To entertain this belief you must limit Reality to mathematically describable states and processes. This is in part a reaction to outdated religious doctrines (it is better to believe in our potential omniscience than in the omnipotence of someone capable of creating a mess like this world and thinking he did a great job) and in part the sustaining myth of the entire scientific enterprise (you had better believe that what you are trying to explain can actually be explained with the means at your disposal).

Besides, you are wrong when you put me in the quantum-states-are-states-of-knowledge camp. Only if we reject the claptrap about evolving quantum states can we obtain a satisfactory description of the world between consecutive measurements. This description consists of the (objective) probabilities of the possible outcomes of all the measurements that could have been performed in the meantime. (I'm not in any way implying that it makes sense to simultaneously consider the probabilities of outcomes of incompatible measurements.)

I, for one, find the ontological implications of the quantum formalism - if this is taken seriously as being fundamentally an algorithm for computing objective probabilities - greatly inspiring. Among these implications are those discussed at http://thisquantumworld.com/conundrum.htm. Besides, it is the incomplete spatiotemporal differentiation of Reality that makes a rigorous definition of "macroscopic" possible.
The random process, in the MWI view, is entirely subjective ; it is not part of the physics, but of what you happen to subjectively experience.
How convenient. What I experience is not part of physics. How does this square with your claimed universality of the quantum theory? And what I do not experience – Hilbert space vectors, wave functions, and suchlike – is part of physics. How silly!
All theory "correlates" subjective experiences (also called measurements)…
As long as you mix up experiences with measurements, you are not getting anywhere.
So anybody claiming that one shouldn't say that certain concepts in an explanatory scheme of observations (such as quantum theory, or any scientific theory) are "real" misses the whole point of what "reality" is for: it is for its conceptual simplification!
I have a somewhat higher regard for "reality". Like Aristotle, I refuse to have it identified with computational devices. ("The so-called Pythagoreans, who were the first to take up mathematics, not only advanced this subject, but saturated with it, they fancied that the principles of mathematics were the principles of all things." - Metaphysics 1-5.)
I get a weird rule that links my subjective experience to physical reality, but as that is in ANY CASE something weird, it's the place to hide any extra weirdness.
Chalmers called this the "law of minimization of mystery": quantum mechanics is mysterious, consciousness is mysterious, so maybe they are the same mystery. But mysteries need to be solved, not hidden.

Let me express, in conclusion, my appreciation for the trouble you take to explain yourself. It really helps me understand people of your ilk.
 
  • #20
I will try to outline where I think there is a problem in the approach you take, if you want it to be a universal explanation. The problem, according to me, resides in the mixture of formal aspects and intuitive, common-sense concepts. In a complete world picture, there is no room for intuitive and common-sense concepts at the foundations.

Now, I know your objection to that view: you say that it is overly pretentious to try to have a universal, complete world picture. Of course. But the exercise does not reside in giving yourself the almighty feeling of knowing it all! The exercise consists in building up, WITHOUT USING common sense concepts at the foundations, a mental picture of the world, AND SEEING IF OUR COMMON SENSE and less common sense observations can be explained by it. If, at that point, you *take for granted* certain common sense concepts, then the reasoning becomes circular. Why is it important to try to derive a complete world picture? Firstly, to see where it fails! This will indicate to us, maybe, what goes wrong with it. And secondly, to be an intuitive guide to help you develop a sense of problem solving.

koantum said:
Let me tell you in a few steps why we all use a complex vector space. (I can give you the details later if you are interested.) I use this approach when I teach quantum mechanics to higher secondary and undergraduate students.
  1. "Ordinary" objects have spatial extent (they "occupy" space), are composed of a (large but) finite number of objects that lack spatial extent, and are stable - they neither collapse nor explode the moment they are formed. Thanks to quantum mechanics, we know that the stability of atoms (and hence of "ordinary" objects) rests on the fuzziness (the literal translation of Heisenberg's "Unschärfe") of their internal relative positions and momenta.


  1. I think it is already fairly clear here that there is an appeal to a mixture of intuitive ontological concepts. But an "algorithmic" theory cannot take for granted the ontological existence of any such "ordinary" objects: their existence must be DERIVABLE from its fundamental formulation. Otherwise, you already sneak in the ontology you're going to refute later.

    [*]The proper way of dealing with a fuzzy observable is to assign probabilities to the possible outcomes of a measurement of this observable.

    Even there, there is a problem: how does a "measurement apparatus" link to an observable? Does the measurement apparatus have ontological existence? Or does only the observation of the measurement apparatus (by a person?) make sense, so that we cannot postulate (an ontological hypothesis, which is to be rejected) that the measurement apparatus, as a physical construction, exists?
    So *what* defines a fuzzy or other observable in the first place if we're not entitled to any ontology? And IF we are entitled to an intuitive ontology, then exactly what is it?

    [*]The classical probability algorithm is represented by a point P in a phase space; the measurement outcomes to which it assigns probabilities are represented by subsets of this space. Because this algorithm only assigns trivial probabilities (1 if P is inside the subset representing an outcome, 0 if P is outside), we may alternatively think of P as describing the state of the system in the classical sense (a collection of possessed properties), regardless of measurements.

    Ok.

    [*]To deal with fuzzy observables, we need a probability algorithm that can accommodate probabilities in the whole range between 0 and 1. The straightforward way to do this is to replace the 0-dimensional point P by a 1-dimensional line L, and to replace the subsets by the subspaces of a vector space. (Because of the 1-1 correspondence between subspaces and projectors, we may equivalently think of outcomes as projectors.) We assign probability 1 if L is contained in the subspace representing an outcome, probability 0 if L is orthogonal to it, and a probability 0 < p < 1 otherwise. (Because this algorithm assigns nontrivial probabilities, it cannot be re-interpreted as a classical state.)

    I don't see why this procedure is "the straightforward way". I'd think that there are two ways of doing what you want to do. One is the "Kolmogorov" way: each thinkable observable is a random variable over a probability space. We already know that this doesn't work in quantum theory (the discussion we had previously). But one can go further. One can say that to each "compatible" (to be defined at will) set of observables corresponds a different probability space, and the observables are then random variables over this space. THIS is the most general random algorithm. The projection of a ray in a vector space is far more restrictive, and I don't see why this must be the case.

    To illustrate what I want to say, consider two compatible observables, X1 and Y1. X1 can take on 3 possible outcomes: smaller than -1, between -1 and +1, and bigger than 1 (outcomes X1a, X1b and X1c).
    Y1 can take on 2 possible outcomes, Y1a and Y1b. For THIS SET OF OBSERVABLES, I can now define a probability space with distribution given by P(X1,Y1), with 6 different probabilities, satisfying the Kolmogorov axioms. But let us now consider that we have ANOTHER set of observables, X2 and Y2. In fact, in our naivety, we think that X2 is the "same" observable as X1, but more fine-grained. But that would commit the mistake of assigning a kind of ontological existence to a measurement apparatus and to what it is going to measure. As only observations are to be considered "real", and we have of course a DIFFERENT measurement for the observable X2 than for X1 (we have to change scale, or resolution, on the hypothetical measurement apparatus), we can have a totally DIFFERENT probability distribution. Consider that X2 has 5 possible outcomes: smaller than -2, between -2 and -1, between -1 and +1, between +1 and +2, and bigger than 2. We would be tempted to state that, X2 measuring the "same" quantity as X1, the probability to measure X2a plus the probability to measure X2b should equal the probability to have measured X1a (smaller than -2, or between -2 and -1, is equivalent to smaller than -1). But THAT SUPPOSES A KIND OF ONTOLOGICAL EXISTENCE of the "quantity to be measured" independent of the measurement act, which is of course against the spirit of our purely algorithmic approach. Hence, a priori, there's no reason not to accept that the probability distribution for X2 and Y2 is totally unrelated to the one for X1 and Y1. This situation can easily be recognized as "contextuality".

    [*]We now have to incorporate a compatibility criterion. It is readily shown (later, if you are in the mood for it) that the outcomes of compatible measurements must correspond to commuting projectors.

    Yes, but we have placed ourselves already in a very restrictive class of probability algorithms for measurement outcomes. The contextual situation I sketched will not necessarily be incorporated in this more restrictive scheme. So postulating this is not staying open to "a probability algorithm in general".

    [*]Last but not least we require: if the interval C is the union of two disjoint intervals A and B, then the probability of finding the value of an observable in C is the sum of the probabilities of finding it in A or B, respectively.

    Ok, this is an explicit requirement of non-contextuality. Why?

    [*]We now have everything that is needed to prove Gleason's theorem, according to which the probability of an outcome represented by the projector P is the trace of WP, where W (known as the "density operator") is linear, self-adjoint, positive, has trace 1, and satisfies either WW=W (then we call it a "pure state") or WW<W (then we call it "mixed"). (We are back to the topic of this thread!)

    Indeed. However, I had the impression you wanted to show that quantum theory is nothing else but a kind of "general scheme of writing down a generator for probability algorithms of observations", but we've made quite some hypotheses along the way! Especially the non-contextuality requirement, which requires us to HAVE A RELATIONSHIP BETWEEN THE PROBABILITIES OF DIFFERENT OBSERVATIONS (say, those with high, and those with low resolution), goes against the spirit of denying an ontological status to the "quantity to be measured outside of its measurement". If the only thing that makes sense, are measurement outcomes, then the resolution of this measurement makes integral part of it. As such, a hypothetical measurement with another resolution, is intrinsically entitled to a TOTALLY DIFFERENT and unrelated probability distribution. It is only when we say that what we measure has an independent ontological existence that we can start making assumptions about different measurements of the "same" thing: in order for it to be the "same" thing, it has to have ontological status.
    For instance, if what we measure is "position of a particle", but we say that the only things that make sense are "measurement outcomes", then the only thing that makes sense is "ruler says position 5.4 cm" and not "particle position is 5.4cm". Now, if we replace the ruler by a finer ruler, then the only thing that makes sense is now "fine ruler says position 5.43cm". There is a priori no relationship between the outcome "ruler says position 5.4cm" and "fine ruler says 5.43cm", because these are two DIFFERENT measurements. However, if there is an ontology behind it, and BOTH ARE MEASUREMENTS OF A PARTICLE POSITION, then these two things are related of course. But this REQUIRES THE POSTULATION OF SOME ONTOLOGICAL EXISTENCE OF A QUANTITY INDEPENDENT OF A MEASUREMENT - which is, according to your view, strictly forbidden.

    BTW, the above illustrates the "economy of concept" that results from postulating an ontology, and the intuitive help it provides. The unrelated statements "ruler says position 5.4cm" and "fine ruler says 5.43cm" which are hard to make any sense of, become suddenly almost trivial concepts when we say that there IS a particle, and that we have tried to find its position using two physical experiments, one with a better resolution than the other.
At this point we have all the axioms of your list (you missed a few) but with one crucial difference: we know where these axioms come from. We know where quantum mechanics comes from, whereas you haven’t the slightest idea about the origin of your axioms.

As I tried to point out, I don't see where your axioms come from either. Why this projection thing to generate probability algorithms, which restricts their choice? And why this non-contextuality?

This made it possible to transmogrify the algorithm for calculating the electromagnetic effects of matter on matter into a physical mechanism or process by which matter acts on matter.
Later Einstein's theory of gravity made it possible to similarly transmogrify the algorithm for calculating the gravitational effects of matter on matter into a mechanism or physical process.

Let's separate the facts from the fictions (assuming for the moment that facts about the world of classical physics are facts rather than fictions).
Fact is that the calculation of effects can be carried out in two steps:
  1. Given the distribution and motion of charges, we calculate six functions (the so-called "electromagnetic field"), and given these six functions, we calculate the electromagnetic effects that those charges have on other charges.
  2. Given the distribution and motion of matter, we calculate the stress-energy tensor, and given the stress-energy tensor, we calculate the gravitational effects that matter here has on matter there.
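For concreteness, step 1 of the electromagnetic case can be sketched as a two-stage computation (a toy electrostatic example in Python; the charges and positions are hypothetical, and only the three electric components of the six field functions appear, the situation being static):

[code]
import numpy as np

K = 8.988e9  # Coulomb constant in SI units (N m^2 / C^2)

# Stage 1: distribution of charges -> field functions.
charges = [(+1e-9, np.array([0.0, 0.0, 0.0])),   # (charge in C, position in m)
           (-2e-9, np.array([0.1, 0.0, 0.0]))]

def E(r):
    """Electric field at r: the computational intermediate of stage 1."""
    total = np.zeros(3)
    for q, pos in charges:
        d = r - pos
        total += K * q * d / np.linalg.norm(d) ** 3
    return total

# Stage 2: field -> effect on another charge, F = qE.
q_test, r_test = 1e-9, np.array([0.05, 0.05, 0.0])
print("E =", E(r_test), " F =", q_test * E(r_test))
[/code]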
Fiction is
  1. that the electromagnetic field is a physical entity in its own right, that it is locally generated by charges here, that it mediates electromagnetic interactions by locally acting on itself, and that it locally acts on charges there;
  2. that spacetime curvature is a physical entity in its own right, and that it mediates the gravitational action of matter on matter by a similar local process.


  1. Well, these fictions are strong conceptual economies. For instance, if I have a static electric field, I'm not really surprised that a charge can accelerate one way or another; what is remarkable is that the DIRECTION of its acceleration at a certain position is always the same: the electric field vector is pointing in one and only one direction! Now, if I see this as an ALGORITHM, then I don't see, a priori, why charges could not suddenly decide to go a bit in all possible directions as a function of their charge. I can imagine writing myself any algorithm that can do that. But when I physically think of the electric field at a point, I find a natural explanation for this single direction.

    I'll stop here, because I'd like to watch a movie on TV :-)
 
  • #21
koantum said:
Let me tell you in a few steps why we all use a complex vector space. (I can give you the details later if you are interested.) I use this approach when I teach quantum mechanics to higher secondary and undergraduate students. ...

This was the most beautiful post I've read on physics forums to date. I agree that the heart of QM is the process of measurement.

The most elegant description of QM I've seen is the Schwinger measurement algebra, and I've been busily trying to geometrize this for the last few years. It turns out that when one does this, one ends up having to associate a geometric (i.e. Clifford algebraic) constant with the imaginary unit. (This is similar to David Hestenes' geometrization of the Dirac equation back in 1982.) It turns out that there are many ways of doing this, and they correspond to gauge transformations. Basically, to get spinors from a density matrix formalism (where the states are pure density matrices or projection operators), you have to choose what Schwinger called a "vacuum" state. He called it a vacuum because of what it turns into when you go to a quantum field theory based on these ideas.

Carl
 
Last edited:
  • #22
The proper way of dealing with a fuzzy observable is to assign probabilities to the possible outcomes of a measurement of this observable.
Why?

This goes directly against what I remember about fuzzy sets and fuzzy logic.
 
Last edited:
  • #23
Hurkyl said:
Why?
For one thing, because nobody has ever come up with a different way of dealing with a fuzzy observable. Or am I misinformed? But I should have been more precise: the proper way of dealing with a fuzzy observable O is to assign probabilities to the possible outcomes of an unperformed measurement of O. If no measurement is actually made, all we can say about a quantum system is with what probability this or that outcome would be obtained if the corresponding measurement were made. If the probability is >0 for the possible outcomes v1,v2,v3..., then the value of O is fuzzy in the sense that the propositions "the value of O is vi" (i=1,2,3,...) are neither true nor false but meaningless.
This goes directly against what I remember about fuzzy sets and fuzzy logic.
And what would that be?
 
  • #24
And what would that be?
That it's not based at all on probability.

Let me share with you some quotes from *Fuzzy Sets and Fuzzy Logic: Theory and Applications*, from the introduction to its chapter on possibility theory:


... possibility theory, a theory that is closely connected with fuzzy set theory... [fuzzy measures] will allow us to explicate differences between fuzzy set theory and probability theory

and later in that chapter:


It is shown that probability and possibility theory are distinct theories, and neither is subsumed under the other.
 
  • #25
koantum said:
the proper way of dealing with a fuzzy observable O is to assign probabilities to the possible outcomes of an unperformed measurement of O. If no measurement is actually made, all we can say about a quantum system is with what probability this or that outcome would be obtained if the corresponding measurement were made. If the probability is >0 for the possible outcomes v1,v2,v3..., then the value of O is fuzzy in the sense that the propositions "the value of O is vi" (i=1,2,3,...) are neither true nor false but meaningless.

Accepting the above, why would there be a relationship between the "probability of outcome" of the "measurement of O if the corresponding measurement were made with resolution D" and of the "measurement of O if the corresponding measurement were made with resolution d"?
These being two DIFFERENT measurements, and O itself not having any existence outside of its revelation by a measurement, there is no a priori requirement for these two probability distributions to be related in any way, no?

As an example, let us say that measurement M1 of O takes on the possible outcomes {A,B,C}, with A standing for "O is 1 or 2", B standing for "O is 3 or 4" and C standing for "O is 5 or 6".
Measurement M2 has 6 possible outcomes, {a,b,c,d,e,f}, with a standing for "O is 1", b standing for "O is 2" etc...

Now, you want a probability distribution to be assigned to a potential measurement. Fine: potential measurement M1 of O: p(A) = 0.6, p(B) = 0.4, p(C) = 0.0

Potential measurement M2 of O: p(a) = 0.1, p(b) = 0.1, p(c) = 0.1, p(d)= 0.1, p(e)=0.1, p(f) = 0.5

I have assigned probabilities to the outcomes of measurements M1 and M2. You cannot reproduce this with standard quantum theory, so it is NOT a universal probability-of-potential-compatible-measurements description algorithm.

And if you now say that p(f) = 0.5 together with p(C) = 0.0 is IMPOSSIBLE because "O cannot be at the same time NOT in {5,6} and equal to 6", then you have assigned a measurement-independent reality (ontology) to the quantity O.
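In fact, standard quantum theory forces p(f) to be no greater than p(C), because the projector for outcome f ("O is 6") is contained in the projector for outcome C ("O is 5 or 6"). A minimal numpy sketch, with a hypothetical random state:

[code]
import numpy as np

rng = np.random.default_rng(0)

# Toy 6-dimensional Hilbert space: basis vector k-1 <-> "O is k".
dim = 6
V = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
rho = V @ V.conj().T                  # positive matrix ...
rho /= np.trace(rho).real             # ... normalized to trace 1

P_f = np.zeros((dim, dim)); P_f[5, 5] = 1.0                # "O is 6"
P_C = np.zeros((dim, dim)); P_C[4, 4] = P_C[5, 5] = 1.0    # "O is 5 or 6"

p_f = np.trace(rho @ P_f).real
p_C = np.trace(rho @ P_C).real
print(p_f, p_C)
assert p_f <= p_C + 1e-12   # holds for ANY rho, since P_f is contained in P_C
[/code]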
 
Last edited:
  • #26
In a complete world picture, there is no room for intuitive and common sense concepts at the foundations…. The exercise consists in building up, WITHOUT USING common sense concepts at the foundations, a mental picture of the world, AND SEE IF OUR COMMON SENSE and less common sense observations can be explained by it.
An intuitive concept is one thing, a commonsense concept is quite another. Time is an intuitive concept. So is space. Like pink and turquoise, spatial extension is a quale that can only be defined by ostension - by drawing attention to something of which we are directly aware. While the intuition of space can lend a phenomenal quality to numerical parameters, it cannot be reduced to such parameters.
If you are not convinced, try to explain to my friend Andy, who lives in a spaceless world, what space is like. Andy is good at maths, so he understands you perfectly if you tell him that it is like the set of all triplets of real numbers. But if you believe that this gives him a sense of the expanse we call space, you are deluding yourself. We can imagine triplets of real numbers as geometrical points embedded in space; he can't. We can interpret the difference between two numbers as the distance between two points; he can't. At any rate, he can't associate with the word "distance" the remoteness it conveys to us.
So without using intuitive concepts at the foundations, you cannot even talk about space (and this should be even more obvious for time).
I'm not saying that you cannot come up with a mathematical construct and call it "space". You can define "self-adjoint operator" = "elephant" and "spectral decomposition" = "trunk", and then you can prove a theorem according to which every elephant has a trunk. But please don’t tell me that this theorem has anything to do with real pachyderms.
Why is it important to try to derive a complete world picture? Firstly, to see where it fails!
Agreed. (But then one mustn't sweep under the rug all those data that don’t fit.) In fact, I said something to this effect in several of my papers. Permit me to quote myself:
Science is driven by the desire to know how things really are. It owes its immense success in large measure to its powerful "sustaining myth" [this is a reference to an article by Mermin] - the belief that this can be discovered. Neither the ultraviolet catastrophe nor the spectacular failure of Rutherford's model of the atom made physicists question their faith in what they can achieve. Instead, Planck and Bohr went on to discover the quantization of energy and angular momentum, respectively. If today we seem to have reason to question our "sustaining myth", it ought to be taken as a sign that we are once again making the wrong assumptions, and it ought to spur us on to ferret them out. Anything else should be seen for what it is - a cop-out.​
I wrote this in response to Bernard d'Espagnat's claim that without nonlinear modifications of the Schrödinger equation (or similar adulterations of standard quantum mechanics) we cannot go beyond objectivity in the weak sense of inter-subjective agreement. I wrote something similar in response to the claim by Fuchs and Peres (in their opinion piece in Physics Today, March 2000) that QM is an epistemic theory and does not yield a model of a "free-standing" reality.
I think it is already fairly clear here, that there is an appeal to a mixture of intuitive ontological concepts. But an "algorithmic" theory cannot take for granted the ontological existence of any such "ordinary" object: their existence must be DERIVABLE from its fundamental formulation.
I have found that students (higher secondary and undergraduate) are much happier if I can show them where exactly the quantum formalism comes from and why it has the form that it does, than if I confront them with a set of abstruse axioms and tell them that that's the way it is! What value does an explanation have if it is based on something nobody comprehends? You may call my approach teleological. I ask, what must the laws of physics be like so that the "ordinary" objects which surround us can exist? You stop at the fundamental laws and take them for God-given. If you want to go further and understand a fundamental theory, the teleological (not theological!) approach is the only viable one: explaining why (in the teleological sense) the laws of physics are just so.
how does a "measurement apparatus" link to an observable? Does the measurement apparatus have ontological existence? Or does only the observation of the measurement apparatus (by a person?) make sense…
It ought to be clear by now that I reject the view that measurements have anything to do with conscious observations. Measurements are presupposed by the quantum formalism since all it does is correlate measurement outcomes. Attempts to make the quantum formalism consistent with the existence of measurements are therefore misconceived. Since it presupposes measurements, it is trivially consistent with their existence. Any notion to the contrary arises from misconceptions that must be identified and eliminated.
So what are measurements? Any event or state of affairs from which the truth or the falsity of a statement about the world can be inferred qualifies as a measurement, regardless of whether anyone is around to make that inference.
How the "apparatus" links to an observable? It defines it. Consider an electron spin associated with the ket |z+>. What do we know about this spin? All we know is how it behaves in any given measurement context, that is, we know the possible outcomes and we can calculate their probabilities. By defining - and not just defining but realizing - an axis, the setup makes available two possible values; it creates possibilities to which probabilities can be assigned. In the absence of an apparatus that realizes a particular axis, the properties "up" and "down" do not even exist as possibilities. The idea that |z+> represents something as it is, all by itself, rather than as it behaves in possible measurement situations, is completely vacuous.
And the same applies to all quantum states, wave functions, etc.
Does the measurement apparatus have ontological existence? Certainly. Any macroscopic object has, and so has everything that can be inferred from a measurement (as defined above).

To prevent our posts from becoming interminable, I'll return to the rest of your post later.
 
  • #27
Hurkyl said:
probability and possibility theory are distinct theories, and neither is subsumed under the other.
So? In quantum mechanics we have measurement outcomes (possibilities) and an algorithm that assigns to them probabilities.
 
  • #28
koantum said:
An intuitive concept is one thing, a commonsense concept is quite another. Time is an intuitive concept. So is space. Like pink and turquoise, spatial extension is a quale that can only be defined by ostension - by drawing attention to something of which we are directly aware.

I couldn't have formulated that better myself...

While the intuition of space can lend a phenomenal quality to numerical parameters, it cannot be reduced to such parameters.
If you are not convinced, try to explain to my friend Andy, who lives in a spaceless world, what space is like. Andy is good at maths, so he understands you perfectly if you tell him that it is like the set of all triplets of real numbers. But if you believe that this gives him a sense of the expanse we call space, you are deluding yourself. We can imagine triplets of real numbers as geometrical points embedded in space; he can't. We can interpret the difference between two numbers as the distance between two points; he can't. At any rate, he can't associate with the word "distance" the remoteness it conveys to us.

But we are in absolute agreement here !

So without using intuitive concepts at the foundations, you cannot even talk about space (and this should be even more obvious for time).
I'm not saying that you cannot come up with a mathematical construct and call it "space". You can define "self-adjoint operator" = "elephant" and "spectral decomposition" = "trunk", and then you can prove a theorem according to which every elephant has a trunk. But please don’t tell me that this theorem has anything to do with real pachyderms.

Exactly! So at a certain point, you have to link the formal terms in your mathematical formalism to qualia, to subjective experiences. *This* is the essence of the interpretation of ANY theory, classical, quantum or otherwise. It is why I always insist on the fact that there is no fundamental difference between the "measurement problem" in quantum theory and the one in classical theory, although the POSTULATE that assigns qualia to formal mathematical elements is simpler in classical theory.

The kind of argument you are putting forward - and with which I agree entirely up to now - is essentially another indication of the unfalsifiability of solipsism, as you seem to point out yourself:

Agreed. (But then one mustn't sweep under the rug all those data that don’t fit.) In fact, I said something to this effect in several of my papers. Permit me to quote myself:
Science is driven by the desire to know how things really are. It owes its immense success in large measure to its powerful "sustaining myth" [this is a reference to an article by Mermin] - the belief that this can be discovered. Neither the ultraviolet catastrophe nor the spectacular failure of Rutherford's model of the atom made physicists question their faith in what they can achieve. Instead, Planck and Bohr went on to discover the quantization of energy and angular momentum, respectively. If today we seem to have reason to question our "sustaining myth", it ought to be taken as a sign that we are once again making the wrong assumptions, and it ought to spur us on to ferret them out. Anything else should be seen for what it is - a cop-out.​

What this statement means, to me, is that a proof of the falsity of solipsism (and hence of the ontological existence of anything whatsoever) is a myth. Agreed. I think we have known that for a few centuries. However, the fact that one cannot PROVE the existence of an ontology is not a proof of its falsity either. And this is the point where we seem to differ in opinion:

the *hypothesis* (and it will never be anything else, granted) of an objective ontology IS a useful hypothesis. It guides us in our quest for what is "acceptable" and what is not. Its denial doesn't lead anywhere useful: you just open up the bag of possibilities. What we need are ideas that *constrain* the possibilities of physical theories, not ones that open them up, in order to guide us. As long as one CAN make the hypothesis of an objective ontology, one should do so, because of its conceptual power.
Dropping the hypothesis of an objective ontology is not productive: ANY algorithm would do. Astrology would do; it is an algorithm like any other to "predict outcomes of measurements". It surely performs worse under lab conditions, but it doesn't perform badly in the "everyday world" of social events, happiness and so on. Astrology does not seem compatible with any ontology of a physical theory, but it surely is an algorithm like any other. This is where one can see the power of the hypothesis of an ontology over its denial. With an ontological interpretation, there are grounds to reject astrology; in a purely algorithmic concept, no such grounds exist. Anything goes.

I wrote this in response to Bernard d'Espagnat's claim that without nonlinear modifications of the Schrödinger equation (or similar adulterations of standard quantum mechanics) we cannot go beyond objectivity in the weak sense of inter-subjective agreement. I wrote something similar in response to the claim by Fuchs and Peres (in their opinion piece in Physics Today, March 2000) that QM is an epistemic theory and does not yield a model of a "free-standing" reality.

I'd agree with you in rejecting these requirements. They make overly severe hypotheses about the thing that is missing: the link between objective ontology and subjective experience - which, we both seem to agree, is a necessary part of any physical theory, classical or otherwise. But it is not because of these unnecessary requirements that one needs to go to the other extreme and reject the possibility of an ontology.

I have found that students (higher secondary and undergraduate) are much happier if I can show them where exactly the quantum formalism comes from and why it has the form that it does, than if I confront them with a set of abstruse axioms and tell them that that's the way it is! What value does an explanation have if it is based on something nobody comprehends?

What you present is a sleight of hand. You DIDN'T present any REASON why the quantum formalism has the form it has, although you seem to claim so. Once we are in the world of "algorithms that calculate probabilities of possible outcomes of measurements", the class of algorithms so defined is LARGER than what quantum theory can generate. You need extra assumptions, which you are sneaking in, such as the requirement of non-contextuality, which is totally incomprehensible from a purely algorithmic viewpoint (although it does make more sense if there is a postulated ontology).

You may call my approach teleological. I ask, what must the laws of physics be like so that the "ordinary" objects which surround us can exist? You stop at the fundamental laws and take them for God-given. If you want to go further and understand a fundamental theory, the teleological (not theological!) approach is the only viable one: explaining why (in the teleological sense) the laws of physics are just so.

But, as I said, you DIDN'T derive the laws of quantum theory. You sneaked in all the conditions necessary to ARRIVE at them. I gave a few examples of algorithmic possibilities which are NOT realisable by a quantum formalism.

It ought to be clear by now that I reject the view that measurements have anything to do with conscious observations. Measurements are presupposed by the quantum formalism since all it does is correlate measurement outcomes.

But I (think I) understand your viewpoint, which is "minimalistic", and which is the "shut up and calculate" approach: you say, intuitively, we arrive at setting up our quantum formalism for most lab situations, this gives us the structure of the hilbert space and so on; we have some intuitive hocus pocus to think up the correct hamiltonian that corresponds to the case at hand, and we intuitively think we know what we are measuring at the end. We now turn the mathematical handle of the quantum formalism, and out come probabilities of outcomes. They fit (or they don't fit) with experiment. Period.
Sure. But now here's my question: why do you think that a voltmeter is "measuring volts" and not "particle position" or "color of your eyes"? Because the salesman told you that it is a voltmeter? The only answer you can provide is probably that voltmeters are exactly that: things that measure volts. But imagine I sneak into your lab, and change your voltmeter into a bolometer, without you noticing. Suddenly you find strange results. But a "general probability generating algorithm" should have no difficulty adapting to the situation, no? So what's going to be your reaction to the readings of your changed "voltmeter"? Worse: if "volts" are what your "voltmeter" is measuring, there is no way for you to find out that I fiddled with your apparatus, because it is the *defining entity* of what volts are, according to your statement.

Attempts to make the quantum formalism consistent with the existence of measurements are therefore misconceived. Since it presupposes measurements, it is trivially consistent with their existence. Any notion to the contrary arises from misconceptions that must be identified and eliminated.

How do you use the quantum formalism then in the design of measurement apparatus?

So what are measurements? Any event or state of affairs from which the truth or the falsity of a statement about the world can be inferred qualifies as a measurement, regardless of whether anyone is around to make that inference.
How the "apparatus" links to an observable? It defines it. Consider an electron spin associated with the ket |z+>. What do we know about this spin? All we know is how it behaves in any given measurement context, that is, we know the possible outcomes and we can calculate their probabilities. By defining - and not just defining but realizing - an axis, the setup makes available two possible values; it creates possibilities to which probabilities can be assigned. In the absence of an apparatus that realizes a particular axis, the properties "up" and "down" do not even exist as possibilities. The idea that |z+> represents something as it is, all by itself, rather than as it behaves in possible measurement situations, is completely vacuous.

But how do you then distinguish between different measurement apparatuses? How are you going to analyse such an apparatus, and make sure that there is not simply a coin-flipping device inside, while the display reads: "spin-measurement apparatus: used axis: Z"?

What IS a measurement apparatus? How do you make one? And how do you determine what it measures?

And the same applies to all quantum states, wave functions, etc.
Does the measurement apparatus have ontological existence? Certainly. Any macroscopic object has, and so has everything that can be inferred from a measurement (as defined above).

So the position of a particle "exists"? And its momentum "exists"? What does that mean, for a particle to have a position and a momentum? Does that mean that my particle IS really there somewhere, and is MOVING in a certain direction? At any moment? And if we analyse a double-slit experiment with that? Does this mean that my particle has an ONTOLOGICALLY EXISTING POSITION at any moment in time (because it could POTENTIALLY be measured)? But because of its fuzziness, it is at several places at once? In other words, it is ontologically, at every moment in time, in a superposition of precise position states? And at the same time, ontologically, it has several momentum values as a fuzzy quantity? In other words, it is at the same time in a superposition of precise momentum states?
But didn't we just give an ONTOLOGICAL EXISTENCE to the wavefunction then?? So what's all the fuss then about "one should not give ontological existence to the wavefunction"?

In conclusion: any physical theory that takes on this special status that "measurements are given" makes it impossible to DESIGN measurement apparatus. As designing them is my professional activity, I can attest that it is an annoying feature of a physical theory that I'm not entitled to analyse the physics of a measurement apparatus!
 
Last edited:
  • #29
So? In quantum mechanics we have measurement outcomes (possibilities) and an algorithm that assigns to them probabilities.
That's not how possibility theory works.

Evidence theory studies something called a belief measure and a plausibility measure. These are nonadditive measures, but they do live in [0, 1], and map the empty set to zero and the whole space to 1.

A belief measure Bel is a measure where [itex]\text{Bel}(A \cup B) \geq \text{Bel}(A) + \text{Bel}(B)[/itex] if A and B are disjoint.

With it we associate a plausibility measure Pl, defined by [itex]\text{Pl}(A) + \text{Bel}(B) = 1[/itex], where B is the complement of A.

Possibility theory deals with the case when:

[itex]\text{Bel}(A \cap B) = \min \{ \text{Bel}(A), \text{Bel}(B) \}[/itex]
[itex]\text{Pl}(A \cup B) = \max \{ \text{Pl}(A), \text{Pl}(B) \}[/itex]

In this case, we call them necessity and possibility measures, respectively.
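To see how these max/min rules behave, here is a minimal Python sketch with a hypothetical possibility distribution on a three-element universe (the possibility of a set being the max of the distribution over its elements, as in the textbook construction):

[code]
universe = ("a", "b", "c")
r = {"a": 1.0, "b": 0.7, "c": 0.2}   # hypothetical possibility distribution

def Pl(A):                            # possibility measure
    return max((r[x] for x in A), default=0.0)

def Bel(A):                           # necessity measure: Bel(A) = 1 - Pl(A complement)
    return 1.0 - Pl([x for x in universe if x not in A])

A, B = ["a"], ["b"]
print(Pl(A + B), max(Pl(A), Pl(B)))   # 1.0 1.0 -- the max rule for possibility
print(Bel(A + B))                     # 0.8 = 1 - Pl({"c"}) -- necessity
print(Pl(A) + Pl(B))                  # 1.7 > 1 -- nonadditive: not a probability
[/code]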


I have no idea if it's possible, in general, to come up with a reasonable way to take a necessity and a possibility measure and produce a probability measure.

But even if you can, you would generally lose information in the translation.


The reason to use probabilities is that probabilities seem to work well -- AFAIK there is no higher reason. I do begin to wonder, now, if the reason probabilities work well is that we design experiments that look for probabilities. :smile: To quote another sentence from the text:

probability theory is an ideal tool for formalizing uncertainty in situations where class frequencies are known or where evidence is based on outcomes of a sufficiently long series of independent random experiments.

so that probabilities are good for talking about the kinds of experiments we do. On the other hand...

Possibility theory, on the other hand, is ideal for formalizing incomplete information expressed in terms of fuzzy propositions

which sounds a lot like the fundamental uncertainty posited by quantum mechanics.


Of course, this book is not about physical foundations -- it would be talking about subjective probability/possibility, so these comments may not be applicable at all.
 
  • #30
vanesch said:
I'd think that there are two ways of doing what you want to do.
Great. Then only one needs to be eliminated.
One can say that, to each "compatible" (to be defined at will) set of observables corresponds a different probability space, and the observables are then random variables over this space. THIS is the most general random algorithm. The projection of a ray in a vector space is way more restrictive, and I don't see why this must be the case.
This is indeed the most general algorithm but it can be narrowed down (via Gleason's theorem) to the conventional Hilbert space formalism. This is shown in J.M. Jauch, Foundations of Quantum Mechanics (Reading, MA: Addison-Wesley, 1968). Also, "compatible" is not defined at will. Once you have the Hilbert space formalism, it is obvious how to define compatibility.
Ok, this is an explicit requirement of non-contextuality. Why?
I admit that this requirement is not inevitable. As you pointed out, probabilities can depend on measurement contexts; in a different context the same outcome need not have the same probability. In the context of composite systems contextual observables are indeed readily identified, as they are if we allow probability assignments based on earlier and later outcomes using the ABL rule (so named after Aharonov, Bergmann, and Lebowitz) instead of the Born rule, which assigns probabilities on the basis of earlier or later outcomes.
However, my first aim is to make quantum mechanics comprehensible to bright kids (something that is sorely needed) rather than to hardened quantum mechanicians (for whom there is little hope anymore), and those kids are as happy with this commonsense requirement as they are astonished by the contextualities that arise when systems are combined or when probabilities are assigned symmetrically with respect to time.
My second aim is to find the simplest set of laws that permits the existence of "ordinary" objects, and therefore I require non-contextuality wherever it is possible at all. Nature appears to take the same approach.
I had the impression you wanted to show that quantum theory is nothing else but a kind of "general scheme of writing down a generator for probability algorithms of observations", but we've made quite a few hypotheses along the way!
Sorry if I gave the wrong impression. Not a "general scheme, period" but a general scheme for dealing with the objectively fuzzy observables that we need if we want to have "ordinary" objects. We started out with a discussion of objective probabilities, which certainly raises lots of questions. To be able to answer these questions consistently, I have to repudiate more than one accepted prejudice about quantum mechanics.
the non-contextuality requirement goes against the spirit of denying an ontological status to the "quantity to be measured outside of its measurement"…. [it] REQUIRES THE POSTULATION OF SOME ONTOLOGICAL EXISTENCE OF A QUANTITY INDEPENDENT OF A MEASUREMENT - which is, according to your view, strictly forbidden.
Whereas non-contextuality is implied by an ontology of self-existent positions (or values of whatever kind), it doesn’t imply such an ontology.
BTW, the above illustrates the "economy of concept" that results from postulating an ontology, and the intuitive help it provides. The unrelated statements "ruler says position 5.4cm" and "fine ruler says 5.43cm" which are hard to make any sense of, become suddenly almost trivial concepts when we say that there IS a particle, and that we have tried to find its position using two physical experiments, one with a better resolution than the other.
Have you now turned from an Everettian into a Bohmian? How come you seem to be all praise for intuitive concepts when a few moments ago you spurned them? And how is it that "ruler says position 5.4cm" is hard to make sense of for non-Bohmians? I find statements about self-existing positions or "regions of space" harder to make sense of. If I have a detector monitoring the interval from 5.4 to 5.6 (or from 5.40 to 5.41 for that matter) then I know what I am talking about. The detector is needed to realize (make real) this interval or region of space. It makes the property of being in this interval available for attribution. Then it only takes a click to make it "stick" to a particle.
When we come to the non-contextuality requirement, I ask my students to assume that p(C)=1, 0<p(A)<1, and 0<p(B)<1. (Recall: A and B are disjoint regions, C is their union, and p(C) is the probability of finding the particle in C if the appropriate measurement is made.) Then I ask: since neither of the detectors monitoring A and B, respectively, is certain to click, how come it is certain that either of them will click? The likely answer: "So what? If p(C)=1 then the particle is in C, and if it isn’t in A (no click), then it is in B (click)." Economy of concept but wrong!
At this point the students are well aware that (paraphrasing Wheeler) no property is a possessed property unless it is a measured property. They have discussed several experiments (Mermin's "simplest version" of Bell's theorem, the experiments of Hardy, GHZ, and ESW) all of which illustrate that assuming self-existent values leads to contradictions. So I ask them again: how come either counter will click if neither counter is certain to click? Bafflement.
Actually the answer is elementary, for implicit in every quantum-mechanical probability assignment is the assumption that a measurement is made. It is always taken for granted that the probabilities of the possible outcomes add up to 1. There is therefore no need to explain this. But there is a lesson here: not even probability 1 is sufficient for "is" or "has". P(C)=1 does not mean that the particle is in C but only that it is certain to be found in C provided that the appropriate measurement is made. Farewell to Einstein's "elements of reality". Farewell to van Fraassen's eigenstate-eigenvalue link.
You say "there IS a particle". What does this mean? It means there is a conservation law (only in non-relativistic quantum mechanics, though) which tells us that every time we make a position measurement exactly one detector clicks. If every time exactly two detectors click, we say there are two particles.
Well, these fictions are strong conceptual economies.
It might be better to call them visual aids or heuristic tools.
For instance, if I have a static electric field, I'm not really surprised that a charge can accelerate one way or another; what is remarkable is that the DIRECTION of its acceleration at a certain position is always the same: the electric field vector is pointing in one and only one direction! Now, if I see this as an ALGORITHM, then I don't see, a priori, why charges could not suddenly decide to go a bit in all possible directions as a function of their charge. I can imagine writing myself any algorithm that can do that. But when I physically think of the electric field at a point, I find a natural explanation for this single direction.
I don’t deny that thinking of the electromagnetic field as a tensor sitting at every spacetime point is a powerful visual aid to solving problems in classical electrodynamics. If you only want to use the physics, this is OK. But not if you want to understand it. There just isn’t any way in which one and the same thing can be both a computational tool and a physical entity in its own right. The "classical" habit of transmogrifying computational devices into physical entities is one of the chief reasons why we fail to make sense of the quantum formalism, for in quantum physics the same sleight of hand only produces pseudo-problems and gratuitous solutions.
You also get pseudo-problems in the classical context. Instead of thinking of the electromagnetic field as a tool for calculating the interactions between charges, you think of charges as interacting with the electromagnetic field. How does this interaction work? We have a tool for calculating the interactions between charges, but no tool for calculating the interactions between charges and the electromagnetic field. With the notable exception of Roger Boscovich, a Croatian physicist and philosopher of the 18th Century, nobody seems to have noticed that local action is as unintelligible as the ability of material objects to act where they are not. Why do we stop worrying once we have transmuted the mystery of action at a distance into the mystery of local action? Is this the answer?:
Physicists are, at bottom, a naive breed, forever trying to come to terms with the 'world out there' by methods which, however imaginative and refined, involve in essence the same element of contact as a well-placed kick. (B.S. DeWitt and R.N. Graham, Resource letter IQM-1 on the interpretation of quantum mechanics, AJP 39, pp. 724-38, 1971.)
As an example, let us say that measurement M1 of O takes on the possible outcomes {A,B,C}, with A standing for "O is 1 or 2", B standing for "O is 3 or 4" and C standing for "O is 5 or 6".
Measurement M2 has 6 possible outcomes, {a,b,c,d,e,f}, with a standing for "O is 1", b standing for "O is 2" etc... Now, you want a probability distribution to be assigned to a potential measurement. Fine:
potential measurement M1 of O: p(A) = 0.6, p(B) = 0.4, p(C) = 0.0
Potential measurement M2 of O: p(a) = 0.1, p(b) = 0.1, p(c) = 0.1, p(d)= 0.1, p(e)=0.1, p(f) = 0.5
I have assigned probabilities to the outcomes of measurements M1 and M2. You cannot reproduce this with standard quantum theory, so it is NOT a universal probability-of-potential-compatible-measurements description algorithm.
As I have pointed out, there are additional factors that narrow down the range of possible algorithms. I never claimed that kind of arbitrariness for the quantum-mechanical algorithm.
And if you now say that p(f) = 0.5 together with p(C) = 0.0 is IMPOSSIBLE because "O cannot be at the same time NOT in {5,6} and equal to 6", then you have assigned a measurement-independent reality (ontology) to the quantity O.
But I never say that! I wouldn't even consider O in the M1 context to be the same observable as O in the M2 context. Observables are defined by how they are measured, what the possible outcomes are, and what other measurements are made at the same time.
 
  • #31
Hurkyl said:
That's not how possibility theory works. Evidence theory studies something called a belief measure and a plausibility measure... Of course, this book is not about physical foundations -- it would be talking about subjective probability/possibility, so these comments may not be applicable at all.
I think your hunch is correct. The quantum-mechanical assignments of observable probabilities have nothing to do with belief or plausibility. Let me requote Mermin: "in a non-deterministic world, probability has nothing to do with incomplete knowledge. Quantum mechanics is the first example in human experience where probabilities play an essential role even when there is nothing to be ignorant about."
 
  • #32
So at a certain point, you have to link the formal terms in your mathematical formalism to qualia, to subjective experiences. *This* is the essence of the interpretation of ANY theory, classical, quantum or otherwise. It is why I always insist on the fact that there is no fundamental difference between the "measurement problem" in quantum theory and the one in classical theory, although the POSTULATE that assigns qualia to formal mathematical elements is simpler in classical theory.
My response (http://xxx.lanl.gov/abs/quant-ph/0102103) to d'Espagnat was that his argument for weak objectivity = inter-subjective agreement is a cop-out. (I take it that d'Espagnat's weak objectivity corresponds to what you call solipsism.) My point was that it is our duty as physicists to find what Fuchs and Peres called a "freestanding reality" (which they claim quantum mechanics doesn’t allow). According to d'Espagnat, the elision of the subject is not possible within unadulterated, standard quantum mechanics. I maintain that it is possible. I want a conception of the quantum world to which the conscious subject is as irrelevant as it was to the classical view of the world. It's rather like a game I like to play: let's go find a strongly objective conception of the quantum world that owes nothing to subjects or conscious observations. It is precisely for this reason that I reject the naïve quantum realism that identifies reality with symbols of the mathematical formalism.
this is the point where we seem to differ in opinion: the *hypothesis* (and it will never be anything else, granted) of an objective ontology IS a useful hypothesis.
As you can see, we are in perfect agreement even here.
With an ontological interpretation, there are grounds to reject astrology; in a purely algorithmic concept, no such grounds exist.
While I'm certainly no believer in astrology, what you're saying is that your grounds for rejecting astrology are not scientific but metaphysical. That's not good enough for me.
What you present is a sleight of hand. You DIDN'T present any REASON why the quantum formalism has the form it has, although you seem to claim so.
What I show is that if the quantum formalism didn’t have the form that it does then the familiar objects that surround us couldn’t exist. I pointed out that this is a teleological reason, and you are free to deny that teleological reasons are REASONS. But keep in mind that this is the only possible reason a fundamental physical theory can have. Our difference in opinion is that, for me, a mathematical structure that exists without any reason is not an acceptable reason for the existence of everything else.
But I (think I) understand your viewpoint, which is "minimalistic", and which is the "shut up and calculate" approach.
Absolutely not. I say: stop the naïve transmogrification of mathematical symbols into ontological entities in order to be finally in a position to see the true ontological implications of the quantum formalism.
How do you use the quantum formalism then in the design of measurement apparatus?
As I implied earlier, using physics is not the same as understanding it. Keep in mind that technological applications invariably use approximate laws, the classical laws not being the poorest of them all, and remember Feynman's insistence that "philosophically we are completely wrong with the approximate law" (Feynman's emphasis).
What IS a measurement apparatus? How do you make one? And how do you determine what it measures?
I could certainly answer these questions, but why should I be the first? How do you answer them?
So the position of a particle "exists"? And its momentum "exists"?
If, when, and to the extent that it is measured.
What does that mean, for a particle to have a position and a momentum?
It has a position (or momentum) if, when, and to the extent that its position (or momentum) can be inferred from something that qualifies as a measurement device (see above definition).
Does that mean that my particle IS really there somewhere, and is MOVING in a certain direction?
Nothing is there unless it is indicated by a measurement outcome.
Does this mean that my particle has an ONTOLOGICALLY EXISTING POSITION at any moment in time (because it could POTENTIALLY be measured)?
It has a position if, when, and to the extent that its position is measured. Between measurements (and also beyond the resolution of actual measurements) we can describe the particle only in terms of the probabilities of the possible outcomes of unperformed measurements. The particle isn’t like that "by itself", of course. Nothing can be said without reference to (actual or counterfactual=unperformed) measurements.
But didn't we just give an ONTOLOGICAL EXISTENCE to the wave function then ??
NO WAY!
any physical theory that takes on this special status that "measurements are given" makes it impossible to DESIGN measurement apparatus.
Nonsense.
As designing them is my professional activity, I can attest that it is an annoying feature of a physical theory that I'm not entitled to analyze the physics of a measurement apparatus!
Analyze away to your heart's content! You will be using approximate laws, and you won't be bothered about where the underlying laws come from or what their ontological implications are. You, as a professional magician, don’t need to know how the magic formulas work. You just need to use them. Contrariwise, no amount of ontological wisdom will help you even build a mousetrap.
 
Last edited by a moderator:
  • #33
koantum said:
This is indeed the most general algorithm but it can be narrowed down (via Gleason's theorem) to the conventional Hilbert space formalism. This is shown in J.M. Jauch, Foundations of Quantum Mechanics (Reading, MA: Addison-Wesley, 1968). Also, "compatible" is not defined at will. Once you have the Hilbert space formalism, it is obvious how to define compatibility.

I must have completely misunderstood you then. I thought you wanted to show the *naturalness* of the quantum-mechanical formalism, in the sense that you start by stating that we had it wrong all the way, that physical theories do not describe anything ontological, but are algorithms to compute probabilities of measurements, and that that single assumption is sufficient to arrive at the quantum-mechanical formalism.
In other words, that once we say that a physical theory is an algorithm to arrive at probabilities of measurements, then that the general framework is NECESSARILY the hilbert space formalism.
I thought that that was your whole point, and I tried to point out that this has not only not been demonstrated, but is flatly not true. But apparently this is NOT what you want to say. I'm then at a loss as to WHAT you want to say. You give me a hint here:

However, my first aim is to make quantum mechanics comprehensible to bright kids (something that is sorely needed) rather than to hardened quantum mechanicians (for whom there is little hope anymore), and those kids are as happy with this commonsense requirement as they are astonished by the contextualities that arise when systems are combined or when probabilities are assigned symmetrically with respect to time.

Bright kids are amazing. They still believe what people tell them, because they don't realize they might be smarter than the guy/gal who's in front of them :tongue2:

Seriously, now. Your approach is a valuable approach, as are many others, but I don't think you have made quantum mechanics any _clearer_. I think that an introduction to quantum theory should NOT talk about these issues, and should limit itself to a statement that there ARE issues, but that these issues can only reasonably be discussed once one understands the formalism. I think that anyone FORCING a particular view upon the novice is not doing the novice any service.

As you see, I think I'm relatively well versed in quantum theory, and I don't completely agree with your view (although I can respect it, on the condition that you can be open-minded to my view too). So you should leave that possibility open to your public too, no?

My second aim is to find the simplest set of laws that permits the existence of "ordinary" objects, and therefore I require non-contextuality wherever it is possible at all. Nature appears to take the same approach.

Ha, the simplest set of laws, to me, would be an overall probability distribution (hidden variable approach). THAT is intuitively understandable, this is what Einstein thought should be done, and this is, for instance, what Bohmians insist upon. This is the simplest and most intuitive approach to the introduction of "ordinary" objects, no?

Sorry if I gave the wrong impression. Not a "general scheme, period" but a general scheme for dealing with the objectively fuzzy observables that we need if we want to have "ordinary" objects. We started out with a discussion of objective probabilities, which certainly raises lots of questions. To be able to answer these questions consistently, I have to repudiate more than one accepted prejudice about quantum mechanics.

Don't you think that a Kolmogorov overall probability distribution over all potential measurement outcomes is the most obvious "general scheme for dealing with the objectively fuzzy observables"? And then you end up for sure with something like Bohmian mechanics, no?

Whereas non-contextuality is implied by an ontology of self-existent positions (or values of whatever kind), it doesn’t imply such an ontology.

As I said before, NOTHING implies any ontology. An ontology is a mental concept; it is a working hypothesis. This follows from the non-falsifiability of solipsism. Nothing implies any dynamics either. There could be a great lookup table in which all past, present and future events are written down, and we're just scrolling down the lookup table. Any systematics discovered in that lookup table, which we take for "laws of nature", is also a working hypothesis which is not implied.
But these considerations do not lead us anywhere.

Have you now turned from an Everettian into a Bohmian?

As you can see, I do have some sympathy for the Bohmian viewpoint too, but I was hoping you realized that my examples of rulers and so on were taken in a classical context. I wanted to indicate that if you have postulated an ontological concept from which you DERIVE observations, this is more helpful than sticking to the observations themselves, and that such an ontology makes certain aspects, such as the relationship between different kinds of observations, more obvious.
We could apply your concept also to the classical world, and say that "matter points in space" and so on are just algorithmic elements from which we calculate probabilities, or in this case, certainties of observations. But if you take that viewpoint, it is hard to see why one could not modify the algorithm a little bit and make the observations contextually dependent (so that there is no relationship between the position measurement with a ruler with 1 mm resolution and one with 0.1 mm resolution). If, on the contrary, you make the hypothesis of an existing ontology, which, in the classical context, is to posit that there REALLY IS a particle at a certain point in space, then the relationship between the reading on the 1 mm ruler and the 0.1 mm ruler is evident: you're measuring twice the same ontological quantity, the "position of the particle".
So, in a classical context, your approach of claiming that we should only look at an algorithm that relates outcomes of measurements, and not think of anything ontological behind it, is counterproductive.

How come you seem to be all praise for intuitive concepts when a few moments ago you spurned them? And how is it that "ruler says position 5.4cm" is hard to make sense of for non-Bohmians? I find statements about self-existing positions or "regions of space" harder to make sense of.

In a classical context?? You have difficulties imagining that there is a Euclidean space in classical physics?

Again, I was talking about the classical version. But you seemed to imply that there was also a kind of "existence" to POTENTIAL outcomes of measurement in the quantum case: it was a "fuzzy" variable, but as I understood, it DID exist, somehow. I had the impression you said that there WAS a position, even unmeasured, but that it was not a real number, but a "fuzzy variable".

Now, I take the position that there is no such thing as a "fuzzy position" as such, but that there REALLY is a wavefunction. As there IS a matter point in Euclidean space in classical physics, there IS a wavefunction in quantum physics. This is a simplifying ontological hypothesis, as was the point in Euclidean space, no?
A measurement apparatus ALSO has a wavefunction, and a measurement is nothing else but an interaction acting on the overall wavefunction of the measurement apparatus and the system; this changes the part of the wavefunction that is representing the measurement apparatus. What's wrong with that? As the measurement apparatus' wavefunction is now usually in a superposition of different states, of which you happen to see one, this explains your observation. What's wrong with that? At no point did I need to introduce the concept of a "potential measurement which I didn't perform", as you need to do. I just reckon that, when I DO perform a measurement, this is the result of an interaction (just as any other interaction, btw), which puts my measurement apparatus' wavefunction in a superposition of different outcomes, of which I see one. And I don't have to say what "would" happen in a measurement that I DIDN'T perform.
I have to say that I find this viewpoint so closely related to the formal statements of quantum theory that I wonder why it meets so much resistance, and why people need to invent strange things such as "fuzzy potential measurement results" and the like.
Well, ok, I know why. It is the idea that "your measurement apparatus can be in a superposition but you only see one term of it"; we're not used to thinking that there may be things "existing" which we don't "see". I agree that this has some strangeness to it, but, when considering the alternatives, I find it the least of all difficulties, and not at all conceptually destabilizing, on the contrary. The entire difficulty of quantum theory resides simply in the extra requirement that only what we see of "ordinary" objects exists.

When we come to the non-contextuality requirement, I ask my students to assume that p(C)=1, 0<p(A)<1, and 0<p(B)<1. (Recall: A and B are disjoint regions, C is their union, and p(C) is the probability of finding the particle in C if the appropriate measurement is made.) Then I ask: since neither of the detectors monitoring A and B, respectively, is certain to click, how come it is certain that either of them will click? The likely answer: "So what? If p(C)=1 then the particle is in C, and if it isn’t in A (no click), then it is in B (click)." Economy of concept but wrong!
At this point the students are well aware that (paraphrasing Wheeler) no property is a possessed property unless it is a measured property. They have discussed several experiments (Mermin's "simplest version" of Bell's theorem, the experiments of Hardy, GHZ, and ESW) all of which illustrate that assuming self-existent values leads to contradictions. So I ask them again: how come either counter will click if neither counter is certain to click? Bafflement.

Of course, bafflement, because you make the (IMO) erroneous implicit assumption of measurements of "existing" or "non-existing" quantities. But "the position of a particle" as a "potential measurement outcome" has no meaning in a quantum context. THIS is the trap.

Isn't a simpler answer: the system is in state (|A> + |B>)/sqrt2; the detector at A, D1, interacts in the following way with this state:

|D1-0> |A> --> |D1-click> |A>
|D1-0> |B> --> |D1-noclick> |B>

D2 (detector at B) interacts in the following way with the same state:

|D2-0>|A> --> |D2-noclick> |A>
|D2-0>|B> --> |D2-click>|B>

Both together:

Initial state: |D1-0>|D2-0>(|A> + |B>)/sqrt2

--> (using linearity of the evolution operator)

(|D1-click>|D2-noclick>|A> + |D1-noclick>|D2-click>|B>)/sqrt2

There are two terms, of which you are going to observe one:
the first one is |D1-click>|D2-noclick> and the second one is |D1-noclick>|D2-click>, which you pick using the Born rule (that's the famous link between conscious observation and physical ontology).
Each branch has, according to that Born rule, a probability of 1/2 to be experienced by you.

So you have one "branch" or "world" or whatever, where you observe that D1 clicked and D2 didn't, and you have another one where D1 didn't click and D2 did. You don't have a world where D1 and D2 did click, or didn't click, so that's not an observational possibility.
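
For the sceptical reader, this bookkeeping is easy to check numerically. Below is a minimal Python/numpy sketch (my own illustration, not part of the thread; the encoding of the pointer states as basis vectors is an arbitrary choice):

import numpy as np

ketA, ketB = np.eye(2)            # particle states |A>, |B>
d0, dclick, dnoclick = np.eye(3)  # pointer states |D-0>, |D-click>, |D-noclick>

# Initial state: |D1-0>|D2-0>(|A>+|B>)/sqrt2
psi0 = np.kron(np.kron(d0, d0), (ketA + ketB) / np.sqrt(2))

# Linear evolution: |D1-0>|D2-0>|A> -> |D1-click>|D2-noclick>|A>, and
#                   |D1-0>|D2-0>|B> -> |D1-noclick>|D2-click>|B>
branch1 = np.kron(np.kron(dclick, dnoclick), ketA)
branch2 = np.kron(np.kron(dnoclick, dclick), ketB)
psi = (branch1 + branch2) / np.sqrt(2)

# Born weights of the two branches:
print(abs(branch1 @ psi)**2, abs(branch2 @ psi)**2)  # 0.5 0.5

The two branch weights come out as 1/2 each, and there is no branch at all in which both detectors click or neither does, exactly as stated above.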

No bafflement.

Interference? No problem.

DA is a detector after the two slits, placed at the position of a peak in the interference pattern.
It hence evolves according to:

|DA-0> (|A> + |B>) ---> |DA-click> (|A>+|B>)

|DA-0> (|A> - |B>) ---> |DA-noclick> (|A> - |B>)

Now, the first case: D1 and D2 are not present, so we have the first line. The only "branch" that is present contains |DA-click>, so it always clicks.

The second case: D1 and D2 are ALSO present (the typical case where one tries to find out through which slit the particle went).

We had, after the interaction of the particle with D1 and D2, but before it hits DA:

|DA-0> (|D1-click>|D2-noclick>|A> + |D1-noclick>|D2-click>|B>)/sqrt2

Now we're going to interact with DA. By linearity (the superposition principle), we can write the action of DA on |A>:

|DA-0> |A> ---> (|DA-click> (|A>+|B>)+ |DA-noclick>(|A>-|B>)) /2

and:

|DA-0> |B> --> (|DA-click>(|A>+|B>) - |DA-noclick>(|A>-|B>))/2

So this gives us:

(|D1-click>|D2-noclick>(|DA-click> (|A>+|B>)+ |DA-noclick>(|A>-|B>)) /2 + |D1-noclick>|D2-click>(|DA-click>(|A>+|B>) - |DA-noclick>(|A>-|B>))/2 )/sqrt2

If we expand this, we obtain:

1/sqrt8 {
|D1-click>|D2-noclick>|DA-click>(|A>+|B>)
+ |D1-click>|D2-noclick>|DA-noclick>(|A>-|B>)
+ |D1-noclick>|D2-click>|DA-click>(|A>+|B>)
- |D1-noclick>|D2-click>|DA-noclick>(|A>-|B>)
}

There are 4 branches, of which you will experience one, using the Born rule:
1/4 probability that you experience D1 clicking, D2 not clicking, and DA clicking;
1/4 probability that you experience D1 clicking, D2 not clicking, and DA not clicking;
1/4 probability that you experience D1 not clicking, D2 clicking, and DA clicking;
1/4 probability that you experience D1 not clicking, D2 clicking, and DA not clicking.

So exactly one of D1 and D2 always clicks, and DA has one chance out of two to click.
We could naively, and wrongly, conclude from this that the particle "went" through one of the two slits.

All observational facts are explained this way. There's no "ambiguity" or "fuzziness" as to the state of the system: it always has a clearly defined wavefunction, and so do the measurement apparatuses.
There's no "bafflement" concerning the apparent clash between the "position" of the particle and the interference pattern.
Note also that it wasn't necessary to introduce an "unavoidable disturbance" due to the measurement at the slits to make the interference pattern "disappear".
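
This, too, can be checked numerically. The following Python/numpy sketch (again my own illustration, with the same arbitrary pointer-state encoding as before) implements the two DA rules above and reproduces both cases:

import numpy as np

ketA, ketB = np.eye(2)
d0, dclick, dnoclick = np.eye(3)
plus  = (ketA + ketB) / np.sqrt(2)   # the component on which DA clicks
minus = (ketA - ketB) / np.sqrt(2)   # the component on which DA stays silent

def apply_DA(psi_particle):
    # Linear action of DA on |DA-0> x psi_particle, per the two rules above.
    a_plus  = plus  @ psi_particle
    a_minus = minus @ psi_particle
    return a_plus * np.kron(dclick, plus) + a_minus * np.kron(dnoclick, minus)

# Case 1: no which-path detectors; DA clicks with certainty.
out = apply_DA(plus)
p_click = sum(abs(np.kron(dclick, e) @ out)**2 for e in (ketA, ketB))
print(p_click)  # 1.0

# Case 2: D1/D2 present; tensor their pointer labels onto each branch.
branch_A = np.kron(np.kron(dclick, dnoclick), apply_DA(ketA)) / np.sqrt(2)
branch_B = np.kron(np.kron(dnoclick, dclick), apply_DA(ketB)) / np.sqrt(2)
psi = branch_A + branch_B

# Total Born weight of all branches in which DA clicked: 1/2.
p_click = sum(abs(np.kron(np.kron(x, y), np.kron(dclick, e)) @ psi)**2
              for x in (dclick, dnoclick) for y in (dclick, dnoclick)
              for e in (ketA, ketB))
print(p_click)  # 0.5

The four branch weights of 1/4 each, and the disappearance of the interference once the which-path information sits in the D1/D2 pointer states, both drop out of the same linear evolution.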

Actually the answer is elementary, for implicit in every quantum-mechanical probability assignment is the assumption that a measurement is made. It is always taken for granted that the probabilities of the possible outcomes add up to 1. There is therefore no need to explain this. But there is a lesson here: not even probability 1 is sufficient for "is" or "has". P(C)=1 does not mean that the particle is in C but only that it is certain to be found in C provided that the appropriate measurement is made.

Entirely correct. This is because there IS no such thing as a "potential position measurement result" ontology.

Farewell to Einstein's "elements of reality". Farewell to van Fraassen's eigenstate-eigenvalue link.

Well, Einstein's elements of reality are simply the wavefunction itself, and everything becomes clear, no? The error is to think that there is some reality to "potential measurement outcomes".

You say "there IS a particle". What does this mean? It means there is a conservation law (only in non-relativistic quantum mechanics, though) which tells us that every time we make a position measurement exactly one detector clicks. If every time exactly two detectors click, we say there are two particles.

No, my example was taken from classical physics.
See the above for the view on the quantum version: a "potential position measurement" has no meaning there. An interaction with a measurement apparatus does, and so does the wavefunction.

I don’t deny that thinking of the electromagnetic field as a tensor sitting at every spacetime point is a powerful visual aid to solving problems in classical electrodynamics. If you only want to use the physics, this is OK. But not if you want to understand it. There just isn’t any way in which one and the same thing can be both a computational tool and a physical entity in its own right.

This is a strange statement, because I'm convinced of the opposite. To me, the fundamental dogma of physics is the assumption that all of nature IS a mathematical structure (or, if you want, maps perfectly onto a mathematical structure). It's up to us to discover that structure. It's a Platonic view of things.

The "classical" habit of transmogrifying computational devices into physical entities is one of the chief reasons why we fail to make sense of the quantum formalism, for in quantum physics the same sleight of hand only produces pseudo-problems and gratuitous solutions.

No, I don't think so. I think what really creates all these pseudo-problems is our insistence that "what we see is (only) what is there", instead of "what we see can be derived from what is there". The naive-realism view.

You also get pseudo-problems in the classical context. Instead of thinking of the electromagnetic field as a tool for calculating the interactions between charges, you think of charges as interacting with the electromagnetic field. How does this interaction work? We have a tool for calculating the interactions between charges, but no tool for calculating the interactions between charges and the electromagnetic field.

I don't follow what you're talking about. We have no tool for calculating the interactions between charges and the EM field?


Physicists are, at bottom, a naive breed, forever trying to come to terms with the 'world out there' by methods which, however imaginative and refined, involve in essence the same element of contact as a well-placed kick. (B.S. DeWitt and R.N. Graham, Resource letter IQM-1 on the interpretation of quantum mechanics, AJP 39, pp. 724-38, 1971.)

Indeed, "naive realism"!
 
  • #34
koantum said:
(I take it that d'Espagnat's weak objectivity corresponds to what you call solipsism.

Not at all. Solipsism is the denial of the existence of an objective ontology, together with the idea that yours is the one and only subjective experience; in other words, that everything you have ever sensed is nothing but an illusion of a subjective experience. Your body doesn't exist, your brain doesn't exist, the world doesn't exist; only your subjective experience exists.
This is as undeniable a possibility as it is useless as a working hypothesis.

) My point was that it is our duty as physicists to find what Fuchs and Peres called a "freestanding reality" (which they claim quantum mechanics doesn’t allow). According to d'Espagnat, the elision of the subject is not possible within unadulterated, standard quantum mechanics. I maintain that it is possible. I want a conception of the quantum world to which the conscious subject is as irrelevant as it was to the classical view of the world. It's rather like a game I like to play: let's go find a strongly objective conception of the quantum world that owes nothing to subjects or conscious observations. It is precisely for this reason that I reject the naïve quantum realism that identifies reality with symbols of the mathematical formalism.

Well, unless I misunderstood you, I don't see how you are constructing a conception of the quantum world which is strongly objective if you START by saying that we only have an algorithm, and no description!
(Or I must have seriously misunderstood you.)

While I'm certainly no believer in astrology, what you're saying is that your grounds for rejecting astrology are not scientific but metaphysical. That's not good enough for me.

It is very difficult to reject astrology *empirically* (given the usual vagueness of the terms used, and the complexity of the subjects addressed, such as your happiness in love or something).

What I show is that if the quantum formalism didn’t have the form that it does then the familiar objects that surround us couldn’t exist.

Ahem. Well, you then consider quantum theory solidly PROVEN beyond doubt, and by pure reasoning?? And what if, one day, quantum theory is falsified? Do familiar objects then disappear in a puff of logic?

Our difference in opinion is that, for me, a mathematical structure that exists without any reason is not an acceptable reason for the existence of everything else.

So you seem to claim that, from the pure observation of the existence of ordinary objects, the ONE AND ONLY POSSIBLE PHYSICAL THEORY that makes logical sense is quantum theory? No need for any empirical input then? If only we had been thinking harder, it would have been OBVIOUS that quantum theory is the ultimate correct theory?

and remember Feynman's insistence that "philosophically we are completely wrong with the approximate law" (Feynman's emphasis).

But of course we are completely "wrong". I'd bet that even today we are "completely wrong", and that 500 or 1000 years from now quantum theory will be an old and forgotten theory (except maybe for simplified calculations, as classical physics is today). Quantum theory being the current paradigm, it is only waiting to be falsified, no? And to be replaced by something else, which will then also be falsified. Or then maybe not. So of course the metaphysical, ontological picture suggested by our current theories is "completely wrong", and so will be the next one, and so on. In other words, we will NEVER know what is "really out there" (IF there even is such a thing, cf. solipsism). We will always be wrong. But we will have more and more refined mental pictures (= ontologies) of nature.

But that doesn't mean that, in the meantime, we should not build up an ontological picture of what we have, now, today, in order to make sense of it. With a formalism comes an ontology. You change the formalism, you change the ontology. You work in classical physics: take a classical ontology. You use quantum theory: take the ontology that goes with it. But try to force the ontology of one formalism onto another, and you get into trouble. Try to force the ontology of classical physics onto quantum theory, and you create a whole lot of pseudo-problems. The rule: the fundamental formal elements of a theory dictate the ontologically existing elements according to that theory. That gives you the most useful mental picture to work with and to develop an intuition for.
 
  • #35
The quantum-mechanical assignments of observable probabilities have nothing to do with belief or plausibility. Let me requote Mermin: "in a non-deterministic world, probability has nothing to do with incomplete knowledge. Quantum mechanics is the first example in human experience where probabilities play an essential role even when there is nothing to be ignorant about."
They're just names, and you shouldn't read things into them -- just as the "rational numbers" are not somehow more logical than the "irrational numbers", and the "real numbers" are no more real than the "imaginary numbers".

There's no evident reason why the underlying physical measure should be a probability measure -- why isn't it possible, for example, for

P(particle in (0, 2))

to be bigger than

P(particle in (0, 1)) + P(particle in (1, 2))

? Or maybe there isn't any sort of fundamental measure on the outcome space at all?
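
For what it's worth, within the standard formalism position probabilities ARE additive over disjoint regions by construction, since they are integrals of one and the same |psi|^2. A quick numerical check in Python (my own sketch; the Gaussian packet is an arbitrary choice for concreteness):

import numpy as np
from scipy.integrate import quad

def prob(a, b, x0=0.5, sigma=1.0):
    # P(particle in (a,b)) for a normalized Gaussian |psi|^2 centered at x0.
    density = lambda x: np.exp(-(x - x0)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return quad(density, a, b)[0]

print(prob(0, 2), prob(0, 1) + prob(1, 2))  # equal, up to integration error

So the question is really why the fundamental measure should have this additive form in the first place, not whether the formalism obeys it.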


At the moment, in a non-MWI type interpretation, I see no possible theoretical or intuitive justification for the use of probabilities. If I understand my history correctly, we actually have the following:

(1) We use subjective probabilities in classical physics.
(2) Quantum came along, and we used it to compute probabilities.
(3) We failed to come up with a classical interpretation of QM.
(4) So, we promote probabilities to a fundamental status in QM.

so we actually have quite the opposite -- probabilities achieved a fundamental status in QM because QM was doing a good job predicting the outcomes of our frequency-counting experiments... not because there was some theoretical or intuitive reason for it.


In MWI, though, there is at least the possibility of deriving probabilities as emergent phenomena, by considering a limit of the resulting states of frequency-counting experiments of increasing length.
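
A toy version of that limit can be sketched in Python (my own illustration; it assumes independent repetitions with Born weight p per click, so the branch weights are binomial). The total Born weight of the branches whose relative click frequency lies within eps of p tends to 1 as the number of repetitions grows:

import numpy as np
from scipy.stats import binom

p, eps = 0.5, 0.05
for N in (10, 100, 1000, 10000):
    ks = np.arange(N + 1)
    weights = binom.pmf(ks, N, p)        # Born weight of the k-click branches
    near = np.abs(ks / N - p) <= eps     # branches with frequency close to p
    print(N, weights[near].sum())        # tends to 1 as N grows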
 

What is a Pure State?

A pure state is a state in quantum mechanics that is completely described by a single state vector. Note that "pure" does not mean "no superposition": a pure state can perfectly well be a superposition with respect to some basis; what is definite is the state vector itself.

What is a Mixed State?

A mixed state is a state in quantum mechanics that describes a statistical ensemble: the system is in one of several possible pure states, each with a certain classical probability. Such a system is not described by a single state vector.

What is the difference between a Pure State and Mixed State?

The main difference between a pure state and a mixed state is the completeness of the description. A pure state represents maximal knowledge of the system, while a mixed state additionally encodes classical ignorance about which pure state the system is in.

How are Pure States and Mixed States represented mathematically?

Pure states are represented by a single state vector in quantum mechanics, while mixed states are represented by a density matrix. The state vector encodes the probability amplitudes for the possible measurement outcomes, while the density matrix encodes both the classical mixing probabilities and the remaining quantum coherences.
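
As a small illustration of these two representations (my own sketch, not part of the page), one can build both in numpy and apply the standard purity test: Tr(rho^2) equals 1 exactly for pure states and is smaller than 1 for mixed ones.

import numpy as np

up, down = np.eye(2)                  # basis states of a two-level system

# Pure state: a single state vector (here an equal superposition).
psi = (up + down) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())

# Mixed state: 50% |up>, 50% |down>, as a weighted sum of projectors.
rho_mixed = 0.5 * np.outer(up, up) + 0.5 * np.outer(down, down)

print(np.trace(rho_pure @ rho_pure).real)   # 1.0 -> pure
print(np.trace(rho_mixed @ rho_mixed).real) # 0.5 -> mixed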

What is the physical significance of Pure States and Mixed States?

Pure states and mixed states have important implications in quantum mechanics, as they describe the fundamental nature of quantum systems. Pure states are associated with maximal information about a system, while mixed states reflect an additional, classical layer of uncertainty on top of the intrinsically probabilistic nature of quantum measurement.
