What is a Pure State and Mixed State?

touqra
What is a pure state and a mixed state?
 
A pure state is one that can be represented by a vector in a Hilbert space. A mixed state is one that cannot: it must be represented by a statistical mixture of pure states.
 
touqra said:
What is a pure state and a mixed state?

As Doc Al said, a pure quantum state is, well, a quantum state (an element of Hilbert space). A mixed state is a statistical mixture of pure states. You can compare this with the situation in classical mechanics: a "pure state" would be a point in phase space, while a "mixture" would be a statistical distribution over phase space (given by a probability density over phase space).

However, there's an extra weirdness in the case of quantum theory. In classical mechanics, a probability density on phase space assigns a unique probability to each individual phase-space point, and two different probability densities are experimentally distinguishable: given an ensemble of systems described by a probability density, it is in principle possible to extract that density by doing a complete measurement of the phase-space point of each system and histogramming the outcomes over phase space. In quantum theory, by contrast, DIFFERENT ensembles - different probability mixtures over different pure states - can give rise to IDENTICAL mixed states, which are experimentally indistinguishable.
This has its origin in the probabilistic aspect of quantum measurements, where two probability measures get mixed up: the probability of an outcome due to quantum randomness (for a pure state), and the probability, within the mixture, of being a certain pure state.

As an example, consider a spin-1/2 system.

Pure states are, for example: |z+>,
or |z->
or |x+>
or |x->
or...

These are elements of the 2-dimensional Hilbert space describing the system.

A mixture can be described, a priori, by, well, a mixture of pure states, such as: 30% |x+>, 60% |z-> and 10% |y+>. But this decomposition is not unique:

The mixture:
50% |z+> and 50% |z->

is experimentally indistinguishable, for instance, from the mixture:

50% |x+> and 50% |x->

Mixtures are correctly described by a density matrix, rho.

If a mixture is made up of a fraction p_a of state |a>, a fraction p_b of state |b> and a fraction p_c of state |c>, then:

rho = p_a |a><a| + p_b |b><b| + p_c |c><c|

A measurable quantity A will have its expectation value:

<A> = Tr(A rho)

As such, different mixtures with identical rho are experimentally identical.

Some claim therefore that the true quantum state of a system is given by rho, and not by an element of Hilbert space. However, this leads to other problems...
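To make the point above concrete, here is a small numerical sketch (Python with numpy; the kets are written as 2-component vectors in the z-basis, and the variable names are mine, not from the posts). It checks that the 50% |z+> + 50% |z-> mixture and the 50% |x+> + 50% |x-> mixture yield the same density matrix, and evaluates an expectation value via <A> = Tr(A rho):

```python
import numpy as np

def proj(v):
    # projector |v><v| onto a normalized state vector v
    return np.outer(v, v.conj())

# spin-1/2 kets in the z-basis
z_plus  = np.array([1, 0], dtype=complex)
z_minus = np.array([0, 1], dtype=complex)
x_plus  = np.array([1, 1], dtype=complex) / np.sqrt(2)
x_minus = np.array([1, -1], dtype=complex) / np.sqrt(2)

# two a priori different mixtures
rho_z = 0.5 * proj(z_plus) + 0.5 * proj(z_minus)
rho_x = 0.5 * proj(x_plus) + 0.5 * proj(x_minus)
print(np.allclose(rho_z, rho_x))   # True: both equal I/2

# expectation value of sigma_z via <A> = Tr(A rho)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
print(np.trace(sz @ rho_z).real)   # 0.0
```

Both mixtures give rho = I/2, the maximally mixed state, so no measurement whatsoever can tell them apart.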
 
touqra said:
What is a pure state and a mixed state?

First of all: what is a state? It's a probability algorithm. We use it to assign probabilities to possible measurement outcomes on the basis of actual measurement outcomes (usually called "preparations"). A measurement is complete if it yields the maximum possible amount of information about the system at hand. A state is pure if it assigns probabilities on the basis of the outcome of a complete measurement. Otherwise it is mixed.
 
koantum said:
First of all: what is a state? It's a probability algorithm. We use it to assign probabilities to possible measurement outcomes on the basis of actual measurement outcomes (usually called "preparations"). A measurement is complete if it yields the maximum possible amount of information about the system at hand. A state is pure if it assigns probabilities on the basis of the outcome of a complete measurement. Otherwise it is mixed.

What you write is a correct view from the "information" (epistemological) point of view. Personally, I like to see something more than just a statement about knowledge, but I agree that this is a possible viewpoint which is endorsed by some.
In that viewpoint, the only "state" we talk about, is a state of our knowledge about nature, and not an ontological state of nature.
 
vanesch said:
What you write is a correct view from the "information" (epistemological) point of view. Personally, I like to see something more than just a statement about knowledge, but I agree that this is a possible viewpoint which is endorsed by some. In that viewpoint, the only "state" we talk about, is a state of our knowledge about nature, and not an ontological state of nature.

Dear vanesh,

I am with you in your expectation to see more than statements about knowledge. There is no denying, however, that the quantum formalism is a probability algorithm. Whereas this formalism is utterly incomprehensible as a one-to-one representation of the real world, it is almost self-evident if we think of it as a tool for describing the objective fuzziness of the quantum world.

Almost the first thing people came to understand through quantum mechanics was the stability of atoms and objects composed of atoms: it rests on the objective fuzziness of their internal relative positions and momenta. (The literal meaning of Heisenberg's term "Unschärfe" is not "uncertainty" but "fuzziness".)

What is the proper (mathematically rigorous and philosophically sound) way of dealing with a fuzzy observable? It is to assign probabilities to the possible outcomes of a measurement of this observable. But if the quantum-mechanical probability assignments serve to describe an objective fuzziness, then they are assignments of objective probabilities.

So the fact that quantum mechanics deals with probabilities does not imply that it is an epistemic theory. If it deals with objective probabilities, then it is an ontological theory.
 
touqra said:
What is a pure state and a mixed state?

A pure state: it has a simple mathematical meaning, namely a point in the projective Hilbert space of the system, or, if you prefer, a one-dimensional linear subspace (a.k.a. unit ray, or simply ray, if there's no room for confusion) in the Hilbert space associated to any quantum system.

A mixed state: well, if you read any statistical physics course under the title "Virtual statistical ensembles in quantum statistics" you'll get a very good idea on it.

BTW, the von Neumann formulation of QM allows the most natural description of mixed states...

Daniel.
 
koantum said:
What is the proper (mathematically rigorous and philosophically sound) way of dealing with a fuzzy observable? It is to assign probabilities to the possible outcomes of a measurement of this observable. But if the quantum-mechanical probability assignments serve to describe an objective fuzziness, then they are assignments of objective probabilities.

So the fact that quantum mechanics deals with probabilities does not imply that it is an epistemic theory. If it deals with objective probabilities, then it is an ontological theory.

There's a hitch with this view, because it would imply that there is a set of observables (spanning a phase space) over which quantum theory generates a Kolmogorov probability distribution, thus fixing entirely the probabilities of the outcomes of all POTENTIAL measurements.
And we know that this cannot be done: we can only generate a Kolmogorov probability distribution for a set of COMPATIBLE measurements.

The closest one can come is something like the Wigner quasidistribution:
http://en.wikipedia.org/wiki/Wigner_quasi-probability_distribution

cheers,
Patrick.
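As an illustration of why the Wigner quasi-distribution cannot be read as an ordinary (Kolmogorov) probability density: it takes negative values for non-classical states. A sketch using the standard closed form for the one-photon Fock state, W_1(x,p) = -(1/pi) exp(-(x^2+p^2)) (1 - 2(x^2+p^2)) with hbar = 1 (the function name and units convention are mine):

```python
import numpy as np

def wigner_fock1(x, p):
    # Wigner function of the n=1 Fock state (hbar = 1):
    # W_1(x, p) = -(1/pi) * exp(-(x^2 + p^2)) * (1 - 2*(x^2 + p^2))
    r2 = x**2 + p**2
    return -(1.0 / np.pi) * np.exp(-r2) * (1.0 - 2.0 * r2)

# negative at the origin, so it cannot be a genuine probability density
print(wigner_fock1(0.0, 0.0))   # -1/pi, about -0.3183
```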
 
dextercioby said:
A pure state: it has a simple mathematical meaning, namely a point in the projective Hilbert space of the system, or, if you prefer, a unidimensional linear subspace (a.k.a. unit ray, or simply ray, if there's no room for confusions) in the Hilbert space associated to any quantum system.
A mixed state: well, if you read any statistical physics course under the title "Virtual statistical ensembles in quantum statistics" you'll get a very good idea on it.
There is no need to read a statistical physics course. Quantum mechanics represents the possible outcomes to which its algorithms assign probabilities by the subspaces of a vector space, it represents its pure probability algorithms by 1-dimensional subspaces of the same vector space, and it represents its mixed algorithms by probability distributions over pure algorithms. Hence the name "mixed".
 
vanesch said:
There's a hitch with this view, because it would imply that there is a set of observables (spanning a phase space) over which quantum theory generates a Kolmogorov probability distribution, thus fixing entirely the probabilities of the outcomes of all POTENTIAL measurements.
Please explain how this would imply what you think it implies. State your assumptions so that I can point out either that they are wrong or that I do not share them.
 
koantum said:
Please explain how this would imply what you think it implies. State your assumptions so that I can point out either that they are wrong or that I do not share them.

Well, you are correct in stating that, given a wavefunction or a mixed state, AND GIVEN A CHOICE OF COMMUTING OBSERVABLES, the wavefunction/density matrix generates a probability distribution over the set of these observables. As such, one might say - as you do - that these variables are "fuzzy" quantities, correctly described by the generated probability function.

However, if I make ANOTHER choice of commuting observables, which is not compatible with the previous set, I will compute a different probability distribution for these new observables. No problem as of yet.

But what doesn't always work is to consider the UNION of these two sets of observables and require that there is an overall probability distribution describing this union. As such, one cannot say that an observable itself "has" a probability distribution, independent of whether we were going to include it in our set of commuting observables. This is what is indicated by the non-positivity of the Wigner quasi-distributions.

The typical example is of course a Bell state |+>|-> - |->|+>, where we consider spin observables along 3 well-chosen directions on each side of the experiment. Let us call them A, B, C (one side) and U, V, W (the other); each of them can have a result +1 or -1. There is NO joint probability distribution P(A,B,C,U,V,W) over the 64 different possibilities for A,B,C,U,V,W that corresponds to the quantum predictions - that's the essence of Bell's theorem, in fact: if this distribution existed, a common hidden variable thus distributed could generate the outcomes.
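This claim can be checked numerically in a two-settings-per-side variant of the same point (a sketch in Python/numpy; the angles and variable names are my choices, not from the post). It computes the singlet-state correlations E(a,b) = <psi| (sigma.a x sigma.b) |psi> and evaluates the CHSH combination, which exceeds the bound of 2 that any joint distribution over all four observables would have to satisfy:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# singlet state (|01> - |10>)/sqrt(2) in the basis |00>, |01>, |10>, |11>
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def spin(theta):
    # spin observable along angle theta in the x-z plane
    return np.cos(theta) * sz + np.sin(theta) * sx

def E(a, b):
    # quantum correlation <psi| (sigma.a x sigma.b) |psi>
    O = np.kron(spin(a), spin(b))
    return np.real(psi.conj() @ O @ psi)

# standard CHSH settings: two directions per side
a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(S))   # 2*sqrt(2), about 2.828 > 2: no joint distribution reproduces this
```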

So in that sense, I wanted to argue that it is not possible to claim that every POTENTIAL observable is a "fuzzy quantity" correctly described by a probability distribution - which, I assumed, would have to exist independently of the SET of (commuting) observables that we are going to select for the experiment.
 
vanesch said:
Some claim therefore that the true quantum state of a system is given by rho, and not by an element of Hilbert space. However, this leads to other problems...

Would you mind expanding the dots?

An interested reader.

Carl
 
vanesch said:
Well, you are correct in stating that, given a wavefunction or a mixed state, AND GIVEN A CHOICE OF COMMUTING OBSERVABLES, the wavefunction/density matrix generates a probability distribution over the set of these observables. As such, one might say - as you do - that these variables are "fuzzy" quantities, correctly described by the generated probability function.

However, if I make ANOTHER choice of commuting observables, which is not compatible with the previous set, I will compute a different probability distribution for these new observables. No problem as of yet.

But what doesn't always work is to consider the UNION of these two sets of observables and require that there is an overall probability distribution describing this union. As such, one cannot say that an observable itself "has" a probability distribution, independent of whether we were going to include it in our set of commuting observables...
Dear vanesh,

Thank you for your detailed response. Now it's my turn to add some flesh to my earlier post about objective probabilities.

It is indeed not possible to consistently define an overall probability distribution for a set of non-commuting observables. If one attributes a probability distribution to a set of observables, then one makes the (implicit) assumption that these observables can be measured simultaneously, and this is not possible for non-commuting observables. (In fact, every quantum-mechanical probability assignment implicitly assumes that the corresponding measurement not only can be but is made. Outside any measurement context, quantum-mechanical probabilities are simply meaningless.) I further admit that my all too brief post may have suggested the opposite: that I believe it makes sense to objectify quantum-mechanical probabilities outside measurement contexts. What could make people jump to this erroneous conclusion is the popular misconception that reference to measurements is the same as reference to observers.

There are basically two kinds of interpretation: those that acknowledge the central role played by measurements in standard axiomatizations of quantum mechanics, and those that try to sweep it under the rug. As a referee of a philosophy-of-science journal once put it to me, "to solve [the measurement problem] means to design an interpretation in which measurement processes are not different in principle from ordinary physical interactions." To my way of thinking, this definition of "solving the measurement problem" is the reason why as yet no sensible solution has been found. Those who acknowledge the importance of measurements, on the other hand, appear to think of probabilities as inherently subjective and therefore cannot comprehend the meaning of objective probabilities.

Yet it should be perfectly obvious that quantum-mechanical probabilities cannot be subjective. Subjective (that is, ignorance) probabilities disappear when all relevant facts are taken into account (which in many cases is practically impossible). The uncertainty principle, however, guarantees that quantum-mechanical probabilities cannot be made to disappear. As Mermin (http://arxiv.org/abs/quant-ph/9801057) put it, "in a non-deterministic world, probability has nothing to do with incomplete knowledge. Quantum mechanics is the first example in human experience where probabilities play an essential role even when there is nothing to be ignorant about." Mermin in fact believes that the mysteries of quantum mechanics can be reduced to the single puzzle posed by the existence of objective probabilities, and I think that this is correct.

So in that sense, I wanted to argue that it is not possible to claim that every POTENTIAL observable is a "fuzzy quantity" correctly described by a probability distribution - which, I assumed, would have to exist independently of the SET of (commuting) observables that we are going to select for the experiment.

This is the assumption that I did not make and that indeed cannot be made.
 
koantum said:
There are basically two kinds of interpretation, those that acknowledge the central role played by measurements in standard axiomatizations of quantum mechanics, and those that try to sweep it under the rug. As a referee of a philosophy-of-science journal once put it to me, "to solve [the measurement problem] means to design an interpretation in which measurement processes are not different in principle from ordinary physical interactions."

Well, I fully subscribe to that referee's view, honestly. However, you are right that there are essentially two views on QM: one that posits a "measurement process", and another that says there's no such thing - count me as a partisan of the latter (caveat... see further).

I would classify these two different views differently. I'd say that those who consider quantum theory as a "partial" theory have no problem adding an extra thing, called measurement process, while those that want to take on the view that quantum theory is a *universal* physical theory, cannot accept such a process.

The reason is the following: if quantum theory is to be universal (meaning that its axioms apply to everything in the world - necessarily a reductionist viewpoint, of course), then they also apply to the observer. And a "measurement" for the observer is nothing else but a "state" of the observer. You can only treat information as something other than a physical state if you don't consider the information-possessor (= the observer) as being part of the very physics.
In classical physics, there's no issue: the "bodystate" of the observer is a classical state, and is linked through a deterministic mechanics to the classical state of the observed system (the deterministic mechanics is the physics of the measurement apparatus). So the "state of the observer" is a kind of copy of the state of the system (possibly with errors, noise, omissions...), and this state, out of the many possible, is then the measurement result which contains the information about the system. But to convert "body state" into "information" needs an interpretation. No difficulty here, in classical physics.

However, if you go now to quantum theory, there's a difficulty. First of all, there's a difficulty with the "bodystate" of the observer: if it is a quantum system as any other (quantum theory being universal), then it needs to be described by a state vector in Hilbert space. Now, you could still try to save the day, and introduce a kind of superselection rule, which allows only certain states ("classical states") to be associated to a body. But then there's the extra difficulty of the linearity of the time evolution operator, which follows from the physical interaction between the observer's body and the system under study, which will drive that body's state into a superposition of the different classical bodystates, hence violating that superselection rule.
Now comes "measurement". As in the classical counterpart, a measurement is a physical link between (ultimately) the observer's body, and the physics of the system under study, such that the system state is more or less copied into the body state. That bodystate, amongst the many possible, contains then the information about the system that has been extracted. But as we see here, we can only roughly copy a quantum state (of the system under study) into a quantum state (of the body of the observer)! There's no way, if quantum theory is to be universally applied, to copy a quantum state to a *classical* state of the body - which is needed if we want to have a Copenhagen-style measurement and its associated information carrier (the observer's body).
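The impossibility of "copying" invoked here is essentially the no-cloning theorem, and linearity makes it easy to demonstrate numerically (a sketch in Python/numpy, my own construction, not from the post): a CNOT gate perfectly copies the basis states |0> and |1> into a blank register, yet by linearity it maps a superposition to an entangled state rather than to two copies:

```python
import numpy as np

# CNOT in the basis |00>, |01>, |10>, |11> (first qubit = control)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

zero = np.array([1, 0], dtype=complex)
one  = np.array([0, 1], dtype=complex)
plus = (zero + one) / np.sqrt(2)

# basis states ARE copied: CNOT |0>|0> = |0>|0>, CNOT |1>|0> = |1>|1>
out0 = CNOT @ np.kron(zero, zero)
out1 = CNOT @ np.kron(one, zero)
print(np.allclose(out0, np.kron(zero, zero)))   # True
print(np.allclose(out1, np.kron(one, one)))     # True

# but a superposition is NOT: linearity yields the entangled state
# (|00> + |11>)/sqrt(2), not the product of two copies |+>|+>
out_plus = CNOT @ np.kron(plus, zero)
want = np.kron(plus, plus)
print(np.allclose(out_plus, want))   # False
```

The same linearity argument works for any candidate unitary "cloner", which is why a quantum state cannot be faithfully copied into a classical record in the Copenhagen sense.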

I don't think that there is any way out, if quantum theory is taken to be *universally* valid. However, if quantum theory is put in "microscopic boxes", and the macroworld (containing the observers' body) is *classical*, while CERTAIN physical systems out there are quantum systems, while OTHER physical systems are classical systems that can couple to quantum systems (preparation and measurement apparatus), so that quantum theory is allowed to be "set up", "run" and "give out its answer", then of course the information viewpoint makes sense (this is in fact the Copenhagen view). The *classical* state of the observer's body (and of the classical part of the measurement apparatus) will be one of many classical states, and hence correspond to the information of the measurement result, where ONE classical outcome has to be chosen over many (the collapse of the wavefunction).

Note that I keep open (the earlier caveat...) the possibility of quantum theory NOT being universally valid. However, I claim that, when you want to give an interpretation of a theory, you cannot start by claiming that it is NOT universally valid (without saying also, then, what IS valid).

The ONLY probabilistic part of the usual application of quantum theory is when one has to make a transition to a classical end state (the so-called collapse). Whatever it is that generates this (apparent?) transition, it surely is an objectively random process - but of which the dynamics is NOT described by quantum theory itself (it being a DETERMINISTIC theory concerning the wavefunction evolution).

To my way of thinking, this definition of "solving the measurement problem" is the reason why as yet no sensible solution has been found. Those who acknowledge the importance of measurements, on the other hand, appear to think of probabilities as inherently subjective and therefore cannot comprehend the meaning of objective probabilities. Yet it should be perfectly obvious that quantum-mechanical probabilities cannot be subjective. Subjective (that is, ignorance) probabilities disappear when all relevant facts are taken into account (which in many cases is practically impossible). The uncertainty principle, however, guarantees that quantum-mechanical probabilities cannot be made to disappear.

I would even say that the "proof" of this objectivity of quantum-mechanical probabilities resides exactly in the fact that there is no universal probability distribution over all quantum-mechanical quantities (the thing we've been talking about, such as a Wigner quasi-distribution) - otherwise one could hold that there are hidden variables such that our subjective ignorance of their precise values generates the quantum-mechanical probabilities. However, the example of Bohmian mechanics illustrates that one has to be careful with these statements.

At the end of the day, there's no fundamental distinction between "objective probabilities" and "subjective, but in principle unknowable" probabilities (such as those given by the distribution of hidden variables, or, by the quantum equilibrium condition in Bohmian mechanics).

Mermin in fact believes that the mysteries of quantum mechanics can be reduced to the single puzzle posed by the existence of objective probabilities, and I think that this is correct.

Personally, I think that's too simple a way out. As I said, there's no fundamental difference between "objective probabilities" and subjective probabilities of things that are in principle forbidden to know. But we know that quantum theory cannot really be put into such a framework if we also cherish other principles such as locality (otherwise, I think it is fairly obvious that Bohmian mechanics would demystify the whole business!).
I think that the fundamental difficulty in the measurement problem comes from our A PRIORI requirement that the observer, or the measurement apparatus, or whatever, be in a CLASSICAL state, which contradicts the superposition principle on which quantum theory is built. You cannot require your observer NOT to obey the universal theory you're describing, and hope you'll not run into difficulties!
 
vanesch said:
I would classify these two different views differently. I'd say that those who consider quantum theory as a "partial" theory have no problem adding an extra thing, called measurement process, while those that want to take on the view that quantum theory is a *universal* physical theory, cannot accept such a process.
Rather, those who consider quantum theory as a universal theory (in your sense) feel the necessity of adding an extra thing: surreal particle trajectories (Bohm), nonlinear modifications of the dynamics (Ghirardi, Rimini, and Weber or Pearle), the so-called eigenstate-eigenvalue link (van Fraassen), the modal semantical rule (Dieks), and what have you.

The only thing we are sure about is that quantum mechanics is an algorithm for assigning probabilities to possible measurement outcomes on the basis of actual outcomes. If measurements are an "extra thing", what is quantum mechanics without measurements? Nothing at all!
if quantum theory is to be universal (that means that its axioms apply to everything in the world - necessarily a reductionist viewpoint of course)…
I don’t know of any axiomatic formulation of quantum mechanics in which measurements do not play a fundamental role. What axioms are you talking about?

Quoting from my earlier response to hurkyl: it is by definition impossible to find out by experiment what happened between one measurement and the next. Any story that tells you what happened between consecutive measurements is just that - a story. Bohmians believe in a story according to which particles follow mathematically exact trajectories, and the rest (apart from some laudable exceptions) believe in a story according to which the quantum-mechanical probability algorithm is an ontological state that evolves deterministically between measurements, if not at all times. (One of those laudable exceptions was the late Asher Peres, who realized that there is no interpolating wave function giving the "state of the system" between measurements.)

Whether you believe in unitary evolution between measurements or unitary evolution always makes no difference to me. I reject the whole idea of an evolving quantum state, not just because it is unscientific by Popper's definition (since the claim that it exists is unfalsifiable) but because it prevents us from recognizing the true ontological implications of the quantum formalism (which are pointed out at http://thisquantumworld.com). The dependence on time of the quantum-mechanical probability algorithms (states, wave functions) is a dependence on the times of measurements, not the time dependence of an evolving state.
The ONLY probabilistic part of the usual application of quantum theory is when one has to make a transition to a classical end state (the so-called collapse). Whatever it is that generates this (apparent?) transition…
In a theory that rejects evolving quantum states the question "to collapse or not to collapse?" doesn’t arise. What generates this "(apparent?) transition" is one of several pseudo-problems (http://thisquantumworld.com/pseudo.htm) arising from the unwarranted and unverifiable postulate of quantum state evolution.
… it surely is an objectively random process - but of which the dynamics is NOT described by quantum theory itself (it being a DETERMINISTIC theory concerning the wave function evolution).
So you accept an objectively random process whose dynamics quantum theory cannot describe? What happened to your claim that
when you want to give an interpretation of a theory, you cannot start by claiming that it is NOT universally valid (without saying also, then, what IS valid).
What IS valid (and universally so) is that quantum mechanics correlates measurement outcomes. The really interesting question about quantum mechanics is: how can a theory that correlates measurement outcomes be fundamental and complete? Preposterous, isn’t it? If people had spent the same amount of time and energy trying to answer this question, rather than disputing whether quantum states collapse or don’t collapse, we would have gotten somewhere by now.
There's no way, if quantum theory is to be universally applied, to copy a quantum state to a *classical* state of the body…
There is no way, if reality is an evolving ray in Hilbert space, to even define subsystems, measurements, observers, interactions, etc. Also, it has never been explained why, if reality is an evolving ray in Hilbert space, certain mathematical expressions of the quantum formalism should be interpreted as probabilities. So far every attempt to explain this has proved circular. The decoherence program in particular relies heavily on reduced density operators, and the operation by which these are obtained - partial tracing - presupposes Born's probability rule. Obviously you don’t have this problem if the quantum formalism is fundamentally a probability algorithm.
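The partial-tracing operation mentioned here can be illustrated directly (a sketch in Python/numpy; the reshaping convention and names are mine). Reducing a Bell state to one subsystem yields the maximally mixed density matrix, and reading its diagonal as outcome probabilities is precisely where the Born rule is presupposed:

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2) and its (pure-state) density matrix
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# partial trace over the second qubit: rho_A[i, j] = sum_k rho[(i,k), (j,k)]
rho_A = np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))
print(np.round(rho_A.real, 3))   # I/2: maximally mixed, all pure-state phase info gone
```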
 
koantum said:
Rather, those who consider quantum theory as a universal theory (in your sense) feel the necessity of adding an extra thing: surreal particle trajectories (Bohm), nonlinear modifications of the dynamics (Ghirardi, Rimini, and Weber or Pearle), the so-called eigenstate-eigenvalue link (van Fraassen), the modal semantical rule (Dieks), and what have you.

Indeed,... except for MWI :smile: ; or almost so.

The only thing we are sure about is that quantum mechanics is an algorithm for assigning probabilities to possible measurement outcomes on the basis of actual outcomes. If measurements are an "extra thing", what is quantum mechanics without measurements? Nothing at all!

This can be said about any scientific theory.

I don’t know of any axiomatic formulation of quantum mechanics in which measurements do not play a fundamental role. What axioms are you talking about?

1) the Hilbert space, spanned by the eigenvectors of "a complete set of observables" (which is nothing else but an enumeration of the degrees of freedom of the system, and the values they can take)

2) the unitary evolution (its generator being the Hamiltonian)

You are right of course that there is a statement that links what is "observed" with this mathematical state - but such a statement must be made in ALL physical theories. If you read that statement as: "it is subjectively experienced that..." you're home.

Whether you believe in unitary evolution between measurements or unitary evolution always makes no difference to me. I reject the whole idea of an evolving quantum state, not just because it is unscientific by Popper's definition (since the claim that it exists is unfalsifiable) but because it prevents us from recognizing the true ontological implications of the quantum formalism (which are pointed out at http://thisquantumworld.com). The dependence on time of the quantum-mechanical probability algorithms (states, wave functions) is a dependence on the times of measurements, not the time dependence of an evolving state.

That can be said about every scientific theory. You should then also reject the idea of an evolving classical state, or the existence of a classical electrical field, or even the existence of other persons you're not observing. When you leave your home, your cat "disappears", and it "reappears" when you come back home. The concept of "your cat" is then nothing but a formal device whose ontological existence outside of direct observation is unscientific in Popper's sense, being an unwarranted extrapolation of the observations of your cat when you are home... The state of your cat ("poor Felix must be hungry, I forgot to give him his dinner this morning") outside of any observation is hence a meaningless concept. When he's a bit aggressive when I come home, that's just the result of an algorithm which depends on the time between me leaving my cat (without a meal) and me coming home again; in between, no cat. That's what you want people to accept concerning quantum states, or any other physical state. I find that rather unsatisfying...

In a theory that rejects evolving quantum states the question "to collapse or not to collapse?" doesn’t arise. What generates this "(apparent?) transition" is one of several pseudo-problems (http://thisquantumworld.com/pseudo.htm) arising from the unwarranted and unverifiable postulate of quantum state evolution.

As I said, this can be applied to any scientific theory. It doesn't lead to a very inspiring picture of the world; it is essentially the "information" world view, where scientific (and other) theories are nothing else but organizing schemes of successive observations and not a description of an actual reality.

So you accept an objectively random process whose dynamics quantum theory cannot describe? What happened to your claim that

No, I don't. I could accept such a theory, but quantum theory isn't one of them. The random process, in the MWI view, is entirely subjective; it is not part of the physics, but of what you happen to subjectively experience.

What IS valid (and universally so) is that quantum mechanics correlates measurement outcomes. The really interesting question about quantum mechanics is: how can a theory that correlates measurement outcomes be fundamental and complete? Preposterous, isn’t it? If people had spent the same amount of time and energy trying to answer this question, rather than disputing whether quantum states collapse or don’t collapse, we would have gotten somewhere by now.

All theory "correlates" subjective experiences (also called measurements), and to go beyond that is purely hypothetical: this is established by the non-falsifiability of solipsism. Nevertheless, making these hypotheses is a useful activity, because it gives us an intuitive picture of a world that can explain things. It is a matter of conceptual economy, to postulate things to exist "for real", because they have strong suggestive power. So anybody claiming that one shouldn't say that certain concepts in an explanatory scheme of observations (such as quantum theory, or any scientific theory) are "real" misses the whole point of what "reality" is for: it is for its conceptual simplification ! The unprovable hypothesis that your cat exists, even if you have no observational evidence (because you're not at home), is a simplifying hypothesis which helps organize your subjective experiences (and makes for the fact that you're not surprised to find a cat when you come home). So I fail to see the point of people insisting that quantum theory tells us that there's nothing to be postulated for real in between measurements. You're not gaining any conceptual simplification from that statement, so what good is it ?

There is no way, if reality is an evolving ray in Hilbert space, to even define subsystems, measurements, observers, interactions, etc. Also, it has never been explained why, if reality is an evolving ray in Hilbert space, certain mathematical expressions of the quantum formalism should be interpreted as probabilities. So far every attempt to explain this has proved circular. The decoherence program in particular relies heavily on reduced density operators, and the operation by which these are obtained - partial tracing - presupposes Born's probability rule. Obviously you don’t have this problem if the quantum formalism is fundamentally a probability algorithm.

You should look at my little paper quant-ph/0505059 then - I KNOW that it is not possible to derive the probabilities from the unitary part. My solution is simply to STATE that your subjective experience derives from a randomly selected term according to the Born rule - as you should state, in general relativity, how your subjective experience of "now" derives from a spacelike slice of the 4-manifold, and as you should state how a physical state gives rise to a subjective experience in about ANY scientific theory.
When the objective physics is entirely described, no matter whether it is classical, quantum-mechanical or otherwise, you should STILL say how this gives rise to a subjective experience. Well, that's the place where I prefer to put the Born rule and the "projection postulate". It's as good a place as any! And I get back my nice physical ontology, my (even deterministic, although I didn't ask for it!) physical evolution - of the system, of the apparatus, of my body and all that. I get a weird rule that links my subjective experience to physical reality, but as that is in ANY CASE something weird, it's the place to hide any extra weirdness. You don't have to do as I do, of course. Any view on quantum theory that makes you happy is good enough. As I believe more in a formalism than in intuition or common sense, I need to give an ontological status to the elements of the formalism - it gives me the satisfaction of the simplifying hypothesis of ontological reality, and it helps me develop an intuition for the formalism (which are the two main purposes of the hypothesis of an ontology). Other people have other preferences.
However, I fail to see the advantage of insisting that one SHOULDN'T make that simplifying hypothesis of an existing physical reality.
 
  • #17
From your cite:
An informed choice should weigh the absurdities spawned by the second option against the merits of the first.

I repeated often that the ONLY objection to an MWI/many minds view is "naah, too crazy"...
 
  • #18
vanesch said:
I repeated often that the ONLY objection to an MWI/many minds view is "naah, too crazy"...
Not too crazy. Borrowing the words of Niels Bohr, crazy but not crazy enough to be true.
 
  • #19
vanesch said:
The only thing we are sure about is that quantum mechanics is an algorithm for assigning probabilities to possible measurement outcomes on the basis of actual outcomes. If measurements are an "extra thing", what is quantum mechanics without measurements? Nothing at all!
This can be said about any scientific theory.
What about your own emphasis that classical physics can be formulated without reference to measurements, while quantum mechanics cannot?
1) the Hilbert space, spanned by the eigenvectors of "a complete set of observables" (which is nothing else but an enumeration of the degrees of freedom of the system, and the values they can take)
2) the unitary evolution (the derivative of it being the Hamiltonian)
You are right of course that there is a statement that links what is "observed" with this mathematical state - but such a statement must be made in ALL physical theories. If you read that statement as: "it is subjectively experienced that..." you're home.
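For concreteness, the two ingredients listed above can be written down for the simplest case, a spin-1/2 system. This is only an illustrative sketch (mine, not from the posts above); the choice of Hamiltonian, H = S_x in units with hbar = 1, is arbitrary:

```python
import numpy as np

# 1) The Hilbert space: the 2-dimensional complex vector space spanned
#    by the eigenvectors |z+>, |z-> of the observable S_z.
# 2) The unitary evolution: generated by a Hamiltonian H, here H = S_x
#    (an arbitrary illustrative choice; any self-adjoint operator works).
Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
H = Sx

def U(t):
    """Evolution operator exp(-iHt), built from the eigendecomposition of H."""
    evals, evecs = np.linalg.eigh(H)
    return evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

psi0 = np.array([1, 0], dtype=complex)   # the state |z+>
psi_t = U(np.pi) @ psi0                  # spin precession about the x-axis

# Unitarity: the norm (total probability) is conserved at all times.
assert np.isclose(np.linalg.norm(psi_t), 1.0)
```

The extra statement linking this mathematical state to what is observed (the Born rule) is precisely the ingredient the two posters are arguing about.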
Let me tell you in a few steps why we all use a complex vector space. (I can give you the details later if you are interested.) I use this approach when I teach quantum mechanics to higher secondary and undergraduate students.
  1. "Ordinary" objects have spatial extent (they "occupy" space), are composed of a (large but) finite number of objects that lack spatial extent, and are stable - they neither collapse nor explode the moment they are formed. Thanks to quantum mechanics, we know that the stability of atoms (and hence of "ordinary" objects) rests on the fuzziness (the literal translation of Heisenberg's "Unschärfe") of their internal relative positions and momenta.
  2. The proper way of dealing with a fuzzy observable is to assign probabilities to the possible outcomes of a measurement of this observable.
  3. The classical probability algorithm is represented by a point P in a phase space; the measurement outcomes to which it assigns probabilities are represented by subsets of this space. Because this algorithm only assigns trivial probabilities (1 if P is inside the subset representing an outcome, 0 if P is outside), we may alternatively think of P as describing the state of the system in the classical sense (a collection of possessed properties), regardless of measurements.
  4. To deal with fuzzy observables, we need a probability algorithm that can accommodate probabilities in the whole range between 0 and 1. The straightforward way to do this is to replace the 0-dimensional point P by a 1-dimensional line L, and to replace the subsets by the subspaces of a vector space. (Because of the 1-1 correspondence between subspaces and projectors, we may equivalently think of outcomes as projectors.) We assign probability 1 if L is contained in the subspace representing an outcome, probability 0 if L is orthogonal to it, and a probability 0<p<1 otherwise. (Because this algorithm assigns nontrivial probabilities, it cannot be re-interpreted as a classical state.)
  5. We now have to incorporate a compatibility criterion. It is readily shown (later, if you are in the mood for it) that the outcomes of compatible measurements must correspond to commuting projectors.
  6. Last but not least we require: if the interval C is the union of two disjoint intervals A and B, then the probability of finding the value of an observable in C is the sum of the probabilities of finding it in A or B, respectively.
  7. We now have everything that is needed to prove Gleason's theorem, according to which the probability of an outcome represented by the projector P is the trace of WP, where W (known as the "density operator") is linear, self-adjoint, positive, has trace 1, and satisfies either WW=W (then we call it a "pure state") or WW<W (then we call it "mixed"). (We are back to the topic of this thread!)
  8. The next step is to determine how W depends on measurement outcomes, which is also readily established.
  9. The next step is to determine how W depends on the time of measurement, which is equally straightforward to establish.
At this point we have all the axioms of your list (you missed a few) but with one crucial difference: we know where these axioms come from. We know where quantum mechanics comes from, whereas you haven’t the slightest idea about the origin of your axioms.
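Step 7 can be checked numerically for a spin-1/2 system, which brings us back to the thread's opening question. The following is my own sketch, not part of the derivation; the particular states are chosen only for illustration:

```python
import numpy as np

# Pure state |z+> versus the equal mixture of |z+> and |z->.
z_plus = np.array([1.0, 0.0])
z_minus = np.array([0.0, 1.0])
W_pure = np.outer(z_plus, z_plus)                                  # a projector
W_mixed = 0.5 * np.outer(z_plus, z_plus) + 0.5 * np.outer(z_minus, z_minus)

# Outcome "spin up along x", represented by the projector onto |x+>.
x_plus = np.array([1.0, 1.0]) / np.sqrt(2)
P = np.outer(x_plus, x_plus)

def prob(W, P):
    """Gleason/Born probability of the outcome P in the state W: Tr(WP)."""
    return float(np.trace(W @ P))

print(prob(W_pure, P))                     # ≈ 0.5
print(prob(W_mixed, P))                    # ≈ 0.5 (same prediction here)

# Purity test: Tr(W^2) = 1 for a pure state (WW = W), < 1 for a mixed one.
print(float(np.trace(W_pure @ W_pure)))    # ≈ 1.0
print(float(np.trace(W_mixed @ W_mixed)))  # ≈ 0.5
```

Both W's assign the same probability to this particular outcome, yet only W_pure satisfies WW = W: that is the pure/mixed distinction the thread started with.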
You should then also reject the idea of an evolving classical state, or the existence of a classical electrical field…
Which is exactly what I do! Newton famously refused to make up a story purporting to explain how, by what mechanism or physical process, matter acts on matter. While the (Newtonian) gravitational action depends on the simultaneous positions of the interacting objects, the electromagnetic action of matter on matter is retarded. This made it possible to transmogrify the algorithm for calculating the electromagnetic effects of matter on matter into a physical mechanism or process by which matter acts on matter.
Later Einstein's theory of gravity made it possible to similarly transmogrify the algorithm for calculating the gravitational effects of matter on matter into a mechanism or physical process.

Let's separate the facts from the fictions (assuming for the moment that facts about the world of classical physics are facts rather than fictions).
Fact is that the calculation of effects can be carried out in two steps:
  1. Given the distribution and motion of charges, we calculate six functions (the so-called "electromagnetic field"), and given these six functions, we calculate the electromagnetic effects that those charges have on other charges.
  2. Given the distribution and motion of matter, we calculate the stress-energy tensor, and given the stress-energy tensor, we calculate the gravitational effects that matter here has on matter there.
Fiction is
  1. that the electromagnetic field is a physical entity in its own right, that it is locally generated by charges here, that it mediates electromagnetic interactions by locally acting on itself, and that it locally acts on charges there;
  2. that spacetime curvature is a physical entity in its own right, and that it mediates the gravitational action of matter on matter by a similar local process.
Did you notice that those fictions do not explain how a charge locally acts on the electromagnetic field, how the electromagnetic field locally acts on a charge, and so on? Apparently, physicists consider the familiar experience of a well-placed kick sufficient to explain local action.
Physicists are, at bottom, a naive breed, forever trying to come to terms with the 'world out there' by methods which, however imaginative and refined, involve in essence the same element of contact as a well-placed kick. (B.S. DeWitt and R.N. Graham, Resource letter IQM-1 on the interpretation of quantum mechanics, AJP 39, pp. 724-38, 1971.)
vanesch said:
… or even the existence of other persons you're not observing.
This is what you are led to conclude because you don’t have a decent characterization of macroscopic objects.
It doesn't lead to a very inspiring picture of the world ; it is essentially the "information" world view, where scientific (and other) theories are nothing else but organizing schemes of successive observations and no description of an actual reality.
You find a deterministic theory of everything inspiring? Perhaps this is because you want to believe in your omniscience-in-principle: you want to feel as if you know What Exists and how it behaves. To entertain this belief you must limit Reality to mathematically describable states and processes. This is in part a reaction to outdated religious doctrines (it is better to believe in our potential omniscience than in the omnipotence of someone capable of creating a mess like this world and thinking he did a great job) and in part the sustaining myth of the entire scientific enterprise (you had better believe that what you are trying to explain can actually be explained with the means at your disposal).

Besides, you are wrong when you put me in the quantum-states-are-states-of-knowledge camp. Only if we reject the claptrap about evolving quantum states can we obtain a satisfactory description of the world between consecutive measurements. This description consists of the (objective) probabilities of the possible outcomes of all the measurements that could have been performed in the meantime. (I'm not in any way implying that it makes sense to simultaneously consider the probabilities of outcomes of incompatible measurements.)

I, for one, find the ontological implications of the quantum formalism - if this is taken seriously as being fundamentally an algorithm for computing objective probabilities - greatly inspiring. Among these implications are the ones discussed at http://thisquantumworld.com/conundrum.htm. Besides, it is the incomplete spatiotemporal differentiation of Reality that makes a rigorous definition of "macroscopic" possible.
The random process, in the MWI view, is entirely subjective ; it is not part of the physics, but of what you happen to subjectively experience.
How convenient. What I experience is not part of physics. How does this square with your claimed universality of the quantum theory? And what I do not experience – Hilbert space vectors, wave functions, and suchlike – is part of physics. How silly!
All theory "correlates" subjective experiences (also called measurements)…
As long as you mix up experiences with measurements, you are not getting anywhere.
So anybody claiming that one shouldn't say that certain concepts in an explanatory scheme of observations (such as quantum theory, or any scientific theory) are "real" misses the whole point of what "reality" is for: it is for its conceptual simplification !
I have a somewhat higher regard for "reality". Like Aristotle, I refuse to have it identified with computational devices. ("The so-called Pythagoreans, who were the first to take up mathematics, not only advanced this subject, but saturated with it, they fancied that the principles of mathematics were the principles of all things." - Metaphysics 1-5.)
I get a weird rule that links my subjective experience to physical reality, but as that is in ANY CASE something weird, it's the place to hide any extra weirdness.
Chalmers called this the "law of minimization of mystery": quantum mechanics is mysterious, consciousness is mysterious, so maybe they are the same mystery. But mysteries need to be solved, not hidden.

Let me express, in conclusion, my appreciation for the trouble you take to explain yourself. It really helps me understand people of your ilk.
 
  • #20
I will try to outline where I think that there is a problem in the approach you take, if you want it to be a universal explanation. The problem, according to me, resides in the mixture between formal aspects, and intuitive, common sense concepts. In a complete world picture, there is no room for intuitive and common sense concepts at the foundations.

Now, I know your objection to that view: you say that it is overly pretentious to try to have a universal, complete world picture. Of course. But the exercise does not reside in giving yourself the almighty feeling of knowing it all! The exercise consists in building up, WITHOUT USING common sense concepts at the foundations, a mental picture of the world, AND SEEING IF OUR COMMON SENSE and less common sense observations can be explained by it. If at that point, you *take for granted* certain common sense concepts, then the reasoning becomes circular. Why is it important to try to derive a complete world picture ? Firstly, to see where it fails! This will indicate to us, maybe, what goes wrong with it. And secondly, to be an intuitive guide to help you develop a sense of problem solving.

koantum said:
Let me tell you in a few steps why we all use a complex vector space. (I can give you the details later if you are interested.) I use this approach when I teach quantum mechanics to higher secondary and undergraduate students.
  1. "Ordinary" objects have spatial extent (they "occupy" space), are composed of a (large but) finite number of objects that lack spatial extent, and are stable - they neither collapse nor explode the moment they are formed. Thanks to quantum mechanics, we know that the stability of atoms (and hence of "ordinary" objects) rests on the fuzziness (the literal translation of Heisenberg's "Unschärfe") of their internal relative positions and momenta.


  1. I think it is already fairly clear here, that there is an appeal to a mixture of intuitive ontological concepts. But an "algorithmic" theory cannot take for granted the ontological existence of any such "ordinary" object: their existence must be DERIVABLE from its fundamental formulation. Otherwise, you already sneak in the ontology you're going to refute later.

    2. The proper way of dealing with a fuzzy observable is to assign probabilities to the possible outcomes of a measurement of this observable.

    Even there, there is a problem: how does a "measurement apparatus" link to an observable ? Does the measurement apparatus have ontological existence ? Or does only the observation of the measurement apparatus (by a person ?) make sense, and we cannot postulate (ontological hypothesis which is to be rejected) that the measurement apparatus, as a physical construction, exists ?
    So *what* defines a fuzzy or other observable in the first place if we're not entitled to any ontology ? And IF we are entitled to an intuitive ontology, then exactly what is it ?

    3. The classical probability algorithm is represented by a point P in a phase space; the measurement outcomes to which it assigns probabilities are represented by subsets of this space. Because this algorithm only assigns trivial probabilities (1 if P is inside the subset representing an outcome, 0 if P is outside), we may alternatively think of P as describing the state of the system in the classical sense (a collection of possessed properties), regardless of measurements.

    Ok.

    4. To deal with fuzzy observables, we need a probability algorithm that can accommodate probabilities in the whole range between 0 and 1. The straightforward way to do this is to replace the 0-dimensional point P by a 1-dimensional line L, and to replace the subsets by the subspaces of a vector space. (Because of the 1-1 correspondence between subspaces and projectors, we may equivalently think of outcomes as projectors.) We assign probability 1 if L is contained in the subspace representing an outcome, probability 0 if L is orthogonal to it, and a probability 0<p<1 otherwise. (Because this algorithm assigns nontrivial probabilities, it cannot be re-interpreted as a classical state.)

    I don't see why this procedure is "the straightforward way". I'd think that there are two ways of doing what you want to do. One is the "Kolmogorov" way: each thinkable observable is a random variable over a probability space. We already know that this doesn't work in quantum theory (the discussion we had previously). But one can go further. One can say that, to each "compatible" (to be defined at will) set of observables corresponds a different probability space, and the observables are then random variables over this space. THIS is the most general random algorithm. The projection of a ray in a vector space is way more restrictive, and I don't see why this must be the case.

    To illustrate what I want to say, consider this: consider two compatible observables, X1 and Y1. X1 can take on 3 possible outcomes: smaller than -1, between -1 and +1, and bigger than 1 (outcomes X1a, X1b and X1c).
    Y1 can take on 2 possible outcomes, Y1a and Y1b. For THIS SET OF OBSERVABLES, I can now define a probability space with distribution given by P(X1,Y1), with 6 different probabilities, satisfying the Kolmogorov axioms. But let us now consider that we have ANOTHER set of observables, X2 and Y2. In fact, in our naivety, we think that X2 is the "same" observable as X1, but more fine-grained. But that would commit the mistake of assigning a kind of ontological existence to a measurement apparatus and to what it is going to measure. As only observations are to be considered "real", and we have of course a DIFFERENT measurement for the observable X2 than for X1 (we have to change scale, or resolution, on the hypothetical measurement apparatus), we can have a totally DIFFERENT probability distribution. Consider that X2 has 5 possible outcomes: smaller than -2, between -2 and -1, between -1 and +1, between +1 and +2, and bigger than 2. We would be tempted to state that, X2 measuring the "same" quantity as X1, the probability to measure X2a + the probability to measure X2b should equal the probability to have measured X1a. (smaller than -2, and between -2 and -1, is equivalent to smaller than -1). But THAT SUPPOSES A KIND OF ONTOLOGICAL EXISTENCE of the "quantity to be measured" independent of the measurement act, which is of course against the spirit of our purely algorithmic approach. Hence, a priori, there's no reason not to accept that the probability distribution for X2 and Y2 is totally unrelated to the one for X1 and Y1. This situation can easily be recognized as "contextuality".

    5. We now have to incorporate a compatibility criterion. It is readily shown (later, if you are in the mood for it) that the outcomes of compatible measurements must correspond to commuting projectors.

    Yes, but we have placed ourselves already in a very restrictive class of probability algorithms for measurement outcomes. The contextual situation I sketched will not necessarily be incorporated in this more restrictive scheme. So postulating this is not staying open to "a probability algorithm in general".

    6. Last but not least we require: if the interval C is the union of two disjoint intervals A and B, then the probability of finding the value of an observable in C is the sum of the probabilities of finding it in A or B, respectively.

    Ok, this is an explicit requirement of non-contextuality. Why ?

    7. We now have everything that is needed to prove Gleason's theorem, according to which the probability of an outcome represented by the projector P is the trace of WP, where W (known as the "density operator") is linear, self-adjoint, positive, has trace 1, and satisfies either WW=W (then we call it a "pure state") or WW<W (then we call it "mixed"). (We are back to the topic of this thread!)

    Indeed. However, I had the impression you wanted to show that quantum theory is nothing else but a kind of "general scheme of writing down a generator for probability algorithms of observations", but we've made quite some hypotheses along the way! Especially the non-contextuality requirement, which requires us to HAVE A RELATIONSHIP BETWEEN THE PROBABILITIES OF DIFFERENT OBSERVATIONS (say, those with high, and those with low resolution), goes against the spirit of denying an ontological status to the "quantity to be measured outside of its measurement". If the only things that make sense are measurement outcomes, then the resolution of the measurement is an integral part of it. As such, a hypothetical measurement with another resolution is intrinsically entitled to a TOTALLY DIFFERENT and unrelated probability distribution. It is only when we say that what we measure has an independent ontological existence that we can start making assumptions about different measurements of the "same" thing: in order for it to be the "same" thing, it has to have ontological status.
    For instance, if what we measure is "position of a particle", but we say that the only thing that makes sense are *measurement outcomes", then the only thing that makes sense is "ruler says position 5.4 cm" and not "particle position is 5.4cm". Now, if we replace the ruler by a finer ruler, then the only thing that makes sense is now "fine ruler says position 5.43cm". There is a priori no relationship between the outcome "ruler says position 5.4cm" and "fine ruler says 5.43cm", because these are two DIFFERENT measurements. However, if there is an ontology behind it, and BOTH ARE MEASUREMENTS OF A PARTICLE POSITION, then these two things are related of course. But this REQUIRES THE POSTULATION OF SOME ONTOLOGICAL EXISTENCE OF A QUANTITY INDEPENDENT OF A MEASUREMENT - which is, according to your view, strictly forbidden.

    BTW, the above illustrates the "economy of concept" that results from postulating an ontology, and the intuitive help it provides. The unrelated statements "ruler says position 5.4cm" and "fine ruler says 5.43cm" which are hard to make any sense of, become suddenly almost trivial concepts when we say that there IS a particle, and that we have tried to find its position using two physical experiments, one with a better resolution than the other.
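The ruler example can be made concrete: in the standard projector formalism, the relationship between the coarse and the fine measurement is built in, which is exactly the non-contextuality being questioned. A minimal sketch of mine (the 6-valued observable is a hypothetical stand-in for a position read off at two resolutions):

```python
import numpy as np

# A hypothetical observable with 6 values on a 6-dimensional Hilbert
# space; value k is represented by the projector |k><k|.
rng = np.random.default_rng(0)
psi = rng.normal(size=6) + 1j * rng.normal(size=6)
psi /= np.linalg.norm(psi)                 # an arbitrary normalized state

basis = np.eye(6)
proj = [np.outer(basis[k], basis[k]) for k in range(6)]   # fine projectors

def prob(P):
    """Born probability <psi|P|psi> of the outcome projector P."""
    return float(np.real(psi.conj() @ P @ psi))

p_fine = [prob(P) for P in proj]           # "fine ruler" probabilities

# Coarse outcomes {1,2}, {3,4}, {5,6} are represented by SUMS of the
# fine projectors, so by linearity their probabilities are forced to be
# the sums of the fine-grained ones -- the formalism leaves no room for
# an unrelated coarse distribution.
p_coarse = [prob(proj[0] + proj[1]),
            prob(proj[2] + proj[3]),
            prob(proj[4] + proj[5])]

assert all(abs(p_coarse[j] - (p_fine[2*j] + p_fine[2*j+1])) < 1e-12
           for j in range(3))
```

So within the projector scheme the "ruler says 5.4 cm" and "fine ruler says 5.43 cm" statistics are automatically related, whether or not one grants the measured quantity an ontological status.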
At this point we have all the axioms of your list (you missed a few) but with one crucial difference: we know where these axioms come from. We know where quantum mechanics comes from, whereas you haven’t the slightest idea about the origin of your axioms.

As I tried to point out, I don't see where your axioms come from, either. Why this projection thing to generate probability algorithms, which restricts their choice ? And why this non-contextuality ?

This made it possible to transmogrify the algorithm for calculating the electromagnetic effects of matter on matter into a physical mechanism or process by which matter acts on matter.
Later Einstein's theory of gravity made it possible to similarly transmogrify the algorithm for calculating the gravitational effects of matter on matter into a mechanism or physical process.

Let's separate the facts from the fictions (assuming for the moment that facts about the world of classical physics are facts rather than fictions).
Fact is that the calculation of effects can be carried out in two steps:
  1. Given the distribution and motion of charges, we calculate six functions (the so-called "electromagnetic field"), and given these six functions, we calculate the electromagnetic effects that those charges have on other charges.
  2. Given the distribution and motion of matter, we calculate the stress-energy tensor, and given the stress-energy tensor, we calculate the gravitational effects that matter here has on matter there.
Fiction is
  1. that the electromagnetic field is a physical entity in its own right, that it is locally generated by charges here, that it mediates electromagnetic interactions by locally acting on itself, and that it locally acts on charges there;
  2. that spacetime curvature is a physical entity in its own right, and that it mediates the gravitational action of matter on matter by a similar local process.


  1. Well, these fictions are strong conceptual economies. For instance, if I have a static electrostatic field, I'm not really surprised that a charge can accelerate one way or another, but that the DIRECTION of its acceleration at a certain position is always the same: the electric field vector is pointing in one and only one direction ! Now, if I see this as an ALGORITHM, then I don't see, a priori, why suddenly charges could not decide to go a bit in all possible directions as a function of their charge. I can imagine writing myself any algorithm that can do that. But when I physically think of the electric field at a point, I find a natural explanation for this single direction.

    I'll stop here, because I'd like to watch a movie on TV :-)
 
  • #21
koantum said:
Let me tell you in a few steps why we all use a complex vector space. (I can give you the details later if you are interested.) I use this approach when I teach quantum mechanics to higher secondary and undergraduate students. ...

This was the most beautiful post I've read on physics forums to date. I agree that the heart of QM is the process of measurement.

The most elegant description of QM I've seen is the Schwinger measurement algebra, and I've been busily trying to geometrize this for the last few years. It turns out that when one does this, one ends up having to associate a geometric (i.e. Clifford algebraic) constant with the imaginary unit. (This is similar to David Hestenes' geometrization of the Dirac equation back in 1982.) It turns out that there are many ways of doing this, and they correspond to gauge transformations. Basically, to get spinors from a density matrix formalism (where the states are pure density matrices or projection operators), you have to choose what Schwinger called a "vacuum" state. His calling it a vacuum was from what it shows up as when you go to a quantum field theory based on these ideas.

Carl
 
  • #22
The proper way of dealing with a fuzzy observable is to assign probabilities to the possible outcomes of a measurement of this observable.
Why?

This goes directly against what I remember about fuzzy sets and fuzzy logic.
 
  • #23
Hurkyl said:
Why?
For one thing, because nobody ever has come up with a different way of dealing with a fuzzy observable. Or am I misinformed? But I should have been more precise: the proper way of dealing with a fuzzy observable O is to assign probabilities to the possible outcomes of an unperformed measurement of O. If no measurement is actually made, all we can say about a quantum system is with what probability this or that outcome would be obtained if the corresponding measurement were made. If the probability is >0 for the possible outcomes v1,v2,v3..., then the value of O is fuzzy in the sense that the propositions "the value of O is vi" (i=1,2,3,...) are neither true nor false but meaningless.
This goes directly against what I remember about fuzzy sets and fuzzy logic.
And what would that be?
 
  • #24
And what would that be?
That it's not based at all on probability.

Let me share with you some quotes from Fuzzy Sets and Fuzzy Logic: Theory and Applications, in the introduction to its chapter on possibility theory:


... possibility theory, a theory that is closely connected with fuzzy set theory... [fuzzy measures] will allow us to explicate differences between fuzzy set theory and probability theory

and later in that chapter:


It is shown that probability and possibility theory are distinct theories, and neither is subsumed under the other.
 
  • #25
koantum said:
the proper way of dealing with a fuzzy observable O is to assign probabilities to the possible outcomes of an unperformed measurement of O. If no measurement is actually made, all we can say about a quantum system is with what probability this or that outcome would be obtained if the corresponding measurement were made. If the probability is >0 for the possible outcomes v1,v2,v3..., then the value of O is fuzzy in the sense that the propositions "the value of O is vi" (i=1,2,3,...) are neither true nor false but meaningless.

Accepting the above, why would there be a relationship between the "probability of outcome" of the "measurement of O if the corresponding measurement would be made with resolution D" and of the "measurement of O if the corresponding measurement would be made with resolution d" ?
These being two DIFFERENT measurements, and O itself not having any existence outside of its revelation by a measurement, there is no a priori requirement for these two probability distributions to be related in any way, no ?

As an example, let us say that measurement M1 of O takes on the possible outcomes {A,B,C}, with A standing for "O is 1 or 2", B standing for "O is 3 or 4" and C standing for "O is 5 or 6".
Measurement M2 has 6 possible outcomes, {a,b,c,d,e,f}, with a standing for "O is 1", b standing for "O is 2" etc...

Now, you want a probability distribution to be assigned to a potential measurement. Fine: potential measurement M1 of O: p(A) = 0.6, p(B) = 0.4, p(C) = 0.0

Potential measurement M2 of O: p(a) = 0.1, p(b) = 0.1, p(c) = 0.1, p(d)= 0.1, p(e)=0.1, p(f) = 0.5

I have assigned probabilities to the outcomes of measurements M1 and M2. You cannot reproduce this with standard quantum theory, so it is NOT a universal probability-of-potential-compatible-measurements description algorithm.

And if you now say that p(f) = 0.5 together with p(C) = 0 is IMPOSSIBLE because "O cannot be at the same time NOT in {5,6} and equal to 6", then you have assigned a measurement-independent reality (ontology) to the quantity O.
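The incompatibility vanesch points to can be checked mechanically. The sketch below (a minimal Python check, using the hypothetical probabilities from this post) tests the coarse-graining constraint that quantum theory imposes when both measurements are derived from one and the same state: each M1 probability must equal the sum of the M2 probabilities it lumps together.

```python
# Hypothetical numbers from the example above: M2 assigns probabilities to
# the fine-grained outcomes a..f, and M1 coarse-grains them pairwise.
p_fine = {"a": 0.1, "b": 0.1, "c": 0.1, "d": 0.1, "e": 0.1, "f": 0.5}
p_coarse = {"A": 0.6, "B": 0.4, "C": 0.0}

# If both distributions come from one quantum state, each coarse
# probability must equal the sum of the fine ones it lumps together.
coarse_from_fine = {
    "A": p_fine["a"] + p_fine["b"],
    "B": p_fine["c"] + p_fine["d"],
    "C": p_fine["e"] + p_fine["f"],
}

# The assignment above violates that constraint, e.g. p(C) = 0 but
# p(e) + p(f) = 0.6.
violations = {k for k in p_coarse if abs(p_coarse[k] - coarse_from_fine[k]) > 1e-12}
print(violations)  # every coarse outcome disagrees in this example
```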
 
Last edited:
  • #26
In a complete world picture, there is no room for intuitive and common sense concepts at the foundations…. The exercise consists in building up, WITHOUT USING common sense concepts at the foundations, a mental picture of the world, AND SEE IF OUR COMMON SENSE and less common sense observations can be explained by it.
An intuitive concept is one thing, a commonsense concept is quite another. Time is an intuitive concept. So is space. Like pink and turquoise, spatial extension is a quale that can only be defined by ostension - by drawing attention to something of which we are directly aware. While the intuition of space can lend a phenomenal quality to numerical parameters, it cannot be reduced to such parameters.
If you are not convinced, try to explain to my friend Andy, who lives in a spaceless world, what space is like. Andy is good at maths, so he understands you perfectly if you tell him that it is like the set of all triplets of real numbers. But if you believe that this gives him a sense of the expanse we call space, you are deluding yourself. We can imagine triplets of real numbers as geometrical points embedded in space; he can't. We can interpret the difference between two numbers as the distance between two points; he can't. At any rate, he can't associate with the word "distance" the remoteness it conveys to us.
So without using intuitive concepts at the foundations, you cannot even talk about space (and this should be even more obvious for time).
I'm not saying that you cannot come up with a mathematical construct and call it "space". You can define "self-adjoint operator" = "elephant" and "spectral decomposition" = "trunk", and then you can prove a theorem according to which every elephant has a trunk. But please don’t tell me that this theorem has anything to do with real pachyderms.
Why is it important to try to derive a complete world picture? Firstly, to see where it fails!
Agreed. (But then one mustn't sweep under the rug all those data that don’t fit.) In fact, I said something to this effect in several of my papers. Permit me to quote myself:
Science is driven by the desire to know how things really are. It owes its immense success in large measure to its powerful "sustaining myth" [this is a reference to an article by Mermin] - the belief that this can be discovered. Neither the ultraviolet catastrophe nor the spectacular failure of Rutherford's model of the atom made physicists question their faith in what they can achieve. Instead, Planck and Bohr went on to discover the quantization of energy and angular momentum, respectively. If today we seem to have reason to question our "sustaining myth", it ought to be taken as a sign that we are once again making the wrong assumptions, and it ought to spur us on to ferret them out. Anything else should be seen for what it is - a cop-out.​
I wrote this in response to Bernard d'Espagnat's claim that without nonlinear modifications of the Schrödinger equation (or similar adulterations of standard quantum mechanics) we cannot go beyond objectivity in the weak sense of inter-subjective agreement. I wrote something similar in response to the claim by Fuchs and Peres (in their opinion piece in Physics Today, March 2000) that QM is an epistemic theory and does not yield a model of a "free-standing" reality.
I think it is already fairly clear here, that there is an appeal to a mixture of intuitive ontological concepts. But an "algorithmic" theory cannot take for granted the ontological existence of any such "ordinary" object: their existence must be DERIVABLE from its fundamental formulation.
I have found that students (higher secondary and undergraduate) are much happier if I can show them where exactly the quantum formalism comes from and why it has the form that it does, than if I confront them with a set of abstruse axioms and tell them that that's the way it is! What value does an explanation have if it is based on something nobody comprehends? You may call my approach teleological. I ask, what must the laws of physics be like so that the "ordinary" objects which surround us can exist? You stop at the fundamental laws and take them for God-given. If you want to go further and understand a fundamental theory, the teleological (not theological!) approach is the only viable one: explaining why (in the teleological sense) the laws of physics are just so.
how does a "measurement apparatus" link to an observable? Does the measurement apparatus have ontological existence? Or does only the observation of the measurement apparatus (by a person?) make sense…
It ought to be clear by now that I reject the view that measurements have anything to do with conscious observations. Measurements are presupposed by the quantum formalism since all it does is correlate measurement outcomes. Attempts to make the quantum formalism consistent with the existence of measurements are therefore misconceived. Since it presupposes measurements, it is trivially consistent with their existence. Any notion to the contrary arises from misconceptions that must be identified and eliminated.
So what are measurements? Any event or state of affairs from which the truth or the falsity of a statement about the world can be inferred, qualifies as a measurement, regardless of whether anyone is around to make that inference.
How the "apparatus" links to an observable? It defines it. Consider an electron spin associated with the ket |z+>. What do we know about this spin? All we know is how it behaves in any given measurement context, that is, we know the possible outcomes and we can calculate their probabilities. By defining - and not just defining but realizing - an axis, the setup makes available two possible values; it creates possibilities to which probabilities can be assigned. In the absence of an apparatus that realizes a particular axis, the properties "up" and "down" do not even exist as possibilities. The idea that |z+> represents something as it is, all by itself, rather than as it behaves in possible measurement situations, is completely vacuous.
And the same applies to all quantum states, wave functions, etc.
Does the measurement apparatus have ontological existence? Certainly. Any macroscopic object has, and so has everything that can be inferred from a measurement (as defined above).

To prevent our posts from becoming interminable, I'll return to the rest of your post later.
 
  • #27
Hurkyl said:
probability and possibility theory are distinct theories, and neither is subsumed under the other.
So? In quantum mechanics we have measurement outcomes (possibilities) and an algorithm that assigns to them probabilities.
 
  • #28
koantum said:
An intuitive concept is one thing, a commonsense concept is quite another. Time is an intuitive concept. So is space. Like pink and turquoise, spatial extension is a quale that can only be defined by ostension - by drawing attention to something of which we are directly aware.

I couldn't have formulated that better myself...

While the intuition of space can lend a phenomenal quality to numerical parameters, it cannot be reduced to such parameters.
If you are not convinced, try to explain to my friend Andy, who lives in a spaceless world, what space is like. Andy is good at maths, so he understands you perfectly if you tell him that it is like the set of all triplets of real numbers. But if you believe that this gives him a sense of the expanse we call space, you are deluding yourself. We can imagine triplets of real numbers as geometrical points embedded in space; he can't. We can interpret the difference between two numbers as the distance between two points; he can't. At any rate, he can't associate with the word "distance" the remoteness it conveys to us.

But we are in absolute agreement here!

So without using intuitive concepts at the foundations, you cannot even talk about space (and this should be even more obvious for time).
I'm not saying that you cannot come up with a mathematical construct and call it "space". You can define "self-adjoint operator" = "elephant" and "spectral decomposition" = "trunk", and then you can prove a theorem according to which every elephant has a trunk. But please don’t tell me that this theorem has anything to do with real pachyderms.

Exactly! So at a certain point, you have to link the formal terms of your mathematical formalism to qualia, to subjective experiences. *This* is the essence of the interpretation of ANY theory, classical, quantum or otherwise. It is why I always insist on the fact that there is no fundamental difference between the "measurement problem" in quantum theory and the one in classical theory; although the POSTULATE that assigns qualia to formal mathematical elements is simpler in classical theory.

The kind of argument you are putting forward - and with which I agree entirely up to now - is essentially another indication of the unfalsifiability of solipsism, as you seem to point out yourself:

Agreed. (But then one mustn't sweep under the rug all those data that don’t fit.) In fact, I said something to this effect in several of my papers. Permit me to quote myself:
Science is driven by the desire to know how things really are. It owes its immense success in large measure to its powerful "sustaining myth" [this is a reference to an article by Mermin] - the belief that this can be discovered. Neither the ultraviolet catastrophe nor the spectacular failure of Rutherford's model of the atom made physicists question their faith in what they can achieve. Instead, Planck and Bohr went on to discover the quantization of energy and angular momentum, respectively. If today we seem to have reason to question our "sustaining myth", it ought to be taken as a sign that we are once again making the wrong assumptions, and it ought to spur us on to ferret them out. Anything else should be seen for what it is - a cop-out.​

What this statement means, to me, is that a proof of the falsity of solipsism (and hence of the ontological existence of anything whatsoever) is a myth. Agreed. I think we have known that for a few centuries. However, the fact that one cannot PROVE the existence of an ontology is not a proof of its falsity either. And this is the point where we seem to differ in opinion:

the *hypothesis* (and it will never be anything else, granted) of an objective ontology IS a useful hypothesis. It guides us in our quest for what is "acceptable" and what is not. Its denial doesn't lead anywhere useful: you just open the bag of possibilities. What we need are ideas that *constrain* the possibilities of physical theories, not open it up, in order to guide us. As long as one CAN make the hypothesis of an objective ontology, one should do so, because of its conceptual power.
Dropping the hypothesis of an objective ontology is not productive: ANY algorithm could do. Astrology could do; it is an algorithm like any other to "predict outcomes of measurements". It surely performs worse in lab conditions, but it doesn't perform badly in the "everyday world" of social events, happiness and so on. Astrology does not seem compatible with any ontology of a physical theory, but it surely is an algorithm like any other. This is where one can see the power of the hypothesis of an ontology over its denial. With an ontological interpretation, there are grounds to reject astrology; in a purely algorithmic conception, no such grounds exist. Anything goes.

I wrote this in response to Bernard d'Espagnat's claim that without nonlinear modifications of the Schrödinger equation (or similar adulterations of standard quantum mechanics) we cannot go beyond objectivity in the weak sense of inter-subjective agreement. I wrote something similar in response to the claim by Fuchs and Peres (in their opinion piece in Physics Today, March 2000) that QM is an epistemic theory and does not yield a model of a "free-standing" reality.

I'd agree with you to reject these requirements. They make overly severe hypotheses of the thing that is missing: the link between objective ontology and the subjective experience - which, we both seem to agree upon, is a necessary part of any physical theory, classical or otherwise. But it is not because of these unnecessary requirements that one needs to go to the other extreme and reject the possibility of an ontology.

I have found that students (higher secondary and undergraduate) are much happier if I can show them where exactly the quantum formalism comes from and why it has the form that it does, than if I confront them with a set of abstruse axioms and tell them that that's the way it is! What value does an explanation have if it is based on something nobody comprehends?

What you present is a sleight of hand. You DIDN'T present any REASON why the quantum formalism has the form it has, although you seem to claim so. Once we are in the world of "algorithms that calculate probabilities of possible outcomes of measurements", the class of algorithms so defined is LARGER than what quantum theory can generate. You need extra assumptions which you are sneaking in, such as the requirement of non-contextuality; which is totally incomprehensible from a purely algorithmic viewpoint (although it does make somewhat more sense if there is a postulated ontology).

You may call my approach teleological. I ask, what must the laws of physics be like so that the "ordinary" objects which surround us can exist? You stop at the fundamental laws and take them for God-given. If you want to go further and understand a fundamental theory, the teleological (not theological!) approach is the only viable one: explaining why (in the teleological sense) the laws of physics are just so.

But, as I said, you DIDN'T derive the laws of quantum theory. You sneaked in all necessary conditions to ARRIVE at them. I gave a few examples of algorithmic possibilities, which are NOT realisable by a quantum formalism.

It ought to be clear by now that I reject the view that measurements have anything to do with conscious observations. Measurements are presupposed by the quantum formalism since all it does is correlate measurement outcomes.

But I (think I) understand your viewpoint, which is "minimalistic", and which is the "shut up and calculate" approach: you say, intuitively, we arrive at setting up our quantum formalism for most lab situations, this gives us the structure of the Hilbert space and so on, we have some intuitive hocus-pocus to think up the correct Hamiltonian that corresponds to the case at hand, and we intuitively think we know what we are measuring at the end. We now turn the mathematical handle of the quantum formalism, and out come probabilities of outcomes. They fit (or they don't fit) with experiment. Period.
Sure. But now here's my question: why do you think that a voltmeter is "measuring volts" and not "particle position" or "the color of your eyes"? Because the salesman told you that it is a voltmeter? The only answer you can provide is probably that voltmeters are exactly that: things that measure volts. But imagine I sneak into your lab and change your voltmeter into a bolometer, without you noticing. Suddenly you find strange results. But a "general probability generating algorithm" should have no difficulty adapting to the situation, no? So what's going to be your reaction to the readings of your changed "voltmeter"? Worse, if "volts" are what your "voltmeter" is measuring, there is no way for you to find out that I fiddled with your apparatus, because it is the *defining entity* of what volts are, according to your statement.

Attempts to make the quantum formalism consistent with the existence of measurements are therefore misconceived. Since it presupposes measurements, it is trivially consistent with their existence. Any notion to the contrary arises from misconceptions that must be identified and eliminated.

How do you then use the quantum formalism in the design of measurement apparatus?

So what are measurements? Any event or state of affairs from which the truth or the falsity of a statement about the world can be inferred, qualifies as a measurement, regardless of whether anyone is around to make that inference.
How the "apparatus" links to an observable? It defines it. Consider an electron spin associated with the ket |z+>. What do we know about this spin? All we know is how it behaves in any given measurement context, that is, we know the possible outcomes and we can calculate their probabilities. By defining - and not just defining but realizing - an axis, the setup makes available two possible values; it creates possibilities to which probabilities can be assigned. In the absence of an apparatus that realizes a particular axis, the properties "up" and "down" do not even exist as possibilities. The idea that |z+> represents something as it is, all by itself, rather than as it behaves in possible measurement situations, is completely vacuous.

But how do you then distinguish between different measurement apparatuses? How are you going to analyse an apparatus and make sure that there is not simply a coin-flipping device inside, while the display reads: "spin-measurement apparatus: used axis: Z"?

What IS a measurement apparatus? How do you make one? And how do you determine what it measures?

And the same applies to all quantum states, wave functions, etc.
Does the measurement apparatus have ontological existence? Certainly. Any macroscopic object has, and so has everything that can be inferred from a measurement (as defined above).

So the position of a particle "exists"? And its momentum "exists"? What does that mean, for a particle to have a position and a momentum? Does that mean that my particle really IS there somewhere, and is MOVING in a certain direction? At any moment? And if we analyse a double-slit experiment with that? Does this mean that my particle has an ONTOLOGICALLY EXISTING POSITION at any moment in time (because it could POTENTIALLY be measured)? But because of its fuzziness, it is at several places at once? In other words, it is ontologically, at every moment in time, in a superposition of precise position states? And at the same time, ontologically, it has several momentum values as a fuzzy quantity? In other words, it is at the same time in a superposition of precise momentum states?
But didn't we just give ONTOLOGICAL EXISTENCE to the wavefunction then?? So what's all the fuss then about "one should not give ontological existence to the wavefunction"?

In conclusion: any physical theory that grants this special status of "measurements are given" makes it impossible to DESIGN measurement apparatus. As designing them is my professional activity, I can attest that it is an annoying feature of a physical theory that I'm not entitled to analyse the physics of a measurement apparatus!
 
Last edited:
  • #29
So? In quantum mechanics we have measurement outcomes (possibilities) and an algorithm that assigns to them probabilities.
That's not how possibility theory works.

Evidence theory studies something called a belief measure and a plausibility measure. These are nonadditive measures, but they do live in [0, 1], and map the empty set to zero and the whole space to 1.

A belief measure Bel is a measure satisfying the superadditivity condition \text{Bel}(A \cup B) \geq \text{Bel}(A) + \text{Bel}(B) whenever A and B are disjoint.

And we associate with it a plausibility measure Pl by requiring that, if B is the complement of A, then \text{Pl}(A) + \text{Bel}(B) = 1.

Possibility theory deals with the case when:

\text{Bel}(A \cap B) = \min \{ \text{Bel}(A), \text{Bel}(B) \}
\text{Pl}(A \cup B) = \max \{ \text{Pl}(A), \text{Pl}(B) \}

In this case, we call them necessity and possibility measures, respectively.
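For readers unfamiliar with these measures, here is a minimal Python sketch of the standard construction of a possibility measure from a possibility distribution over a finite universe (Pos(A) is the maximum of the distribution over A, and Nec is its dual); the distribution values are made up for illustration. It exhibits the max rule quoted above and the nonadditivity that separates possibility from probability.

```python
# Hypothetical possibility distribution pi over a finite universe;
# by convention its maximum over the whole universe is 1.
pi = {"x1": 1.0, "x2": 0.7, "x3": 0.3}
universe = set(pi)

def possibility(A):
    """Pos(A) = max of pi over A (0 for the empty set)."""
    return max((pi[x] for x in A), default=0.0)

def necessity(A):
    """Nec(A) = 1 - Pos(complement of A), the dual (necessity) measure."""
    return 1.0 - possibility(universe - set(A))

A = {"x1"}
B = {"x2", "x3"}

# The max rule: Pos(A u B) = max(Pos(A), Pos(B)).
print(possibility(A | B), max(possibility(A), possibility(B)))  # 1.0 1.0

# Nonadditivity: Pos(A) + Pos(B) can exceed 1 even for disjoint A, B,
# which no probability measure allows.
print(possibility(A) + possibility(B))  # 1.7
```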


I have no idea if it's possible, in general, to come up with a reasonable way to take a necessity and a possibility measure and produce a probability measure.

But even if you can, you would generally lose information in the translation.


The reason to use probabilities is because probabilities seem to work well -- AFAIK there is no higher reason. I do begin to wonder, now, if the reason probabilities work well is because we design experiments that look for probabilities. :smile: To quote another sentence from the text:

probability theory is an ideal tool for formalizing uncertainty in situations where class frequencies are known or where evidence is based on outcomes of a sufficiently long series of independent random experiments.​

so that probabilities are good for talking about the kinds of experiments we do. On the other hand...

Possibility theory, on the other hand, is ideal for formalizing incomplete information expressed in terms of fuzzy propositions​

which sounds a lot like the fundamental uncertainty posited by quantum mechanics.


Of course, this book is not about physical foundations -- it would be talking about subjective probability/possibility, so these comments may not be applicable at all.
 
  • #30
vanesch said:
I'd think that there are two ways of doing what you want to do.
Great. Then only one needs to be eliminated.
One can say that, to each "compatible" (to be defined at will) set of observables corresponds a different probability space, and the observables are then random variables over this space. THIS is the most general random algorithm. The projection of a ray in a vector space is way more restrictive, and I don't see why this must be the case.
This is indeed the most general algorithm but it can be narrowed down (via Gleason's theorem) to the conventional Hilbert space formalism. This is shown in J.M. Jauch, Foundations of Quantum Mechanics (Reading, MA: Addison-Wesley, 1968). Also, "compatible" is not defined at will. Once you have the Hilbert space formalism, it is obvious how to define compatibility.
Ok, this is an explicit requirement of non-contextuality. Why?
I admit that this requirement is not inevitable. As you pointed out, probabilities can depend on measurement contexts; in a different context the same outcome need not have the same probability. In the context of composite systems contextual observables are indeed readily identified, as they are if we allow probability assignments based on earlier and later outcomes using the ABL rule (so named after Aharonov, Bergmann, and Lebowitz) instead of the Born rule, which assigns probabilities on the basis of earlier or later outcomes.
However, my first aim is to make quantum mechanics comprehensible to bright kids (something that is sorely needed) rather than to hardened quantum mechanicians (for whom there is little hope anymore), and those kids are as happy with this commonsense requirement as they are astonished by the contextualities that arise when systems are combined or when probabilities are assigned symmetrically with respect to time.
My second aim is to find the simplest set of laws that permits the existence of "ordinary" objects, and therefore I require non-contextuality wherever it is possible at all. Nature appears to take the same approach.
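The kind of non-contextuality at issue — the probability of an outcome on one subsystem not depending on which compatible measurement is made on the other — can be illustrated with a small numerical sketch (Python with NumPy; the entangled two-qubit state is a standard textbook example, not taken from these posts):

```python
import numpy as np

# Hypothetical two-qubit state: |psi> = (|00> + |11>) / sqrt(2).
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)

I = np.eye(2)
Pz_up = np.array([[1.0, 0.0], [0.0, 0.0]])       # projector onto |z+>
Px_up = np.array([[1.0, 1.0], [1.0, 1.0]]) / 2   # projector onto |x+>

def prob(P1, P2):
    """Born probability of the joint outcome (P1 on qubit 1, P2 on qubit 2)."""
    return float(psi @ np.kron(P1, P2) @ psi)

# Probability of z+ on qubit 1, summed over qubit 2's outcomes, computed
# in two different measurement contexts for qubit 2 (z basis vs x basis):
p_in_z_context = prob(Pz_up, Pz_up) + prob(Pz_up, I - Pz_up)
p_in_x_context = prob(Pz_up, Px_up) + prob(Pz_up, I - Px_up)
print(p_in_z_context, p_in_x_context)  # both approximately 0.5
```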
I had the impression you wanted to show that quantum theory is nothing else but a kind of "general scheme of writing down a generator for probability algorithms of observations", but we've made quite some hypotheses along the way!
Sorry if I gave the wrong impression. Not a "general scheme, period" but a general scheme for dealing with the objectively fuzzy observables that we need if we want to have "ordinary" objects. We started out with a discussion of objective probabilities, which certainly raises lots of questions. To be able to answer these questions consistently, I have to repudiate more than one accepted prejudice about quantum mechanics.
the non-contextuality requirement goes against the spirit of denying an ontological status to the "quantity to be measured outside of its measurement"…. [it] REQUIRES THE POSTULATION OF SOME ONTOLOGICAL EXISTENCE OF A QUANTITY INDEPENDENT OF A MEASUREMENT - which is, according to your view, strictly forbidden.
Whereas non-contextuality is implied by an ontology of self-existent positions (or values of whatever kind), it doesn’t imply such an ontology.
BTW, the above illustrates the "economy of concept" that results from postulating an ontology, and the intuitive help it provides. The unrelated statements "ruler says position 5.4cm" and "fine ruler says 5.43cm" which are hard to make any sense of, become suddenly almost trivial concepts when we say that there IS a particle, and that we have tried to find its position using two physical experiments, one with a better resolution than the other.
Have you now turned from an Everettic into a Bohmian? How come you seem to be all praise for intuitive concepts when a few moments ago you spurned them? And how is it that "ruler says position 5.4cm" is hard to make sense of for non-Bohmians? I find statements about self-existing positions or "regions of space" harder to make sense of. If I have a detector monitoring the interval from 5.4 to 5.6 (or from 5.40 to 5.41 for that matter) then I know what I am talking about. The detector is needed to realize (make real) this interval or region of space. It makes the property of being in this interval available for attribution. Then it only takes a click to make it "stick" to a particle.
When we come to the non-contextuality requirement, I ask my students to assume that p(C)=1, 0<p(A)<1, and 0<p(B)<1. (Recall: A and B are disjoint regions, C is their union, and p(C) is the probability of finding the particle in C if the appropriate measurement is made.) Then I ask: since neither of the detectors monitoring A and B, respectively, is certain to click, how come it is certain that either of them will click? The likely answer: "So what? If p(C)=1 then the particle is in C, and if it isn’t in A (no click), then it is in B (click)." Economy of concept but wrong!
At this point the students are well aware that (paraphrasing Wheeler) no property is a possessed property unless it is a measured property. They have discussed several experiments (Mermin's "simplest version" of Bell's theorem, the experiments of Hardy, GHZ, and ESW) all of which illustrate that assuming self-existent values leads to contradictions. So I ask them again: how come either counter will click if neither counter is certain to click? Bafflement.
Actually the answer is elementary, for implicit in every quantum-mechanical probability assignment is the assumption that a measurement is made. It is always taken for granted that the probabilities of the possible outcomes add up to 1. There is therefore no need to explain this. But there is a lesson here: not even probability 1 is sufficient for "is" or "has". P(C)=1 does not mean that the particle is in C but only that it is certain to be found in C provided that the appropriate measurement is made. Farewell to Einstein's "elements of reality". Farewell to van Fraassen's eigenstate-eigenvalue link.
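The additivity that is "built into" every quantum probability assignment can be made concrete with a toy Born-rule calculation (a hypothetical particle-in-a-box wavefunction sampled on a grid; the regions A, B, C mirror the classroom example above):

```python
import numpy as np

# Hypothetical normalized wavefunction on a grid over [0, 1]:
# the ground state of a box, psi(x) ~ sin(pi x).
x = np.linspace(0.0, 1.0, 1000)
dx = x[1] - x[0]
psi = np.sin(np.pi * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)   # normalize so total probability is 1

def prob(region):
    """Born probability of finding the particle in a region (a boolean mask)."""
    return float(np.sum(np.abs(psi[region]) ** 2) * dx)

A = (x >= 0.0) & (x < 0.5)   # disjoint regions, as in the example
B = (x >= 0.5) & (x <= 1.0)
C = A | B                    # their union

# Additivity holds by construction: p(A) + p(B) = p(C) = 1 here, with no
# assumption that the particle "is" in A or in B before the measurement.
print(round(prob(A) + prob(B), 6), round(prob(C), 6))  # 1.0 1.0
```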
You say "there IS a particle". What does this mean? It means there is a conservation law (only in non-relativistic quantum mechanics, though) which tells us that every time we make a position measurement exactly one detector clicks. If every time exactly two detectors click, we say there are two particles.
Well, these fictions are strong conceptual economies.
It might be better to call them visual aids or heuristic tools.
For instance, if I have a static electrostatic field, I'm not really surprised that a charge can accelerate one way or another, but that the DIRECTION of its acceleration at a certain position is always the same: the electric field vector is pointing in one and only one direction! Now, if I see this as an ALGORITHM, then I don't see, a priori, why suddenly charges could not decide to go a bit in all possible directions as a function of their charge. I can imagine writing myself any algorithm that can do that. But when I physically think of the electric field at a point, I find a natural explanation for this single direction.
I don’t deny that thinking of the electromagnetic field as a tensor sitting at every spacetime point is a powerful visual aid to solving problems in classical electrodynamics. If you only want to use the physics, this is OK. But not if you want to understand it. There just isn’t any way in which one and the same thing can be both a computational tool and a physical entity in its own right. The "classical" habit of transmogrifying computational devices into physical entities is one of the chief reasons why we fail to make sense of the quantum formalism, for in quantum physics the same sleight of hand only produces pseudo-problems and gratuitous solutions.
You also get pseudo-problems in the classical context. Instead of thinking of the electromagnetic field as a tool for calculating the interactions between charges, you think of charges as interacting with the electromagnetic field. How does this interaction work? We have a tool for calculating the interactions between charges, but no tool for calculating the interactions between charges and the electromagnetic field. With the notable exception of Roger Boscovich, a Croatian physicist and philosopher of the 18th Century, nobody seems to have noticed that local action is as unintelligible as the ability of material objects to act where they are not. Why do we stop worrying once we have transmuted the mystery of action at a distance into the mystery of local action? Is this the answer?:
Physicists are, at bottom, a naive breed, forever trying to come to terms with the 'world out there' by methods which, however imaginative and refined, involve in essence the same element of contact as a well-placed kick. (B.S. DeWitt and R.N. Graham, Resource letter IQM-1 on the interpretation of quantum mechanics, AJP 39, pp. 724-38, 1971.)
As an example, let us say that measurement M1 of O takes on the possible outcomes {A,B,C}, with A standing for "O is 1 or 2", B standing for "O is 3 or 4" and C standing for "O is 5 or 6".
Measurement M2 has 6 possible outcomes, {a,b,c,d,e,f}, with a standing for "O is 1", b standing for "O is 2" etc... Now, you want a probability distribution to be assigned to a potential measurement. Fine:
potential measurement M1 of O: p(A) = 0.6, p(B) = 0.4, p(C) = 0.0
Potential measurement M2 of O: p(a) = 0.1, p(b) = 0.1, p(c) = 0.1, p(d)= 0.1, p(e)=0.1, p(f) = 0.5
I have assigned probabilities to the outcomes of measurements M1 and M2. You cannot reproduce this with standard quantum theory, so it is NOT a universal probability-of-potential-compatible-measurements description algorithm.
As I have pointed out, there are additional factors that narrow down the range of possible algorithms. I never claimed that kind of arbitrariness for the quantum-mechanical algorithm.
And if you now say that p(f) = 0.5 with p(C) is IMPOSSIBLE because "O cannot be at the same time NOT in {5,6} and equal to 6", then you have assigned a measurement-independent reality (ontology) to the quantity O.
But I never say that! I wouldn't even consider O in the M1 context to be the same observable as O in the M2 context. Observables are defined by how they are measured, what are the possible outcomes, and what other measurements are made at the same time.
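The incompatibility of these two assignments with a single measurement-independent distribution over O can be checked directly. A minimal Python sketch (the dictionary names are mine):

```python
# Hypothetical probability assignments from the example above.
# M1 outcomes: A = "O in {1,2}", B = "O in {3,4}", C = "O in {5,6}"
# M2 outcomes: a..f = "O is 1" .. "O is 6"
pM1 = {"A": 0.6, "B": 0.4, "C": 0.0}
pM2 = {"a": 0.1, "b": 0.1, "c": 0.1, "d": 0.1, "e": 0.1, "f": 0.5}

# If O had one measurement-independent distribution over {1,...,6},
# coarse-graining the M2 assignment would have to reproduce M1:
coarse = {"A": pM2["a"] + pM2["b"],
          "B": pM2["c"] + pM2["d"],
          "C": pM2["e"] + pM2["f"]}

for outcome in "ABC":
    print(outcome, coarse[outcome], pM1[outcome])
# For C the coarse-grained value is 0.6 while the M1 assignment is 0.0:
# no single distribution over the values of O yields both assignments.
```

So the two assignments are perfectly consistent as assignments to two *different* measurement contexts, and inconsistent only if one insists on a context-independent value of O.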
 
  • #31
Hurkyl said:
That's not how possibility theory works. Evidence theory studies something called a belief measure and a plausibility measure... Of course, this book is not about physical foundations -- it would be talking about subjective probability/possibility, so these comments may not be applicable at all.
I think your hunch is correct. The quantum-mechanical assignments of observable probabilities have nothing to do with belief or plausibility. Let me requote Mermin: "in a non-deterministic world, probability has nothing to do with incomplete knowledge. Quantum mechanics is the first example in human experience where probabilities play an essential role even when there is nothing to be ignorant about."
 
  • #32
So at a certain point, you have to link your formal terms in your mathematical formalism to qualia, to subjective experiences. *This* is the essence of the interpretation of ANY theory, classical, quantum or otherwise. It is why I always insist on the fact that there is no fundamental difference between the "measurement problem" in quantum theory, and the one in classical theory ; although the POSTULATE that assigns qualia to formal mathematical elements is simpler in classical theory.
My reply to d'Espagnat (http://xxx.lanl.gov/abs/quant-ph/0102103) was that his argument for weak objectivity = inter-subjective agreement is a cop-out. (I take it that d'Espagnat's weak objectivity corresponds to what you call solipsism.) My point was that it is our duty as physicists to find what Fuchs and Peres called a "freestanding reality" (which they claim quantum mechanics doesn’t allow). According to d'Espagnat, the elision of the subject is not possible within unadulterated, standard quantum mechanics. I maintain that it is possible. I want a conception of the quantum world to which the conscious subject is as irrelevant as it was to the classical view of the world. It's rather like a game I like to play: let's go find a strongly objective conception of the quantum world that owes nothing to subjects or conscious observations. It is precisely for this reason that I reject the naïve quantum realism that identifies reality with symbols of the mathematical formalism.
this is the point where we seem to differ in opinion: the *hypothesis* (and it will never be anything else, granted) of an objective ontology IS a useful hypothesis.
As you can see, we are in perfect agreement even here.
With an ontological interpretation, there are grounds to reject astrology; in a purely algorithmic concept, no such grounds exist.
While I'm certainly no believer in astrology, what you're saying is that your grounds for rejecting astrology are not scientific but metaphysical. That's not good enough for me.
It's a sleight of hand what you present. You DIDN'T present any REASON why the quantum formalism has the form it has, although you seem to claim so.
What I show is that if the quantum formalism didn’t have the form that it does then the familiar objects that surround us couldn’t exist. I pointed out that this is a teleological reason, and you are free to deny that teleological reasons are REASONS. But keep in mind that this is the only possible reason a fundamental physical theory can have. Our difference in opinion is that, for me, a mathematical structure that exists without any reason is not an acceptable reason for the existence of everything else.
But I (think I) understand your viewpoint, which is "minimalistic", and which is the "shut up and calculate" approach.
Absolutely not. I say: stop the naïve transmogrification of mathematical symbols into ontological entities in order to be finally in a position to see the true ontological implications of the quantum formalism.
How do you use the quantum formalism then in the design of measurement apparatus ?
As I implied earlier, using physics is not the same as understanding it. Keep in mind that technological applications invariably use approximate laws, the classical laws not being the poorest of them all, and remember Feynman's insistence that "philosophically we are completely wrong with the approximate law" (Feynman's emphasis).
What IS a measurement apparatus ? How do you make one ? And how do you determine what it measures?
I could certainly answer these questions, but why should I be the first? How do you answer them?
So the position of a particle "exists"? And its momentum "exists"?
If, when, and to the extent that it is measured.
What does that mean, for a particle to have a position and a momentum?
It has a position (or momentum) if, when, and to the extent that its position (or momentum) can be inferred from something that qualifies as a measurement device (see above definition).
Does that mean that my particle IS really there somewhere, and is MOVING in a certain direction?
Nothing is there unless it is indicated by a measurement outcome.
Does this mean that my particle has an ONTOLOGICALLY EXISTING POSITION at any moment in time (because it could POTENTIALLY be measured)?
It has a position if, when, and to the extent that its position is measured. Between measurements (and also beyond the resolution of actual measurements) we can describe the particle only in terms of the probabilities of the possible outcomes of unperformed measurements. The particle isn’t like that "by itself", of course. Nothing can be said without reference to (actual or counterfactual=unperformed) measurements.
But didn't we just give an ONTOLOGICAL EXISTENCE to the wave function then ??
NO WAY!
any physical theory that takes on this special status that "measurements are given", makes it impossible to DESIGN measurement apparatus.
Nonsense.
As it is my professional activity, I can indicate that this is an annoying feature of a physical theory, that I'm not entitled to analyze the physics of a measurement apparatus!
Analyze away to your heart's content! You will be using approximate laws, and you won't be bothered about where the underlying laws come from or what their ontological implications are. You, as a professional magician, don’t need to know how the magic formulas work. You just need to use them. Contrariwise, no amount of ontological wisdom will help you even build a mousetrap.
 
Last edited by a moderator:
  • #33
koantum said:
This is indeed the most general algorithm but it can be narrowed down (via Gleason's theorem) to the conventional Hilbert space formalism. This is shown in J.M. Jauch, Foundations of Quantum Mechanics (Reading, MA: Addison-Wesley, 1968). Also, "compatible" is not defined at will. Once you have the Hilbert space formalism, it is obvious how to define compatibility.

I must have completely misunderstood you then. I thought you wanted to show the *naturalness* of the quantum-mechanical formalism, in the sense that you start by stating that we had it wrong all the way, that physical theories do not describe anything ontological, but are algorithms to compute probabilities of measurements, and that that single assumption is sufficient to arrive at the quantum-mechanical formalism.
In other words, that once we say that a physical theory is an algorithm to arrive at probabilities of measurements, then that the general framework is NECESSARILY the hilbert space formalism.
I thought that that was your whole point, and I tried to point out that this has not only not been demonstrated, but is bluntly not true. But apparently this is NOT what you want to say. I'm then at loss WHAT you want to say ? You give me a hint here:

However, my first aim is to make quantum mechanics comprehensible to bright kids (something that is sorely needed) rather than to hardened quantum mechanicians (for whom there is little hope anymore), and those kids are as happy with this commonsense requirement as they are astonished by the contextualities that arise when systems are combined or when probabilities are assigned symmetrically with respect to time.

Bright kids are amazing. They still believe what people tell them, because they don't realize they might be smarter than the guy/gal who's in front of them :-p

Seriously, now. Your approach is a valuable approach, as are many others, but I don't think you have made quantum mechanics any _clearer_. I think that an introduction to quantum theory should NOT talk about these issues, and should limit itself to a statement that there ARE issues, but that these issues can only reasonably be discussed once one understands the formalism. I think that anyone FORCING a particular view upon the novice is not doing the novice any service.

As you see, I think I'm relatively well versed in quantum theory and I don't completely agree with your view (although I can respect it, on the condition that you can be open-minded to my view too). So you should leave open that possibility to your public too, no ?

My second aim is to find the simplest set of laws that permits the existence of "ordinary" objects, and therefore I require non-contextuality wherever it is possible at all. Nature appears to take the same approach.

Ha, the simplest set of laws, to me, would be an overall probability distribution (hidden variable approach). THAT is intuitively understandable, this is what Einstein thought should be done, and this is, for instance, what Bohmians insist upon. This is the simplest, and most intuitive approach to the introduction of "ordinary" objects, no ?

Sorry if I gave the wrong impression. Not a "general scheme, period" but a general scheme for dealing with the objectively fuzzy observables that we need if we want to have "ordinary" objects. We started out with a discussion of objective probabilities, which certainly raises lots of questions. To be able to answer these questions consistently, I have to repudiate more than one accepted prejudice about quantum mechanics.

Don't you think that a Kolmogorov overall probability distribution of all potential measurement outcomes is the most obvious "general scheme for dealing with the objectively fuzzy observables" ? And then you end up for sure with something like Bohmian mechanics, no ?

Whereas non-contextuality is implied by an ontology of self-existent positions (or values of whatever kind), it doesn’t imply such an ontology.

As I said before, NOTHING implies any ontology. An ontology is a mental concept, it is a working hypothesis. This follows from the non-falsifiability of solipsism. Nothing implies any dynamics either. There can be a great lookup table in which all past, present and future events are written down, and we're just scrolling down the lookup table. Any systematics discovered in that lookup table, which we take for "laws of nature", are also a working hypothesis which is not implied.
But these considerations do not lead us anywhere.

Have you now turned from an Everettic into a Bohmian?

As you can see, I do have some sympathy for the Bohmian viewpoint too, but I was hoping you realized that my examples of rulers and so on were taken in a classical context. I wanted to indicate that if you have postulated an ontological concept from which you DERIVE observations, that this is more helpful than to stick to the observations themselves, and that such an ontology makes certain aspects, such as the relationship between different kinds of observations, more obvious.
We could apply your concept also to the classical world, and say that "matter points in space" and so on are just algorithmic elements from which we calculate probabilities, or in this case, certainties of observations. But if you take that viewpoint, nothing stops you from modifying the algorithm a little bit and making the observations context-dependent (so that there is no relationship between the position measurement with a ruler with 1mm resolution, and one with 0.1 mm resolution). If, on the contrary, you make the hypothesis of an existing ontology, which, in the classical context, is to posit that there REALLY IS a particle at a certain point in space, then the relationship between the reading on the 1mm ruler, and the 0.1 mm ruler, is evident: you're measuring the same ontological quantity "position of the particle" twice.
So, in a classical context, your approach of claiming that we should only look at an algorithm that relates outcomes of measurement, and not think of anything ontological behind it, is counter productive.

How come you seem to be all praise for intuitive concepts when a few moments ago you spurned them? And how is it that "ruler says position 5.4cm" is hard to make sense of for non-Bohmians? I find statements about self-existing positions or "regions of space" harder to make sense of.

In a classical context ?? You have difficulties imagining there is a Euclidean space in classical physics ?

Again, I was talking about the classical version. But you seemed to imply that there was also a kind of "existence" to POTENTIAL outcomes of measurement in the quantum case: it was a "fuzzy" variable, but as I understood, it DID exist, somehow. I had the impression you said that there WAS a position, even unmeasured, but that it was not a real number, but a "fuzzy variable".

Now, I take on the position that there is no such thing as a "fuzzy position" as such, but that there REALLY is a wavefunction. As there IS a matter point in Euclidean space in classical physics, there IS a wavefunction in quantum physics. This is a simplifying ontological hypothesis, as was the point in Euclidean space, no ?
A measurement apparatus ALSO has a wavefunction, and a measurement is nothing else but an interaction, acting on the overall wavefunction of the measurement apparatus and the system ; this changes the part of the wavefunction that is representing the measurement apparatus. What's wrong with that ? As the measurement apparatus' wavefunction is now usually in a superposition of different states, of which you happen to see one, this explains your observation. What's wrong with that ? At no point did I need to introduce the concept of a "potential measurement which I didn't perform", as you need to do. I just reckon that, when I DO perform a measurement, then this is the result of an interaction (just as any other interaction, btw), which puts my measurement apparatus' wavefunction in a superposition of different outcomes, of which I see one. And I don't have to say what "would" happen to a measurement that I DIDN'T perform.
I have to say that I find this viewpoint so closely related to the formal statements of quantum theory, that I wonder why it meets so much resistance, and that people need to invent strange things such as "fuzzy potential measurement results" and things like that.
Well, ok, I know why. It is the idea that "your measurement apparatus can be in a superposition but you only see one term of it" ; we're not used to thinking that there may be things "existing" which we don't "see". I agree that this has some strangeness to it, but, when considering the alternatives, I find this the least of all difficulties, and not at all conceptually destabilizing, on the contrary. The entire difficulty of quantum theory resides simply in the extra requirement that only what we see of "ordinary" objects exists.

When we come to the non-contextuality requirement, I ask my students to assume that p(C)=1, 0<p(A)<1, and 0<p(B)<1. (Recall: A and B are disjoint regions, C is their union, and p(C) is the probability of finding the particle in C if the appropriate measurement is made.) Then I ask: since neither of the detectors monitoring A and B, respectively, is certain to click, how come it is certain that either of them will click? The likely answer: "So what? If p(C)=1 then the particle is in C, and if it isn’t in A (no click), then it is in B (click)." Economy of concept but wrong!
At this point the students are well aware that (paraphrasing Wheeler) no property is a possessed property unless it is a measured property. They have discussed several experiments (Mermin's "simplest version" of Bell's theorem, the experiments of Hardy, GHZ, and ESW) all of which illustrate that assuming self-existent values leads to contradictions. So I ask them again: how come either counter will click if neither counter is certain to click? Bafflement.

Of course, bafflement, because you make the (IMO) erroneous implicit assumption of measurements of "existing" or "non-existing" quantities. But "the position of a particle" as a "potential measurement outcome" has no meaning in a quantum context. THIS is the trap.

Isn't a simpler answer: the system is in state |A> + |B> ; the detector at A, D1, interacts in the following way with this state:

|D1-0> |A> --> |D1-click> |A>
|D1-0> |B> --> |D1-noclick> |B>

D2 (detector at B) interacts in the following way with the same state:

|D2-0>|A> --> |D2-noclick> |A>
|D2-0>|B> --> |D2-click>|B>

Both together:

Initial state: |D1-0>|D2-0>(|A> + |B>)/sqrt2

--> (using linearity of the evolution operator)

(|D1-click>|D2-noclick>|A> + |D1-noclick>|D2-click>|B>)/sqrt2

There are two terms, of which you are going to observe one:
the first one is |D1-click>|D2-noclick> and the second one is |D1-noclick>|D2-click>, which you pick using the Born rule (that's the famous link between conscious observation and physical ontology).
Each branch has, according to that Born rule, a probability of 1/2 to be experienced by you.

So you have one "branch" or "world" or whatever, where you observe that D1 clicked and D2 didn't, and you have another one where D1 didn't click and D2 did. You don't have a world where D1 and D2 did click, or didn't click, so that's not an observational possibility.

No bafflement.
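For concreteness, the two-detector bookkeeping above can be checked numerically. A small numpy sketch (the 2-level encoding of the kets is my own choice):

```python
import numpy as np

# Encode |A>, |B> and each detector's click/noclick outcome as 2-level kets.
ketA, ketB = np.array([1.0, 0.0]), np.array([0.0, 1.0])
click, noclick = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def tensor(*kets):
    """Tensor product of several kets."""
    out = np.array([1.0])
    for k in kets:
        out = np.kron(out, k)
    return out

# Final state: (|D1-click>|D2-noclick>|A> + |D1-noclick>|D2-click>|B>)/sqrt2
psi = (tensor(click, noclick, ketA) + tensor(noclick, click, ketB)) / np.sqrt(2)

# Born probabilities of the two joint detector outcomes:
p1 = abs(psi @ tensor(click, noclick, ketA)) ** 2   # D1 clicks, D2 doesn't
p2 = abs(psi @ tensor(noclick, click, ketB)) ** 2   # D2 clicks, D1 doesn't
# "Both click" has zero amplitude whatever the particle state:
p_both = sum(abs(psi @ tensor(click, click, k)) ** 2 for k in (ketA, ketB))

print(p1, p2, p_both)   # approximately 0.5, 0.5, 0.0
```

The two branches each carry Born weight 1/2, and the "both click" and "neither clicks" branches simply do not occur in the final state.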

Interference ? No problem.

DA is a detector after the two slits, placed at a position of a peak in the interference pattern.
It evolves hence according to:

|DA-0> (|A> + |B>) ---> |DA-click> (|A>+|B>)

|DA-0> (|A> - |B>) ---> |DA-noclick> (|A> - |B>)

Now, first case, D1 and D2 are not present: we have the first line. The only "branch" that is present contains |DA-click>, so it clicks always.

The second case: D1 and D2 are ALSO present (the typical case where one tries to find out through which slit the particle went).

We had, after our interaction of the particle with D1 and D2, but before hitting DA:

|DA-0> (|D1-click>|D2-noclick>|A> + |D1-noclick>|D2-click>|B>)/sqrt2

now, we're going to interact with DA. By the superposition principle, we can write the interaction of DA on |A>:

|DA-0> |A> ---> (|DA-click> (|A>+|B>)+ |DA-noclick>(|A>-|B>)) /2

and:

|DA-0> |B> --> (|DA-click>(|A>+|B>) - |DA-noclick>(|A>-|B>))/2

So this gives us:

(|D1-click>|D2-noclick>(|DA-click> (|A>+|B>)+ |DA-noclick>(|A>-|B>)) /2 + |D1-noclick>|D2-click>(|DA-click>(|A>+|B>) - |DA-noclick>(|A>-|B>))/2 )/sqrt2

If we expand this, we obtain:

1/sqrt8 {
|D1-click>|D2-noclick>|DA-click>(|A>+|B>)
+ |D1-click>|D2-noclick>|DA-noclick>(|A>-|B>)
+ |D1-noclick>|D2-click>|DA-click>(|A>+|B>)
- |D1-noclick>|D2-click>|DA-noclick>(|A>-|B>)
}

There are 4 branches, of which you will experience one, using the Born rule:
1/4 probability that you will experience D1 clicking, D2 not clicking and DA clicking;
1/4 probability that you will experience D1 clicking, D2 not clicking and DA not clicking;
1/4 probability that you will experience D1 not clicking, D2 clicking and DA clicking;
1/4 probability that you will experience D1 not clicking, D2 clicking and DA not clicking.

So, always one of the two D1 or D2 clicked, and DA has one chance out of 2 to click.
We could naively and wrongly conclude from this that the particle "went" through one of the two slits.

All observational facts are explained this way. There's no "ambiguity" or "fuzziness" as to the state of the system: it has always a clearly defined wavefunction, and so do the measurement apparatuses.
There's no "bafflement" concerning the apparent clash between the "position" of the particle, and the interference pattern.
Note also that it wasn't necessary to introduce an "unavoidable disturbance" due to the measurement at the slits to make the interference pattern "disappear".
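The interference bookkeeping above can be checked the same way. A numpy sketch (again with my own 2-level encoding): DA's action on |A> and |B> is the linear map written earlier, and one can verify that DA always clicks when no which-path detectors are present, but clicks only half the time in each which-path branch:

```python
import numpy as np

ketA, ketB = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def DA(particle):
    """Linear extension of
    |DA-0>|A> -> (|click>(|A>+|B>) + |noclick>(|A>-|B>))/2
    |DA-0>|B> -> (|click>(|A>+|B>) - |noclick>(|A>-|B>))/2
    with |click> = (1,0), |noclick> = (0,1) on the DA factor."""
    click, noclick = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    plus, minus = ketA + ketB, ketA - ketB
    aA, aB = particle
    return (aA * (np.kron(click, plus) + np.kron(noclick, minus))
            + aB * (np.kron(click, plus) - np.kron(noclick, minus))) / 2

def p_click(state):
    """Born probability that DA clicked (first two components carry |click>)."""
    return float(np.linalg.norm(state[:2]) ** 2)

# No which-path detectors: particle reaches DA in (|A>+|B>)/sqrt2.
print(p_click(DA((ketA + ketB) / np.sqrt(2))))   # approximately 1.0 (peak)

# With D1/D2 present: in each branch the particle is effectively |A> or |B>.
print(p_click(DA(ketA)), p_click(DA(ketB)))      # approximately 0.5 each
```

No "disturbance" enters anywhere: the loss of interference is just the linearity of the evolution applied to the already-entangled state.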

Actually the answer is elementary, for implicit in every quantum-mechanical probability assignment is the assumption that a measurement is made. It is always taken for granted that the probabilities of the possible outcomes add up to 1. There is therefore no need to explain this. But there is a lesson here: not even probability 1 is sufficient for "is" or "has". P(C)=1 does not mean that the particle is in C but only that it is certain to be found in C provided that the appropriate measurement is made.

Entirely correct. This is because there IS no such thing as a "potential position measurement result" ontology.

Farewell to Einstein's "elements of reality". Farewell to van Fraassen's eigenstate-eigenvalue link.

Well, Einstein's elements of reality are simply the wavefunction, and everything becomes clear, no ? The error is to think that there is some reality to "potential measurement outcomes".

You say "there IS a particle". What does this mean? It means there is a conservation law (only in non-relativistic quantum mechanics, though) which tells us that every time we make a position measurement exactly one detector clicks. If every time exactly two detectors click, we say there are two particles.

No, my example was taken from classical physics.
Look at the above for the view on the quantum version. "potential position measurement" has no meaning there. Interaction with measurement apparatus has, and the wavefunction has a meaning.

I don’t deny that thinking of the electromagnetic field as a tensor sitting at every spacetime point is a powerful visual aid to solving problems in classical electrodynamics. If you only want to use the physics, this is OK. But not if you want to understand it. There just isn’t any way in which one and the same thing can be both a computational tool and a physical entity in its own right.

This is a strange statement, because I'm convinced of the opposite. To me, the fundamental dogma of physics is the assumption that all of nature IS a mathematical structure (or, if you want to, that maps perfectly on a mathematical structure). Up to us to discover that structure. It's a Platonic view of things.

The "classical" habit of transmogrifying computational devices into physical entities is one of the chief reasons why we fail to make sense of the quantum formalism, for in quantum physics the same sleight of hand only produces pseudo-problems and gratuitous solutions.

No, I don't think so. I think what is really making for all these pseudoproblems is our insistence of "what we see is (only) what is there", instead of "what we see can be derived from what is there". The naive realism view.

You also get pseudo-problems in the classical context. Instead of thinking of the electromagnetic field as a tool for calculating the interactions between charges, you think of charges as interacting with the electromagnetic field. How does this interaction work? We have a tool for calculating the interactions between charges, but no tool for calculating the interactions between charges and the electromagnetic field.

I don't follow what you're talking about ? We have no tool for calculating the interactions between charges and the EM field ?


Physicists are, at bottom, a naive breed, forever trying to come to terms with the 'world out there' by methods which, however imaginative and refined, involve in essence the same element of contact as a well-placed kick. (B.S. DeWitt and R.N. Graham, Resource letter IQM-1 on the interpretation of quantum mechanics, AJP 39, pp. 724-38, 1971.)

Indeed, "naive realism"!
 
  • #34
koantum said:
(I take it that d'Espagnat's weak objectivity corresponds to what you call solipsism.)
 
Not at all. Solipsism is the denial of the existence of an objective ontology and the idea that you are the one and only sole subjective experience ; in other words, that all you ever sensed are nothing else but illusions of a subjective experience. Your body doesn't exist, your brain doesn't exist, the world doesn't exist ; only your subjective experience exists.
This is as undeniable a possibility as it is useless as a working hypothesis.
 
My point was that it is our duty as physicists to find what Fuchs and Peres called a "freestanding reality" (which they claim quantum mechanics doesn’t allow). According to d'Espagnat, the elision of the subject is not possible within unadulterated, standard quantum mechanics. I maintain that it is possible. I want a conception of the quantum world to which the conscious subject is as irrelevant as it was to the classical view of the world. It's rather like a game I like to play: let's go find a strongly objective conception of the quantum world that owes nothing to subjects or conscious observations. It is precisely for this reason that I reject the naïve quantum realism that identifies reality with symbols of the mathematical formalism.

Well, unless I misunderstood you, I don't see how you are constructing a conception of the quantum world which is strongly objective, if you START by saying that we only have an algorithm, and no description!
(or I must have seriously misunderstood you)

While I'm certainly no believer in astrology, what you're saying is that your grounds for rejecting astrology are not scientific but metaphysical. That's not good enough for me.

It is very difficult to reject astrology *empirically* (with the usual vagueness of the used terms, and the complexity of the subjects addressed, such as your happiness in love or something).

What I show is that if the quantum formalism didn’t have the form that it does then the familiar objects that surround us couldn’t exist.

Ahum. Well, you consider quantum theory then solidly PROVEN beyond doubt, and by pure reasoning ?? And what if one day, quantum theory is falsified ? Do familiar objects disappear in a puff of logic then ?

Our difference in opinion is that, for me, a mathematical structure that exists without any reason is not an acceptable reason for the existence of everything else.

So you seem to claim that, from the pure observation of the existence of ordinary objects, the ONE AND ONLY POSSIBLE PHYSICAL THEORY that makes logical sense is quantum theory ? No need for any empirical input then ? If only we would have been thinking harder, it would have been OBVIOUS that quantum theory is the ultimate correct theory ?

and remember Feynman's insistence that "philosophically we are completely wrong with the approximate law" (Feynman's emphasis).

But of course we are completely "wrong". I'd bet that even today, we are "completely wrong", and that 500 or 1000 years from now, quantum theory will be an old and forgotten theory (except maybe for simplified calculations, as is classical physics today). Quantum theory being the current paradigm, it is only waiting to be falsified, no ? And to be replaced by something else. Which will then also be falsified. Or then maybe not. So of course the metaphysical, ontological pictures that are suggested by our current theories are "completely wrong". And so will be the next ones, etc... In other words, we will NEVER know what is "really out there" (IF there even is such a thing, cfr solipsism). We will always be wrong. But we will have more and more refined mental pictures (= ontologies) of nature.

But that doesn't mean that in the meantime, we should not build up an ontological picture of what we have, now, today, in order to make sense of it. With a formalism comes an ontology. You change the formalism, you change the ontology. You work in classical physics: take a classical ontology. You use quantum theory: take the ontology that goes with it. But try to force upon a certain formalism the ontology of another one, and you have trouble. Try to force upon quantum theory the ontology of classical physics, and you create a whole lot of pseudoproblems. The rule: the fundamental formal elements of a theory dictate the ontologically existing elements according to that theory. It gives you the most useful mental picture to work with and to develop an intuition for.
 
  • #35
The quantum-mechanical assignments of observable probabilities have nothing to do with belief or plausibility. Let me requote Mermin: "in a non-deterministic world, probability has nothing to do with incomplete knowledge. Quantum mechanics is the first example in human experience where probabilities play an essential role even when there is nothing to be ignorant about."
They're just names, and you shouldn't read things into them -- just like the fact the "rational numbers" are not somehow more logical than the "irrational numbers", and the "real numbers" are no more real than the "imaginary numbers".

There's no evident reason why the underlying physical measure should be a probability measure -- why isn't it possible, for example, for

P(particle in (0, 2))

to be bigger than

P(particle in (0, 1)) + P(particle in (1, 2))

? Or maybe that there isn't some sort of fundamental measure on the outcome space?
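(For what it's worth, in the standard formalism the additivity over disjoint regions is automatic, because the Born probabilities come from the measure |psi|^2. A quick numerical sketch, with a toy discretized wavefunction of my own choosing:

```python
import numpy as np

# Toy discretized wavefunction on [0, 2); Born probabilities as sums of |psi|^2.
x = np.linspace(0.0, 2.0, 1000, endpoint=False)
psi = np.exp(-4.0 * (x - 1.0) ** 2)        # arbitrary amplitude profile
psi = psi / np.linalg.norm(psi)            # normalize: total probability 1

def P(lo, hi):
    """Probability of finding the particle in [lo, hi)."""
    mask = (x >= lo) & (x < hi)
    return float(np.sum(np.abs(psi[mask]) ** 2))

print(P(0, 2))                  # approximately 1.0
print(P(0, 1) + P(1, 2))        # same value: additive over disjoint regions
```

So the question is really why nature uses a measure of this |psi|^2 form at all, not whether the resulting assignments are additive once it does.)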


At the moment, in a non-MWI type interpretation, I see no possible theoretical or intuitive justification for the use of probabilities. If I understand my history correctly, we actually have the following:

(1) We use subjective probabilities in classical physics.
(2) Quantum came along, and we used it to compute probabilities.
(3) We failed to come up with a classical interpretation of QM.
(4) So, we promote probabilities to a fundamental status in QM.

so we actually have quite the opposite -- probabilities have achieved a fundamental status in QM because it was doing a good job predicting the outcomes of our frequency-counting experiments... not because there was some theoretical or intuitive reason to do so.


In MWI, though, there is at least the possibility of deriving probabilities as emergent phenomena, by considering a limit of the resulting states of frequency-counting experiments of increasing length.
 
  • #36
Hurkyl said:
There's no evident reason why the underlying physical measure should be a probability measure -- why isn't it possible, for example, for

P(particle in (0, 2))

to be bigger than

P(particle in (0, 1)) + P(particle in (1, 2))

I like it :smile: This is the famous "non-contextuality" requirement!
 
  • #37
koantum said:
I don’t deny that thinking of the electromagnetic field as a tensor sitting at every spacetime point is a powerful visual aid to solving problems in classical electrodynamics. If you only want to use the physics, this is OK. But not if you want to understand it. There just isn’t any way in which one and the same thing can be both a computational tool and a physical entity in its own right. The "classical" habit of transmogrifying computational devices into physical entities is one of the chief reasons why we fail to make sense of the quantum formalism, for in quantum physics the same sleight of hand only produces pseudo-problems and gratuitous solutions.

You also get pseudo-problems in the classical context. Instead of thinking of the electromagnetic field as a tool for calculating the interactions between charges, you think of charges as interacting with the electromagnetic field. How does this interaction work? We have a tool for calculating the interactions between charges, but no tool for calculating the interactions between charges and the electromagnetic field. With the notable exception of Roger Boscovich, a Croatian physicist and philosopher of the 18th Century, nobody seems to have noticed that local action is as unintelligible as the ability of material objects to act where they are not. Why do we stop worrying once we have transmuted the mystery of action at a distance into the mystery of local action?

and later

According to d'Espagnat, the elision of the subject is not possible within unadulterated, standard quantum mechanics. I maintain that it is possible. I want a conception of the quantum world to which the conscious subject is as irrelevant as it was to the classical view of the world. It's rather like a game I like to play: let's go find a strongly objective conception of the quantum world that owes nothing to subjects or conscious observations. It is precisely for this reason that I reject the naïve quantum realism that identifies reality with symbols of the mathematical formalism.

I wish I had been taught this way back in 1977 when I took my first QM class and was blown away by the lack of explanation for the meaning of the theory.

This seems related to Mach's program and the foundations of relativity. Instead of interpreting the Lorentz and Poincare calculations as indications of how calculations depend on frame of reference, the inclination is to suppose that these formulas are spacetime itself. Poincare's version of relativity still supposed an ether, but made it unobservable. And now, for most physicists, the absence of the necessity of a preferred frame of reference in Einstein's theory is considered as proof that no such thing can exist.

Carl
 
  • #38
hurkyl said:
probabilities have achieved a fundamental status in QM because it was doing a good job predicting the outcomes of our frequency-counting experiments... not because there was some theoretical or intuitive reason to do so.
As a rule, a lucky guess comes first; the reason why it was lucky is found later.
In MWI, though, there is at least the possibility of deriving probabilities as emergent phenomena, by considering a limit of the resulting states of frequency-counting experiments of increasing length.
No, there isn’t. Even vanesch agreed with that (https://www.physicsforums.com/showpost.php?p=948676&postcount=16).
 
Last edited by a moderator:
  • #39
vanesch said:
I must have completely misunderstood you then.
If I talk to quantum-state realists, they think I belong to the shut-up-and-calculate (SUC) sect. If I talk to members of this sect, they take me for a quantum-state realist. As far as I am concerned, they are both wrong.
The SUCers assert that the quantum world cannot be described; its features are forever beyond our ken. All we can usefully talk about is statistical correlations between measurement outcomes. Indeed, nothing of relevance can be said without reference to measurements, but this does not mean that the features of the quantum world are beyond our ken. A great deal can be learned by analyzing the quantum-mechanical probability assignments in various experimental contexts, as I do at http://thisquantumworld.com.
The quantum-state realists aspire to describe the objective features of the quantum world. So do I. But they insist on describing these features without reference to measurements, and this doesn’t work.
I thought you wanted to show the *naturalness* of the quantum-mechanical formalism, in the sense that you start by stating that we had it wrong all the way, that physical theories do not describe anything ontological, but are algorithms to compute probabilities of measurements, and that that single assumption is sufficient to arrive at the quantum-mechanical formalism.
Gosh, I wish I could do that. But on second thoughts, to what avail?
I don't think you have made quantum mechanics any _clearer_. I think that an introduction to quantum theory should NOT talk about these issues, and should limit itself to a statement that there ARE issues, but that these issues can only reasonably be discussed once one understands the formalism. I think that anyone FORCING a particular view upon the novice is not doing the novice any service.
I have given you just a glimpse of the way I teach quantum mechanics. You are not in a position to judge on that basis. I certainly show the students how quantum physics is taught elsewhere. There you are confronted with a set of axioms like the following:
  1. There are quantum systems.
  2. The states of system S are the unit vectors of a Hilbert space.
  3. Between measurements, states evolve unitarily, as dictated by a Schrödinger equation involving a Hermitean operator called the Hamiltonian.
  4. Every observable O is associated with a Hermitean operator O.
  5. The possible outcomes of a measurement of O are the eigenvalues of O.
  6. Immediately after a measurement yields a particular eigenvalue, the system is in the eigenstate corresponding to that eigenvalue.
  7. If the system is in state |u>, then a measurement of O yields the value v with probability |<v|u>|^2.
  8. Composite quantum systems are described by the tensor product of the Hilbert spaces of their component systems.
Next you are told that this is the way it is. Admittedly, there are problems with this set of axioms, but you won't be able to understand them. So for now, shut up and calculate. If you want to know, for example, why energy is a Hermitean operator, you have to figure it out for yourself. You are not even told that the concept of energy in quantum mechanics is totally different from the corresponding classical concept. Since you had the misfortune of learning classical physics first, you come to quantum physics with a heavy load of inadequate concepts, and nobody tells you this. Then you try to make sense of it all. Poor chap. Small wonder that generation after generation of physics students is told:
Do not keep saying to yourself, if you can possibly avoid it, "But how can it be like that?" because you will go "down the drain" into a blind alley from which nobody has yet escaped. Nobody knows how it can be like that. (Feynman, The Character of Physical Law, 1967)​
If those poor students were told that the quantum formalism is nothing but an algorithm for assigning probabilities to possible measurement outcomes on the basis of actual outcomes, all of the above axioms would at once make sense to them. The difference between the quantal and the classical probability algorithm is readily understood as a consequence of Nature's fuzziness, and the latter is readily understood as Nature's means to "fluff out" matter: to create stable objects that occupy space out of finite numbers of particles that don’t.
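The "probability algorithm" reading can be made concrete; here is a minimal sketch of my own (assuming Python with numpy) for the spin-1/2 system, where axiom 7 reduces to a one-line computation:

```python
import numpy as np

# Axiom 2: spin-1/2 states are unit vectors in the Hilbert space C^2.
z_plus = np.array([1, 0], dtype=complex)                 # |z+>
x_plus = np.array([1, 1], dtype=complex) / np.sqrt(2)    # |x+>

# Axiom 7 (Born rule): a measurement of S_z on the state |x+> yields
# "up" with probability |<z+|x+>|^2.  np.vdot conjugates its first
# argument, so it computes exactly the inner product <z+|x+>.
p_up = abs(np.vdot(z_plus, x_plus)) ** 2
print(round(p_up, 10))  # 0.5
```

Nothing in this computation refers to an evolving "real" wave function; the state enters only as an input to the probability assignment.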
The first-year students of my class thankfully have very little prior exposure to any physics. We start with quantum physics (experiments and the rules by which we calculate the probabilities of the possible outcomes) and only later come to classical physics, by learning how quantum physics degenerates into classical physics and under which conditions classical physics provides useful approximations. I emphasize at this point that such terms as mass, energy, momentum, etc. acquire new and entirely different meanings in the transition from quantum to classical, and that care should be taken to keep them apart.
the simplest set of laws, to me, would be an overall probability distribution (hidden variable approach). THAT is intuitively understandable, this is what Einstein taught should be done, and this is, for instance, what Bohmians insist upon. This is the simplest, and most intuitive approach to the introduction of "ordinary" objects, no ?
You know well enough that the contextuality of the more general situations I mentioned rules out such a probability distribution. In Bohmian mechanics every observable except position is contextual. The relevant Einstein quote here is: "Everything should be made as simple as possible, but not simpler."
I wanted to indicate that if you have postulated an ontological concept from which you DERIVE observations, this is more helpful than sticking to the observations themselves, and that such an ontology makes certain aspects, such as the relationship between different kinds of observations, more obvious.
If Alice and Bob perceive a teapot on the table between them, it is useful to assume that the reason this is so is that there is a teapot on the table between them. I agree. But quantum mechanics has many experimentally confirmed consequences that are totally inconsistent with the conception of a world of self-existent objects with self-existent properties (which could then be considered as the causes of our perceptions). Moreover, it has nothing to do with perceptions. It has everything to do with measuring devices (recall my definition) and their relation to the rest of the world.
in a classical context, your approach of claiming that we should only look at an algorithm that relates outcomes of measurement, and not think of anything ontological behind it, is counterproductive.
I am not concerned with classical contexts. I want to understand the quantum world.
You have difficulties imagining there is an Euclidean space in classical physics ?
Of course not in classical physics. But the classical world is a fiction. I want to understand the real world which is quantum. (Sounds familiar, eh?)
But you seemed to imply that there was also a kind of "existence" to POTENTIAL outcomes of measurement in the quantum case: it was a "fuzzy" variable, but as I understood, it DID exist, somehow. I had the impression you said that there WAS a position, even unmeasured, but that it was not a real number, but a "fuzzy variable".
What I'm saying is that if you want to describe the unmeasured quantum world, the only way to do it is in terms of the probabilities of the possible outcomes of unperformed measurements. How do you describe, say, a hydrogen atom in a stationary state? This stationary state is a probability algorithm, which is based on the outcomes of three measurements: energy, total angular momentum, and one angular momentum component. It assigns probabilities to the possible outcomes of every possible measurement. Now if you want this algorithm to be a description of the hydrogen atom, then it’s a description in terms of the possible outcomes of unperformed measurements, right? And what is the salient feature of the atom thus described? Unschärfe or fuzziness!
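As a numerical illustration of this reading (a sketch of my own, assuming Python with numpy): the 1s ground state, taken as a probability algorithm, assigns a definite probability to the possible outcome "electron found within one Bohr radius" of an unperformed position measurement.

```python
import numpy as np

# Hydrogen 1s radial probability density, in units of the Bohr radius
# (a = 1): P(r) = 4 r^2 exp(-2r), normalized over r in [0, infinity).
r = np.linspace(0.0, 1.0, 10001)
P = 4 * r**2 * np.exp(-2 * r)

# Probability that a position measurement finds the electron within one
# Bohr radius: trapezoid-rule integral of P(r) from 0 to 1.
dr = r[1] - r[0]
p_inside = np.sum(0.5 * (P[:-1] + P[1:])) * dr
print(round(p_inside, 3))  # 0.323  (exact value: 1 - 5/e^2)
```

The algorithm assigns such a probability to every possible measurement, yet no position is "possessed" until a measurement indicates one.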
I take on the position that… there REALLY is a wave function.
That's why I won't be able even to make you see (let alone accept) my point of view. As said, to me it simply makes no sense to think of a probability algorithm as something that REALLY exists, and if the wave function is not really a probability algorithm, I haven’t the faintest notion what it could be, nor can you tell me.
It is the idea that "your measurement apparatus can be in a superposition but you only see one term of it"
No, it's the idea that a probability algorithm REALLY exists, or the fact that you can't tell me what the wave function is if it's not a probability algorithm.
So you have one "branch" or "world" or whatever, where you observe that D1 clicked and D2 didn't, and you have another one where D1 didn't click and D2 did…. No bafflement.
??!?
Well, Einstein's elements of reality are simply the wavefunction,
Good Lord!
and everything becomes clear, no ?
Not to me.
To me, the fundamental dogma of physics is the assumption that all of nature IS a mathematical structure (or, if you want to, that maps perfectly on a mathematical structure). Up to us to discover that structure. It's a Platonic view of things.
You are right to call it a dogma, as in religion.
I don't follow what you're talking about ? We have no tool for calculating the interactions between charges and the EM field ?
Sorry. According to you, Maxwell's equations allow us to calculate the effects of charges on the electromagnetic field (as well as the effect of the electromagnetic field on itself), and the Lorentz force law allows us to calculate the effects of the electromagnetic field on charges. According to me, who has never seen an electromagnetic field but only interacting charges, the electromagnetic field is a tool for calculating the action of charges on charges. I'm not quite on my own here.
The electromagnetic field is, after all, a mental construct introduced for the purpose of discussing interactions between charges. (E.H. Wichmann, Berkeley Physics Course Vol. 4, 1967; original emphasis.)​
 
  • #40
vanesch said:
I don't see how you are constructing a conception of the quantum world which is strongly objective, if you START by saying that we only have an algorithm, and no description!
Maybe I should explain this in a new thread. Maybe I will!
Well, you consider quantum theory then solidly PROVEN beyond doubt, and by pure reasoning ?? And what if one day, quantum theory is falsified ? Do familiar objects disappear in a puff of logic then ?
Not by pure reasoning alone. I assume that those "ordinary" objects (which "occupy" space and neither collapse nor explode the moment they are created) are composed of a (large but) finite number of objects that do not "occupy" space, either because they are pointlike or because they have no form at all. Like many physicists these days, particularly quantum field theorists, I hold that quantum theory in general and the standard model in particular are here to stay as effective theories. In this sense Wilczek (Nobel 2004) refers to the standard model simply as "the theory of matter" (Wilczek, "Future summary", International Journal of Modern Physics A 16, 1653-78, 2001).
So you seem to claim that, from the pure observation of the existence of ordinary objects, the ONE AND ONLY POSSIBLE PHYSICAL THEORY that makes logical sense is quantum theory ? No need for any empirical input then ? If only we would have been thinking harder, it would have been OBVIOUS that quantum theory is the ultimate correct theory ?
Not the ONE AND ONLY POSSIBLE PHYSICAL THEORY that makes logical sense, but an effective theory that will survive whatever underlying theory might one day be unearthed. (My own intuition tells me that all there is is effective theories. The ultimate mathematical theory is a myth invented by the Pythagoreans.)
I'd bet that… 500 or 1000 years from now, quantum theory is an old and forgotten theory (except maybe for simplified calculations, as is classical physics today).
My bet is that not only quantum theory but the entire Pythagorean mindset, which thinks of reality in terms of mathematical structures, will be dead and gone. (Which is one of the reasons why I'm looking beyond the quantum formalism for the relation between it and a non-mathematical reality.)
Quantum theory being the current paradigm, it is only waiting to be falsified, no ?
The mathematical formalism stands. The claptrap about evolving real wave functions won't survive.
But that doesn't mean that in the mean time, we should not build up an ontological picture of what we have, now, today, in order to make sense of it. With a formalism comes an ontology.
Wish it would. The quantum formalism has come of age. Its ontology is as yet nowhere in sight, thanks to the Pythagorean bias of most physicists.
But trying to force upon a certain formalism, the ontology of another one, and you have troubles. Trying to force upon quantum theory, the ontology of classical physics, and you create a whole lot of pseudoproblems.
But this is exactly what I'm saying. Quantum physics, taken seriously as a probability algorithm, implies a spacetime ontology that is inconsistent with that of classical physics. Yet this classical spacetime ontology is taken for granted by virtually every physicist. This is another reason why we find it so hard to beat sense into quantum mechanics.

The so-called Pythagoreans, who were the first to take up mathematics, not only advanced this subject, but saturated with it, they fancied that the principles of mathematics were the principles of all things. (Aristotle, Metaphysics I.5)
 
  • #41
koantum said:
No, there isn’t. Even vanesch agreed with that (https://www.physicsforums.com/showpost.php?p=948676&postcount=16).

Indeed, but my statement should not be misunderstood. I claim that unitary quantum theory, as such, does not make any probabilistic statements, given that it is a deterministic theory: we have a state vector evolving in a hilbert space according to a unitary flow. No deterministic theory ever intrinsically gives rise to a probabilistic statement whatsoever: this needs to be AN EXTRA POSTULATE in one way or another.
Also - this is what I tried to outline several times already - no theory, deterministic or otherwise, can be tested in the lab without an interpretation which links the formal elements to (subjective?) experience/observation. As such, unitary quantum theory doesn't say anything about observation - it is in the same situation as classical physics here, btw.

So, to turn a formalism into a "physical theory related to empirical observation" one NEEDS TO STATE, in any case, how both are related ; this is called a physical interpretation of the formalism.

Neither unitary quantum theory, with its unitary flow through hilbert space (= unitary time evolution), nor classical physics, with its hamiltonian flow through phase space, is "ready to be confronted with experiment". You STILL need to say how this formal element maps onto observation. IN THIS MAPPING, one can, eventually, MAKE USE OF PROBABILISTIC CONCEPTS or not. This is then the "interpretative postulate" that LINKS the formal theory to the empirical (subjective?) observation.
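To make this concrete, a minimal sketch (my own illustration, assuming Python with numpy): the unitary flow is a deterministic map on hilbert space, and nothing probabilistic appears until a Born-type rule is imposed as a separate postulate.

```python
import numpy as np

# A two-level system with Hamiltonian H (hbar = 1).  Since H^2 = I,
# the unitary flow is exp(-iHt) = cos(t) I - i sin(t) H.
H = np.array([[0, 1], [1, 0]], dtype=complex)
t = 0.3
U = np.cos(t) * np.eye(2) - 1j * np.sin(t) * H

# The flow itself is fully deterministic and norm-preserving:
psi0 = np.array([1, 0], dtype=complex)
psi_t = U @ psi0
assert abs(np.linalg.norm(psi_t) - 1) < 1e-12  # unitarity

# Probabilities appear only through an EXTRA interpretative postulate
# (here the Born rule) that links the state to measurement outcomes:
p0, p1 = abs(psi_t[0])**2, abs(psi_t[1])**2    # cos^2(t), sin^2(t)
print(round(p0 + p1, 12))  # 1.0
```

Everything up to the last two lines is pure deterministic formalism; only the final step interprets the amplitudes as probabilities.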

What I prefer to do then, is to call the formal elements in the theory "the ontology of the theory" and picture that these things are "out there", and the interpretative postulate as the "link between subjective experience and the ontology". No more, no less.

Now, I have the "luck", in quantum theory, to find SEVERAL possible probabilistic "interpretative postulates" which don't introduce inconsistencies ; that means, for me then, that there is no hope to DERIVE one from the formal framework. I like to compare that to the 5th postulate of Euclidean geometry, which cannot be derived from the 4 others: this was established by showing that hyperbolic geometry is consistent. When you find two different schemes, each compatible with a certain axiomatic system, then for sure you cannot derive one (or the other) from that axiomatic system.

That said, some probabilistic interpretations can be more "natural" than others. Most MWI proponents (not I) seem to have a preference for an "equi-probable" measure over orthogonal states in one way or another.
It has to be said that there is some "naturalness" to this choice. However, if you do this strictly, you run into problems deriving the Born rule, but can try to get around that by POSTULATING extra requirements. A nice such approach is partly worked out by Robin Hanson, with his "mangled worlds" concept, for instance. It's not clear yet whether it will turn out to be viable, but the main idea is that terms with small hilbert norm are NOT stable under decoherence interactions (while terms with big hilbert norms are): they continuously merge, re-split etc. If you postulate that you cannot observe such an unstable "world", and you place a lower cutoff on the considered relative hilbert norms, something close to the Born rule emerges (or at least, there are indications that this happens).

Nevertheless, all these considerations require extra postulates of "perceptive interpretation", in one way or another. These postulates will introduce the probabilistic aspects if the theory is to be seen that way.
 
  • #42
koantum said:
If I talk to quantum-state realists, they think I belong to the shut-up-and-calculate (SUC) sect. If I talk to members of this sect, they take me for a quantum-state realist. As far as I am concerned, they are both wrong.

Maybe you're simply in a superposition of both situations ? Or maybe your belonging to either is a fuzzy variable ? :wink: :smile:
The quantum-state realists aspire to describe the objective features of the quantum world. So do I. But they insist on describing these features without reference to measurements, and this doesn’t work.

I don't know why you say that it doesn't work. MWI works. You may not like it, but I surely don't see why you say that it doesn't work. And there IS an explanation which contains a reference to "measurements" which are features of subjective experience in this view. Again, you may not like it, but that doesn't mean that it is not possible.

Next you are told that this is the way it is. Admittedly, there are problems with this set of axioms, but you won't be able to understand them. So for now, shut up and calculate. If you want to know, for example, why energy is a Hermitean operator, you have to figure it out for yourself. You are not even told that the concept of energy in quantum mechanics is totally different from the corresponding classical concept.

I don't understand this. The hamiltonian is the generator of the postulated unitary time translation one-parameter group. We call it "energy" but that doesn't mean anything. We could have called it "smurkadosh".

If those poor students were told that the quantum formalism is nothing but an algorithm for assigning probabilities to possible measurement outcomes on the basis of actual outcomes, all of the above axioms would at once make sense to them.

Well, I don't know what ELSE they are told, with the above axioms ! OF COURSE the axioms you wrote down are "an algorithm to calculate probabilities of outcomes"! That's pretty evident, no ? Ok, I prefer "formalism" and not "algorithm" because it is not directly runnable on a Turing machine. It's a mathematically written-down formalism. A machinery. As is the formalism of hamiltonian or lagrangian classical mechanics.

And next, you have to use your common sense, intuition and all that to fill in the interpretational issues, such as: "given an apparatus, what measurement operator goes with it and why ?" This is usually not an issue, because people (salesmen, the problem statement, habits in the field) TELL you what it is supposed to correspond to. So when you talk about the "position measurement" of a particle, you use X, for instance. But this is usually made so evident that you don't even think about it.

So with some common sense, "cultural habits", and "because they told me so", you arrive at knowing what apparatus goes with what hermitean operator, and the formalism can be used.

What we are discussing, however, is: can we assign some REALITY to this formalism ? Does the formalism indicate an ontology to us ? And *this* metaphysical question should not be directly addressed: I think that it is good to say that you should first understand the formalism, as a way to calculate outcomes of measurements. And many people stop short of it, and for them it is just fine to have this mixture of "common sense" assignments of formal elements to "things in the lab", and to crank the handle of the formalism, without a deeper interpretational scheme. This is the "shut up and calculate" crowd. I respect that attitude. The problem only comes about when these people are suddenly confronted with issues like Bell's theorem and so on, and end up with an INCONSISTENT picture.

The difference between the quantal and the classical probability algorithm is readily understood as a consequence of Nature's fuzziness, and the latter is readily understood as Nature's means to "fluff out" matter: to create stable objects that occupy space out of finite numbers of particles that don’t.

Well, to each his own opinion of course, but personally, if one had told me THIS as a student, it would not have been very enlightening - I would even have taken my professor for someone talking out of the back of his head. "Nature's fuzziness that is meant to stabilize ordinary objects taking up a finite amount of space". Come on ! Nature has no "aim to produce stabilized ordinary objects", does it ?

I fail to see what such a statement brings about beyond the "hey, use your common sense, tradition, what they tell you..." to associate lab stuff to formal elements of the quantum formalism, and turn the handle of the formalism to find outcomes of measurement probabilities.
If Alice and Bob perceive a teapot on the table between them, it is useful to assume that the reason this is so is that there is a teapot on the table between them. I agree. But quantum mechanics has many experimentally confirmed consequences that are totally inconsistent with the conception of a world of self-existent objects with self-existent properties (which could then be considered as the causes of our perceptions). Moreover, it has nothing to do with perceptions. It has everything to do with measuring devices (recall my definition) and their relation to the rest of the world.

Well, an MWI-er such as I does NOT have "many experimentally confirmed consequences that are totally inconsistent with the conception of a world of self-existent objects with self-existent properties", you know.
Only, the properties are not the directly perceived ones (= outcomes of measurement), but rather the state of the wavefunction, from which the perceptions can be DERIVED (contextually).

Of course not in classical physics. But the classical world is a fiction. I want to understand the real world which is quantum. (Sounds familiar, eh?)

I think that this is a mistake. If you do classical physics, then the classical world is "real". If you do quantum physics, then the quantum world is "real". And the day that quantum physics will be falsified, yet another theory will be "real".

What I'm saying is that if you want to describe the unmeasured quantum world, the only way to do it is in terms of the probabilities of the possible outcomes of unperformed measurements.

I think that this is NOT the right vision. This is clinging too much to naive realism, in that "measurement-observation-perception = reality". It is the basic tenet of the classical world view: the state of a system is given by what I could possibly measure of it: all potential measurement outcomes "are" there. In other words, the total refusal of contextuality: that a "measurement" is the result of an interaction between an observer and a system, and that, without this interaction, the "measurement result" is not a property of the system. It is funny that you insist that most people are too heavily loaded with classical concepts, and then you fall for this classical idea!

How do you describe, say, a hydrogen atom in a stationary state? This stationary state is a probability algorithm, which is based on the outcomes of three measurements: energy, total angular momentum, and one angular momentum component. It assigns probabilities to the possible outcomes of every possible measurement. Now if you want this algorithm to be a description of the hydrogen atom, then it’s a description in terms of the possible outcomes of unperformed measurements, right? And what is the salient feature of the atom thus described? Unschärfe or fuzziness!

Yes, you're forced into this corner because you insist upon "measurement results" as "existing" for the system. You cannot imagine, apparently, that a "measurement result" is just the perceived outcome of an interaction, but insist it must be "already in the system". This is an extremely classical view. Measurements as "showing things that exist of the system" and not as "perceived results of interactions between two systems, one of which we call "measurement apparatus" and the other "system"".

That's why I won't be able even to make you see (let alone accept) my point of view. As said, to me it simply makes no sense to think of a probability algorithm as something that REALLY exists, and if the wave function is not really a probability algorithm, I haven’t the faintest notion what it could be, nor can you tell me.

The wave function is a mathematical object that maps onto the ontology of quantum theory, and to me, the ONLY thing that you can use to describe an ontology is a mathematical object. Your reaction to this is:

You are right to call it a dogma, as in religion.

But I wonder, what ELSE but a mathematical object could possibly satisfy "a description of an ontology" ? How could you precisely say what something is, and NOT use a mathematical object to do so ?

Sorry. According to you, Maxwell's equations allow us to calculate the effects of charges on the electromagnetic field (as well as the effect of the electromagnetic field on itself), and the Lorentz force law allows us to calculate the effects of the electromagnetic field on charges. According to me, who has never seen an electromagnetic field but only interacting charges, the electromagnetic field is a tool for calculating the action of charges on charges. I'm not quite on my own here.

In fact, you've never SEEN anything ELSE but some electromagnetic radiation (which is called "light in your eyes").

The electromagnetic field is, after all, a mental construct introduced for the purpose of discussing interactions between charges. (E.H. Wichmann, Berkeley Physics Course Vol. 4, 1967; original emphasis.)​

I've, for my part, never SEEN a charge... so I could turn this around and say that charges are mental constructs introduced for the purpose of allowing field modes to interact... Some of which I've seen directly.
 
Last edited:
  • #43
vanesch said:
MWI works.
If a theory mistakes possibilities for actualities I don’t say it works. I say it's a silly category mistake.
If someone says the wave function is a probability algorithm on weekdays and Ultimate Reality on Sundays, I say: make up your mind.
Some MWI enthusiasts (Vaidman and Plaga come to mind) have claimed that it is an in principle testable interpretation. Imagine a test for the validity of MWI. There will then be worlds in which it is true as well as worlds in which it is false. Something for everyone. :biggrin:
The hamiltonian is the generator of the postulated unitary time translation one-parameter group. We call it "energy" but that doesn't mean anything. We could have called it "smurkadosh".
You would tell your students that energy doesn’t mean anything?
I prefer "formalism" and not "algorithm" because it is not directly runnable on a Turing machine.
The word "algorithm" was used centuries before Turing.
"given an apparatus, what measurement operator goes with it and why ?"
Rather, we first define our concepts and then we see how we measure them. Energy, for instance, is the quantity whose conservation is implied by the homogeneity of time. For position it's of course obvious how we define and measure it.
What we are discussing, however, is: can we assign some REALITY to this formalism ? Does the formalism indicate an ontology to us ? And *this* metaphysical question should not be directly addressed: I think that it is good to say that you should first understand the formalism, as a way to calculate outcomes of measurements.
I fully agree. Unfortunately the fact that we use this formalism to calculate the probabilities of measurement outcomes is usually mentioned almost as an afterthought. I'd be very happy if the first thing students are told is that the quantum formalism is a probability algorithm.
Perhaps I should tell you something about our students at the Sri Aurobindo International Centre of Education and my class. In the so-called "free-progress" section of our higher secondary and undergraduate levels, students are free to choose their subjects and their teachers (from a list of available teachers and subjects offered, of course). Every one of my students has chosen to be in my class, which is offered (i) to physics students as a philosophical complement to their regular physics classes and (ii) to students more interested in philosophy than science per se.
"nature's fuzziness that is meant to stabilize ordinary objects taking up a finite amount of space". Come on ! Nature has no "aim to produce stabilized ordinary objects" does it ?
Asking what the laws of physics must be like if they are to allow the existence of "ordinary" objects (as defined by me) is one thing. Asking what Nature has in mind is quite another.
the state of the wave function, from which the perceptions can be DERIVED (contextually).
Really? You can take a wave function and derive sensory perceptions from it? Wishful thinking! Neuroscientists can't even derive sensations from the neurobiology.
It would make more sense to take sense perceptions as your starting point and then discover the correlations between sensory data we call measurement outcomes. This way you don’t have to (and of course can't) derive perceptions because the quantum formalism is embedded in and presupposes sensory perceptions. My approach differs from this in that it makes not sensory perceptions but measurements primary. Recall: any event or state of affairs from which the truth or the falsity of a statement about something can be inferred qualifies as a measurement outcome. You want to derive from the wave function the existence of events or states of affairs from which something can be inferred? IMO this is as impossible as explaining why there is anything rather than nothing at all.
What you do is to transmogrify the correlations among perceptions into What Exists, after which you are left with the pseudo-problem of deriving the perceptions from What Exists.
This is clinging too much to naive realism, in that "measurement-observation-perception = reality". It is the basic tenet of the classical world view: the state of a system is given by what I could possibly measure of it: all potential measurement outcomes "are" there.
Where did I ever say this? I insist that no property or value is possessed (by a system or an observable) unless the possession is indicated by a measurement!
You cannot imagine, apparently, that a "measurement result" is just the perceived outcome of an interaction
A measurement outcome is the indicated outcome of a system apparatus interaction. I cannot imagine anything else.
The wave function is a mathematical object that maps upon the ontology of quantum theory, and to me, the ONLY thing that you can use to describe an ontology, is a mathematical object.
What IS the ontology of quantum theory? You are stating a tautology. What you are saying is as plausible as a baker's claim that the ONLY thing that you can use to describe an ontology is dough.
But I wonder, what ELSE but a mathematical object could possibly satisfy "a description of an ontology" ?
Lack of imagination, I'd say.
In fact, you've never SEEN anything ELSE but some electromagnetic radiation (which is called "light in your eyes").
Careful with that. Light (electromagnetic radiation) is instrumental in my seeing things. I don’t see the light by which I see. If I saw all that electromagnetic radiation that according to the classical story is crisscrossing the space between me and the objects I perceive, I couldn’t perceive these objects.
I've, for my part, never SEEN a charge
Comb your hair in dry air in front of a mirror and you'll see a charged object – yourself.
field modes ... Some of which I've seen directly.
No you haven’t, not directly. What you've seen is material things from which you infer (rightly or wrongly) the presence of field modes.
 
  • #44
koantum said:
If a theory mistakes possibilities for actualities I don’t say it works.

And vice-versa. Just a matter of opinion.

If someone says the wave function is a probability algorithm on weekdays and Ultimate Reality on Sundays, I say: make up your mind.

They only say that it is ultimate reality; they don't say it is a probability algorithm. Of course, with the "ultimate" comes the caveat: until we know better (the eternally ephemeral nature of scientific knowledge, which is only there to be falsified).

Some MWI enthusiasts (Vaidman and Plaga come to mind) have claimed that it is an in principle testable interpretation. Imagine a test for the validity of MWI. There will then be worlds in which it is true as well as worlds in which it is false. Something for everyone. :biggrin:

You always make the same mistake: imagine that there is such a test (which is: to observe quantum interference between what's postulated to be a "measurement process"). Now the OBSERVED OUTCOME of the test is not necessarily the truth value of the test, because there is only a finite probability to observe that outcome. Unless the test is "sure" (100% probability), in which case, there are NO universes in which the outcome is false. Terms that are NOT present in the wavefunction are not "possible worlds".

You would tell your students that energy doesn’t mean anything?

Energy doesn't mean anything BEYOND its defining property, which is that it is the generator of time translations. For instance, the fact that "energy is conserved" is ONLY the result of the fact that it is the generator of time translations. And energy doesn't have many other uses except for the fact that it is a conserved quantity over time.
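The "generator of time translations, hence conserved" point can be written in one line; this is the standard Heisenberg-picture statement (my addition, not vanesch's wording):

```latex
% Heisenberg equation of motion for an observable A (no explicit time
% dependence); H is the generator of time translations:
\frac{dA}{dt} = \frac{i}{\hbar}\,[H, A]
% Setting A = H gives [H, H] = 0, hence
\frac{dH}{dt} = 0
% so energy conservation follows directly from H generating time evolution.
```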

The word "algorithm" was used centuries before Turing.

Yes, but Turing gave it its mathematical definition.

Rather, we first define our concepts and then we see how we measure them. Energy, for instance, is the quantity whose conservation is implied by the homogeneity of time. For position it's of course obvious how we define and measure it.

No, it isn't. That's the whole point I wanted to outline. The "obvious" comes simply from "habits, what they told you, how people used to do it...", and from the fact that we have rather good visual impressions which give us an impression of "seeing positions".
How do you determine, say, the width of 10^(-23) m ? Or of 10^17 m ?
"position" is simply a "common sense" (probably biologically evolved) mental concept, not more than "light or dark" or something of the kind.

I fully agree. Unfortunately the fact that we use this formalism to calculate the probabilities of measurement outcomes is usually mentioned almost as an afterthought. I'd be very happy if the first thing students are told is that the quantum formalism is a probability algorithm.

Funny, but that's what they told me, and how I understood most intro texts on the matter. Well, not a "probability algorithm", but a "formalism that allows you to calculate probabilities of outcomes of measurements". Which I think is the obvious, correct, and interpretation-neutral statement which allows people to set this issue aside until they know enough about the formalism.

(to be continued...)
 
  • #45
(continuation...)


koantum said:
Perhaps I should tell you something about our students at the Sri Aurobindo International Centre of Education and my class. In the so-called "free-progress" section of our higher secondary and undergraduate levels, students are free to choose their subjects and their teachers (from a list of available teachers and subjects offered, of course). Everyone of my students has chosen to be in my class, which is offered (i) to physics students as a philosophical complement to their regular physics classes and (ii) to students more interested in philosophy than science per se.

Well, there's not much philosophy in your approach, I'd say. If I understand you correctly, what you try to give as an ontological picture is that potential measurement outcomes cannot be precise values, but must have some "objective" fuzziness to them, and that this somehow suggests the form the quantum formalism has. I even fail to see EXACTLY what your approach is about, except that it vaguely says one should NOT assign any reality to the wavefunction, but rather to the "fuzziness" of potential measurement outcomes.

Asking what the laws of physics must be like if they are to allow the existence of "ordinary" objects (as defined by me) is one thing. Asking what Nature has in mind is quite another.

That's postulating a strange ontology, which gives existence to "ordinary objects in your list" and not to anything else, without precisely specifying WHAT this ontology is about.

It would make more sense to take sense perceptions as your starting point and then discover the correlations between sensory data we call measurement outcomes. This way you don’t have to (and of course can't) derive perceptions because the quantum formalism is embedded in and presupposes sensory perceptions.

This is the OBVIOUS, minimalistic approach of a scientific theory, and it is entirely based upon a solipsist viewpoint: a scientific theory is an organizing scheme of subjective experiences, and all that exists are your subjective experiences.
All the rest is hypothesis. Everything which goes beyond solipsism is hypothesis. The "existence" of ordinary objects is hypothesis. So in ANY case you'll have to make a hypothesis about what exists, because - as you point out yourself - the starting point of all knowledge is subjective experience: perceptions. We've evolved biologically (well, that's also a hypothesis, that we even have a body!) probably in such a way that we mentally organize our perceptions so as to make the almost unconscious move of assigning an ontological reality to what we perceive. It is "common sense" in a very literal way: we're almost automatically organizing our subjective experiences so as to make the ontological hypothesis of the existence of "ordinary objects". You could call this a pre-built-in "scientific theory" from which our brains "boot up". Our internal perceptions also make us hypothesize that we have a physical body, included in the list of "ordinary objects".
But now, our physical theories may become more sophisticated, in such a way that they conflict with the built-in common-sense ontological hypotheses (the "existence of ordinary objects"). So one then has to make a choice, at least when one WANTS to make a hypothesis of ontology: should we stick to our built-in common-sense ontological hypotheses, or should we consider that these are what they are, namely just organizing principles of our daily subjective experiences, and accept that more sophisticated theories (which are of course not built into our brains) need more sophisticated ontologies?
The ENTIRE interpretational difficulty of quantum theory resides in the refusal to let go of our primary common-sense ontological hypotheses; the refusal to let go of the idea that "ordinary objects are real".


My approach differs from this in that it makes not sensory perceptions but measurements primary. Recall: any event or state of affairs from which the truth or the falsity of a statement about something can be inferred qualifies as a measurement outcome.

There is no absolute way to infer the truth or falsity of something, apart from the truth or falsity of a subjective experience. This is the entire basis of the non-falsifiability of solipsism. So when you do the above, you are MAKING IN ANY CASE ontological assumptions. If, by doing so, you introduce problems, then that's of course more a problem of the assumptions than of the theory at hand.

You want to derive from the wave function the existence of events or states of affairs from which something can be inferred? IMO this is as impossible as explaining why there is anything rather than nothing at all.
What you do is to transmogrify the correlations among perceptions into What Exists, after which you are left with the pseudo-problem of deriving the perceptions from What Exists.

Ha, but that is EXACTLY the nature of the hypothesis of an ontology! You have the choice between that, and solipsism.

As I said before, you're not obliged to make the hypothesis of an ontology, and keep to solipsism, claiming that ALL there is, are your subjective experiences. BUT, BUT, it is not very helpful to do so. You can stick to the common sense ontology, and then you run into problems with quantum theory, unless you REMOVE some ontology for some things which are not on your list of common sense objects. It's where ALL the interpretation problems of quantum theory come from: quantum theory, as a universal theory, is NOT compatible with common sense ontology.

A measurement outcome is the indicated outcome of a system apparatus interaction. I cannot imagine anything else.

But there is no quantum-mechanical way to do that! You know as well as I do that if you track down the physical interaction of the system and the apparatus, both end up in an entangled state, not in a "definite outcome" state.
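vanesch's point can be sketched numerically. The toy "apparatus" below is a single pointer qubit and the CNOT coupling is my illustrative choice (neither is from the thread): after a von Neumann-style premeasurement interaction, the system-apparatus pair is entangled, and the pointer alone is in a mixed state rather than a "definite outcome" state.

```python
# Sketch (illustrative choices, not from the thread): a system qubit in
# |+> interacts with a pointer qubit in |A0> via a measurement-type
# coupling; the pair ends up entangled, not in a definite-outcome state.
import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2)   # system: (|0> + |1>)/sqrt(2)
ready = np.array([1.0, 0.0])               # apparatus pointer: |A0>

# Coupling: flip the pointer iff the system is |1> (a CNOT unitary
# stands in for the system-apparatus interaction).
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

joint = cnot @ np.kron(plus, ready)        # (|0>|A0> + |1>|A1>)/sqrt(2)

# Reduced state of the apparatus: trace out the system.
rho = np.outer(joint, joint).reshape(2, 2, 2, 2)
rho_apparatus = np.einsum('ijik->jk', rho)

purity = np.trace(rho_apparatus @ rho_apparatus).real
print(purity)   # ~0.5, i.e. < 1: the pointer alone is maximally mixed,
                # so no single definite outcome is picked out.
```

The purity Tr(ρ²) equals 1 only for a pure state; getting ~0.5 for the pointer is exactly the "entangled, not definite" situation being described.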

Careful with that. Light (electromagnetic radiation) is instrumental in my seeing things. I don’t see the light by which I see. If I saw all that electromagnetic radiation that according to the classical story is crisscrossing the space between me and the objects I perceive, I couldn’t perceive these objects.

Que? You only see the light that enters your eyes, no? From that, you INFER that there must be sources of it. But if I had a perfect hologram just in front of your eyes that sent exactly the same electromagnetic radiation into your eyes, you wouldn't be able to tell the difference. That's why I claim that the only thing you've ever SEEN was light impinging on your eyes.

Comb your hair in dry air in front of a mirror and you'll see a charged object – yourself.

I'll "see" an EM field penetrating my eyes, which has such a structure that it makes me, according to my built-in common-sense ontological hypotheses, postulate that there is an object in front of me, called a mirror, and that this mirror projects light upon me that must come from what's in front of the mirror, namely my body.

No you haven’t, not directly. What you've see is material things from which you infer (rightly or wrongly) the presence of field modes.

I think you've already made some ontological hypotheses along the way, namely that the light you see "comes from material sources". It's normal. We're wired up to think so. It's responsible for the success of 3D virtual-reality goggles.
 
  • #46
They only say that it is ultimate reality ; they don't say it is a probability algorithm.
We went through this: you then need some extra postulate to get the only things (probabilities) that you can compare with experiments.
Unless the test is "sure" (100% probability), in which case, there are NO universes in which the outcome is false.
It could of course be the other way: there are NO universes in which the outcome is true.
Energy doesn't mean anything BEYOND its defining property, which is the generator of time translations. For instance, the fact that "energy is conserved" is ONLY the result of the fact that it is the generator of time translations. And energy doesn't have much other uses except for the fact that it is a conserved quantity over time.
I almost agree. Energy means a lot of things in the limit in which quantum mechanics degenerates into classical mechanics, and it's worth discussing (I'm not proposing to do that here!) how and why these classical significances arise from the defining property.
Besides, I like Feynman's image of rotating arrows moving along the alternative spacetime paths that are summed in his path-integral calculation of a particle propagator. Based on this intuition, I tend to think of the energy associated with a spacetime path as the rate at which the particle "ticks" (in cycles per second) as it travels along this path. This helps to understand, for instance, why there is such a thing as inertial time: an inertial time scale is one in which freely moving particles tick at constant rates. (Don’t protest: I know you favor visual aids that have heuristic value).
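The "rotating arrow" picture above can be written down; this is a standard path-integral sketch in my notation, not a formula from the thread:

```latex
% Each spacetime path contributes a "rotating arrow", the phase factor
e^{\,iS[\text{path}]/\hbar}
% For a particle at rest, the action over proper time \tau is
S = -mc^2 \tau
% so the arrow completes cycles at the (Compton) frequency
\nu = \frac{mc^2}{h}
% i.e. the "ticking rate" is E/h cycles per second; an inertial time
% scale is then one on which freely moving particles tick at a constant rate.
```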
Yes, but Turing gave it its mathematical definition.
What? Before that it didn’t have a mathematical meaning? But OK. What if I say that quantum states are "probability measures" instead of "probability algorithms"? Is this more acceptable? A measure on the set of projectors? I'm afraid this term sounds too Boolean.
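The "measure on the set of projectors" idea can be sketched numerically (my illustration, with illustrative names): a density operator ρ assigns the number Tr(ρP) to every projector P, and on any one orthogonal family of projectors these numbers behave like an ordinary probability measure.

```python
# Sketch (mine, not from the thread): a quantum state as a "measure on
# projectors" via the Born rule  p(P) = Tr(rho P).
import numpy as np

# A qubit density operator (the pure state |+><+| here).
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)

def born(rho, P):
    """Probability that the state rho assigns to projector P."""
    return float(np.trace(rho @ P).real)

# Projectors of the z-basis: one orthogonal family.
P0 = np.diag([1.0, 0.0])
P1 = np.diag([0.0, 1.0])

# On one orthogonal family the numbers add up like a measure:
print(born(rho, P0), born(rho, P1))   # ~0.5 each, summing to 1
print(born(rho, P0 + P1))             # ~1.0 (the identity projector)

# But the assignment is defined on ALL projectors at once, including
# ones that do not commute with P0, P1 -- e.g. the x-basis projector:
Px = np.outer(plus, plus)
print(born(rho, Px))                  # ~1.0
```

The non-Boolean flavor koantum worries about is visible here: the domain of the "measure" is the whole non-commuting lattice of projectors, not a single Boolean algebra of events.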
No, it isn't. That's the whole point I wanted to outline. The "obvious" comes simply from "habits, what they told you, how people used to do it...", and from the fact that we have rather good visual impressions which give us an impression of "seeing positions". How do you determine, say, the width of 10^(-23) m ? Or of 10^17 m ? "position" is simply a "common sense" (probably biologically evolved) mental concept, not more than "light or dark" or something of the kind.
I am with you most of the way. But this is another topic that is best treated separately. I'll get back to it.
Funny, but that's what they told me, and how I understood most intro texts to the matter.
Perhaps it was different in my time (the 70s) and my country (then Germany). People talked and wrote as if state vectors or wave functions represented systems somehow directly, not merely as tools for calculating the probabilities of measurement outcomes.
And even when the relation between wave functions and probabilities was discussed, it was biased and unbalanced. The fact that wave functions determine probabilities of measurement outcomes was emphasized, whereas the fact that measurement outcomes determine wave functions was blunted by an embarrassed "it seems so".
To be continued...
 
  • #47
(Continuation)
vanesch said:
Well, there's not much philosophy in your approach, I'd say.
Please allow for the fact that we touched on only a small part of the philosophy so far.
I even fail to see EXACTLY what your approach is about, except to vaguely say that one should NOT assign any reality to the wave function, but to the "fuzziness" of potential measurement outcomes.
I purposely didn’t commit myself as to the ontological status of fuzzy observables. All I said is: if you want to describe a quantum system between measurements, the only way to do this is in terms of assignments of probabilities to the possible outcomes of unperformed measurements. All right, I characterized the resulting probability distributions as descriptive of an objective fuzziness in order to distinguish it from a merely subjective ignorance. Our ignorance of the electron's position doesn’t stabilize the hydrogen atom. Its objective fuzziness does.
But this fuzziness is just the beginning. It leads to the incomplete spatiotemporal differentiation of the world, which in turn allows one to define macroscopic objects rigorously, which in turn allows one to understand how a fundamental physical theory that is formally nothing but a probability algorithm (OK, a tool for calculating probabilities) can be complete: how it can encompass the measurement outcomes which it presupposes.
we're almost automatically organizing our subjective experiences so as to make the ontological hypothesis of the existence of "ordinary objects". You could call this a pre-built-in "scientific theory" from which our brains "boot up". Our internal perceptions also make us hypothesize that we have a physical body, included in the list of "ordinary objects".
You assume too much about yourself as creator of your world. You won't find your way in Paris using a map of London. There is a correspondence in the sense that Nature will knock you on the nose if you use the wrong theory. In this sense there is a correct theory.
But now, our physical theories may become more sophisticated, in such a way that they conflict with the built-in common-sense ontological hypotheses (the "existence of ordinary objects"). So one then has to make a choice, at least when one WANTS to make a hypothesis of ontology: should we stick to our built-in common-sense ontological hypotheses, or should we consider that these are what they are, namely just organizing principles of our daily subjective experiences
OK so far but, as said, there is agreement (up to a point) between these organizing principles and whatever they are working on. The assumption of a self-existent world with self-existent objects with self-existent properties works so well that it may well be MEANT to work that well. If you can hold on to that for a microsecond, then we can understand the quantum formalism very well as the necessary mechanism by which what is MEANT to be is manifested, brought into being, or realized. Enough ontology for the day?
The ENTIRE interpretational difficulty of quantum theory resides in the refusal to let go our primary common sense ontological hypotheses ; the refusal to let go the idea that "ordinary objects are real".
Not the ENTIRE. There are about as many solutions to the problem of interpretation as there are ways of formulating the problem. We've already noticed that the way one formulates the problem half predetermines the solution, and if the formulation produces a pseudo-problem, the solution is gratuitous.
I see no reason to let go the idea that ordinary objects are real. There is much more to them than commonsense has it but that doesn’t make them unreal. The problem is to understand the relation between the quantum formalism and ordinary objects. That you need the quantum formalism in order to have them is just one of them. Another is that the quantum formalism describes how ordinary objects are manifested, brought into being, made to exist, whatever. I mentioned elsewhere that it should be expected that the microworld doesn’t have the same features as the macroworld, for if it did, one could never understand how these features are realized, brought into being….
There is no absolute way to infer the truth or falsity of something, apart from the truth or falsity of a subjective experience.
You shouldn’t stop here. Only pre-linguistic experience can be trusted. The moment it is formulated, it is enmeshed in a system of questionable hypotheses.
you are MAKING IN ANY CASE ontological assumptions. If, by doing so, you introduce problems, then that's of course more a problem of the assumptions than of the theory at hand.
Apparently there is a mirror symmetry between what I say about your approach and what you say about mine.
Ha, but that [to transmogrify the correlations among perceptions into What Exists] is EXACTLY the nature of the hypothesis of an ontology! You have the choice between that, and solipsism.
Nonsense. There are plenty more choices. You are limited to these choices only because you confuse mathematics with ontology.
Me: A measurement outcome is the indicated outcome of a system apparatus interaction.
You: But there is no quantum-mechanical way to do that!
This just means we have different ideas about what a quantum-mechanical way is.
You know as well as I that if you track down the physical interaction of the system and the apparatus, that both end up in an entangled state, not in a "definite outcome" state.
Sorry, I know exactly the opposite (to the extent that you allow me to know anything).
You only see the light that enters your eyes, no ?
Sorry, haven’t seen light entering my eyes. Only see things with the help of the light entering my eyes.
I'll "see" an EM field penetrating my eyes, which has such a structure that it makes, according my build-in common sense ontological hypotheses, me postulate that there is an object in front of me, called mirror, and that this mirror projects light upon me that must come from what's in front of the mirror, namely my body.
The bottom line: everyone sees what they want to see.
 
  • #48
What? Before it didn’t have a mathematical meaning? But OK. What if I say that quantum states are "probability measures" instead of "probability algorithms". Is this more acceptable? A measure on the set of projectors? I'm afraid this term sounds too Boolean.

Well, you know that strictly speaking, quantum states are NOT probability measures over the potential measurement outcomes, at least not in the Kolmogorov sense - we've been through that already a few times.
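One facet of the "not Kolmogorov" point can be shown in a few lines (my illustration, not an argument from the thread): two distinct quantum states can agree on every probability over one classical sample space of outcomes (here, the z-basis) while disagreeing on another, so no single Kolmogorov space of outcomes carries the whole state.

```python
# Illustration (mine): the pure state |+> and the 50/50 classical
# mixture of |0>, |1> agree on all z-measurement probabilities, yet
# disagree on x-measurement probabilities.
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)

rho_pure = np.outer(plus, plus)                          # |+><+|
rho_mix = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ket1, ket1)

def prob(rho, ket):
    """Born probability of the outcome projecting onto |ket>."""
    return float(np.real(ket @ rho @ ket))

# Identical statistics for the z-measurement...
print(prob(rho_pure, ket0), prob(rho_mix, ket0))   # ~0.5 for both

# ...but not for the x-measurement:
print(prob(rho_pure, plus), prob(rho_mix, plus))   # ~1.0 vs ~0.5
```

So a density operator is more than a probability distribution over any one set of mutually exclusive outcomes; it fixes probabilities for every choice of measurement at once.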
And if you tell me now that quantum states are "mathematical entities that contain the simultaneous description of the (fuzzy) quantities which can be outcomes of measurements", yes of course. You can see the "ensemble of fuzzy quantities" which are possessed by the system as mathematically equivalent to the wavefunction. And then, in the way that you assign "reality" to this set of fuzzy quantities, you assign reality to that mathematically equivalent structure, which is the wavefunction.
I mean:
{set of fuzzy quantities} <==> wavefunction

If you trace through the entire chain, then your body also possesses FUZZY QUANTITIES. After all, there's no distinction between a quantum system under study and your body, right? Your eyes have seen, and have not seen, the bulb flash: the quantity "has seen the bulb flash" is JUST AS WELL a fuzzy quantity. You happen to perceive *A* precise outcome, not a fuzzy one. Why? You tell me.
What's the difference with the MWI viewpoint ?

Perhaps it was different in my time (the 70s) and my country (then Germany). People talked and wrote as if state vectors or wave functions represented systems somehow directly, not merely as tools for calculating the probabilities of measurement outcomes.
And even when the relation between wave functions and probabilities was discussed, it was biased and unbalanced. The fact that wave functions determine probabilities of measurement outcomes was emphasized, whereas the fact that measurement outcomes determine wave functions was blunted by an embarrassed "it seems so".

I think it is an equivalent viewpoint, once you assign the same "fuzziness" to your body-state quantities as you do to the "potential measurement outcomes".
 
  • #49
koantum said:
Not the ENTIRE. There are about as many solutions to the problem of interpretation as there are ways of formulating the problem. We've already noticed that the way one formulates the problem half predetermines the solution, and if the formulation produces a pseudo-problem, the solution is gratuitous.

I prefer MWI exactly because, apart from the mental step needed to let go of the idea that "ordinary objects are real", there are no pseudo-problems left. The usual pseudo-problems are that there IS no fundamental microscopic ontology, or that there are two different processes in nature (measurements and interactions), or that there is "immediate action at a distance" (killing off special relativity).
In MWI, there IS a fundamental ontology, there is no distinction between "measurements" and "interactions" and locality is respected. You only need a very weird relationship between "ontological reality" and "subjective experience" but if you can mentally get over that idea, all the usual difficulties with QM are gone.

Now, I can also accept the idea that people DON'T want to go into these considerations, and just think of quantum theory as some tricks to calculate outcomes of measurements.

I can again accept the idea (I'm even half convinced of it) that quantum theory as it stands will, one day, find a completion/falsification/whatever, which will change the picture and formalism completely or partially, and that all the speculation about quantum theory being universal in the form it has today is premature, to say the least. (My only response to that is that, given that we don't know yet what the eventual modifications will be, and as we don't have anything else, we'll have to make do with what we have today, as a mental exercise.)

However, I jump up and down each time I read that one CANNOT GIVE an ontological description of the quantum-mechanical formalism without running severely into contradictions, because this is not true. You can do so, and then there are no problems left (except for the counter-intuitive idea of letting go of the ontological existence of ordinary objects).

I see no reason to let go the idea that ordinary objects are real. There is much more to them than commonsense has it but that doesn’t make them unreal.

Again, THIS is the fundamental EXTRA hypothesis which makes all "ontological" views on quantum theory run into havoc. And it is pretty obvious that you will run into trouble when you take ordinary objects for real: it is the very superposition principle of quantum theory, which cannot be applied to ordinary objects if they are to be real.

The problem is to understand the relation between the quantum formalism and ordinary objects. That you need the quantum formalism in order to have them is just one of them. Another is that the quantum formalism describes how ordinary objects are manifested, brought into being, made to exist, whatever. I mentioned elsewhere that it should be expected that the microworld doesn’t have the same features as the macroworld, for if it did, one could never understand how these features are realized, brought into being….


Now, everybody has his/her own cherished "pre-postulates" of course. Mine is clear: it is the necessity to assign a 1-1 relationship between a mathematical object and the ontological world. Call it reductionism, or platonic realism or whatever. Yours is that the ordinary objects you see are the "fundamental ontological entities" in some sort of way. This colors one's picture of the world.
 
  • #50
vanesch said:
I jump up and down, each time I read that one CANNOT GIVE an ontological description to the quantum-mechanical formalism without running severely in contradictions, because this is not true.
I fully agree with you.
it is pretty obvious that you will arrive at troubles when you take ordinary objects for real: that is the very superposition principle of quantum theory, which cannot be applied to ordinary objects, if they are to be real.
You will get into trouble if you take wave functions for real, for then a cat can be both alive and dead, which is pure nonsense. You won't arrive at this nonsense if you understand that the quantum formalism does nothing but correlate measurement outcomes, whose existence it presupposes. To make your approach consistent with the existence of measurement outcomes, you need many worlds. I want to understand this one world. The plural of "world" is (for me) a contradiction in terms.
Call it reductionism, or platonic realism or whatever.
How about Pythagorean mysticism?
Yours is that the ordinary objects you see are the "fundamental ontological entities" in some sort of way.
Absolutely not. I haven’t yet told you what my fundamental ontological entities are. To be able to conceive of them, you need to accept the quantum formalism as being fundamentally a probability algorithm.
Well, you know that strictly speaking, quantum states are NOT probability measures over the potential measurement outcomes, at least not in the Kolmogorov sense
Apropos of Kolmogorov. There are two misconceptions about quantum-mechanical probabilities:
  • that they are subjective rather than objective,
  • that they are absolute rather than conditional. "Every probability is a conditional probability" - Hans Primas, "Time–Entanglement Between Mind and Matter" in Mind and Matter 1 , 81–119, 2003.
Why is the wave function commonly regarded as the primary object and the propagator as derivative? Partly because of the historical precedence of Schrödinger's "wave mechanics" over Feynman's propagator-based formulation of quantum mechanics, in which the propagator is the primary object from which the wave function is derived. But there is a more insidious reason, namely the primacy of absolute over conditional probabilities in Kolmogorov's mathematical foundation of probability theory. This too is a fortuitous case of historical precedence, for an alternative to Kolmogorov's axiomatic formulation of probability theory has been developed by A. Rényi, and every result of Kolmogorov's theory can be translated into Rényi's formulation, which is based entirely on conditional probabilities.
If you take the wave function (rather than the propagator) as the primary object, you will take the probabilities it defines in an absolute sense, as depending on nothing but the wave function. If you take the propagator as the primary object, then it is obvious that the wave function is only a tool for calculating conditional probabilities – probabilities conditioned on the outcomes and times of actual measurements and assigned to the possible outcomes of later measurements.
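In symbols (standard notation, my addition to the post): the propagator carries an earlier outcome forward, and the wave function it produces is just the bookkeeping device for the conditional probabilities.

```latex
% The propagator K evolves the state conditioned on an earlier
% measurement at time t':
\psi(x, t) = \int K(x, t;\, x', t')\, \psi(x', t')\, dx'
% The Born rule then yields probabilities conditional on that earlier
% outcome and on the choice of a later measurement at time t:
p(x, t \mid \psi, t') = |\psi(x, t)|^2
```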
Besides, you know the enormous advantage of the propagator formalism over the wave function formalism – its explicit relativistic invariance. With the wave function formalism you schlep with you the useless burden of a preferred reference frame, which of course is as unobservable as your evolving wave function.
After all, there's no distinction between a quantum system under study and your body, right ?
Wrong. There is a crucial difference between macroscopic objects and all the rest. But (once again) to be able to understand it, you need to accept the quantum formalism as being fundamentally a probability algorithm.
You happen to perceive *A* precise outcome, not a fuzzy one. Why ? You tell me.
There happens to be a world. Why? You tell me.
 
