# Bohm trajectories and "protective" measurements?

by bohm2
Tags: bohm, measurements, protective, trajectories
P: 298
 Quote by Quantumental: I am very interested in grasping this, but first I want to ask you if you've considered the "new" approach by Wallace and Timpson? It seems they no longer support the view of wavefunction realism in configuration space, but rather promote an idea called Space-Time State Realism. They have written a paper on it: http://philsci-archive.pitt.edu/4621...aterealism.pdf
My first impression is that this accepts some of the arguments by Maudlin. That is, they accept that the wave function is not merely an element of a Hilbert space but a complex function on configuration space (introducing the first piece of additional structure), and also that the configuration space itself has the structure of a space of configurations of something living in ordinary space.

Of course, because of the relativistic background, all of this is on spacetime instead of space.

And, since it is a many-worlds variant, there is only a wave function, without a configuration itself.

So now we have a configuration space, moreover one with a structure that makes sense for a space of configurations living in a space - and all this with all the properties we would use to describe the actual configuration of the world as we see it - but without any configurations.

I would say there was, in the past, an interesting research program: is it possible to start with only a Hilbert space and the Hamiltonian operator on it and derive everything else, including all the physics? My argument was that this program fails because one needs additional structure already in the first step, where one wants to recover the configuration space from the Hamiltonian operator. From a mathematical point of view this step was the most promising one, because the usual Hamiltonian, of the form $p^2 + V(q)$, treats p and q very differently.
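To make the asymmetry concrete (a standard textbook observation, not specific to this thread): in the position representation,

```latex
\hat H \;=\; \frac{\hat p^{\,2}}{2m} + V(\hat q)
\;=\; -\frac{\hbar^{2}}{2m}\,\Delta_{q} + V(q),
```

so $V(q)$ acts by simple multiplication while the kinetic term is a differential operator; in the momentum representation the roles are exchanged. On the abstract Hilbert space nothing singles out the $q$-basis except this special form of $H$, which is why recovering configuration space from the pair (Hilbert space, Hamiltonian) looked like the most promising first step.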

As far as I understand, this program has been given up. The subdivision into systems, which was a central element, always had a weak point: there is no natural fundamental subdivision, and the subdivisions we have in real life - into observers, devices and so on - have no fundamental origin; they make sense only in the environment of a particular configuration, one which contains at least the Solar system with the Earth.

Maybe, looking for a replacement for the subdivision into subsystems, they use the subdivision into spacetime regions? That would be fine with me; it makes sense. Unfortunately it does not help the aim of saving relativity, because it would introduce a background in a situation where the quantum gravity people hope for background-independent theories. But this doesn't seem to be the case anyway - in other places, it sounds like all the decoherence machinery is used as is, without worrying about the definition of subsystems.

I see a circularity here: to define the real objects we observe, we have to apply the decoherence machinery, which depends on the subdivision into systems. But the usual subdivisions into systems come from the real objects we observe.

This problem is absent in dBB. There we always have a configuration, and if we consider the evolution of this configuration, we have to consider its environment in configuration space. In this environment all the visible subsystems are already present and can be used as they are.

These are just some ideas immediately after reading the paper, so not very deep.
PF Gold
P: 670
 Quote by Quantumental: I tried making sense of the transcripts, but I couldn't. It seems no one did; the questions etc. made it seem that no one agreed with Maudlin, and not even Maudlin seemed able to pin down any technical faults with WF realism.
I'm not sure if this paragraph by Jill North makes it any easier, but here's a pretty good summary (I think) of Maudlin's argument:
 Think of it this way. The relation between the wave function’s space and its ontology, on the one hand, and three-dimensional space and its ontology, on the other, is analogous to the relation between particles, on the one hand, and tables and chairs, on the other. Compare: isn’t it remarkable, if particles are fundamental, that they should conspire to make it seem as though there really are tables and chairs? But of course particles conspire to form themselves into tables and chairs, if particles really are in the fundamental level of reality and the nonfundamental stuff includes tables and chairs. Since the apparent existence of tables and chairs is the starting point for our theorizing, of course the fundamental theory we are led to is one that predicts the appearances (and existence) of tables and chairs. To put it another way, our evidence for the theory, in the first place, is what we observe. But what we observe, everyone agrees, is a parochial reflection of our own situation: we are familiar with tables and chairs. It is then no great coincidence that we end up with a fundamental theory that has the power to predict the appearances for us.
The Structure of a Quantum World
http://philsci-archive.pitt.edu/9347...for_volume.pdf
I'm still having some difficulty understanding the difference between weak and protective measurements, although the authors of some of these papers seem to be suggesting that, unlike weak measurements:
 In protective measurements we obtain this value not as a statistical average, but as a reading of a measuring device coupled to a single system. A sufficient number of protective measurements performed on a single system allow measuring its quantum wave function. This provides an argument against the claim that the quantum wave function has a physical meaning only for an ensemble of identical systems.
Protective Measurements
http://lanl.arxiv.org/pdf/0801.2761.pdf
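For reference, the scheme behind this claim (as I understand the Aharonov-Vaidman papers; the symbols here are the usual pointer variables, not notation taken from the quoted text) couples the pointer momentum $P$ to the observable $A$ weakly and adiabatically:

```latex
H_{\mathrm{int}}(t) \;=\; g(t)\, P A, \qquad \int_{0}^{T} g(t)\,dt = 1, \qquad T \to \infty .
```

If the system is kept in a protected state $|\psi\rangle$ (e.g. a nondegenerate energy eigenstate, preserved by the adiabatic theorem), the pointer shifts by the expectation value $\langle\psi|A|\psi\rangle$ rather than by a random eigenvalue; repeating with different observables $A$ then reconstructs $|\psi\rangle$ on a single system.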

So if I'm understanding this, then it is different from weak measurements as summarized here by Demystifier:

Weak measurements in quantum mechanics and 2.6 children in an American family
http://www.physicsforums.com/blog.php?b=1226

I'm still not sure if this scheme of protective measurements is universally accepted, primarily because of some critical papers on the topic, but in a recent paper Gao suggests that protective measurements rule out ψ-epistemic models, as per PBR:
 In particular, the actual physical state of the measured system can be measured by a series of protective measurements, and the wave function turns out to be a one-to-one representation of the physical state. Therefore, the ψ-epistemic models, in which the wave function or quantum state is not uniquely determined by the underlying physical state, can be ruled out without resorting to nontrivial assumptions beyond those required for a well-formed ontological model.
Comment on "Distinct Quantum States Can Be Compatible with a Single State of Reality"
http://philsci-archive.pitt.edu/9457..._on_PRL_v9.pdf
P: 4,491
 Quote by bohm2: I'm still having some difficulty understanding the difference between weak versus protective measurements
Both weak measurement (WM) and protective measurement (PM) measure an observable without destroying the state. But they achieve it in a different way.

PM does it with a single measurement. WM does it with a large number of measurements, each on another member of an ensemble of equally prepared systems.

For WM, the prepared state before the measurement may be arbitrary (but must be the same for each member of the ensemble). For PM, the prepared state before the measurement cannot be arbitrary; it must be a protected state - in the standard scheme, a nondegenerate eigenstate of the system's Hamiltonian - so that the adiabatic measurement interaction leaves it unchanged.

In a perfect measurement, one would measure an observable:
1. for an ARBITRARY initial state,
2. WITHOUT DESTROYING it, and
3. with only ONE measurement performed.
But in QM such a perfect measurement is not possible. Standard strong measurement violates 2, WM violates 3, and PM violates 1.
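A toy numerical sketch of this contrast (my own illustration, not from the cited papers; the qubit, the noise level, and the shortcut of simply asserting the protective result rather than modelling the adiabatic pointer dynamics are all assumptions of the sketch):

```python
# Toy contrast between weak and protective measurement of <sigma_z>
# for a qubit |psi> = cos(theta)|0> + sin(theta)|1>.
import math
import random

random.seed(0)

theta = 0.6
p0 = math.cos(theta) ** 2           # Born probability of eigenvalue +1
true_expectation = 2 * p0 - 1       # <sigma_z> = cos^2(theta) - sin^2(theta)

# Weak measurement: each run uses a FRESH copy of the state and returns an
# eigenvalue plus large pointer noise (weak coupling); only the ensemble
# average converges to <sigma_z>.
def weak_reading():
    eigenvalue = 1 if random.random() < p0 else -1
    return eigenvalue + random.gauss(0.0, 5.0)

n = 200_000
wm_estimate = sum(weak_reading() for _ in range(n)) / n

# Protective measurement: a SINGLE run on a protected state reads off the
# expectation value directly; here we just assert that known result instead
# of simulating the adiabatic dynamics.
pm_reading = true_expectation

print(true_expectation, wm_estimate, pm_reading)
```

With these numbers the weak-measurement estimate only approaches ⟨σ_z⟩ = cos(2θ) ≈ 0.362 statistically (standard error on the order of 0.01), while the protective reading is exact in a single run - illustrating that WM gives up point 3 and PM gives up point 1.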

EDIT:
For the sake of completeness, let me also explain two additional kinds of measurement:
- first kind (FK) measurement and
- quantum non-demolition (QND) measurement.
Both FK and QND are types of standard strong measurement, so they both destroy the initial state. However, they have a nice property: if, after the measurement, you measure the same observable again, both give the same value of the observable that you obtained in the first measurement. The difference is that FK achieves this only immediately after the first measurement, while QND achieves this at an arbitrary later time. FK measurements are measurements which can be described by a wave-function collapse. Many (but not all!) actual measurements are FK. However, not many actual measurements are also QND.
I don't fully understand this argument, but this author tries to use protective measurement to rule out the MWI:
 It is argued that the components of the superposed wave function of a measuring device, each of which represents a definite measurement result, do not correspond to many worlds, one of which is our world, because all components of the wave function can be measured in our world by a series of protective measurements, and they all exist in this world.
An Exceptionally Simple Argument Against the Many-worlds Interpretation: Further Consolidations
http://philsci-archive.pitt.edu/9494...further_v9.pdf
Another paper came out today suggesting that ψ is ontic, relying on the concept of protective measurement:
 Recently Lewis et al (2012) demonstrated that additional assumptions such as preparation independence are always necessary to rule out a ψ-epistemic model, in which the quantum state is not uniquely determined by the underlying physical state. Their conclusion is based on an analysis of conventional projective measurements. Here we will demonstrate that protective measurements (Aharonov and Vaidman 1993; Aharonov, Anandan and Vaidman 1993), which are distinct from projective measurements, already show that distinct quantum states cannot be compatible with a single state of reality... In conclusion, we have demonstrated that, without resorting to nontrivial assumptions such as preparation independence, the wave function or quantum state is uniquely determined by the underlying physical state, and thus distinct quantum states cannot be compatible with a single state of reality. This improves the interesting result obtained by Pusey, Barrett and Rudolph (2012). Certainly, the quantum state also plays an epistemic role by giving the probability distribution of the results of projective measurements according to the Born rule. However, this role is secondary and determined by the complete quantum dynamics that describes the measuring process, e.g. the collapse dynamics in dynamical collapse theories.
Distinct Quantum States Cannot Be Compatible with a Single State of Reality.
http://philsci-archive.pitt.edu/9609/1/dqs_v6.pdf
 Quote by bohm2: Another paper came out today suggesting that ψ is ontic, relying on the concept of protective measurement...
An excellent summary by Schlosshauer and Claringbold criticising authors who try to infer interpretational insights not just from protective measurements but also from decoherence theory, Bell's theorem, and the PBR theorem:

 We suggest that the failure of protective measurement to settle the question of the meaning of the wave function is entirely expected, for protective measurement is but an application of the standard quantum formalism, and none of the hard foundational questions can ever be settled in this way... Of course this is not to say that by milking the quantum formalism we cannot produce something fresh. Quantum information theory and decoherence theory are good examples, but they, just like protective measurement, have not answered the hard interpretive questions; and they, too, could not be expected to do so. Quantum information theory may have motivated new information-based interpretations of quantum mechanics, but there are quantum information theorists who are Bohmians and others who are Everettians... Thus if we understand the quantum measurement problem as the question of how to reconcile the linear, deterministic evolution described by the Schrödinger equation with the occurrence of random, definite measurement outcomes, then decoherence has certainly not solved this problem, as is now widely recognized. What decoherence rather solves is a consistency problem: the problem of explaining why and when quantum probability distributions approach the classically expected distributions. But this is a purely practical problem, not a game-changer for quantum foundations. To be sure, the picture associated with the decoherence process has sometimes been claimed to be suggestive of particular interpretations of quantum mechanics or to pinpoint internal consistency issues. But it might be safer to say that certain interpretations (such as the Everett interpretation) are simply more in need of decoherence to define their structure... Another example is Bell's theorem, although what exactly the experimentally measured violations of Bell's inequalities tell us about nature remains a matter of debate.
Like Bell’s theorem, the PBR theorem is based on the consideration of hidden-variables models and accommodates a variety of conclusions (Colbeck and Renner 2012, Hardy 2012, Schlosshauer and Fine 2012, 2014).
Entanglement, scaling, and the meaning of the wave function in protective measurement
http://arxiv.org/pdf/1402.1217.pdf
