# Can continuously (strongly) measured quantities ever change state?

1. Oct 3, 2013

### James MC

Alan Turing made the following claim:

"It is easy to show using standard theory that if a system starts in an eigenstate of some observable, and measurements are made of that observable N times a second, then, even if the state is not a stationary one, the probability that the system will be in the same state after, say, 1 second, tends to one as N tends to infinity; i.e. that continual observation will prevent motion."

But is he actually right about every single possible case?
E.g., if one continuously measures the position of a particle, can it move?
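For the simplest idealized case, Turing's limit is easy to check numerically. Here is a minimal sketch (the two-level system and the parameter choices are my own illustration, not anything Turing specified): a spin starts in an eigenstate of $\sigma_z$, precesses under a transverse field, and is projectively measured N times over one second.

```python
import numpy as np

def survival_probability(n_measurements, omega=np.pi, total_time=1.0):
    """Probability that a spin starting in |0> is found in |0> by every one
    of n_measurements equally spaced projective measurements.

    Between measurements the state precesses under H = (omega/2) * sigma_x,
    so over an interval dt the amplitude to remain in |0> is cos(omega*dt/2).
    Each measurement that finds |0> collapses the state back to |0>.
    """
    dt = total_time / n_measurements
    p_stay_once = np.cos(omega * dt / 2.0) ** 2
    return p_stay_once ** n_measurements

for n in (1, 10, 100, 1000, 10000):
    print(n, survival_probability(n))
```

The survival probability behaves as $[1 - O(1/N^2)]^N \to 1$, which is exactly the "freezing" Turing describes, at least for this discrete observable.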

2. Oct 4, 2013

### Simon Bridge

Does Alan claim that this is true for "every single possible case"? Or is he talking about a specific class of cases?
How would you set up a particle in a single eigenstate of position?

3. Oct 4, 2013

### James MC

On the face of it, he appears to be talking about every measurable property. Do you think there is a property to which the basic idea does not apply?

You could perform a (strong) measurement. And theories that postulate collapse stipulate that particles sometimes collapse in accordance with compact support collapse functions.

4. Oct 4, 2013

### f.wright

Say you measured the position of a particle and found it at x1, then waited a period of time, Δt, measured it again and found it at x2, and kept repeating your measurements over the same time interval, finding positions x3, x4, etc. The relevant observable here is the particle's momentum: from consecutive results you infer a momentum of roughly m(<x2> - <x1>)/Δt, so after the first pair of measurements the system is approximately in the corresponding momentum (and hence kinetic-energy) eigenstate. Repeated measurements leave the system in the same eigenstate, and if the system is non-stationary and Δt tends to zero, each measurement collapses the system back to its original eigenstate.

5. Oct 4, 2013

### Demystifier

What we have here is a competition between two continuous processes: the continuous measurement of position and the continuous spreading of the free wave packet in position space. But in reality, measurement is never truly continuous, in the sense that it takes a finite amount of time to actually measure the position. If that time is much shorter than the time needed for the wave packet to spread significantly, then the motion of the particle can be made very "small", i.e., much smaller than it would be without the measurement.

There should be nothing counterintuitive about that. Think of a fast measurement of position as a strong interaction with the environment that tends to keep the particle in one place. For example, a potential V(x) with many narrow, deep wells will keep the particle in one of the wells, thus preventing its motion. To use it for a fast measurement of position, you must be able to switch that potential on and off very fast.
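This competition of timescales can be made quantitative with the textbook result for free Gaussian wave-packet spreading, $\sigma(t) = \sigma_0\sqrt{1 + (\hbar t / 2m\sigma_0^2)^2}$: the packet only spreads appreciably on a timescale $\tau = 2m\sigma_0^2/\hbar$. A minimal sketch (the electron and the 1 Å initial width are my own example values):

```python
import numpy as np

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg

def packet_width(t, sigma0, m=M_E):
    """Width of a free Gaussian wave packet after time t (standard result)."""
    return sigma0 * np.sqrt(1.0 + (HBAR * t / (2.0 * m * sigma0 ** 2)) ** 2)

def spreading_time(sigma0, m=M_E):
    """Timescale on which the width grows by a factor of sqrt(2)."""
    return 2.0 * m * sigma0 ** 2 / HBAR

sigma0 = 1e-10                # initial width ~ 1 angstrom
tau = spreading_time(sigma0)
print(tau)                    # ~1.7e-16 s: the measurement must be much faster
print(packet_width(tau, sigma0) / sigma0)   # sqrt(2), by construction
```

For an electron localized to atomic scales the spreading time is of order 10^-16 s, which shows how fast the "measurement" interaction has to be before the Zeno-type suppression kicks in.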

Last edited: Oct 4, 2013
6. Oct 4, 2013

### Simon Bridge

How might Turing's claim apply to position (your example)?
How would you set something up in an eigenstate of position?

One case where it would not apply is if you made N measurements per second but alternated between measuring conjugate observables... so Alan was probably talking about only a single observable being measured in each process.

8. Oct 8, 2013

### James MC

You're right he was only talking about a single observable.

On how you set something up in an eigenstate of position: well, you can measure its position! On certain ways of understanding the collapse postulate, that will "set up" the state so that it's at least in a position-region eigenstate.

But the issue is also of interest in the context of defining collapse processes in physical terms, thus moving on from the Copenhagen interpretation and its vague measurement postulate. So one question is whether you could define dynamics for special "collapse-triggering" properties so that it's as if those properties are being continuously measured.

9. Oct 8, 2013

### kaplan

This is called the quantum Zeno effect (sometimes the quantum Zeno paradox). It does seem to be a real effect - for example, the lifetime of unstable atoms can be extended by observing them frequently.

Does it apply in every single possible case? The way Turing phrased it, maybe so - but position is not a good example, because you cannot start or ever be in a position eigenstate. More generally you'd have to be careful to make sure you're really measuring the eigenvalue with your experiment.

For position, let's try modifying Turing's statement a little. Instead of a position eigenstate (which is unphysical), consider a sharply peaked Gaussian as the initial state, and a device that measures position with a precision roughly the same as the width of the Gaussian. Now because the Gaussian is sharply peaked, the particle has a large momentum uncertainty, so you have to measure it fast to keep it from getting away. If you do measure it that fast, each time you'll get a value that's almost certainly within a few standard deviations of the mean of the Gaussian, and then I would say your measurement will project onto a new Gaussian centered on your last result.

So, it looks to me that the wavefunction of the particle will be a Gaussian with a mean that takes a random walk with step size the precision of your measurement. Then in the limit of rapid measurements, the particle will wander all over the place very rapidly - so Turing's conclusion would be wrong (or, perhaps there's an error in my analysis). In any case since we didn't start in a position eigenstate, it doesn't really falsify what he said.

Last edited: Oct 8, 2013
10. Oct 8, 2013

### James MC

Wait, hold on - I was following you until that bit. How did you derive that result? Why would it wander all over the place in the limit? Why wouldn't it freeze in the limit?

11. Oct 8, 2013

### kaplan

I'm saying that because our measuring device has finite precision, the wavefunction after the measurement should be a Gaussian (or other peaked shape) with a mean (i.e. center position) equal to the result of the measurement, and with width (i.e. standard deviation) set by the uncertainty of the measurement. Then the next measurement will return the previous result plus/minus a random error of order the width, and then the new wavefunction will again be a Gaussian with the same width, but now centered on whatever the new measurement was, etc.

So the mean of the wavefunction will perform what's called a "random walk": if we're in one dimension, for simplicity, after each measurement it will move left or right with equal likelihood, and the distance it moves each time will be about equal to the uncertainty in the measurement.

After N such steps, the typical distance traveled by the random walker is the square root of N times the step size - so it's slower than you might think, because a lot of the steps tend to cancel out, but still, it moves a distance that increases with N. If you make N measurements per second and send N to infinity, the center of the wavefunction will move really fast.
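That √N scaling is easy to check numerically. A minimal sketch, modelling each measurement as re-centering the packet on the previous result plus an independent Gaussian error whose standard deviation is the measurement precision (my own idealization of the argument above):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def rms_center_displacement(n_steps, precision, n_trials=4000):
    """RMS distance of the packet center after n_steps measurements.

    Each measurement shifts the center by an independent Gaussian error
    with standard deviation `precision`; the expected result is
    sqrt(n_steps) * precision, the standard random-walk scaling.
    """
    steps = rng.normal(0.0, precision, size=(n_trials, n_steps))
    return float(np.sqrt(np.mean(steps.sum(axis=1) ** 2)))

for n in (1, 100, 400):
    print(n, rms_center_displacement(n, precision=1.0))  # ~1, ~10, ~20
```

With the step size held fixed, the RMS displacement grows without bound as N does - which is the point: more frequent imprecise measurements make the center wander faster, not slower.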

Maybe there's something wrong with that argument, but I'm not sure what it is.

12. Oct 8, 2013

### James MC

Oh I see - by "all over the place" I thought you meant "all over space", but you actually meant "all over the small region defined by measurement uncertainty". Actually, no: since you're thinking of the particle's wave function as a Gaussian, and the Gaussian is nowhere zero, doesn't measuring more frequently make it more likely that you eventually get a collapse onto one of the tails, thus sending the particle across the other side of the universe?

Also, what happens (theoretically) if there is no measurement uncertainty, so that measurement is not like multiplying the wave function by a Gaussian, but by a delta function? I take it that this is the (theoretical) situation in which the particle would actually freeze, and never move, given continuous measurement?

13. Oct 9, 2013

### kaplan

That could happen, but it's exponentially unlikely and I think can be ignored. My argument would work if instead of a Gaussian we used a rectangular distribution, or anything else that's precisely zero outside some range.

That can't happen. Zero uncertainty in position means infinite uncertainty in momentum, so the particle would instantly escape. Moreover, position eigenstates are not normalizable, so they cannot represent the state of a particle before or after a measurement. Instead, there is always some uncertainty. The less uncertainty in position, the larger the momentum spread, and the faster you have to do your measurements for my argument to hold. But I think for any fixed uncertainty, my argument should be valid in the limit of very rapid measurements.

14. Oct 11, 2013

### James MC

Sorry, I'm still missing something here. I understand that zero uncertainty in position means that all possible momenta (from negative infinity to infinity in each direction) get equal probability. Since momentum is mv, all possible changes in position over time (v) get equal probability, hence the infinite spread of the position wave function. But the thought is that the instant before the position wave function can spread to infinity, another perfectly strong position measurement is performed, thus multiplying the position wave function by a delta function again. So continual delta-function multiplication leaves the position wave function as a delta function, hence the freeze. Where do I go wrong in my reasoning?

Surely the rectangular distribution would have exactly the same effect as the zero-uncertainty case (generated by the delta distribution)? After all, ANY finite localisation of the position wave function generates instantaneous tails (http://arxiv.org/abs/quant-ph/9806036), and so instant escape?

Isn't a position eigenstate just a state in which that particle is at a specific point with probability 1 and at every other point in space with probability 0? In that case, isn't normalization taken care of, since probability distribution trivially adds to 1?

15. Oct 11, 2013

### kaplan

The spread is instantaneous, so there really is no "instant before it spreads".

There are tails, but they have finite (and decreasing) probability as you go out. In the case of a Gaussian I don't think they cause any problem with my argument. For a rectangle you're right that one might need to be more careful.

A position eigenstate is a state, not a probability density, and it's a distribution called a delta function. The probability density is the squared absolute value of the wavefunction, and a delta function squared is not normalizable.

That's easier to understand in the momentum representation. As you said, zero uncertainty in position means all values of momentum are equally likely. But since any value is possible for momentum, the momentum-space wavefunction clearly cannot be normalizable: it must be constant, and no constant is normalizable (zero gives total probability zero, and any nonzero constant gives infinity).
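For reference, the one-line Fourier transform makes this explicit. The momentum-space wavefunction of a position "eigenstate" $\psi(x) = \delta(x - x_0)$ is

$$\phi(p) = \frac{1}{\sqrt{2\pi\hbar}} \int \delta(x - x_0)\, e^{-ipx/\hbar}\, dx = \frac{e^{-ipx_0/\hbar}}{\sqrt{2\pi\hbar}},$$

so $|\phi(p)|^2 = 1/(2\pi\hbar)$ is a nonzero constant and $\int |\phi(p)|^2\, dp$ diverges: the state cannot be normalized.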

16. Oct 12, 2013

### Jano L.

I think here we might have another example of a situation in which discrete and continuous variables behave differently. The original statement about the "wave function evolution freeze" seems to work for a function $\chi(\sigma)$ that describes a discrete variable, such as spin 1/2, with possible values $\sigma = 1/2, -1/2$.

Such a function can be thought of as a unit vector whose orientation is given by two spherical angles. There are only two possible results of a measurement of spin z, $\sigma = +1/2, -1/2$, corresponding to the two poles of the sphere, so it makes sense that rapid subsequent measurements localize the vector at the same pole of the sphere every time.

Position is a continuous variable: any value is possible. After localization the wave function begins to spread, and the next interaction may localize it at a different point. This is very similar to observing the Brownian motion of a marked particle at short time intervals: each observation localizes the probability distribution, but that does not prevent the particle from moving anywhere.

As kaplan says, a particle cannot be described by a wave function that is localized at a point. Such a function is not compatible with the Born interpretation.