# Wavefunction collapse and Dirac delta functions

1. Sep 27, 2011

### ralqs

What is the experimental evidence that a wavefunction will collapse to a Dirac delta function, and not to something more 'smeared out'?

2. Sep 27, 2011

### xts

There is no such evidence, there can't be such evidence, and the wavefunction always collapses to something smeared (if we speak about continuous properties, like position, momentum, etc.).
Collapse to a delta would mean you measured the position with absolute precision - which is impossible.

3. Sep 28, 2011

### ralqs

I'm sorry, but I don't think that's right. The reason measurements yield smeared-out results is that the measurement tools themselves obey the uncertainty relations, and so the exact position of, say, a measuring rod is ill-defined. I'm nearly certain that the wavefunction collapses to a single point. How else would a discussion of whether particles are point-like or not make sense?

4. Sep 28, 2011

### xts

Wavefunctions collapse to a point (or a delta) in an ideal Platonic world, where we perform ideal, absolutely precise measurements. Such a world does not exist physically - and not only because of the technical limitations of our rulers and stopwatches. It is fundamentally impossible.

Take the example of a plane wave (light coming from a very distant source, or from a laser). You send this light through a slit of adjustable width. Then you observe Fraunhofer diffraction on the screen. The pattern results from collapse - a measurement of the position.

Start with a very wide slit. The position is barely known, so the diffraction is barely visible. Make the slit narrower - the pattern widens.

At what width of the slit would you call the wavefunction 'collapsed'? Would it be a delta (a 0-width slit, thus no light coming through it at all)?
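A quick numerical sketch of this trend (not a model of any real apparatus; arbitrary units, grid and slit widths chosen only for illustration). In the Fraunhofer regime the far-field amplitude is the Fourier transform of the aperture, so a narrower slit must give a wider pattern:

```python
import numpy as np

def central_lobe_width(slit_width, n=4096, extent=100.0):
    """Width (in FFT bins) of the central diffraction lobe for a slit.

    Fraunhofer diffraction: the far-field amplitude is the Fourier
    transform of the aperture, so a narrower slit gives a wider pattern.
    Units are arbitrary; this only shows the trend.
    """
    x = np.linspace(-extent / 2, extent / 2, n, endpoint=False)
    aperture = (np.abs(x) < slit_width / 2).astype(float)
    intensity = np.abs(np.fft.fftshift(np.fft.fft(aperture))) ** 2
    intensity /= intensity.max()
    return int(np.sum(intensity > 0.5))  # samples above half maximum

wide_slit = central_lobe_width(10.0)
narrow_slit = central_lobe_width(1.0)
# the narrower slit (better position knowledge) gives the wider pattern
assert narrow_slit > wide_slit
```

As the slit width goes to zero the pattern width diverges - which is xts's point: the delta limit is not something you can reach.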

Last edited: Sep 28, 2011
5. Sep 28, 2011

### gvk

Immediately after the collapse of a wavefunction, a second measurement gives you the same observable value as before.
All track detectors (Wilson chamber, bubble chamber, solid-state detectors, microchannel plate detectors, etc.) confirm this.
You may consider each bubble as one particular measurement in a series of measurements of one particle along its long bubble track.
The first bubble is the collapse of the wavefunction; the second bubble is a second measurement of the same particle, and therefore it gives exactly the same transverse location. All subsequent measurements give slightly different transverse locations, tracing out the classical trajectory of the particle.

Last edited: Sep 28, 2011
6. Sep 28, 2011

### xts

@gvk - sure!
We should just keep in mind the difference between a Euclidean point and a bubble of 0.5 mm or so (that was so long ago... I forget what their real sizes were...).

My beloved pions tracked in BEBC had their wavefunctions collapsed not to a delta, but to some 0.5 mm wide function.

They did not collapse "fully" (to a delta in position -> infinite spread in transverse momentum). Quite the contrary: their transverse momentum remained almost unchanged, so we could track them, watching a series of bubbles form a fairly regular arc...

Last edited: Sep 28, 2011
7. Sep 28, 2011

### gvk

xts

It depends on the size of the detector's cells. With modern solid-state detectors it can be on the nanoscale.

8. Sep 28, 2011

### Ken G

I think the confusion here is that there is a difference between choosing a "position basis" like psi(x), and actually doing measurements with infinite precision. We can choose a position basis even if we are not doing position measurements, let alone position measurements with infinite precision, but if we are doing position measurements, the Born rule says the probability of finding the particle in a bin of size dx is |psi(x)|^2 dx. Note the dx-- to get a finite probability, we need a finite dx. After the measurement is over, if we found that the particle was in some bin of size X, we have a new wave function that is truncated outside X. It doesn't matter if we actually found it in X, or if we looked everywhere outside X and didn't find it, we still get the same truncated wave function. We can make X arbitrarily small in principle, but we lack the technology to make X=0. Thus, no wave functions are ever "really" delta functions, but whenever X is smaller than the scale of precision that we care about, we can treat it as a delta function without encountering any problems.
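This binning picture is easy to check numerically. A minimal sketch (my own illustration, assuming a Gaussian packet with sigma = 1 in arbitrary units): the probability of "found in bin X" is the sum of |psi(x)|^2 dx over the bin, and the post-measurement state is the packet truncated to X and renormalized:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
sigma = 1.0
# Normalized Gaussian wave packet: |psi|^2 is a normal pdf with std sigma
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))

total = np.sum(np.abs(psi) ** 2 * dx)            # Born rule over the whole line -> ~1

bin_mask = np.abs(x) < 0.5                       # a bin of finite size X = 1
p_bin = np.sum(np.abs(psi[bin_mask]) ** 2 * dx)  # probability of "found in X"

# "Collapse" to the bin: truncate outside X, then renormalize
psi_post = np.where(bin_mask, psi, 0.0)
psi_post /= np.sqrt(np.sum(np.abs(psi_post) ** 2 * dx))

assert abs(total - 1.0) < 1e-4
assert 0.0 < p_bin < 1.0   # finite dx -> finite probability, never a delta
```

Shrinking the bin makes p_bin go to zero, which is exactly why a literal X=0 measurement outcome never occurs.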

9. Sep 28, 2011

### xts

It doesn't matter if it is nano- or micro-scale: it is not a delta. A delta would mean infinite uncertainty in momentum.
We must always be aware that we are using idealisations. For practical purposes we may assume that the bubble (a hit in a silicon strip, etc.) is a point - that is what we do when we reconstruct the track.
But for other (or rather the same) purposes we assume that the bubble is laaaarge - the measurement does not disturb the direction of the particle.
Those views contradict each other if taken seriously in QM.

But, of course, I fully agree - a 100 MeV pion in a bubble chamber is a snooker ball; we may forget about the uncertainty principle and its wavefunction - it simultaneously has well-defined position and momentum.

10. Sep 28, 2011

### nonequilibrium

I see what you guys are saying, but where in the books is it stated that the wave doesn't actually become an eigenvector of the operator, but only "kind of"?
I'd very much appreciate a reference to standard literature on QM.

11. Sep 28, 2011

### xts

Nowhere. Virtually all QM textbooks ignore the interpretation and use the 'shut up and calculate' approach. It is up to you whether you want to reduce the wavefunction at some stage or not yet. Of course, textbooks advise you to do this in common-sense-justified situations.

Take the most horrible and boring book: Landau-Lifshitz. Analysis of the double-slit experiment. You may collapse the wavefunction to an eigenstate as the particle passes the slit. But if you want better accuracy, you shouldn't reduce it, as the slits have non-negligible width.

12. Sep 28, 2011

### nonequilibrium

Okay, but then what are your arguments for not viewing it as collapsing to a delta function? (Don't read this sentence as attacking you - it's hard to detect tone on a forum; I mean it as a genuine question.) Conceptual ease? (Of course it's debatable whether it is indeed conceptually easier, but you might believe so.) Or empirical grounds? (Not that empirical grounds are the only thing you can judge a theory on.)

13. Sep 28, 2011

### xts

I'll split this into two:
A. what is my argument against 'collapse' as something real;
B. what is my argument against 'collapse to delta' - practical;
C. what is my argument against 'collapse to delta' - fundamental;

A. There is no way (no textbook gives a recipe) to distinguish between a 'collapsed' and an 'uncollapsed' wavefunction. I see 'collapse' as substituting an actual value for a variable in some equation, or eliminating a variable from a set of equations. It is only a mathematical operation.

B. (assuming we use the term 'collapse' to mean reducing our equations) - 'collapse to delta' would mean that we know some variable with absolute precision. It may be (it is!) a good approximation, but in reality we never measure any value with perfect precision. So if we substitute the 'measured value' into our equations, we must, in fact, substitute not a point-sharp value, but rather some range of possible values. In terms of textbook QM (Landau-Lifshitz - argh!) we don't collapse the wavefunction to a Dirac delta, but rather to some narrow Gaussian function, reflecting our measurement.

C. Collapse to a delta in position causes absolute uncertainty in momentum: up to infinity. Collapse to a delta in momentum causes absolute uncertainty in position (the particle is anywhere in the infinite Universe). Collapse to a delta in time causes infinite uncertainty in energy (the particle after the measurement may have any energy - up to infinity - so the measurement would have to cost infinite energy). And so on: any absolutely precise measurement leads to infinite uncertainty in something else.
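Point C can be illustrated numerically (my own sketch, not from the thread; hbar = 1 and arbitrary units): take the "collapsed" state to be a Gaussian of width sigma_x rather than a delta, Fourier-transform it to momentum space, and watch the momentum spread grow as sigma_x shrinks:

```python
import numpy as np

def momentum_spread(sigma_x, n=8192, L=200.0):
    """Std. dev. of momentum (hbar = 1) of a Gaussian packet of width sigma_x."""
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    dx = x[1] - x[0]
    psi = (2 * np.pi * sigma_x**2) ** -0.25 * np.exp(-x**2 / (4 * sigma_x**2))
    phi = np.fft.fft(psi)                    # momentum-space amplitude
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)  # momentum grid
    prob = np.abs(phi) ** 2
    prob /= prob.sum()
    mean = np.sum(k * prob)
    return float(np.sqrt(np.sum((k - mean) ** 2 * prob)))

# Narrower position collapse -> wider momentum distribution
assert momentum_spread(0.1) > momentum_spread(1.0)
# A Gaussian saturates the uncertainty bound: sigma_x * sigma_p = 1/2
assert abs(1.0 * momentum_spread(1.0) - 0.5) < 0.01
```

In the limit sigma_x -> 0 (the delta) the momentum spread diverges, which is the "infinite uncertainty in something else" above.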

14. Sep 28, 2011

### nonequilibrium

Okay, let's assume A:

I like your argument C, but about B, the core sentence is that "in reality we never measure any value with perfect precision",

which I don't agree with, or at least you need to explain why you believe that we do not. After all, if you really interpret psi as predicting the number appearing on the screen of your measuring apparatus (or at least the probability distribution thereof...), then I'd ask: don't we always get a definite number? (You, as the experimenter, might add an uncertainty interval, but that's you doing that, not nature.)

15. Sep 28, 2011

### xts

Ken G's post #8 and mine #9 should give an explanation.

We never get an "exact number". We always get some span of possible values.
We always have "a 3 micrometer wide grain of AgBr in photo emulsion blackened" - you can't tell whether it got hit on its left or right edge.
Or "a photomultiplier behind a 10 micrometer wide slit" pinging a signal.
We can't have "infinitesimally narrow" detectors ringing.
If your apparatus displays an exact value, say 3.672 V, it means that the measured voltage was anywhere between 3.6715 and 3.6725 V.

EDIT>>>
I appreciate that no one poked fun at my "splitting into two: A, B, and C".

Last edited: Sep 28, 2011
16. Sep 28, 2011

### Ken G

I think it would help to better understand just what an operator is. An operator, like the position operator, is a very idealized entity-- it is the operator X whose eigenvalues are x, and it acts on a Hilbert space of possible states. This is all the mathematics of quantum mechanics, but it doesn't have a whole lot of direct correspondence with reality. That's OK, the theory is intended to idealize reality; we do that all the time in physics. But in reality, we can only do a measurement with some precision, so the "eigenvalue" of a measurement is not actually a continuous outcome x, it is a discrete outcome like "bin #47". All experiments must be like that; we do not actually have access to continuous outcomes.

Indeed, our apparatus might read outcomes that are more precise than the actual eigenvalues really are-- if you time a nuclear decay with a stopwatch that has many digits of accuracy, but you are just pushing the button on the stopwatch when you see a flash that some nucleus has decayed, clearly the number on the stopwatch reports a precision that simply does not exist in the experiment. But barring experimental error like that, which you are right to classify in a different category, every experiment has some true precision, and it means there is effectively only a finite number of discrete meaningfully different outcomes for that experiment.

But these discrete position operators are awkward to write out; it is much easier to just treat them as the X operator with eigenvalues x, even though that's a different measurement than the one we are doing. The distinction won't matter, and we can use the replacement operators and imagine the replacement experiment, whenever we are planning on binning these outcomes to define the outcome of the actual experiment. In other words, a real experiment with finite precision can be obtained by binning imagined experiments that we are not actually doing, and that's what |psi(x)|^2 dx means-- binning outcomes of impossibly precise observations over a dx bin to account for the outcomes of possible observations. The same holds for expectation values-- if you want to know the expected x for a real observation, you can find the expectation value of the x eigenvalues of the impossible experiments, and expect that, if the precision is high enough, the expectation values will give similar averages for both the impossibly precise idealized experiments and the experiments you can actually do.

On the issue of "collapses", there seem to be two ways the term gets used. One is when you decohere the eigenstates of a given subsystem when a measurement is done (where by measurement we mean precisely the act of decohering a certain set of eigenstates), but that's not really a collapse because it's pure quantum mechanics-- it only yields a mixed state because it is a projection onto a substate. What I would mean by "collapse" can only occur when the outcome is considered as being registered, so that the mixed state "collapses" into a definite state of the subsystem. In other words, many people seem to use the term collapse to mean pure-->mixed, but I think a more appropriate meaning is mixed-->pure.

Last edited: Sep 28, 2011
17. Sep 29, 2011

### ralqs

This has been a very illuminating discussion, thank you very much.

18. Sep 29, 2011

### jfy4

Hi,

I have a similar question that's a little fluffy....

When a hydrogen atom absorbs a photon, (sorry to use the word 'know' here) does it know exactly how much energy it absorbed, or is this quantity also smeared according to the atom ( I could also imagine how such a question might not even be answerable...sorry)?

Thanks,

19. Sep 30, 2011

### xts

It is smeared - as the energy of the photon was not known exactly. The emission/absorption lines of atoms are not point-sharp - they always have some width. And an atom may absorb a photon of any energy within this range.

Only the ground state of the atom has a sharply defined energy. All other levels are smeared (since the lifetimes of those states are finite).
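For a rough sense of scale (my own back-of-the-envelope, assuming the commonly quoted lifetime of about 1.6 ns for hydrogen's 2p state): the energy-time uncertainty relation gives a natural linewidth of order hbar/tau.

```python
hbar_eV_s = 6.582e-16    # reduced Planck constant, in eV*s
tau = 1.6e-9             # approx. lifetime of hydrogen's 2p state, in seconds
gamma = hbar_eV_s / tau  # natural linewidth: Delta E ~ hbar / tau

e_transition = 10.2      # Lyman-alpha transition energy, in eV
print(gamma, gamma / e_transition)
# of order 4e-7 eV: tiny compared with the 10.2 eV transition, but not zero
```

So the line is extremely narrow relative to the transition energy, but it is a finite width, not a delta - which is xts's point.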

20. Sep 30, 2011

### nonequilibrium

Yes, xts and Ken, thank you for the interesting contribution. I am not yet wholly convinced, but maybe it is not bad to be unconvinced about much in QM, and I see the merit of your views.

I was wondering: has anyone ever tried to actually model such a discrete position operator (making the inexact exact)? If so, I'm interested in what it would look like.
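One way to sketch it (a toy model of my own, not from the thread; all sizes chosen arbitrarily): on a finite grid, group points into bins and define the coarse position operator as the diagonal matrix that assigns each grid point its bin's center. Its spectrum is discrete and degenerate, and "measuring" it projects the state onto one bin, exactly as in Ken G's binning picture:

```python
import numpy as np

n, b = 12, 3                             # 12 grid points, bins of 3 points
x = np.arange(n, dtype=float)
centers = x.reshape(-1, b).mean(axis=1)  # one eigenvalue per bin

# Coarse-grained position operator: each grid point maps to its bin center
X_coarse = np.diag(np.repeat(centers, b))

# Discrete, degenerate spectrum: 4 distinct eigenvalues, each 3-fold
eigvals = np.linalg.eigvalsh(X_coarse)
assert len(np.unique(np.round(eigvals, 9))) == n // b

# Born rule per bin, then projection ("collapse") onto the observed bin
rng = np.random.default_rng(0)
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)
probs = np.add.reduceat(np.abs(psi) ** 2, np.arange(0, n, b))
j = int(np.argmax(probs))                # pretend bin j was the observed outcome
psi_post = np.zeros_like(psi)
psi_post[j * b:(j + 1) * b] = psi[j * b:(j + 1) * b]
psi_post /= np.linalg.norm(psi_post)

# The post-measurement state is a genuine eigenvector of the coarse operator
assert np.allclose(X_coarse @ psi_post, centers[j] * psi_post)
```

Note the post-measurement state really is an eigenvector here, but of the coarse, finite-precision operator, not of the idealized continuous X - which reconciles the textbook eigenvector language with the "no collapse to delta" point made above.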