# On Wave Function Collapse and Accuracy of Energy Measurement

1. Jan 23, 2017

### Electric to be

I have a concern about having some wave function psi, that is originally a superposition of many eigenstates (energies). Traditionally, it is said that the square of the coefficient of each of the component eigenfunctions represents the probability of measuring this particular energy eigenstate. Once a measurement is done, the wavefunction is to collapse to one of these eigenstates.

My concern:

Does this mean that energy is somehow measured to an infinite precision? I know obviously position and momentum cannot have this happen since they are continuous observables. What if I'm not able to accurately measure the energy? Or is this somehow a picture of a measurement that is ideal? I feel like this would be prevented by quantum mechanics, but I don't necessarily see how, unlike momentum and position uncertainties, which result from the changing of wavefunctions.

I've seen other places that apparently inaccurate measurements result in another superposition, with more weighting of the eigenfunctions close to the region measured. If so, why is it commonly stated that the coefficient squared is straight up the probability of measuring an eigenstate?
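The coefficient-squared rule described above can be sketched numerically. This is a minimal illustration with made-up coefficients and eigenvalues, not a model of any particular system:

```python
import numpy as np

# Hypothetical superposition of 4 energy eigenstates with
# (unnormalized) coefficients c_n; values are purely illustrative.
c = np.array([0.6, 0.5, 0.3, 0.2])
energies = np.array([1.0, 2.0, 3.0, 4.0])  # eigenvalues E_n, arbitrary units

# Born rule: P(E_n) = |c_n|^2 / sum_m |c_m|^2
probs = np.abs(c) ** 2
probs /= probs.sum()

print(probs)        # probability of measuring each eigenvalue
print(probs.sum())  # 1.0 -- the probabilities are normalized
```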

2. Jan 23, 2017

### Staff: Mentor

Yes, but this is an idealized case. In practice any measurement will always have a finite limit to its precision. So an actual measurement of energy won't put the state into an energy eigenstate, but only into a state with a very narrow spread of energies, i.e., a superposition of energy eigenstates with eigenvalues that are very close together.

Because for many purposes the finite precision of the measurement does not have any significant effect, so the measurement can be modeled as an idealized measurement with infinite precision without affecting the answers significantly. The idealized case is often much easier to work with mathematically, which is why this is done.

3. Jan 23, 2017

### Staff: Mentor

I'm not sure exactly what you mean by this, but on a collapse interpretation, which you are using, any measurement "changes the wavefunction".

4. Jan 23, 2017

### Electric to be

Alright, fair enough. I am fairly new to QM. However, how would one mathematically model the collapse to this new wavefunction? It's easy to state, as a postulate of QM, that upon measuring an eigenvalue the wavefunction collapses to the corresponding eigenstate. I accept this. However, how do the postulates of QM provide a mathematical basis for the transition from state 1 to state 2, based on what you measured and how accurately you measured it?

5. Jan 23, 2017

### Staff: Mentor

You would model the final state as a superposition of energy eigenstates where the eigenvalues are very close together, as I said. The simplest way would be to have the wave function be a Gaussian peaked at the energy that, in the idealized case, would be the energy eigenvalue of the idealized measurement. The spread in energy is then just the variance of the Gaussian.

You compute the probabilities the same way, but now they are probabilities for the Gaussian you measure to be peaked at a particular energy. The variance of the Gaussian is not determined by the state of the measured system; it's determined by your measuring apparatus (and that's probably not something you're going to use QM to model, you're just going to determine it empirically).
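A minimal numerical sketch of the idea above, for a continuous spectrum. The measured value `E0`, the apparatus spread `sigma`, and the energy grid are all illustrative assumptions, not values from the discussion:

```python
import numpy as np

# Finite-precision measurement returned E0 with apparatus resolution sigma.
# Model the post-measurement state as a Gaussian amplitude in energy.
E0, sigma = 5.0, 0.1                 # illustrative values
E = np.linspace(4.0, 6.0, 2001)      # energy grid (arbitrary units)

amp = np.exp(-(E - E0) ** 2 / (4 * sigma ** 2))  # Gaussian amplitude
prob = np.abs(amp) ** 2                          # |psi(E)|^2
prob /= prob.sum() * (E[1] - E[0])               # normalize as a density

# prob is a narrow distribution peaked at E0 with spread ~sigma
print(E[np.argmax(prob)])  # 5.0
```

Note the factor 4 in the amplitude: squaring the amplitude gives a probability density proportional to exp(-(E - E0)^2 / (2 sigma^2)), i.e. a Gaussian with standard deviation sigma.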

6. Jan 23, 2017

### Electric to be

Maybe a physical scenario would help me understand. Say I measured the energy of a particle to be some value E, with an uncertainty of plus or minus 1 joule. How does this information allow me to start assigning coefficients/probabilities to energy level E and nearby energy levels, in the way you said? How do I begin?

7. Jan 23, 2017

### Staff: Mentor

Do you know what a Gaussian is? The probability distribution of the various energies after the measurement will be a Gaussian with a peak at energy E and a spread (standard deviation) of 1 joule.

8. Jan 23, 2017

### bhobba

The OP probably hasn't studied mathematical statistics. The reason it's Gaussian (another name is "normal") has to do with the so-called central limit theorem:
https://en.wikipedia.org/wiki/Central_limit_theorem

When you do a measurement, a simple model is that the resulting error is the sum of many different contributions, each with its own probability distribution. The theorem says the resulting distribution will be Gaussian. It's one of the most useful and widely applicable results in all of statistics.
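The effect is easy to demonstrate numerically. Here is a sketch (with made-up numbers) where each trial's total error is the sum of many independent, individually non-Gaussian contributions; the sum's distribution comes out close to Gaussian:

```python
import numpy as np

# Each trial's error is the sum of 50 independent contributions, each
# uniform on [-1, 1] (deliberately non-Gaussian). Per the central limit
# theorem, the sum is approximately Gaussian with mean 0 and
# variance 50 * (1/3), so standard deviation sqrt(50/3) ~ 4.08.
rng = np.random.default_rng(0)
n_sources, n_trials = 50, 100_000

errors = rng.uniform(-1, 1, size=(n_trials, n_sources)).sum(axis=1)

print(errors.mean())  # close to 0
print(errors.std())   # close to sqrt(50/3) ~ 4.08
```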

Thanks
Bill

9. Jan 23, 2017

### Electric to be

I know what it is, indeed. This would make sense to me if the Gaussian were over a set of positions, a continuous observable. However, since the energies are discrete, how would the probabilities of the energies fall on this continuous function?

10. Jan 23, 2017

### Staff: Mentor

Only if the system being measured is a bound system (e.g., an electron in an atom). If the system is a free particle, for example, the energy spectrum is continuous.

Even in the discrete case, there will still be a finite accuracy to energy measurements; for example, look up spectral line broadening. One way of viewing this is that the actual Hamiltonian of the system is not precisely the one we are using in the mathematical model (because the actual Hamiltonian will include a potentially unbounded set of interactions with the environment, which we don't model because it would be way too complicated). So where the mathematical model has discrete energy eigenstates, the actual Hamiltonian's spectrum will be a set of sharply peaked Gaussians.

11. Jan 23, 2017

### bhobba

Measurement is a result of interaction between the system and what is measuring it. Many of the errors are entirely of classical origin, e.g., if it's a needle readout, the needle jiggles a bit. Some are of quantum origin, e.g., in calculating the energies of the hydrogen atom you ignore the electron's interaction with the quantum EM field that permeates all space. This means the energy is not precisely as calculated, and the atom is not even in a stationary state - this has consequences such as spontaneous emission:
http://www.physics.usu.edu/torre/3700_Spring_2015/What_is_a_photon.pdf

Our models are far from perfect, and that in itself introduces some error. All these factors, in this and any other measurement, QM or not, lead to errors - it's inevitable.

That's why the Central Limit Theorem is so important: we know errors are the result of many contributions, so it assures us the overall distribution will be Gaussian. Be warned, though, there are exceptions. For example, the distribution of exam results is usually fairly normal, but one year the statewide grade 10 end-of-year math results had two peaks. No one had any idea why; before and since, it was reasonably Gaussian. Strange but true.

Thanks
Bill

Last edited: Jan 23, 2017
12. Jan 23, 2017

### Electric to be

Yes, I understand about the finite accuracy. I'm just asking how the probabilities will be allocated with a Gaussian in the discrete case. Doesn't the wavefunction still have to be a sum of the eigenfunctions no matter what? So after the measurement with finite accuracy, the central energy level will have the highest probability, and nearby eigenenergies will have decreasing probabilities. But how exactly will they be allocated? I've been told a Gaussian, but that should only apply in the continuous case.

Edit: I understand now what you're trying to say about the Hamiltonian. However, assuming for a moment that the discrete energy levels of the original Hamiltonian do correctly model the system, how do you go about assigning probabilities?

13. Jan 23, 2017

### bhobba

Because it's not really discrete - that's just what a simple model shows. Even more accurate models are far from perfect.

This whole business of discreteness in QM is very complicated, but if you want to go into it, along with some advanced hairy math, then see the following:

Thanks
Bill

14. Jan 23, 2017

### bhobba

I think you need to study some QFT before making statements like that - it's rather complicated; e.g., the electrons in an atom are not really in a stationary state.

But even if it is discrete, it can be, as was pointed out, finely discrete over a small bandwidth, with each little discrete component Gaussian-distributed.
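For the discrete case the OP is asking about, one simple prescription is to evaluate the Gaussian at each discrete eigenvalue and renormalize so the weights sum to 1. A minimal sketch, with hypothetical evenly spaced levels and illustrative numbers:

```python
import numpy as np

# Hypothetical discrete eigenenergies, plus a measured value E0 and an
# apparatus spread sigma -- all values are illustrative assumptions.
levels = np.arange(1.0, 11.0)   # E_n = 1, 2, ..., 10 (arbitrary units)
E0, sigma = 5.0, 1.0

# Sample the Gaussian at each discrete level, then renormalize so the
# weights form a valid discrete probability distribution.
weights = np.exp(-(levels - E0) ** 2 / (2 * sigma ** 2))
probs = weights / weights.sum()

print(levels[np.argmax(probs)])  # 5.0 -- the central level dominates
print(probs.sum())               # 1.0
```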

Thanks
Bill

15. Jan 23, 2017

### Electric to be

Understood. I may be reaching here, but suppose for a moment that, without considering the true reality of the situation, we took the flawed model with genuinely discrete energy levels as accurately describing the system. Under this framework, if a measurement with a certain uncertainty were taken, how would probabilities be assigned to these discrete levels? Obviously the central eigenvalue would have the highest, but since a continuous probability distribution such as the Gaussian couldn't be used directly in this situation, what would be? Since this isn't the reality, as you said, it's more of a math question now, but one I am still curious about.

16. Jan 23, 2017

### bhobba

Hmmmmm. I will mention just one example - Fermi's Golden Rule:
https://en.wikipedia.org/wiki/Fermi's_golden_rule

Systems, quantum and classical, are perturbed all the time by all sorts of things.

Thanks
Bill

17. Jan 23, 2017

### Staff: Mentor

In this idealized model, there is no such thing as measurement uncertainty; you have idealized it away. In this idealized model, if the system is in a stationary state, and you measure its energy, there is no probability; you will always get the exact eigenvalue.

18. Jan 23, 2017

### Electric to be

Alright, I guess I'll settle on that. I've just read elsewhere on the internet that these so-called "inaccurate" measurements result in superpositions of several eigenfunctions.

I believe I understand, though: this system of modeling assumes perfect measurements.

19. Jan 23, 2017

### bhobba

You realize that superposition simply means the possible states form a vector space? That means any state is a superposition of many other states in many (indeed infinitely many) ways. There are many states, e.g., those of exact position, that are a superposition of many different energy states - it's a simple consequence of the mathematical structure of QM.

Thanks
Bill