maka89 said:
I can accept that one cannot make a measurement with absolute precision and have a usable wave function afterwards due to the uncertainty principle.
Well, actually, that isn't quite what QM says. It places no limit on the precision of any single measurement; what it limits is the precision with which certain pairs of observables, such as position and momentum, can be measured at the same time. In practice, of course, even though in principle nothing stops you from making a measurement with 100% accuracy, it isn't achievable. That observation is important for certain technical aspects of the theory, such as making sense of the Dirac delta function and Rigged Hilbert Spaces, but that's a story for another time.
maka89 said:
Is there some operator or some maneuver that, when used on the wave function, reshapes it into a wave function with a smaller standard deviation, but with a different mean (chosen probabilistically) than the original? (Measuring the particle, but accepting/forcing some uncertainty in the measurement)
Sure - you simply use a measuring device whose precision is the new variance you want. An example would be the slit in the double slit experiment - you vary the precision of the location of the object going through the slit by adjusting the width of the slit.
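To make that concrete, here is a rough numerical sketch (my own illustration, nothing rigorous) of what passing through a slit does to a wave function: a broad Gaussian packet is multiplied by a sharp-edged window and renormalised, and the standard deviation drops to roughly the slit width while the mean jumps to wherever the slit happens to be. The names slit_centre and slit_width and all the numbers are just illustrative choices.

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

def stats(psi):
    """Mean and standard deviation of position for a wavefunction on the grid."""
    p = np.abs(psi)**2
    p /= p.sum() * dx                 # normalise the probability density
    mean = np.sum(x * p) * dx
    var = np.sum((x - mean)**2 * p) * dx
    return mean, np.sqrt(var)

# Broad initial wave packet, sigma = 5
psi = np.exp(-x**2 / (2 * 5.0**2))

# A slit of width 2 centred at x = 3 (illustrative choice)
slit_centre, slit_width = 3.0, 2.0
window = (np.abs(x - slit_centre) < slit_width / 2).astype(float)

psi_after = psi * window              # wavefunction just after passing the slit

print("before:", stats(psi))          # mean ~ 0, std ~ 5
print("after: ", stats(psi_after))    # mean ~ 3, std ~ slit_width/sqrt(12) ~ 0.58
```

In a real setup, which slit (or which detector bin) the particle ends up in is the part QM only predicts probabilistically.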
maka89 said:
If not: How do you think of an actual real physical measurement of a particle?
The way to think of an actual measurement, any measurement, is to go back to what an observation is. It will have some outcomes yi. The fundamental axiom of QM is that those yi are associated with a set of disjoint positive operators Ei, with ∑ Ei = 1, called a resolution of the identity. You can combine those into an operator O = ∑ yi Ei, which is a Hermitian operator, and via the spectral theorem you can recover the yi and Ei from O. O is called the observable associated with the observation.
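As a small sketch of that axiom (purely illustrative, in finite dimensions with the Ei taken as rank-one projectors, the simplest resolution of the identity), the following checks ∑ Ei = 1, builds O = ∑ yi Ei, and recovers the yi via the spectral theorem:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random orthonormal basis of C^3 -> three rank-one projectors Ei
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Q, _ = np.linalg.qr(A)
E = [np.outer(Q[:, i], Q[:, i].conj()) for i in range(3)]

print(np.allclose(sum(E), np.eye(3)))        # sum Ei = 1 (resolution of the identity)

# Outcomes yi and the associated observable O = sum yi Ei (Hermitian)
y = np.array([-1.0, 0.5, 2.0])
O = sum(yi * Ei for yi, Ei in zip(y, E))
print(np.allclose(O, O.conj().T))            # O is Hermitian

# Spectral theorem: the eigen-decomposition of O recovers the outcomes yi;
# the eigenvectors give back the projectors Ei as np.outer(v, v.conj())
vals, vecs = np.linalg.eigh(O)
print(np.allclose(np.sort(vals), np.sort(y)))    # same outcomes
```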
Of course, if we do the observation many times we will get an expected value E(O). That is determined by the so-called Born rule, but just for the heck of it I will derive the Born rule from a simple assumption, to show it's not something just pulled out of a hat. The assumption is that E is linear, i.e. if O1 and O2 are observables then E(c1*O1 + c2*O2) = c1*E(O1) + c2*E(O2).
First, it's easy to check that <bi|O|bj> = Trace(O |bj><bi|) for any orthonormal basis |bi>. Expanding O in that basis,
O = ∑ <bi|O|bj> |bi><bj| = ∑ Trace(O |bj><bi|) |bi><bj|
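If you want to convince yourself of those two identities numerically before going on, here is a quick check in the standard basis of C^3 (the matrix and the basis are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
O = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
O = O + O.conj().T                                  # make O Hermitian

b = np.eye(n)                                       # |bi> = standard basis vectors

# <bi|O|bj> = Trace(O |bj><bi|)
i, j = 0, 2
lhs = b[:, i].conj() @ O @ b[:, j]
rhs = np.trace(O @ np.outer(b[:, j], b[:, i].conj()))
print(np.allclose(lhs, rhs))

# O = sum_ij Trace(O |bj><bi|) |bi><bj|
O_rebuilt = sum(np.trace(O @ np.outer(b[:, j], b[:, i].conj()))
                * np.outer(b[:, i], b[:, j].conj())
                for i in range(n) for j in range(n))
print(np.allclose(O, O_rebuilt))
```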
Now we use our linearity assumption
E(O) = ∑ Trace (O |bj><bi|) E(|bi><bj|) = Trace (O ∑ E(|bi><bj|)|bj><bi|)
Define P as ∑ E(|bi><bj|)|bj><bi| and we have E(O) = Trace (O P).
P, by definition, is called the state of the quantum system. The following are easily seen. E(1) = 1, so Trace(P) = 1; thus P has unit trace. For any unit vector |u>, |u><u| is an observable whose outcomes are 0 and 1, so from the definition of an observable E(|u><u|) >= 0. Thus Trace(|u><u| P) = <u|P|u> >= 0, so P is positive.
So we have the Born rule, which says a positive operator P of unit trace exists such that the expected value of an observable O is Trace(PO). P is called the state of the system.
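As a sanity check (again just an illustration, using the simplest case of a pure state P = |psi><psi|), the trace formula reproduces the familiar ∑ yi |<ei|psi>|^2:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3

# Random Hermitian observable and its spectral data
O = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
O = O + O.conj().T
y, V = np.linalg.eigh(O)                  # outcomes yi, eigenvectors |ei>

# Random normalised state vector and the corresponding density operator P
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)
P = np.outer(psi, psi.conj())

print(np.isclose(np.trace(P), 1.0))             # unit trace
print(np.all(np.linalg.eigvalsh(P) > -1e-12))   # positive (up to round-off)

expect_trace = np.trace(P @ O).real
probs = np.abs(V.conj().T @ psi) ** 2     # Born probabilities |<ei|psi>|^2
expect_probs = np.sum(y * probs)
print(np.isclose(expect_trace, expect_probs))   # both give the same E(O)
```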
The point of the above is to bring home that the state of the system is not necessarily (it may be - but it doesn't have to be) something physical like an electric field - like probabilities, it is simply something that helps us calculate the expected values of observations.
Just as an aside, von Neumann used a similar argument to show hidden variables do not exist - the error he made, however, is that hidden variables do not have to obey the linearity assumption. A deeper analysis also shows that linearity depends crucially on non-contextuality, as shown by an important theorem called Gleason's theorem - but that is a story for another time.
Thanks
Bill