San K said:
Trying to understand the above quote
We were explicitly discussing weak measurements here.
matrixrising said:
Again, it's like Lebron's average of PPG per season. The average is proportional to single games. In this case, Lundeen used a stream of single photons WITH THE SAME WAVE FUNCTION. So the average was proportional to the value of the shift in a single system.
What is so difficult to understand about variance?
A single measurement gives you a value drawn from some distribution. Let us consider the simplifying case of a Gaussian one. If the distribution is centered around a mean value of 10 with a standard deviation of 0.1, a single measurement gives you good and valid information about the mean. If the distribution is centered around 10 with a standard deviation of 150000, this is not the case, and you may get values that are not sensible or even allowed.
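As a quick numerical illustration (a toy sketch of my own in Python, using the numbers above; the sample size is an arbitrary choice):

```python
import numpy as np

# Toy illustration of the variance argument: a single draw from a narrow
# distribution tells you a lot about the mean, while a single draw from a
# very broad one tells you almost nothing.
rng = np.random.default_rng(0)

narrow = rng.normal(loc=10, scale=0.1, size=5)      # standard deviation 0.1
broad  = rng.normal(loc=10, scale=150000, size=5)   # standard deviation 150000

print("single draws, narrow:", np.round(narrow, 2))  # all close to 10
print("single draws, broad: ", np.round(broad, 2))   # huge values, some negative
```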
The question is whether you necessarily get large variances in weak measurements, and the answer is trivially yes. Weak values are defined as:
$$a_w=\frac{\langle f|A|i\rangle}{\langle f|i\rangle},$$
where i and f are the initial (preselected) and final (postselected) states.
The strength of the measurement is governed by the overlap of the states i and f, so to get a weak and non-perturbative measurement you need them to be almost orthogonal. That means you are dividing by a number very close to zero, which leads to the large variances and noisy distributions.
It also intrinsically(!) leads to the possibility of non-physical single results. Well-known examples are a weak spin value of 100 for a spin-1/2 particle and a weakly measured phase shift caused by a single photon that lies outside the range of phase shifts a single photon can possibly cause (Phys. Rev. Lett. 107, 133603 (2011)). This is not a question of which experiment to perform; it is intrinsically present in any weak measurement.
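To make the division-by-almost-zero point concrete, here is a small numerical sketch of the weak-value formula for a spin-1/2 observable. The particular states and the angle delta are my own illustrative choices, not taken from any of the experiments mentioned:

```python
import numpy as np

# Weak value a_w = <f|A|i> / <f|i> for a spin-1/2 observable (sigma_z).
# Nearly orthogonal pre- and postselected states are chosen purely for
# illustration.

sigma_z = np.array([[1.0, 0.0],
                    [0.0, -1.0]])

def weak_value(i_state, f_state, A):
    """Return <f|A|i> / <f|i> for normalized state vectors."""
    return (f_state.conj() @ A @ i_state) / (f_state.conj() @ i_state)

delta = 0.01  # how far the pre/postselection is from exact orthogonality
alpha = (np.pi / 2 - delta) / 2
i_state = np.array([np.cos(alpha),  np.sin(alpha)])   # preselected |i>
f_state = np.array([np.cos(alpha), -np.sin(alpha)])   # postselected |f>

print("overlap <f|i>:", f_state.conj() @ i_state)               # ~ 0.01
print("weak value   :", weak_value(i_state, f_state, sigma_z))  # ~ 100, far outside [-1, 1]
```

The eigenvalues of sigma_z are only +1 and -1, yet the weak value comes out near 100 because the denominator is tiny; this is the same mechanism behind the "spin of 100" example.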
matrixrising said:
So the average will not be proportional to single systems with unknown wave functions because of uncertainty. In the case of Lundeen, the wave function of a single system is known before the weak measurement.
The wave function was known beforehand? The wave function is the result of his experiment. He measured it. It was known what to expect, though, if that is what you mean: a standard mode prepared by a fiber. I also do not get why you think Lundeen's statement refers to uncertainty.
matrixrising said:
So it's like Lebron and his average PPG. If the individual games are unknown then you couldn't go back and see if the individual games are proportional to his seasons average.
I already told you before that this is not the case and the comparison is invalid. Lebron's case is the one with the small variance. Let's say he averages 25 points per game and the single-game results vary between 0 and maybe 45. In that case each single result can be associated with an element of reality.
A weak measurement of Lebron's PPG would rather work as follows: he has a PPG of 25, and we have a weak measurement that is sensitive to the deviation from the mean (the shift of the pointer in the experiment). Imagine he scored only 20 points in a single game, so the shift is -5 for that game. We now measure an amplified version of this shift. The amplification is given by dividing by the overlap between the initial and final states; this overlap is usually very small, and the states can only be prepared to the optimal precision allowed by uncertainty, so the actual amplification also varies randomly from measurement to measurement. In this run it may happen to be 100, so the value actually measured will be 25 + 100*(-5) = -475.
So the weakly measured value of the points he scored in that game is -475. I have a hard time considering that a reasonable value for the actual points scored in a game, so it cannot be considered an element of reality. The value is not even allowed.
Why does the procedure still work out in the end? You have two combined stochastic processes: the fluctuation of the value around the mean and the fluctuation of the amplification. You need to average over both distributions to get the mean value. If we had only the fluctuations around the mean, I would agree with you: the points scored in a single game would be meaningful. However, that is the result of a strong measurement, and it would necessarily be invasive. This is the point where weak measurements enter. By adding the second stochastic distribution in terms of the amplification, we keep the property that the mean still converges to the value we want, but we lose the ability to give meaning to a single result, as we would need to know the actual amplification ratio in each run to make that identification.
And yes, this is very simplifying. Please do not take this version too literally.
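For what it is worth, here is a quick Monte-Carlo sketch of exactly this simplified picture; the distributions for the game-to-game fluctuation and for the run-to-run amplification are my own arbitrary choices:

```python
import numpy as np

# Monte-Carlo sketch of the simplified "weakly measured PPG" picture above.
# Assumptions: game scores fluctuate around a true mean of 25, and every run
# gets an independent, unknown amplification factor with a mean of about 100.
rng = np.random.default_rng(1)
n_runs = 1_000_000

true_mean = 25.0
deviation = rng.normal(loc=0.0, scale=8.0, size=n_runs)          # score - 25 in each game
amplification = rng.normal(loc=100.0, scale=30.0, size=n_runs)   # unknown run-to-run gain

readout = true_mean + amplification * deviation  # single "weakly measured" results

print("a few single readouts:", np.round(readout[:5], 1))   # wild, often impossible values
print("average over all runs:", round(readout.mean(), 2))   # close to 25 again
```

Single readouts are frequently absurd (negative points, several hundred points), yet the average over many runs comes back to roughly 25; no single readout can be interpreted as the points actually scored in that game without knowing the amplification in that particular run.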