bhobba said:
OK guys here is the proof I came up with.
...
Now it's out there, we can pull it to pieces and see exactly what's going on.
I meant to get started on this a long time ago, but I've been distracted. Better late than never I suppose.
bhobba said:
Just for completeness let's define a POVM. A POVM is a set of positive operators Ei with ∑ Ei = I, defined on (for the purposes of QM) an assumed complex vector space.
This is the definition that's appropriate for a finite-dimensional space, right? So is your entire proof for finite-dimensional spaces?
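Just to fix ideas, here's a minimal example of my own (not taken from your post), in two dimensions:
$$E_1=\begin{pmatrix}\tfrac12&0\\0&\tfrac13\end{pmatrix},\qquad E_2=\begin{pmatrix}\tfrac12&0\\0&\tfrac23\end{pmatrix}.$$
Both operators are positive and ##E_1+E_2=I##, so ##\{E_1,E_2\}## is a POVM describing a two-outcome measurement.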
bhobba said:
Elements of POVMs are called effects and it's easy to see a positive operator E is an effect iff Trace(E) <= 1.
I don't see how the trace enters the picture.
Is the start of that sentence your definition of "effect"?
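If "effect" just means "element of some POVM", as the start of that sentence suggests, then I'd expect the trace condition to fail: the single-element set ##\{I\}## is a POVM, so ##I## itself is an effect, yet (writing ##\mathcal H## for the space)
$$\operatorname{Tr}(I)=\dim\mathcal H>1\quad\text{whenever }\dim\mathcal H\geq 2.$$
Maybe I'm misreading what you mean by "effect" here.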
bhobba said:
First let's start with the foundational axiom the proof uses as its starting point.
I would prefer it if you made a full statement of the theorem. You should at least explain where you're going with this.
bhobba said:
An observation/measurement with possible outcomes i = 1, 2, 3 ... is described by a POVM Ei such that the probability of outcome i is determined by Ei, and only by Ei; in particular, it does not depend on what POVM it is part of.
OK, this suggests that there's a function that takes effects to numbers in the interval [0,1], and that this function should have properties similar to those of a probability measure on a σ-algebra.
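To make my reading explicit (this is my phrasing, not yours): the assumption seems to be that there's a single function ##f## from the set of effects into ##[0,1]## such that
$$\sum_i f(E_i)=1\quad\text{whenever }\{E_i\}\text{ is a POVM},$$
so that the number assigned to an effect doesn't depend on which POVM it's taken from.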
bhobba said:
I will let f(Ei) be the probability of Ei. Obviously f(I) = 1 from the law of total probability. Since I + 0 = I, f(0) = 0.
This is not so obvious unless you know exactly what the 0 operator and the identity operator represent: Yes-no measurements that always give you the answer "no" and "yes" respectively.
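To spell that out: ##\{I\}## is a POVM with a single outcome that always occurs, so ##f(I)=1##, and ##\{0,I\}## is also a POVM, so the law of total probability gives
$$f(0)+f(I)=1,$$
which combined with ##f(I)=1## yields ##f(0)=0##.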
If you want to make this argument without using the "theorem + proof" structure, you should explain such things. If you want to make it in the form of a theorem, you should state the theorem in a way that includes a definition of a probability measure on the set of effects.
bhobba said:
First additivity of the measure for effects.
Let E1 + E2 = E3 where E1, E2 and E3 are all effects. Then there exists an effect E such that E1 + E2 + E = E3 + E = I. Hence f(E1) + f(E2) = f(E3).
By your definitions, if ##E_1## and ##E_2## are effects, there exist positive operators ##F_i## and ##G_j## (for each i in some set I, and each j in some set J) such that ##\sum_i F_i=I=\sum_j G_j##, and indices i and j such that ##F_i=E_1## and ##G_j=E_2##. But why should ##F_i+G_j## be an effect?
The set of effects as defined in my post #41 is not closed under addition. If your definition (or its generalization to spaces that may be infinite-dimensional) is equivalent to mine, then you can't assume that ##E_3## is an effect.
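A concrete example (mine, in any dimension): ##E_1=E_2=\tfrac34 I## are both effects, since ##\{\tfrac34 I,\tfrac14 I\}## is a POVM, but
$$E_1+E_2=\tfrac32 I$$
can't belong to any POVM, because the remaining elements would have to sum to ##-\tfrac12 I##, which isn't a positive operator.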
bhobba said:
Next linearity wrt the rationals - its the usual standard argument from additivity from linear algebra but will repeat it anyway.
f(E) = f(n E/n) = f(E/n + ... + E/n) = n f(E/n) or 1/n f(E) = f(E/n). f(m E/n) = f(E/n + ... E/n) or m/n f(E) = f(m/n E) if m <= n to ensure we are dealing with effects.
OK. My version: For all ##n\in\mathbb Z^+## (that's positive integers), we have
$$f(E)=f\left(n\frac{E}{n}\right) =nf\left(\frac{E}{n}\right),$$ and therefore $$f\left(\frac{E}{n}\right)=\frac{1}{n}f(E).$$
This implies that for all ##n,m\in\mathbb Z^+## such that ##n\geq m##, we have
$$f\left(\frac m n E\right)=m f\left(\frac 1 n E\right)=\frac m n f(E).$$ You should probably mention that this argument relies on a theorem that says that if E is an effect and ##\lambda\in[0,1]##, then ##\lambda E## is an effect.
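That theorem is easy to get from your definition, by the way: if ##E## belongs to a POVM together with operators ##F_1,F_2,\dots##, then
$$\lambda E+(1-\lambda)E+\sum_k F_k=I,$$
and all the operators on the left are positive, so they form a POVM that has ##\lambda E## as one of its elements.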
bhobba said:
If E is a positive operator a n and an effect E1 exists E = n E1 as easily seen by the fact effects are positive operators with trace <= 1.
It took me several minutes to understand this sentence. It's very strangely worded. How about something like this instead: For each positive operator E, there's an effect ##E_1## and a positive integer n such that ##E=nE_1##.
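For what it's worth, the existence claim is easy to justify without mentioning the trace, at least in the finite-dimensional case: pick any positive integer ##n## at least as large as the largest eigenvalue of ##E##. Then
$$E_1=\frac E n\quad\text{satisfies}\quad 0\leq E_1\leq I,$$
so ##\{E_1,\,I-E_1\}## is a POVM, ##E_1## is an effect, and ##E=nE_1##.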
bhobba said:
f(E) is defined as nf(E1). To show well defined suppose nE1 = mE2. n/n+m E1 = m/n+m E2. f(n/n+m E1) = f(m/n+m E1). n/n+m f(E1) = m/n+m f(E2) so nf(E1) = mf(E2).
I don't understand what you're doing. Did you mean multiplication when you wrote +? Are there parentheses missing or something? You really should start using LaTeX.
My version: Assume, without loss of generality, that ##n\geq m##, so that ##\frac m n E_2## is an effect. Then the assumption implies that ##E_1=\frac m n E_2##, and we have
$$nf(E_1)=nf\left(\frac m n E_2\right) =n\frac m n f(E_2) =mf(E_2).$$
bhobba said:
From the definition its easy to see for any positive operators E1, E2 f(E1 + E2) = f(E1) + f(E2).
It doesn't follow from the definition. We have to do something like this: write ##E_1=nE_1'## and ##E_2=mE_2'##, where ##E_1'## and ##E_2'## are effects such that ##E_1'+E_2'## is an effect. (This can be accomplished by choosing m and n large.) If ##n\geq m##, we have
\begin{align}
&f(E_1+E_2)=f(nE_1'+mE_2') =f\left(n\left(E_1'+\frac m n E_2'\right)\right) =n f\left(E_1'+\frac m n E_2'\right) = n\left( f(E_1')+\frac{m}{n}f(E_2')\right)\\
&= nf(E_1')+mf(E_2') =f(E_1)+f(E_2).
\end{align}
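Note that the last two equalities on the first line require ##\frac m n E_2'## and ##E_1'+\frac m n E_2'## to be effects. That's fine, since an effect is the same thing as a positive operator ##\leq I## (any such operator ##A## belongs to the POVM ##\{A,\,I-A\}##), and here
$$0\leq\frac m n E_2'\leq E_2'\leq I,\qquad 0\leq E_1'+\frac m n E_2'\leq E_1'+E_2'\leq I.$$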
bhobba said:
Then similar to effects show for any rational m/n f(m/n E) = m/n f(E).
If you had shown that for all effects E and all ##p\in[0,1]##, we have ##f(pE)=pf(E)##, you wouldn't have had to do the thing with rational numbers twice. By the way, a comma after m/n would make that sentence more readable. A comma followed by words like "we have" would be even better, because a comma sometimes means "and".
I'm going to take a break here, and do the rest later.