Recognitions:
Gold Member
Staff Emeritus

## Hermitian but not observable

 Quote by DarMM Basically the Hermitian subalgebra of the whole C*-algebra is meant to correspond to measuring devices.
This goes back to my earlier gripe -- there's nothing physically stopping me from making a measuring device that outputs complex numbers. Singling out those elements whose anti-Hermitian part is zero as being more "real" is just an extension of the old bias that the complex numbers with zero imaginary part are somehow more real than the rest of them.

 Quote by Hurkyl This goes back to my earlier gripe -- there's nothing physically stopping me from making a measuring device that outputs complex numbers. Singling out those elements whose anti-Hermitian part is zero as being more "real" is just an extension of the old bias that the complex numbers with zero imaginary part are somehow more real than the rest of them.
I wouldn't so much see the "reality" of the measured output as being the reason for requiring Hermiticity. Of course, if I take an observable $$A$$ and an observable $$B$$, I can measure $$A + iB$$ by just getting their values and putting them into a complex number. Rather, it has more to do with unitarity. If an observable is not Hermitian, then the transformation associated with it is not unitary, and it does not represent a good quantum number or even allow sensible quantum evolution. For example, if the Hamiltonian $$H$$ weren't Hermitian, then time evolution wouldn't be unitary, which would make the theory collapse. Similarly, if linear and angular momentum weren't Hermitian, translations and rotations wouldn't be unitary.
Hence Hermitian operators represent our observables, because only they represent good quantum numbers. For example, only then, by the spectral theorem, can we be sure that when we obtain $$A = a$$ we are in a definite state.
Another example would be that measuring $$A + iB$$ only really makes sense if $$A$$ and $$B$$ are compatible observables. And since the Hamiltonian, linear momentum and angular momentum have to be Hermitian, functions of them essentially exhaust all operators of interest.
There are other reasons for Hermiticity, which I can go into if you want.
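The unitarity point can be sketched numerically. This is a toy example of my own (not from the thread), using only numpy: for a Hermitian Hamiltonian, $$U = e^{-iHt}$$ is unitary and the state norm is conserved; for a made-up "Hamiltonian" with a complex eigenvalue $$E + i\Gamma$$, the norm decays.

```python
import numpy as np

def expm(A):
    # Matrix exponential via eigendecomposition (fine for diagonalizable A).
    w, V = np.linalg.eig(A)
    return V @ np.diag(np.exp(w)) @ np.linalg.inv(V)

# Hermitian Hamiltonian: evolution is unitary, the norm is conserved.
H_herm = np.array([[1.0, 0.5], [0.5, 2.0]])
# Toy non-Hermitian Hamiltonian with a complex eigenvalue E + i*Gamma:
# the corresponding component of the state decays.
H_decay = np.array([[1.0, 0.0], [0.0, 2.0 - 0.2j]])

psi = np.array([1.0, 1.0]) / np.sqrt(2)   # normalized initial state
t = 1.3

for H in (H_herm, H_decay):
    U = expm(-1j * H * t)
    is_unitary = np.allclose(U.conj().T @ U, np.eye(2))
    print(is_unitary, np.linalg.norm(U @ psi))
```

For the Hermitian case this prints a norm of 1; for the decaying case the norm falls below 1, which is the "loss of conservation of probability" mentioned above.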
It's clear why translation and other automorphisms should be unitary: they have to preserve the C* structure. And it's clear why the inner automorphisms -- those of the form $T \mapsto U^* T U$ -- require U to be a unitary element. But (a priori, anyway) that has absolutely nothing to do with whether an element of the C*-algebra should correspond to a measuring device.

Actually, you bring up an interesting example. If you have a one-parameter family of unitary transformations U(t) -- such as time translation -- the corresponding infinitesimal element U'(0) is anti-Hermitian, not Hermitian. That we divide out by i to get a Hermitian element appears to me to be for no reason deeper than "people like Hermitian elements".

Hi.

 Quote by Fredrik Eigenvalue 0 doesn't cause any additional complications at all, so it doesn't need to be handled separately.
Thanks a lot, Fredrik. Now I am fine with this eigenvalue 0 concern.

Now, going back to my original question:
 Quote by sweet springs Hi, In 9.2 of my old textbook Mathematical Methods for Physicists, George Arfken states:
1. The eigenvalues of a Hermitian operator are real.
2. The eigenfunctions of a Hermitian operator are orthogonal.
3. The eigenfunctions of a Hermitian operator form a complete set.*
* This third property is not universal. It does hold for our linear, second order differential operators in Sturm-Liouville (self-adjoint) form.
Advice on the * in point 3, showing some examples of eigenfunctions not forming a complete set, would be appreciated.

Regards.

 Quote by Hurkyl It's clear why translation and other automorphisms should be unitary: they have to preserve the C* structure. And it's clear why the inner automorphisms -- those of the form $T \mapsto U^* T U$ -- require U to be a unitary element. But (a priori, anyway) that has absolutely nothing to do with whether an element of the C*-algebra should correspond to a measuring device. Actually, you bring up an interesting example. If you have a one-parameter family of unitary transformations U(t) -- such as time translation -- the corresponding infinitesimal element U'(0) is anti-Hermitian, not Hermitian. That we divide out by i to get a Hermitian element appears to me to be for no reason deeper than "people like Hermitian elements".
Well, let's concentrate on just the Hamiltonian. As you said, we could work with $$A = iH$$, but we choose to work with $$A = H$$. However this isn't really a very interesting case; it's similar to the case I described before, in terms of sticking an $$i$$ in front. So for example $$A + iB$$ is fine as an observable: get a machine that measures both and adds them together inside, and the machine will have measured $$a + ib$$. Nothing wrong with that. However these are trivial complex observables, formed from Hermitian observables anyway. What about a genuine complex observable, like a non-Hermitian Hamiltonian which can produce eigenvalues like $$E + i\Gamma$$? The problem is that such eigenvalues actually describe decaying, non-observable particles, and there won't be conservation of probability.
My basic idea is that while we can have things like $$iH$$, it is difficult to justify an arbitrary non-Hermitian operator. In the case of the Hamiltonian it's because of the loss of conservation of probability; in the case of other operators it's usually because the eigenvectors of non-Hermitian operators don't form an orthogonal basis and hence aren't good quantum numbers.

However I can see that what I've said is basically an argument as to why non-Hermitian operators would be bad things to measure. What I haven't explained is why they actually can't be measured physically. I'll explain that in my next post since it takes a bit of work to set up.

 Quote by DarMM However these are trivial complex observables, formed from Hermitian observables anyway.
For the record, by this definition of "trivial", all elements of a C*-algebra are trivial: you can compute the real and imaginary parts just like an ordinary scalar:
$$X = (Z + Z^*) / 2$$
$$Y = (Z - Z^*) / (2i)$$
giving
$$X^* = X, \qquad Y^* = Y, \qquad Z = X + iY.$$
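As a small numerical check of this Cartesian decomposition (my own made-up matrix, numpy only): starting from an arbitrary complex matrix Z, both parts come out Hermitian and they reassemble to Z.

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))  # arbitrary element

X = (Z + Z.conj().T) / 2    # Hermitian "real part"
Y = (Z - Z.conj().T) / 2j   # Hermitian "imaginary part"

print(np.allclose(X, X.conj().T),
      np.allclose(Y, Y.conj().T),
      np.allclose(Z, X + 1j * Y))
```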

 Quote by Hurkyl For the record, by this definition of "trivial", all elements of a C*-algebra are trivial: you can compute the real and imaginary parts just like an ordinary scalar: X = (Z + Z*) / 2, Y = (Z - Z*) / (2i), giving X* = X, Y* = Y, Z = X + iY.
Yeah, true. Bad example on my part, hopefully I can explain why we restrict ourselves to Hermitian observables in the next post. Once I have outlined the idea from the algebraic point of view hopefully we can have a more fruitful discussion.
Also I should say that in the relativistic context not all Hermitian operators are observables.
Okay, when we make a measurement, a quantum mechanical object in the state $$\psi$$ interacts with a classical measuring apparatus to record a value of some quantity $$\mathbb{A}$$. Mathematically this quantity is represented by an operator $$A$$. All the statistics for the observable, such as the expectation, standard deviation, uncertainty, etc., can be worked out from the state and the observable. Let's take the expectation value. In your opinion, should the expectation be represented as $$\langle \psi, A\psi\rangle$$ or $$\langle A\psi, \psi\rangle$$? Which one of these should represent an experiment to measure A?
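A numerical aside on the two candidate expressions (my own toy matrices): they are always complex conjugates of each other, and they coincide exactly when $$A$$ is Hermitian.

```python
import numpy as np

psi = np.array([1.0, 1.0j]) / np.sqrt(2)           # a normalized state
A_herm = np.array([[2.0, 1 - 1j], [1 + 1j, 3.0]])  # Hermitian
A_gen = np.array([[2.0, 1.0], [0.0, 3.0]])         # not Hermitian

for A in (A_herm, A_gen):
    e1 = np.vdot(psi, A @ psi)  # <psi, A psi>  (vdot conjugates its first argument)
    e2 = np.vdot(A @ psi, psi)  # <A psi, psi>, always the complex conjugate of e1
    print(e1, e2)
```

For the Hermitian matrix the two numbers are equal (and real); for the non-Hermitian one they differ.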

 Quote by DarMM Let's take the expectation value, in your opinion should the expectation be represented as $$\langle \psi, A\psi\rangle$$ or $$\langle A\psi, \psi\rangle$$ Which one of these should represent an experiment to measure A?
Well, it would depend on how we chose to use the Hilbert space to represent states.

I'm going to go with the former, though. If $\psi$ is a ket corresponding to the expectation functional $\rho$, then I prefer to have $\rho(A) = \psi^* A \psi$, which corresponds to the convention relating duals to inner products I assume we're using.

 Quote by Hurkyl Well, it would depend on how we chose to use the Hilbert space to represent states. I'm going to go with the former, though. If $\psi$ is a ket corresponding to the expectation functional $\rho$, then I prefer to have $\rho(A) = \psi^* A \psi$, which corresponds to the convention relating duals to inner products I assume we're using.
Funnily enough I should say, before I go on, that some people use my example above to argue why observables should not have to be Hermitian. That is, they feel that physics should give the same answers regardless of which choice you use, $$\langle \psi, A\psi\rangle$$ or $$\langle A\psi, \psi\rangle$$. Or, to put it loosely, "experiments cannot test the inner product".

Anyway, on to the more important fact. It is an observed consequence of atomic measurement that if we measure a physical quantity and then measure that quantity again, with no other quantities measured in between, then the chance of us obtaining the same answer is 100%. Given that this is a fact of measurement, how can we model it? Well, if we obtained the value $$a$$, we are in the state $$|a\rangle$$. Then, since we know that we have no chance of measuring another value $$b$$ for the same observable, we want the probability to vanish for the transition from $$|a\rangle$$ to $$|b\rangle$$. That is, we want $$\langle b,a\rangle = 0$$. So, in order to match experiment, observables must be represented by operators whose eigenvectors are orthogonal. Would you agree?
(Please tell me if something is incorrect.)
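The orthogonality claim is easy to illustrate numerically (toy matrices of my own): a Hermitian matrix has orthogonal eigenvectors for distinct eigenvalues, while a generic non-Hermitian matrix does not.

```python
import numpy as np

A_herm = np.array([[2.0, 1.0], [1.0, 3.0]])  # Hermitian
A_gen = np.array([[2.0, 1.0], [0.0, 3.0]])   # not Hermitian, eigenvalues 2 and 3

for A in (A_herm, A_gen):
    w, V = np.linalg.eig(A)
    # Overlap <a|b> between the two (normalized) eigenvectors.
    print(w, np.vdot(V[:, 0], V[:, 1]))
```

The Hermitian case gives an overlap of zero; the non-Hermitian case gives a distinctly nonzero overlap, so measuring value 2 would not exclude later measuring value 3.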
I ground through some calculations, and I'm pretty sure that the transition amplitude from $\psi$ to $b$ ought to be the coefficient of $|b\rangle$ in the representation of $|\psi \rangle$ relative to the eigenbasis, and not the inner product of $b$ with $\psi$. Of course, if you have an orthonormal eigenbasis, they are the same.

 Quote by Hurkyl I ground through some calculations, and I'm pretty sure that the transition amplitude from $\psi$ to $b$ ought to be the coefficient of $|b\rangle$ in the representation of $|\psi \rangle$ relative to the eigenbasis, and not the inner product of $b$ with $\psi$. Of course, if you have an orthonormal eigenbasis, they are the same.
Really, why do you say that? Perhaps I'm missing something, but I thought the usual definition of the transition amplitude was $$\langle b,a\rangle$$. How did you calculate what the transition probability was? Maybe I'm just being silly though!
Well, the heuristic calculation I went through was as follows. First, I want to make a toy example of a unitary operator that collapses the state in question. I chose the following one for no particular reason other than it was simple:

$$T|a, e_b\rangle = |a, e_{b+a}\rangle$$

The Hilbert state space here is the tensor product of the state space we are interested in, with a basis labeled by the eigenvalues of whatever operator we're interested in, and another state space representing a toy environment, with basis states labeled by complex numbers ("e" for "environment").

I chose a generic pure state in ket form:

$$|\psi\rangle = \sum_a c(a) |a\rangle$$

computed the density matrix of the state

$$T(|\psi \rangle \otimes |e_0 \rangle)$$

and took the partial trace to get the resulting density matrix:

$$\sum_{a,b} c(b)^* c(a) \langle e_a | e_b \rangle\, |a\rangle\langle b|$$

Since this evolution was supposed to collapse into a mixture of the eigenstates, I convinced myself that this implies the environment states do need to be orthogonal, giving the density matrix:

$$\sum_{a} |c(a)|^2 |a\rangle\langle a|$$

which is the statistical mixture that has probability $|c(a)|^2$ of appearing in state $|a\rangle$. This toy seems reasonable since it gives the statistical mixture I was expecting, and eigenstates (e.g. $|a\rangle$) remain fixed, so the mixture generally remains stable.

So, if transition probabilities make sense at all, the transition probability from $|\psi\rangle$ to $|a\rangle$ has to be $|c(a)|^2$ -- in other words, the right computation for the transition amplitude is the "coefficient of $|a\rangle$" function, rather than the "inner product with $|a\rangle$" function.
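Hurkyl's toy calculation can be sketched numerically. This is a minimal version of mine, assuming a 3-level system with made-up coefficients and orthonormal environment states: the partial trace over the environment gives exactly the diagonal mixture $\mathrm{diag}(|c(a)|^2)$.

```python
import numpy as np

# Made-up coefficients c(a); the |c(a)|^2 sum to 1.
c = np.array([0.5, 0.5j, np.sqrt(0.5)])
dim = len(c)

# Joint state after T: sum_a c(a) |a> (x) |e_a>, with orthonormal |e_a>.
joint = np.zeros((dim, dim), dtype=complex)
for a in range(dim):
    joint[a, a] = c[a]
vec = joint.reshape(-1)

# Density matrix of the joint pure state, as a rank-4 tensor rho[a, i, b, j].
rho = np.outer(vec, vec.conj()).reshape(dim, dim, dim, dim)

# Partial trace over the environment indices i = j.
rho_sys = np.einsum('aibi->ab', rho)
print(np.round(rho_sys.real, 3))  # diag(|c(a)|^2) = diag(0.25, 0.25, 0.5)
```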

Quote by George Jones
 Quote by strangerep A "linear combination" in an arbitrary vector space can certainly be an infinite sum.
For an arbitrary vector space, what does "infinite sum" mean? There is only one topology, the Euclidean topology, that can be given to a finite-dimensional vector space, but an infinite-dimensional vector space can be given various topologies.
That's why I tried to distinguish such an arbitrary vector space
from a Hilbert space in my post.

I should possibly have said formal linear combination. I was thinking of the
"universal" space mentioned in Ballentine section 1.4.

Certainly, one can't do very much useful stuff in such an arbitrary vector space
before equipping it with a topology.

 Quote by Fredrik [...] I think it's more appropriate to define a linear combination to only have a finite number of terms.
Consider the usual kind of inf-dim Hilbert space which has an
orthonormal basis consisting of an infinite number of vectors.
In general, an arbitrary vector in that space can be expressed
as an infinite sum over the basis vectors. Surely such a sum
qualifies as a linear combination?

 This is why: If V is a vector space over a field F, and S is a subset of V, the "subspace generated by S" (or "spanned" by S) can be defined as any of the following a) the smallest subspace that contains S b) the intersection of all subspaces that contain S c) $$\Big\{\sum_{i=1}^n a_i s_i|a_i\in\mathbb F, s_i\in S, n\in\mathbb N\Big\}$$ These definitions are all equivalent, and the fact that every member of the set defined in c) can be expressed as $\sum_{i=1}^n a_i s_i$, with n finite, seems like a good reason to define a "linear combination" as having only finitely many terms. [...]
One can also find inf-dim subspaces in general, in which case those arguments about
finite sums don't apply.

Mentor
 Quote by strangerep ...an infinite sum over the basis vectors. Surely such a sum qualifies as a linear combination?
The theorem I mentioned is valid for arbitrary vector spaces. (I'm including the proof below). It implies that if we define "linear combination" your way, the following statement is false:

The subspace generated (=spanned) by S is equal to the set of linear combinations of members of S.

So let's think about what you said in the text I quoted. The definition of "subspace generated by" and the theorem imply that a vector expressed as

$$x=\sum_{n=1}^\infty \langle e_n,x\rangle e_n$$

with infinitely many non-zero terms does not belong to the subspace generated by the orthonormal basis. That's odd. I didn't expect that.

I think the explanation is that terms like "linear combination" and "subspace generated by" were invented to be useful when we're dealing with arbitrary vector spaces, where infinite sums may not even be defined. And then we stick to the same terminology when we're dealing with Hilbert spaces.

I haven't tried to prove it, but I'm guessing that the subspace spanned by an orthonormal basis is dense in the Hilbert space, and also that it isn't complete. But vectors like the x mentioned above can be reached (I assume) as a limit of a sequence of members of the subspace generated by the basis. (A convergent sum of the kind that appears on the right above is of course a special case of that).
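The density guess is easy to illustrate numerically (my own example in $\ell^2$): take x with coefficients $\langle e_n, x\rangle = 2^{-n}$. Each partial sum is a finite linear combination of basis vectors, i.e. a member of the subspace generated by the basis, and the distance from x to the N-th partial sum is the tail norm, which goes to 0.

```python
import numpy as np

def tail_norm(N, cutoff=200):
    # || x - sum_{n<N} <e_n, x> e_n || = sqrt(sum_{n>=N} 4**-n),
    # truncated at a cutoff far beyond double precision.
    return np.sqrt(sum(4.0 ** -n for n in range(N, cutoff)))

for N in (1, 5, 10, 20):
    print(N, tail_norm(N))
```

So x is a limit of members of the span without itself having a finite expansion, matching the paragraph above.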

 Quote by strangerep One can also find inf-dim subspaces in general, in which case those arguments about finite sums don't apply.
The theorem holds for infinite-dimensional vector spaces too. The proof is very easy. Let V be an arbitrary vector space, and let S be an arbitrary subset. Define $\bigvee S$ to be the intersection of all subspaces $W_\alpha$ such that $S\subset W_\alpha$. I'll write this intersection as

$$\bigvee S=\bigcap_\alpha W_\alpha$$

Define W to be the set of all linear combinations of members of S. (Both here and below, when I say "linear combination", I mean something with a finite number of terms).

$$W=\Big\{\sum_{i=1}^n a_i s_i|a_i\in\mathbb F, s_i\in S, n\in\mathbb N\Big\}$$

I want to show that $W=\bigvee S$. First we prove that $W\subset\bigvee S$.

Let x be an arbitrary member of W. x is a linear combination of members of S, but S is a subset of every $W_\alpha$. So x is a linear combination of members of $W_\alpha$ for every $\alpha$. The $W_\alpha$ are subspaces, so that implies that x is a member of every $W_\alpha$. Therefore $x\in\bigvee S$, and that implies $W\subset\bigvee S$.

Then we prove that $\bigvee S\subset W$. It's obvious from the definition of W that it's closed under linear combinations, so it's a subspace; it also contains S, since each $s\in S$ is the linear combination $1\cdot s$. So W is one of the $W_\alpha$ on the right in

$$\bigvee S=\bigcap_\alpha W_\alpha$$

That implies that $\bigvee S\subset W$.
