
Orientation and Hilbert space bra / ket factoring.

  1. Jan 29, 2006 #1

    CarlB

    Science Advisor
    Homework Helper

    I've been thinking about the probability interpretation of quantum states.

    In the density matrix formalism, or in measurement algebra like Schwinger's measurement algebra, one makes the assumption that pure states can be factored into bras and kets, and that bras and kets can be multiplied together to produce complex numbers, and that the squared magnitudes of these complex numbers are the probabilities.

    But when you try to write spinors in a geometric manner, one finds that one inevitably ends up with a choice of orientation, or gauge. For a paper dealing with the attempt by one physicist to avoid this orientation gauge, see Baylis:
    http://www.arxiv.org/abs/quant-ph/0202060

    One can eliminate the orientation gauge by converting back to the density matrix form. (In the above paper, this comes about because if [tex]R[/tex] is a rotor, then [tex]R^\dag R = 1[/tex].) Even when one doesn't deal with geometric interpretations of wave states, the fact that density matrices eliminate unphysical gauge freedoms should be clear: the arbitrary (global) complex phase of a wave function obviously disappears in its density matrix, but the density matrix contains all the physical information of the wave function.

    I suspect that one should attempt to avoid the factoring into spinors that led to the unphysical gauge freedom in the first place. That's all well and good, but doing this removes the complex numbers that gave the probabilities.

    As a first step towards getting an alternative probability interpretation, I've found that one can replace the vector norm [tex]|\psi|^2 = \langle \psi | \psi \rangle[/tex] with a matrix norm:

    [tex]|A|^2_{N\times N} = \sum_j\sum_k |A_{jk}|^2[/tex].

    If one assigns [tex]A = |\psi><\psi|[/tex], then the calculations work out the same (if [tex]\psi[/tex] is normed). That is, [tex]|A|^2_{N\times N} = |\psi|^2[/tex]. But this norm isn't a geometric norm.
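
    Here is a minimal numerical check of that claim, just to make it concrete (my own sketch in Python/numpy; none of the names below come from the paper):

[code]
import numpy as np

# A random normalized 2-component spinor |psi>
rng = np.random.default_rng(0)
psi = rng.standard_normal(2) + 1j * rng.standard_normal(2)
psi /= np.linalg.norm(psi)

# Pure-state density matrix A = |psi><psi|
A = np.outer(psi, psi.conj())

# Matrix norm squared: sum_j sum_k |A_jk|^2
matrix_norm_sq = np.sum(np.abs(A)**2)

# Vector norm squared <psi|psi>
vector_norm_sq = np.vdot(psi, psi).real

print(matrix_norm_sq, vector_norm_sq)   # both equal 1 for a normed spinor
[/code]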

    To get it to a geometric norm, one can rewrite [tex]A[/tex] as a sum over Clifford algebra elements and then take the natural Clifford algebra norm (i.e. the Clifford algebra treated as a vector space):

    [tex]|\alpha + \alpha_0\gamma_0 + \cdots + \alpha_{0123}\gamma_0\gamma_1\gamma_2\gamma_3|^2 = |\alpha|^2 + |\alpha_0|^2 + \cdots + |\alpha_{0123}|^2[/tex]

    Upon making this substitution, one finds that, for the case of the Pauli matrices, or for the typical representations of the Dirac algebra (i.e. gamma matrices), the geometric norm is proportional to the matrix norm with a constant of proportionality equal to the trace of the unit matrix.
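
    For the Pauli case, that constant of proportionality can be seen by expanding an arbitrary 2x2 matrix over the basis [tex]\{1, \sigma_x, \sigma_y, \sigma_z\}[/tex] and comparing the two norms. This is only a sketch of that one case (my own code, not from the paper):

[code]
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [I2, sx, sy, sz]        # Clifford basis of the Pauli algebra

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

# Expansion coefficients: alpha_i = tr(b_i^dag A) / tr(1)
alphas = [np.trace(b.conj().T @ A) / 2 for b in basis]

geometric_sq = sum(abs(a)**2 for a in alphas)   # Clifford (geometric) norm^2
matrix_sq = np.sum(np.abs(A)**2)                # matrix (Frobenius) norm^2

print(matrix_sq / geometric_sq)   # = 2 = tr(1) for the Pauli representation
[/code]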

    It turns out that every representation of the Pauli algebra gives a norm which is proportional to the geometric norm. But this is not the case with the Dirac algebra. If one chooses a representation which diagonalizes algebraic elements that mix space and time, one finds that the geometric norm differs from the usual spinor norm. This is essentially what one would get if one boosted a representation.

    Refusing to factor states into bras and kets amounts to rejecting Hilbert space (where norms are defined in terms of inner products of vectors) in favor of Banach space (where norms are defined in terms of a single state at a time) for the space of states. Yes, Hilbert spaces have a lot of mathematical advantages over Banach spaces. But I suspect that there are Banach spaces that cannot be put into Hilbert form. What would be the physical interpretation of one of these?

    Is anyone else interested in this sort of thing, or does anyone remember anything about it?

    Carl
     
    Last edited: Jan 29, 2006
  3. Mar 5, 2006 #2

    Hurkyl

    Staff Emeritus
    Science Advisor
    Gold Member

    I think you're describing a particular construction that appealed to me.


    If you have a C*-algebra [itex]\mathcal{A}[/itex], then it can be viewed as a complex vector space. In particular, it has a dual space [itex]\mathcal{A}^*[/itex]. Then, the "states over [itex]\mathcal{A}[/itex]" are defined to be the elements of [itex]\mathcal{A}^*[/itex] that are positive with unit norm.

    For a linear functional [itex]\omega[/itex] to be positive means that [itex]\omega(B^2) \geq 0[/itex] for any Hermitian B. I think this is equivalent to the following: if B is Hermitian with largest and smallest "eigenvalues" u and v, then [itex]v \leq \omega(B) \leq u[/itex]. (By "eigenvalue" I really mean an element of its spectrum.)
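
    A quick finite-dimensional sanity check (my own sketch; here the functional is just [itex]\omega(T) = \mathop{\text{tr}}(\rho T)[/itex] for a density matrix [itex]\rho[/itex], which is only one example of a state):

[code]
import numpy as np

rng = np.random.default_rng(2)

# A density matrix rho defines a positive linear functional omega(T) = tr(rho T)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
rho = M @ M.conj().T
rho /= np.trace(rho).real

omega = lambda T: np.trace(rho @ T)

# A random Hermitian B
H = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = (H + H.conj().T) / 2

eigs = np.linalg.eigvalsh(B)
print(omega(B @ B).real >= 0)                       # positivity on squares
print(eigs.min() <= omega(B).real <= eigs.max())    # expectation within the spectrum
[/code]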


    Combining
    http://en.wikipedia.org/wiki/Gelfand-Naimark-Segal_construction
    and
    http://en.wikipedia.org/wiki/Gelfand–Naimark_theorem

    says that you can always factor a state in [itex]\mathcal{A}^*[/itex] into a bra and ket in some Hilbert space. (Now I'm curious whether there's one Hilbert space that would work for the whole set of states.)


    In particular, for any state [itex]\rho[/itex], there is a Hilbert space representation of [itex]\mathcal{A}[/itex] and a vector [itex]| \xi \rangle[/itex] such that:

    [tex]
    \rho(T) = \langle \xi | T | \xi \rangle
    [/tex]


    (It's convenient that your post https://www.physicsforums.com/showthread.php?p=928715&posted=1#post928715 reminded me of this, because I just recently learned about the above!)


    This spawns for me another interesting question: can any state [itex]\rho[/itex] over [itex]\mathcal{A}[/itex] be expressed in the form:

    [tex]
    \rho(T) = \mathop{\text{tr}} (\hat{\rho} T)
    [/tex]

    for a suitable [itex]\hat{\rho} \in \mathcal{A}[/itex]? (Maybe requiring that we extend [itex]\mathcal{A}[/itex] appropriately?)
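
    For the finite-dimensional matrix algebras being discussed here, the answer is yes: the density matrix can be read off from the functional by evaluating it on the matrix units. A sketch of that finite-dimensional case (my own code; it says nothing about the general C*-algebra question):

[code]
import numpy as np

# For A = M_n(C), a state functional rho can be inverted to a density matrix
# via rho_hat[j, k] = rho(E_kj), where E_kj has a single 1 at position (k, j).
n = 2
rng = np.random.default_rng(3)
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
rho_hat = M @ M.conj().T
rho_hat /= np.trace(rho_hat).real

rho = lambda T: np.trace(rho_hat @ T)    # the state, viewed only as a functional

# Reconstruct the density matrix from the functional alone
recon = np.empty((n, n), dtype=complex)
for j in range(n):
    for k in range(n):
        E_kj = np.zeros((n, n)); E_kj[k, j] = 1
        recon[j, k] = rho(E_kj)

print(np.allclose(recon, rho_hat))   # True
[/code]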
     
    Last edited: Mar 5, 2006
  4. Mar 5, 2006 #3

    CarlB

    Science Advisor
    Homework Helper

    I'm finishing up a paper on this and would love to include more references, if you can recall where you saw it.

    My whole problem with this sort of reasoning is that while it is true, it does not provide you with a unique Hilbert space. And the standard way of working with QM just assumes that whatever Hilbert space you happen to choose is the right one, and arranges for gauge transformations to get to the others. I'm claiming that there are physical reasons for sticking to the Banach space representation.

    The paper I'm writing shows that the masses of the leptons, while quite arbitrary in the usual Hilbert space formalism, are quite natural in a Banach space. This is the basis of the Koide mass formula. I found a version of the formula that fits into a Banach space representation of the leptons as composite particles. The resulting matrix equation is given in the last page of this paper:
    http://www.arxiv.org/abs/hep-ph/0505220

    In my paper, I eventually get back to the spinor representation, but it is explicitly done with the density matrix representation as the basis. I think it's quite beautiful and can hardly wait to go back to typing on it.

    Carl
     
  5. Mar 5, 2006 #4

    Hurkyl

    Staff Emeritus
    Science Advisor
    Gold Member

    I distilled it from http://en.wikipedia.org/wiki/Local_quantum_field_theory. There's a bunch of other content in this construction that relates to the causal structure of Minkowski space-time... but on any particular region of space-time, we have an algebra [itex]\mathcal{A}[/itex], and a corresponding space of states over [itex]\mathcal{A}[/itex].


    In the other post, I mentioned a "real Banach algebra" -- but that's irrelevant since you showed that complex numbers are necessary.

    Since you can model the algebra as matrices (which act on a finite dimensional complex space), we can equip your algebra with the operator norm, and turn it into a C*-algebra! (Taking the completion, if necessary)

    A C*-algebra requires [itex]||T^* T|| = ||T||^2[/itex], which, I believe, is satisfied by the matrix norm...
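
    A quick numerical check (my own sketch): the C* identity holds for the operator (spectral) norm of a matrix, though not in general for the [itex]N \times N[/itex] (Frobenius) norm discussed earlier in the thread:

[code]
import numpy as np

rng = np.random.default_rng(4)
T = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
TT = T.conj().T @ T

op = lambda M: np.linalg.norm(M, 2)        # operator (spectral) norm
fro = lambda M: np.linalg.norm(M, 'fro')   # Frobenius norm

print(np.isclose(op(TT), op(T)**2))        # True: the C* identity holds
print(np.isclose(fro(TT), fro(T)**2))      # generally False
[/code]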

    I wouldn't have interpreted it that way -- that ||A|| = 1 for a projection operator merely means that some states survive unscathed, and that A doesn't make any state "bigger". For any state [itex]\rho[/itex], we have that the expectation of A satisfies [itex]0 \leq \rho(A) \leq 1[/itex] -- and that there are states that achieve either extreme. (Assuming our projection operator is neither the zero nor the identity operator)


    I hope I can follow. :smile:
     
  6. Mar 5, 2006 #5

    Hurkyl

    Staff Emeritus
    Science Advisor
    Gold Member

    Oh, incidentally, there's another way to get rid of the arbitrary phase: construct a projective space! Each point of the projective Hilbert space corresponds to a one-dimensional subspace of the original space. In other words, all of the phase shifts of a given state correspond to the same point in the projective Hilbert space. (Which is no longer a vector space, but some interesting topological space. The Bloch sphere is an example of the result of this construction, which for a qubit would yield the complex numbers plus a point at infinity.)
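
    For a qubit, the quotient by phase is easy to see numerically: send a state to its density matrix and read off the Bloch vector. A small sketch of my own (only standard conventions assumed):

[code]
import numpy as np

def bloch_point(psi):
    """Map a qubit state (up to phase and normalization) to its Bloch vector."""
    psi = psi / np.linalg.norm(psi)
    rho = np.outer(psi, psi.conj())
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    return np.array([np.trace(rho @ s).real for s in (sx, sy, sz)])

psi = np.array([1.0, 0.3 + 0.4j])
phase_shifted = np.exp(1j * 0.7) * psi

# Phase-shifted states land on the same point of the Bloch sphere
print(np.allclose(bloch_point(psi), bloch_point(phase_shifted)))   # True
[/code]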

    It was briefly discussed here on PF. I strongly suspect that, if the algebra is big enough, this yields the same space of states as the construction I mentioned.
     
  7. Mar 6, 2006 #6

    CarlB

    Science Advisor
    Homework Helper

    Well I put up what I've got on that paper on my website here:

    http://brannenworks.com/GEOPROB.pdf

    The above is defective in many ways. Some of the later sections having to do with the fundamental fermion are incomplete. I haven't even started the conclusion. The abstract and synopsis are in shambles. The second appendix, on astrophysics, is the result of only an hour of typing.

    Undoubtedly the whole paper is shot through with typographical errors and arithmetic mistakes which will make reading it difficult. And I'm doubtful about the whole concept involved with the generalization of probabilities that is included. Nevertheless, there it is, and any comments are appreciated.

    As I stress in the above paper, my whole problem with this line of logic is that it may be true, but it suffers from the defect that there is more than one matrix representation that works. Furthermore, as I show in the above paper, the matrix norm will be different, depending on the choice of representation, at least if you allow spooky representations.

    What I'm trying to do here is to follow Einstein's lead and use geometry. While you can show that any geometry can be represented by matrices, that does not mean that a particular representation is the correct geometry. The whole problem with spinors is the multiplicity of representations.

    Carl
     
  8. Mar 8, 2006 #7

    Hurkyl

    Staff Emeritus
    Science Advisor
    Gold Member

    I've been playing with it, and there is an intrinsic way to turn a complex Clifford algebra A into a C*-algebra. Actually, I have two equivalent ways:

    (unless I've seriously goofed with minimum polynomials)

    (1) I could define the norm of T to be the largest |n| for which T - n1 is not invertible. (The collection of all such n would be the spectrum of T.)

    (2) Since A is a vector space, there is a canonical representation of A as linear operators acting on A itself (by left multiplication). We could then define the norm of T to be its operator norm in this representation.

    These are further equivalent in that every element of the spectrum of T actually appears as an eigenvalue of T in this canonical representation. So, in some sense, this representation isn't "missing" anything. (Whereas an arbitrary rep might not have the appropriate eigenvectors, and thus its operator norm in that rep might be too low.)
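
    Here is a small check of that last point for the Pauli algebra viewed as all of the 2x2 complex matrices (my own sketch; with row-major vectorization, left multiplication by T is represented by the 4x4 matrix [itex]T \otimes 1[/itex]):

[code]
import numpy as np

# Left regular representation of M_2(C) (the complex Pauli algebra) on itself
rng = np.random.default_rng(5)
T = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
L_T = np.kron(T, np.eye(2))            # 4x4 matrix of left multiplication by T

spec_T  = np.linalg.eigvals(T)
spec_LT = np.linalg.eigvals(L_T)

# Every eigenvalue of T reappears as an eigenvalue of L_T
print(all(np.isclose(spec_LT, s).any() for s in spec_T))         # True

# The operator norm agrees between the 2x2 picture and the regular representation
print(np.isclose(np.linalg.norm(T, 2), np.linalg.norm(L_T, 2)))  # True
[/code]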


    Where in your paper is this mentioned? I haven't managed to read all of it, and I couldn't find it by quickly glancing through.

    I'm becoming less sure that all of this is a useful construction though -- things might be better done by analogy, as opposed to translating into the C*-algebra setting.
     
  9. Mar 9, 2006 #8

    CarlB

    Science Advisor
    Homework Helper

    Hmmmmm....

    Let's see.... (small amount of multiplication of Pauli matrices)

    Yes, and this would preserve the (1+cos(theta))/2 law for transition probabilities between two Pauli spinors, so it's in agreement with the standard model if you define P as equal to your norm (rather than its squared magnitude). The reason for this is that, in the Pauli algebra, any nontrivial state is a "primitive idempotent", and so its spectrum consists of a zero and a 1. That means that the trace is equal to the only nonzero eigenvalue.
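
    For concreteness, here is the (1+cos(theta))/2 law dropping out of the trace of a product of two spin projectors (a sketch of my own, using the standard Pauli matrices):

[code]
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def proj(n):
    """Primitive idempotent (spin projector) along the unit vector n."""
    return (np.eye(2) + n[0]*sx + n[1]*sy + n[2]*sz) / 2

a = np.array([0.0, 0.0, 1.0])
theta = 1.1
b = np.array([np.sin(theta), 0.0, np.cos(theta)])

Pa, Pb = proj(a), proj(b)
print(np.allclose(Pa @ Pa, Pa))                        # idempotent
print(np.trace(Pa @ Pb).real, (1 + np.cos(theta))/2)   # both (1 + cos theta)/2
[/code]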

    Yeah, I think that would work. Of course the problem with uniqueness is in getting to a unique Hilbert space from here. And this method would still assign a norm for a projection operator equal to the norm for unity, which is something that I have philosophical complaints about.

    I think that this is equivalent to the norm I write as [tex]|\;\;|_{N\times N}[/tex], or the Frobenius norm.

    I really need to cut that article down in size.

    Carl
     
  10. Mar 9, 2006 #9

    CarlB

    Science Advisor
    Homework Helper

    Section XI has a method of modifying a representation by exponentials that will modify the matrix norm. It's easy enough to put a particular example of it, for the case of the Pauli algebra, here:

    [tex]\sigma_x' = \left(\begin{array}{cc}0&r\\1/r&0\end{array}\right),[/tex]

    [tex]\sigma_y' = \left(\begin{array}{cc}0&-ir\\i/r&0\end{array}\right),[/tex]

    [tex]\sigma_z' = \left(\begin{array}{cc}1&0\\0&-1\end{array}\right),[/tex]

    where [tex]r[/tex] is any nonzero complex number. These form a rep of Cl(3,0) because the modification of matrices according to:

    [tex]\left(\begin{array}{cc}A&B\\C&D\end{array}\right) \to
    \left(\begin{array}{cc}A&rB\\C/r&D\end{array}\right)[/tex]

    is an isomorphism (or do I have the wrong word? I got my last math degree in 1982; what I'm looking for is that the above transformation preserves multiplication and addition), but it does not preserve matrix norms.
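
    Here is the r-modified representation checked numerically (my own sketch): the anticommutation relations survive for any nonzero complex r, but the Frobenius norms of the basis elements do not:

[code]
import numpy as np

def modified_pauli(r):
    """Pauli representation with the off-diagonal entries scaled by r and 1/r."""
    sx = np.array([[0, r], [1/r, 0]], dtype=complex)
    sy = np.array([[0, -1j*r], [1j/r, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    return sx, sy, sz

r = 2.0 + 0.5j
sig = modified_pauli(r)

# The Clifford (anticommutation) relations survive the modification...
for i in range(3):
    for j in range(3):
        anti = sig[i] @ sig[j] + sig[j] @ sig[i]
        assert np.allclose(anti, 2 * np.eye(2) * (i == j))

# ...but the Frobenius norms of the basis elements change with r
print([np.linalg.norm(s, 'fro') for s in modified_pauli(1.0)])
print([np.linalg.norm(s, 'fro') for s in modified_pauli(2.0)])
[/code]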

    I've lost track of the line of reasoning, but if you're saying that we should "shut up and calculate" instead of spending our time connecting together disparate mathematical formalisms, I wholeheartedly agree.

    I think that as I rewrite the article, in addition to trimming the huge amount of excess stuff, I'll pitch it as an attempt to geometrize the foundations of QM. But rather than taking the foundations of QM to be the spinor states as Hestenes did, instead I'm using the density matrix states. And instead of considering the external states, as Schwinger did, I'm looking at the internal symmetries (I count spin as an internal symmetry, but some think of it as external, whatever).

    One of the later sections (XVI) gives a method of writing matrices that efficiently represent operators that cross family boundaries. The example operator is the one that gives the square roots of the masses of the charged leptons. This is really the most important part of the whole paper. The rest of the paper is just there to support this from a theoretical basis. One could logically begin with that section, or maybe the one or two before it, and the admonition to shut up and calculate. But without the previous stuff, there is little to justify that particular form for the operator.

    I'm busily working on putting other cross generation operators into the form described in section (XVI), and having a blast. Numbers, numbers, numbers. Right now I'm working on neutrino masses which famously, and somewhat surprisingly, do not satisfy the Koide mass equation.

    Carl
     
  11. Mar 9, 2006 #10

    CarlB

    Science Advisor
    Homework Helper

    As an example of this sort of thing, in the Schwinger Measurement Algebra, the fundamental object is the measurement, for example M(e), which is a sort of Stern-Gerlach filter that only passes electrons. Such a measurement is called, mathematically, a "primitive idempotent". It is an idempotent in that M(e)M(e) = M(e), since a repeated perfect filter is the same as a single perfect filter, and it is primitive in that it cannot be broken into smaller idempotents (i.e. written as the sum of two nonzero idempotents).

    Since the Dirac algebra and the Pauli algebra are examples of Clifford algebras, but our calculations are usually done with matrix representations of these algebras, it is interesting to describe these representations in terms of primitive idempotents.

    As an example in the Dirac algebra, consider the diagonal primitive idempotents. These are just the diagonal matrices with a single entry 1 and the rest of the entries zero. Specifying these matrices is almost, but not quite, enough to specify the representation. To define an off-diagonal term requires one more primitive idempotent, and the natural one to use is the "democratic primitive idempotent", which is the matrix that has all elements equal to 1/N, where N is the dimension of the matrix.

    There are N diagonal primitive idempotents. Letting [tex]\iota_j[/tex] represent the diagonal primitive idempotent with a 1 in the jth diagonal position, and letting [tex]\iota_D[/tex] represent the democratic primitive idempotent, one can pick out the (j,k) element of a matrix in a representation, as a geometric element, by noting that:

    [tex]M_{jk} = N\;\iota_j\;\iota_D\;\iota_k[/tex]

    where [tex]M_{jk}[/tex] is the matrix that is all zero except at position (j,k) where it is one. The thing to note here is that the RHS of the above is a purely geometric object, and so one can associate any particular representation with its geometry by knowing that set of N+1 primitive idempotents.
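
    Here is a numerical check of that relation for N = 4, with the explicit factor of N that normalizes the product (my own sketch):

[code]
import numpy as np

N = 4
iota = [np.zeros((N, N)) for _ in range(N)]
for j in range(N):
    iota[j][j, j] = 1                  # diagonal primitive idempotents
iota_D = np.full((N, N), 1.0 / N)      # democratic primitive idempotent

j, k = 1, 3
M_jk = np.zeros((N, N)); M_jk[j, k] = 1

# The matrix unit is recovered (up to the factor N) from the idempotents alone
print(np.allclose(N * iota[j] @ iota_D @ iota[k], M_jk))   # True
[/code]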

    This is the subject of the end of Section (VI) of that paper, but I think I could have explained it with a lot less verbiage if I'd simply approached it as the problem of defining a representation in terms of primitive idempotents.

    The usual method of defining a representation is by listing its vectors, such as the Pauli algebra and its [tex]\sigma_x,\sigma_y,\sigma_z[/tex]. This requires defining only N elements, so the N+1 primitive idempotents are a bit wasteful. On the other hand, one could eliminate the last of the N diagonal primitive idempotents since it is redundant in that the sum of a set of diagonal primitive idempotents must give unity, and that would give just N defined elements.

    But more importantly, one can describe the primitive idempotents geometrically, and thus one can see ways of choosing them that gives one a very natural way of finding representations. Oh, and I should mention that the figure of 96, given in the text for the number of Weyl reps of the Dirac algebra that share the usual diagonalization, is incorrect because I overcounted by a factor of at least 3. I'll eventually correct it, but it doesn't matter much.

    Carl
     
    Last edited: Mar 9, 2006
  12. Mar 10, 2006 #11

    Hurkyl

    Staff Emeritus
    Science Advisor
    Gold Member

    Homomorphism -- but since it has an inverse homomorphism it would be an isomorphism.

    This transformation does change the Frobenius norm of the matrix, but it doesn't change the operator norm of the matrix. (which is what I think of when I think of "matrix norm")


    I was thinking more along the lines that I was trying to put it into the C*-algebra picture because I know (a little) more about those than Clifford algebras -- but I'm becoming less sure that this translation effort is worthwhile... it would be better to just try to work with them as Clifford algebras and spend the effort working out what I need to know about those.
     
  13. Mar 10, 2006 #12

    selfAdjoint

    Staff Emeritus
    Gold Member
    Dearly Missed

    Isomorphism into, or maybe injection. For a full isomorphism you have to show it's onto.
     
  14. Mar 10, 2006 #13

    CarlB

    Science Advisor
    Homework Helper

    Yes, the operator norm is what I'd think of as the "biggest eigenvalue norm", at least in the context of operators that have complete sets of eigenvectors (is that true? Maybe for groups of finite dimension.) It is possible to create a transform that changes the operator norm.

    Suppose you have a Clifford algebra element W that, when exponentiated, is invertible. This produces a parameterization of a set of representations modified from some original representation by the transformation (in the language of the Dirac algebra):

    [tex]\gamma_j \to \gamma_j' = e^{+W/2}\;\gamma_j\;e^{-W/2},[/tex]

    where the 1/2 is included for convenience. In other words, you write W as a matrix, divide it by 2, exponentiate it, and apply it before and after the old gamma matrices. The result is a new set of gamma matrices that satisfy the usual (anticommutation) relations of the gamma matrices. This is because the exponentials just cancel.

    Anyway, if you do the above, it is possible to obtain a representation that modifies the operator norm. To do it, you need to choose an exponential that mixes space and time in the canonical basis elements, which is why I chose the example of the Dirac algebra. For example, a transformation that will modify the operator norm would be to take:

    [tex]\gamma_3 \to \gamma_3' = \cosh(\alpha)\gamma_3 + \sinh(\alpha)\gamma_0,[/tex]

    [tex]\gamma_0\to \gamma_0' = \cosh(\alpha)\gamma_0 - \sinh(\alpha)\gamma_3,[/tex]

    where [tex]\alpha[/tex] is any real number, and [tex]\gamma_1,\gamma_2[/tex] are left unchanged. The above is the example of what you get when you modify the representation by the exponential method given above with [tex]W = \alpha\gamma_0\gamma_3,[/tex] though it is always possible I've dropped a sign or lost a factor of two. (I do that.)

    When you compute the operator norm for the modified representation as given above, I believe you will find that it has been modified. The modification is a similarity transformation, so the eigenvalues (and hence the trace) of a primitive idempotent stay 0 and 1, but the transformation is not unitary: the modified idempotents are no longer Hermitian, and their largest singular value, which is what the operator norm measures, grows with the boost.
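
    Here is that calculation sketched numerically (my own code; I'm assuming the standard Dirac representation with [tex]\gamma_0[/tex] diagonal). The boosted gammas still anticommute correctly, and the idempotent built from [tex]\gamma_0[/tex] keeps its eigenvalues and trace but picks up an operator norm greater than one:

[code]
import numpy as np

# Standard Dirac representation (an assumption of this sketch)
I2 = np.eye(2, dtype=complex); Z2 = np.zeros((2, 2), dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
g3 = np.block([[Z2, s3], [-s3, Z2]])

alpha = 0.8
K = g0 @ g3                        # W = alpha*K, and K @ K = identity
S = np.cosh(alpha/2) * np.eye(4) + np.sinh(alpha/2) * K      # exp(+W/2)
Sinv = np.cosh(alpha/2) * np.eye(4) - np.sinh(alpha/2) * K   # exp(-W/2)

g0p = S @ g0 @ Sinv                # boosted gamma_0'
g3p = S @ g3 @ Sinv                # boosted gamma_3'

# The anticommutation relations are preserved
print(np.allclose(g0p @ g0p, np.eye(4)))                     # True
print(np.allclose(g0p @ g3p + g3p @ g0p, np.zeros((4, 4))))  # True

# Idempotent built from gamma_0: orthogonal projector before the boost,
# oblique (non-Hermitian) idempotent after it
P  = (np.eye(4) + g0) / 2
Pp = (np.eye(4) + g0p) / 2
print(np.trace(P).real, np.trace(Pp).real)          # trace stays 2
print(np.linalg.norm(P, 2), np.linalg.norm(Pp, 2))  # 1.0 vs. > 1
[/code]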

    This information is in that paper, but it may be hard to dig out. I tried to make it a calculation oriented paper, at least partly to make it more readable, but that just made it more difficult to understand for the people who had the ability to read it.

    I really do believe that by analyzing them in terms of primitive idempotents, I now understand representations of Clifford algebras far better than I ever did before. I'm thinking I should split that stuff out of the paper; it's way too long.

    By the way, I'm sure you'll be amused to know that I've got a lot of hits on my website recently associated with the above paper making the "crank of the day" link at www.crank.net.

    Carl
     
    Last edited: Mar 10, 2006