Hilbert space metric tensor?

  1. Jul 22, 2011 #1
    When I was studying general relativity, I learned that to change a vector into a covector (or vice versa), one used the metric tensor. When I started quantum mechanics, I learned that the difference between a vector in Hilbert space and its dual is that each element of one is the complex conjugate of the corresponding element of the other.

    Is there a metric tensor for Hilbert space that will do this - that is - change a vector into its dual, in the same way?
     
  3. Jul 22, 2011 #2

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    The inner product that's a part of the definition of "Hilbert space" does the job. Suppose that H is a Hilbert space. For each y in H, the function [itex]B_y:H\rightarrow\mathbb C[/itex] defined by [itex]B_y(x)=\langle y,x\rangle[/itex] for all x in H, is a member of the dual space H*.

    This is part of the motivation for Dirac's "bra-ket notation": We use the notation |x> instead of x and <y| instead of [itex]B_y[/itex].

    More about the notation here. (Post 4).

    More about the relationship between H and H* here. (Posts 13-14).
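    As a minimal numerical sketch of this map (not from the thread; the function name B and the use of NumPy are illustrative assumptions), take H = C^n with the standard inner product, which is conjugate-linear in its first argument:

    Code (Python):

    import numpy as np

    def B(y):
        """Return the dual functional B_y, where B_y(x) = <y, x>.
        np.vdot conjugates its first argument, matching the convention
        that the inner product is antilinear in the first slot."""
        return lambda x: np.vdot(y, x)

    y = np.array([1.0 + 2.0j, 3.0j])
    x = np.array([0.5, 1.0 - 1.0j])

    print(B(y)(x))          # <y, x>
    print(np.vdot(y, x))    # the same complex number, by construction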
     
  4. Jul 22, 2011 #3
    The metric tensor defines an inner product. Like [itex]\mathbf{a} \cdot \mathbf{b} = g_{\mu \nu} a^\mu b^\nu[/itex].

    What is the inner product between two states in a Hilbert space? It cannot, in fact, be written in this form. So ask yourself this: Is a Hilbert space a manifold? Can it be "locally" mapped to R^n?
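    As a concrete illustration of that formula (a sketch, not from the thread; the 2D polar-coordinate metric diag(1, r^2) is just a convenient example):

    Code (Python):

    import numpy as np

    r = 2.0
    g = np.diag([1.0, r**2])            # metric g_{mu nu} in polar coordinates (r, theta)

    a = np.array([1.0, 0.3])            # contravariant components a^mu
    b = np.array([0.5, -0.1])           # contravariant components b^nu

    inner = np.einsum('ij,i,j->', g, a, b)   # g_{mu nu} a^mu b^nu
    print(inner)                        # same value as a @ g @ b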
     
  5. Jul 24, 2011 #4
    So then, a manifold and its corresponding metric tensor define a vector space (the one of small displacements inside the manifold) with an inner product, but a vector space with an inner product doesn't define - or even require - a metric or manifold. Hmm.

    Another way of asking my original question would have been, "is there a square matrix that will turn a list of complex numbers into their conjugates?" It seems that the answer is no.

    Thanks guys.
     
  6. Jul 24, 2011 #5

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    One way to see that the answer is no is to note that you're describing a non-linear transformation and that multiplication by a matrix is a linear transformation.

    This doesn't seem to have a lot to do with your original question. Even when we're dealing with the real vector space of n×1 matrices, the product of an n×n matrix with an n×1 matrix will just give you another n×1 matrix, not a member of the dual space.
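    A quick numerical check of the non-linearity point above (a sketch, not from the thread): complex conjugation is antilinear, so scaling the input by i flips the sign of i in the output, which no matrix multiplication can reproduce.

    Code (Python):

    import numpy as np

    v = np.array([1.0 + 1.0j, 2.0 - 3.0j])

    lhs = np.conj(1j * v)            # conjugate of (i v)
    rhs = 1j * np.conj(v)            # what any linear map would have to give
    print(np.allclose(lhs, -rhs))    # True: conj(i v) = -i conj(v), not +i conj(v)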
     
  7. Jul 24, 2011 #6
    Oh yeah! :biggrin:

    It does in the sense that

    [tex]
    \begin{pmatrix} V_{r} \\ V_{\theta} \end{pmatrix}
    =
    \begin{pmatrix} g_{rr} & g_{r\theta} \\ g_{\theta r} & g_{\theta\theta} \end{pmatrix}
    \begin{pmatrix} V^{r} \\ V^{\theta} \end{pmatrix}
    [/tex]
     
  8. Jul 24, 2011 #7

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    I suspected that this is what you had in mind. When you're dealing with a complex Hilbert space, this is how you "turn a list of complex numbers into their conjugates":

    Let H be a Hilbert space and [itex]\{e_i\}[/itex] an orthonormal basis for it. Each [itex]y\in H[/itex] defines a "list of complex numbers" since [itex]y=y_i e_i[/itex]. The ith component of [itex]B_y\in H^*[/itex] (defined in my first post), in the dual basis associated with the orthonormal basis [itex]\{e_i\}[/itex] of H, is [tex](B_y)_i=B_y(e_i)=\langle y,e_i\rangle=\langle y_j e_j,e_i\rangle=y_j^*\langle e_j,e_i\rangle=y_j^* \delta_{ji}=y_i^*[/tex]
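    A numerical version of this calculation (a sketch, assuming H = C^n with the standard orthonormal basis and NumPy's convention that vdot conjugates its first argument):

    Code (Python):

    import numpy as np

    y = np.array([1.0 + 2.0j, -1.0j, 0.5])

    # the i-th dual-basis component of B_y is B_y(e_i) = <y, e_i>
    dual_components = np.array([np.vdot(y, e) for e in np.eye(3)])

    print(dual_components)                           # equals conj(y)
    print(np.allclose(dual_components, y.conj()))    # True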
     
  9. Jul 24, 2011 #8
    Got it. Thanks, Fredrik.
     
  10. Jul 24, 2011 #9

    Hurkyl

    Staff Emeritus
    Science Advisor
    Gold Member

    You don't know it, but you've implicitly used the Euclidean metric tensor (i.e. the dot product) to justify writing the covector as a column vector on the left hand side.

    When using a choice of coordinates to turn linear algebra into matrix algebra, vectors* are Nx1 matrices and covectors are 1xN matrices. Transposition is the matrix operation that uses the dot product to convert between column and row vectors.



    *: Following the usual convention that one particular vector space is more "important" than the rest: we call its elements vectors, and we use different terms for the elements of every other vector space.
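    A small sketch of that point about transposition (not from the thread): with the Euclidean metric, whose matrix is the identity, lowering an index changes nothing numerically, so converting a column vector into the corresponding row covector is just the transpose.

    Code (Python):

    import numpy as np

    g = np.eye(3)                          # Euclidean metric in orthonormal coordinates
    v = np.array([[1.0], [2.0], [3.0]])    # a vector as a 3x1 column matrix

    covector = (g @ v).T                   # lower the index, then write it as a 1x3 row
    print(covector)                        # same entries: [[1. 2. 3.]]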
     
  11. Jul 25, 2011 #10
    I don't completely follow. I know there's a convention of writing vectors as columns and covectors as rows, but I don't see how my equation in entry #6 is necessarily using the Euclidean metric. Those g's could have been anything, after all.
     
  12. Jul 25, 2011 #11

    Hurkyl

    Staff Emeritus
    Science Advisor
    Gold Member

    When I said the Euclidean metric, I wasn't referring to the g's in your matrix. I was referring to the fact that you used the transpose operation. You used it twice, actually: once to put the coordinates of your covector into a column matrix, and once to put the coordinates of your metric into a 2x2 matrix. (In reality, the coordinate form of your metric g should be a 1x4 partitioned matrix, viewed as a 1x2 matrix of 1x2 matrices.)

    Transposition is the matrix operation corresponding to raising/lowering indices according to the Euclidean metric.



    In fact, when other metrics are more important, you will see people change the transpose operator -- for example, in special relativity, people often write
    Code (Text):

    / t \T
    | x |
    | y | = (t, -x, -y, -z)
    \ z /
     
    (or, depending on their sign convention, the right hand side could be (-t, x, y, z))
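    For example, here is the same index lowering done explicitly with the Minkowski metric (a sketch, not from the thread; the (+,-,-,-) signature matches the version written above):

    Code (Python):

    import numpy as np

    eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+,-,-,-)

    v = np.array([2.0, 0.5, -1.0, 3.0])      # contravariant components (t, x, y, z)
    v_lower = eta @ v                        # covariant components (t, -x, -y, -z)

    print(v_lower)                           # [ 2.  -0.5  1.  -3. ]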
     
  13. Jul 25, 2011 #12

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    Now I don't understand. If [itex]V=\mathbb R^n[/itex] and [itex]\{e_i\}[/itex] is the standard basis, the standard isomorphism from V into V* can be defined by saying that for each [itex]v=v^ie_i\in V[/itex], the corresponding member of V* has the components [itex]v_i=g_{ij} v^j[/itex] in the dual basis [itex]\{e^i\}[/itex]. By definition of matrix multiplication, that equality is the ith component of a matrix equation like the one snoopies posted. No need to use the transpose operation, especially not on g.
     
  14. Jul 25, 2011 #13

    Hurkyl

    Staff Emeritus
    Science Advisor
    Gold Member

    You're right in the sense that you get correct results... but you get correct results in a way that hides some of what's going on.

    There is a natural progression from ignoring the difference between vectors and covectors to ignoring the difference between all shapes of rank 2 tensors and writing their coordinate forms as matrices.

    While this does simplify some things (as long as you don't go beyond rank 2), it loses some of the subtle features of multilinear algebra that help describe things.
     
  15. Jul 27, 2011 #14
    Deep as always. Thanks, Hurkyl.
     
  16. Apr 10, 2013 #15
    It has not been mentioned, but one can extend the concept of a finite-dimensional manifold to infinite dimensions; one such notion is a Hilbert manifold, whose tangent space at each point is isomorphic to a Hilbert space. So in principle a Hilbert space does have a metric defined on it, but since it is infinite-dimensional, a tensorial coordinate representation of that metric is not really of any use.
     