Doubt from Georgi (Lie Algebras)

  1. Jan 12, 2010 #1
In Georgi (Lie Algebras, second edition), theorem 1.5 states:

    " The matrix elements of the unitary, irreducible representations of G are a complete orthonormal set for the vector space of the regular representation, or alternatively, for functions of g belonging to G."

    Now, just before that, he says that the matrix elements of the regular representation can be written as linear combinations of matrix elements of irreducible representations. Well, I imagine that the regular representation can be brought into block-diagonal form by a similarity transformation such that each block is some irreducible representation. Since the similarity transformation just mixes up the elements of a matrix, the matrix elements of the regular rep are linear combinations of the elements of the irreps. This I understand, and am happy! :smile:

    But what does he mean by "vector space" above? If he just means the elements of the matrices, I am happy and can understand it, but I imagine the vector space to be the space of column vectors, like [tex]\left(\begin{array}{c}1\\0\end{array}\right)[/tex] and [tex]\left(\begin{array}{c}0\\1\end{array}\right)[/tex] for SU(2)...
  3. Jan 12, 2010 #2
    Science Advisor, Gold Member

    The regular representation is not a priori a linear representation until you define the space of linear combinations of group elements.

    It's really a linear extension of the regular rep. You must formally define linear combinations of group elements and, with some smudging, extend this to functions (distributions) over the group, since it is a Lie group.

    This space of (e.g. complex-valued) functions over the Lie group is, I believe, the vector space in question.

    Example with SU(2). The group elements are parameterized by the coefficients in the Lie algebra:
    [tex] g(a,b,c) = \exp(a\,i\sigma_x+b\,i\sigma_y+c\,i\sigma_z)[/tex]
    Hence the space in question is the set of functions of (a,b,c), appropriately periodic with the group elements. We needn't use a parametrization, though: you can work directly with the group, which is geometrically isomorphic to the 3-sphere.

    Or, if you like, you can work with the quaternions, since SU(2) ≅ Sp(1), the group of unit quaternions.
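    The SU(2) parametrization above can be checked numerically. A minimal sketch (mine, not from the book; only the Pauli matrices and the formula for g(a,b,c) come from the post, the rest is illustrative), computing the matrix exponential by diagonalizing the Hermitian combination:

```python
# Sketch: check that g(a,b,c) = exp(i(a*sx + b*sy + c*sz)) lies in SU(2).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def g(a, b, c):
    """exp(i*H) for Hermitian H = a*sx + b*sy + c*sz, via eigendecomposition."""
    H = a * sx + b * sy + c * sz
    w, V = np.linalg.eigh(H)                      # H is Hermitian
    return V @ np.diag(np.exp(1j * w)) @ V.conj().T

u = g(0.3, -1.2, 0.7)
print(np.allclose(u.conj().T @ u, np.eye(2)))     # unitary: True
print(np.isclose(np.linalg.det(u), 1.0))          # det = 1 (Paulis are traceless): True
```

    The determinant check works because det exp(iH) = exp(i tr H) and the Pauli matrices are traceless.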
  4. Jan 13, 2010 #3
    Georgi has not started the topic of Lie groups yet; this is from the first chapter. I do not think the above interpretation is correct. Have you looked at the book?
  5. Jan 13, 2010 #4

    I have it in hand now. I thought you'd gotten to Lie algebras (and when you do I think the theorem applies as well).

    Look at section 1.1 where he defines a representation as a mapping from the group to a set of linear operators, and then at section 1.3 where he defines the regular representation.

    My apologies for not realizing you were still talking about finite groups. As I look through the book, I do not see where he mentions the regular representation of a Lie group. As I mentioned, there's some "smudging" involved. (And it is not as critical to the physics as the adjoint rep is.)

    ( What follows is more off-topic.)

    Note at the beginning of section 1.3 he mentions defining an "orthonormal basis" indexed by the group elements, but you needn't have any metric structure at all to define the regular representation. By making the group elements orthonormal he is defining a metric on the space, so that the regular representation is also unitary or orthogonal. That is not essential to the definition of the representation itself, but appending this condition is essential to subsequent theorems, so take it with the definition. (One can be more rigorous and define the metric afterwards to fit; the theorems then come out the same.)

    In a broader sense one can speak of representations and then a subclass of linear representations. (Examples of more general representations are e.g. point mappings, or other geometric transformations.) Within the context of the book however he is taking "representation" and "linear representation" as synonymous and some will argue always should be taken as such. Some references would refer to what I describe as a "not linear" representation as a presentation.

    To my mind a presentation defines the group uniquely, while representations are group homomorphisms (product-preserving mappings). Hence a linear representation is a homomorphism from a given group to a general linear group (the group of all invertible linear operators acting on a vector space, i.e. the automorphism group of a vector space). Maybe this is being pedantic, but I think there is virtue in such hair splitting as one begins seeking more general contexts.
  6. Jan 15, 2010 #5
    Er, I am still confused. When Georgi says in theorem 1.5 "vector space of the regular representation", what is the vector space?

    I seem to understand from your reply that it is not the space made of the (1 0 ) and (0 1) column vectors...

    Neither is it the space made of the matrix elements of the regular representation...

    Then what is it?
  7. Jan 15, 2010 #6

    Look again at the definition of the regular representation in section 1.3.
    Example: Consider the cyclic group of order 3:
    [tex]Z_3= \{1,z=z^{-2},z^2=z^{-1}\},\quad z^3 = 1[/tex]

    We have 3 group elements so we define a 3-dimensional space with basis indexed by these three elements.

    Define the basis of our vector space as [tex]|1\rangle,\;|z\rangle,\;|z^{-1}\rangle[/tex].

    (Using "bra-ket" notation with [tex]\langle a|b\rangle = \delta^a_b[/tex], i.e. = 1 if a = b and 0 otherwise. That's the "unitarity" business: choosing the basis to be orthonormal.)

    Then the regular representation is:

    [tex] 1 \to\quad |1\rangle\langle 1| \; +\; |z\rangle\langle z| \;+\; |z^{-1}\rangle\langle z^{-1}|\sim \begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\0 & 0 & 1\end{pmatrix};\quad \mathbf{e}_1 \sim \begin{pmatrix}1\\0\\0\end{pmatrix}[/tex]

    [tex] z \to \quad|z\rangle\langle 1| \;+\; |z^{-1}\rangle\langle z| \;+\; |1\rangle\langle z^{-1}|\sim \begin{pmatrix} 0 & 0 & 1\\ 1 & 0 & 0\\0 & 1 & 0\end{pmatrix};\quad \mathbf{e}_z \sim \begin{pmatrix}0\\1\\0\end{pmatrix}[/tex]

    [tex] z^{-1} \to \quad|z^{-1}\rangle\langle 1| \;+\; |1\rangle\langle z| \;+\; |z\rangle\langle z^{-1}|\sim \begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\1 & 0 & 0\end{pmatrix};\quad \mathbf{e}_{1/z} \sim \begin{pmatrix}0\\0\\1\end{pmatrix}[/tex]

    You can diagonalize these matrices in the complex domain (simultaneously, since the group is abelian). You should find a basis where:
    [tex] 1 \to diag(1,1,1) [/tex]
    [tex] z \to diag(1,e^{2\pi i/3},e^{-2\pi i/3})[/tex]
    [tex] z^{-1} \to diag(1,e^{-2\pi i/3},e^{2\pi i/3})[/tex]
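    This simultaneous diagonalization is easy to check numerically. A sketch of mine (assuming NumPy; the matrices are the ones given above, the change-of-basis matrix is the discrete-Fourier/character basis):

```python
# Sketch: diagonalize the Z_3 regular-representation matrices at once.
import numpy as np

Mz    = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]], dtype=complex)  # z
Mzinv = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=complex)  # z^{-1}

w = np.exp(2j * np.pi / 3)  # primitive cube root of unity
# Unitary change of basis; column k is the eigenvector (w^{-jk})_j / sqrt(3)
F = np.array([[w ** (-j * k) for k in range(3)] for j in range(3)]) / np.sqrt(3)

Dz = F.conj().T @ Mz @ F
print(np.allclose(Dz, np.diag([1, w, w ** 2])))        # diag(1, e^{2πi/3}, e^{-2πi/3}): True
print(np.allclose(F.conj().T @ Mzinv @ F, Dz.conj()))  # z^{-1} gets the conjugate phases: True
```

    The same F diagonalizes every element because the matrices commute (abelian group).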

    Here's something to keep in mind: you have two distinct vector spaces here (and in most problems), the abstract vector space of the representation and then its "re-representation" in terms of matrices acting on column vectors. Given a vector space, when you choose a basis you are also choosing an isomorphism from the abstract vector space onto the column vectors of coefficients in that basis. It is important to keep them distinct (hence my using ~, "corresponds to", instead of =, "equals"). The reason this is important is that the very same vector can correspond to many different column vectors given different choices of basis.

    Then the way to get back to a given vector in the abstract space is to multiply the column vector of coefficients on the left by a row vector of basis elements:

    [tex] \begin{pmatrix}\mathbf{i} & \mathbf{j}& \mathbf{k}\end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix} = x\mathbf{i} + y\mathbf{j} + z\mathbf{k}[/tex]

    This is one of those distinctions that authors in an applied field will play fast and loose with, because the typical reader has played enough linear algebra games to instinctively keep things straight. It can be frustrating for students until they realize this and get the hang of it.

    This is also one of the great things about the "bra-ket" notation. It avoids this confusion allowing you to point directly at the vectors in the abstract space and still do some of the calculations. Just always keep in mind when you transition to matrices you're slipping over briefly into a distinct space, the space of column vectors, doing the calculation and then slipping back.
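    The "same vector, different column vectors" point can be made concrete with a tiny sketch (mine; the two bases are arbitrary illustrative choices, assuming NumPy):

```python
# Sketch: one abstract vector, two bases, two different coordinate columns;
# basis-row times coordinate-column recovers the same vector either way.
import numpy as np

B1 = np.eye(2)                                 # basis (i, j) as columns
B2 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # a rotated orthonormal basis

v = np.array([3.0, 4.0])                       # the vector 3i + 4j
c1 = np.linalg.solve(B1, v)                    # its coordinates in B1
c2 = np.linalg.solve(B2, v)                    # its coordinates in B2

print(np.allclose(c1, c2))                     # different columns: False
print(np.allclose(B1 @ c1, B2 @ c2))           # same abstract vector: True
```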
  8. Jan 19, 2010 #7
    Thank you!! That was illuminating. So, just to make sure I understood it correctly: the vector space in question is the space of the matrices? If that's a yes, we can close this thread now. :smile:
  9. Jan 19, 2010 #8

    The space on which they act, yes. (The matrices themselves form a linear space too, but that's not quite the same one. From your wording I'm not sure you're making the distinction.)

    Note each group element occurs twice: as a matrix acting on the vector space, and as a basis vector of that space.

    For group element [tex]x[/tex] we then have matrix [tex]M_x[/tex] and basis vector [tex]\mathbf{e}_x[/tex].

    If in the group [tex]xy=z[/tex] then the matrices are such that:
    [tex]M_x \mathbf{e}_y = \mathbf{e}_z[/tex]
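    This defining property is easy to verify mechanically for Z_3. A sketch of mine (assuming NumPy, with group elements encoded as exponents of z):

```python
# Sketch: regular representation of Z_3 with elements z^0, z^1, z^2;
# check M_x e_y = e_{xy}, i.e. M(x) sends basis vector y to basis vector x+y mod 3.
import numpy as np

def M(x):
    """Regular-representation (permutation) matrix of z^x."""
    m = np.zeros((3, 3))
    for y in range(3):
        m[(x + y) % 3, y] = 1      # column y has its 1 in row x+y mod 3
    return m

e = np.eye(3)                      # e[:, y] is the basis vector for z^y
ok = all(np.allclose(M(x) @ e[:, y], e[:, (x + y) % 3])
         for x in range(3) for y in range(3))
print(ok)                          # True

# And the map is a homomorphism: M(1) M(2) represents z * z^2 = 1
print(np.allclose(M(1) @ M(2), M(0)))   # True
```

    M(1) here reproduces the matrix given for z in the earlier post.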
  10. Jan 20, 2010 #9
    I do understand the distinction between the two vector spaces in question; my problem was which one is being referred to in theorem 1.5, and how the equation after it follows. I need to sort some things out here; I will be back in a couple of days with what I manage to understand.
  11. Jan 26, 2010 #10
    Hey, now it's all clear. I looked at the book by Hamermesh.

    Thank you for your help!:smile:
  12. Jan 26, 2010 #11

    Excellent. Georgi's book is good later but the preliminary stuff can be a bit "fast and loose".
  13. Sep 1, 2010 #12
    Hello, I'm reading Georgi's book right now and I can't understand the proof of the Theorem 1.1 on page 10 (Every representation of a finite group is equivalent to a unitary representation).

    What I don't understand is why the operator [tex]S=\sum_{g\in G}D(g)^\dagger D(g)[/tex] can be diagonalized.
  14. Sep 2, 2010 #13

    By construction it is Hermitian, so the spectral theorem applies (with real eigenvalues, in fact). Recall that diagonalizing means transforming into the eigenbasis of the operator: a Hermitian operator has a complete set of eigenvectors forming a basis, and it is diagonal in that basis. (S is in fact positive definite, since [tex]\langle v,Sv\rangle = \sum_{g}\|D(g)v\|^2 \ge \|v\|^2[/tex] because the sum includes D(e) = 1; that is what lets one take [tex]S^{1/2}[/tex] in the proof.)
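    To see what S buys you in Theorem 1.1, here is a sketch (my example, not the book's; assuming NumPy): a non-unitary representation of Z_2 is made unitary by conjugating with S^{1/2}.

```python
# Sketch: S = sum_g D(g)^† D(g) for a non-unitary rep of Z_2,
# then X(g) = S^{1/2} D(g) S^{-1/2} is unitary.
import numpy as np

A = np.array([[1.0, 0.0], [3.0, -1.0]])        # A @ A = I, but A is not unitary
reps = [np.eye(2), A]                          # D(1), D(z), with z^2 = 1

S = sum(D.conj().T @ D for D in reps)
print(np.allclose(S, S.conj().T))              # Hermitian by construction: True

w, V = np.linalg.eigh(S)                       # spectral theorem: real eigenvalues
Sh  = V @ np.diag(np.sqrt(w))     @ V.conj().T # S^{1/2}
Shi = V @ np.diag(1 / np.sqrt(w)) @ V.conj().T # S^{-1/2}

unitary = all(np.allclose((Sh @ D @ Shi).conj().T @ (Sh @ D @ Shi), np.eye(2))
              for D in reps)
print(unitary)                                 # True
```

    The square roots exist because S is positive definite; the unitarity of X(g) follows from the rearrangement D(g)† S D(g) = S.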
  15. Sep 2, 2010 #14
    Thanks! :smile: