# Representations, states and tensors

1. ### spookyfish

53
Hi. I am currently studying about representations of Lie algebras. I have two questions:

1. As I understand, when we say a "representation" in the context of Lie algebras, we don't mean the matrices (satisfying the appropriate Lie algebra) but rather the states on which they act. But then the states seem to contain less information than the matrices, since several different representations (of a given dimension) could act on the same states.

I am trying to understand why these states contain the same information as the matrices. Is it because by "states" we actually mean weight vectors, which, in addition to the vectors themselves, carry the eigenvalues, so that we could build up the matrices from them, and therefore finding them is equivalent to finding the generator matrices?

2. When we speak about "tensor representations" (for example, SU(n) tensors, which can also be described by Young tableaux), what is their relation to the representations I mentioned in question 1 above?

Thanks.

2. ### samalkhaiat

1,117
If I write
$$[2] \otimes [2] = [3] \oplus [1]$$
Do you know the meaning of those numbers and their connection to "representation spaces", "states", "matrices" and "tensor representation" of $SU(2)$ and its algebra?

3. ### spookyfish

53
I know the connection to states: the LHS is the direct product of 2D states and the RHS is the 3D and 1D states corresponding to the irreps.

4. ### samalkhaiat

1,117
Ok, $[2]$ is the smallest-dimensional (defining) representation space, i.e., a 2-dimensional linear vector space $V^{ (2) }$. The elements of such a space are 2-component vectors $v_{ i }$. The $2 \times 2$ matrices of $SU(2)$ act naturally on $v_{ i }$, mixing its components (states).
So, $[2] \otimes [2]$ is the 4-dimensional (reducible) representation space $V^{ (4) }$, which contains 2 invariant (irreducible) subspaces $V^{ (3) }$ and $V^{ (1) }$. Again, you can think of the elements of $[3]$ (3 states) as 3-component vectors or as a symmetric rank-2 tensor, and the elements of the space $[1]$ (one state) as a 1-component scalar or an anti-symmetric rank-2 tensor.
So, we can translate $V^{ (4) } = V^{ (3) } \oplus V^{ (1) }$ into a statement about reducing the tensor product $u_{ i } v_{ j } \equiv T_{ i j } \in V^{ (4) }$ into irreducible tensors. Simply, decompose the tensor product into symmetric and anti-symmetric tensors:
$$u_{ i } v_{ j } = \frac{ 1 }{ 2 } ( u_{ i } v_{ j } + u_{ j }v_{ i } ) + \frac{ 1 }{ 2 } ( u_{ i } v_{ j } - u_{ j } v_{ i } ) ,$$
or
$$T_{ i j } = T_{ ( i j ) } + T_{ [ i j ] }$$
For SU(2), the (irreducible) symmetric tensor $T_{ ( i j )}$ has 3 independent components (3 states) and the anti-symmetric tensor $T_{ [ i j] }$ has only one component (one state). The (3 + 1) states, vectors or tensors do not mix.
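The decomposition of $T_{ i j }$ into symmetric and antisymmetric parts, and the $2 \times 2 = 3 + 1$ component count, can be checked numerically. A minimal sketch (the particular values of $u$ and $v$ are arbitrary choices for illustration, not from the discussion):

```python
import numpy as np

# Two arbitrary 2-component vectors (illustrative values).
u = np.array([1.0, 2.0])
v = np.array([3.0, -1.0])

# Rank-2 tensor T_ij = u_i v_j.
T = np.outer(u, v)

# Split into symmetric and antisymmetric parts.
T_sym = 0.5 * (T + T.T)
T_anti = 0.5 * (T - T.T)

# The two parts reconstruct T exactly.
assert np.allclose(T, T_sym + T_anti)

# Independent components: a 2x2 symmetric tensor has n(n+1)/2 = 3
# (the [3]), a 2x2 antisymmetric tensor has n(n-1)/2 = 1 (the [1]),
# so 2*2 = 3 + 1.
n_sym = 2 * (2 + 1) // 2
n_anti = 2 * (2 - 1) // 2
print(n_sym, n_anti)  # 3 1
```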

Sam

5. ### spookyfish

53
Thanks. That answers my second question, i.e. it connects the tensors to the state representations. So the irreducible representations are represented by symmetric and antisymmetric tensors. But I still don't understand why: I read that in general the irreps of SU(N) are tensors with definite symmetry properties, but I don't know why that is true.

Also, I would like to get an answer to my question 1: why are the states the representations in the first place? I thought representations were supposed to be matrices (generators) satisfying the Lie algebra.

6. ### The_Duck

988
We often use the same word "representation" to refer both to a set of matrices that satisfy the Lie algebra commutation relations and to the vector space on which those matrices act.

7. ### strangerep

2,214
Strictly speaking, the "representation" is the mapping from the abstract generators to specific matrices acting on a linear space.
Cf. this Wiki page.

People often say that the matrices "represent" the abstract group generators. Similarly, the linear space is sometimes said to be the "carrier space" for the representation, or that it "carries the representation".

8. ### spookyfish

53
Why are the states as useful in describing the group structure as the matrices themselves? Is it because it is easy to build the matrices from the states (using weights and roots)?

9. ### samalkhaiat

1,117
It is easy to check that the (anti-)symmetry property of a tensor is an invariant property, i.e., the group maps (anti-)symmetric tensors onto (anti-)symmetric tensors: they never mix.
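This invariance can be verified directly: transforming both indices of a tensor by a group element, $T'_{ i j } = U_{ i k } U_{ j l } T_{ k l }$, preserves (anti-)symmetry. A sketch, using an arbitrary diagonal $SU(2)$ element for concreteness (any group element works, since the argument only uses the transformation law):

```python
import numpy as np

# A sample SU(2) matrix: diag(e^{-i theta/2}, e^{+i theta/2}).
theta = 0.7
U = np.array([[np.exp(-1j * theta / 2), 0],
              [0, np.exp(1j * theta / 2)]])

# A symmetric and an antisymmetric rank-2 tensor (illustrative values).
S = np.array([[1.0, 2.0], [2.0, 5.0]], dtype=complex)
A = np.array([[0.0, 1.0], [-1.0, 0.0]], dtype=complex)

# Transform both indices: T'_{ij} = U_{ik} U_{jl} T_{kl} = (U T U^T)_{ij}.
S_prime = U @ S @ U.T
A_prime = U @ A @ U.T

assert np.allclose(S_prime, S_prime.T)   # stays symmetric
assert np.allclose(A_prime, -A_prime.T)  # stays antisymmetric
```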

I thought I'd answered that already! What is the difference between "states" (in the linear Hilbert space) and "vectors" (in the linear representation space)? The group can be represented by unitary operators acting on the Hilbert (representation) space. In QM-1 you learn that you can (almost always) set up a 1-to-1 correspondence between states and vectors on one hand, and between operators and matrices on the other.
In general, a Lie algebra admits matrix as well as operator representations.

Sam

10. ### spookyfish

53
Unfortunately that was not my question. I know that states are vectors. My question was: why find the vectors and not the matrices? I thought our goal in representation theory was to find matrix representations of groups.

11. ### samalkhaiat

1,117
No; as well as the generators (matrices or operators), you also need the eigenvalues and eigenvectors of the invariant Casimir operators of the group. The Pauli matrices alone do not let you formulate a theory for a spin-1/2 particle. You also need the simultaneous eigenvectors of $S^{ 2 }$ and $S_{ z }$.
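For the spin-1/2 case this is easy to make concrete: building $S_{ i } = \sigma_{ i } / 2$ from the Pauli matrices, the Casimir $S^{ 2 }$ comes out proportional to the identity, and the standard basis vectors are simultaneous eigenvectors of $S^{ 2 }$ and $S_{ z }$. A sketch (units with $\hbar = 1$):

```python
import numpy as np

# Pauli matrices and spin-1/2 operators S_i = sigma_i / 2 (hbar = 1).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Sx, Sy, Sz = sx / 2, sy / 2, sz / 2

# Casimir operator S^2 = Sx^2 + Sy^2 + Sz^2 = s(s+1) I with s = 1/2.
S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz
assert np.allclose(S2, 0.75 * np.eye(2))  # s(s+1) = 3/4

# The basis vectors |up>, |down> are simultaneous eigenvectors
# of S^2 (eigenvalue 3/4) and Sz (eigenvalues +1/2, -1/2).
up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
assert np.allclose(Sz @ up, 0.5 * up)
assert np.allclose(Sz @ down, -0.5 * down)
```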

12. ### spookyfish

53
Fine, but that still does not answer my question. I would appreciate it if someone could take the time to answer my previous question.