Regarding commutative Lie groups and irreducible rep

SUMMARY

The discussion centers on commutative Lie groups and their irreducible representations, in the spirit of Schur's lemma. It is shown that for a commutative group ##G## and a representation ##\rho: G \to GL(V)##, the eigenspace of any single operator ##\rho(g)## is invariant under the action of all of ##G##. The conclusion is that every irreducible representation of a commutative group is one-dimensional, since each group element then acts on ##V## by multiplication by a scalar.

PREREQUISITES
  • Understanding of commutative groups and their properties.
  • Familiarity with linear algebra concepts, particularly eigenvalues and eigenvectors.
  • Knowledge of representation theory, specifically irreducible representations.
  • Comprehension of Schur's lemma and its implications in the context of group representations.
NEXT STEPS
  • Study the implications of Schur's lemma in greater detail.
  • Explore the structure of irreducible representations of finite groups.
  • Learn about the relationship between eigenvalues and invariant subspaces in linear transformations.
  • Investigate the properties of commutative Lie groups and their representations in various mathematical contexts.
USEFUL FOR

Mathematicians, particularly those specializing in representation theory, linear algebra, and group theory, will benefit from this discussion. It is also relevant for graduate students studying advanced topics in algebra and geometry.

aalma
Homework Statement
Below
Relevant Equations
Question: Use that ##G = \mathbb{R}/\mathbb{Z}## is commutative and prove that every irreducible representation of ##G## is one-dimensional. Hint: Choose an element ##g \in G## and find an eigenvalue ##\lambda## and a corresponding eigenvector ##v \in V##. Prove that ##\{v \in V \mid g(v) = \lambda v\}## is a ##G##-subrepresentation.

Here, I can look at an arbitrary irreducible representation ##\rho:\mathbb{R}/\mathbb{Z}\to GL(V)## of ##G=\mathbb{R}/\mathbb{Z}##, where ##V## is a vector space.
##G## is commutative, so ##gh=hg## for every ##g, h\in G##.
We can choose an arbitrary element ##g \in G## and consider the operator ##\rho(g) \in GL(V)## associated with ##g##. Since ##\rho(g)## is a linear operator on ##V##, we can find an eigenvalue ##\lambda## and a corresponding eigenvector ##v \in V## such that ##\rho(g)(v) = \lambda v##.
Denote the given set by ##W=\{v \in V \mid g(v) = \lambda v\}\subset V##.

Does ##W## being a subrepresentation of ##G## mean that ##\forall w\in W, g\in G: \rho(g)(w)\in W##?

I have some doubts about how we use the fact that ##G## is commutative here, and about how we need to use that the representation of ##G## is irreducible.

Definition: A Lie algebra ##\mathfrak{g}## is called commutative if for all ##X,Y \in \mathfrak{g}## one has ##[X,Y] = 0##.
-Commutative Lie algebras are just real vector spaces (with the trivial bracket).
-If ##G## is a commutative Lie group, then ##\mathrm{Lie}(G)## is a commutative Lie algebra.
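(As a concrete instance of these definitions, if I am reading them correctly: ##G=\mathbb{R}/\mathbb{Z}## is a commutative Lie group, and its Lie algebra is
$$
\mathrm{Lie}(\mathbb{R}/\mathbb{Z})\cong\mathbb{R},\qquad [X,Y]=0\ \text{ for all }X,Y\in\mathbb{R},
$$
i.e. a one-dimensional real vector space with the trivial bracket.)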


It would be very helpful if you could explain the idea/concept behind this claim.
 
aalma said:
Does ##W## being a subrepresentation of ##G## mean that ##\forall w\in W, g\in G: \rho(g)(w)\in W##?
Yes.

aalma said:
I have some doubts about how we use the fact that ##G## is commutative here, and about how we need to use that the representation of ##G## is irreducible.
The keyword here is Schur's lemma. But you can use the profane version of it that the only matrices in the center of ##GL(V)## are multiples of the identity matrix. The same is true for the vector space ##M(n,\mathbb{C})## of all matrices.
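A quick sketch of that "profane" version, in case it helps (standard linear algebra, not specific to your problem): suppose ##A\in M(n,\mathbb{C})## commutes with every matrix. Test it against the matrix units ##E_{ij}##, which have a ##1## in position ##(i,j)## and zeros elsewhere:
$$
(AE_{ij})_{kl}=A_{ki}\delta_{jl},\qquad (E_{ij}A)_{kl}=\delta_{ik}A_{jl}.
$$
Comparing entries gives ##A_{ki}=0## for ##k\neq i## and ##A_{ii}=A_{jj}## for all ##i,j##, hence ##A=\lambda\cdot\operatorname{id}## for some ##\lambda\in\mathbb{C}.##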

aalma said:
It would be very helpful if you could explain the idea/concept behind this claim.
 
fresh_42 said:
Yes. The keyword here is Schur's lemma. But you can use the profane version of it that the only matrices in the center of ##GL(V)## are multiples of the identity matrix. The same is true for the vector space ##M(n,\mathbb{C})## of all matrices.
Can you please explain how we use this (Schur's lemma, the version you mentioned) in this case? I do not understand how, starting from this question, we reach this lemma.
 
aalma said:
Can you please explain how we use this (Schur's lemma, the version you mentioned) in this case? I do not understand how, starting from this question, we reach this lemma.
1) What exactly is the statement you have to prove?
2) What are the conditions from which the statement has to follow?
3) Which theorems can you use?
4) Do you have hints?

Sorry, I'm sure you have written it, but I have difficulties holding things apart. It is always a good idea to clean the desk and first answer these four questions above. And not in one text where it isn't clear what is what.
 
fresh_42 said:
1) What exactly is the statement you have to prove?
2) What are the conditions from which the statement has to follow?
3) Which theorems can you use?
4) Do you have hints?

Sorry, I'm sure you have written it, but I have difficulties holding things apart. It is always a good idea to clean the desk and first answer these four questions above. And not in one text where it isn't clear what is what.
Yes, I agree. But it is still hard for me to connect all the things together. Can you provide some details of how this works? (Starting with ##g\in H##, how do we find ##\lambda## and ##v##?)
I tried to show that the given set is a subrep but I got stuck:
Given ##g\in G## and ##w\in V## with ##g(w)=\lambda w##, why is ##\rho(g)(w)## in this set, i.e. why is
##g(\rho(g)(w))=\lambda(\rho(g)(w))##?
And how does this then lead to what they ask?
 
aalma said:
Yes, I agree. But it is still hard for me to connect all the things together. Can you provide some details of how this works? (Starting with ##g\in H##, how do we find ##\lambda## and ##v##?)
What is ##H##? I assume it is a subgroup of the center by what you wrote, maybe the entire center of ##G##. However, if ##G## is commutative per the given condition, then ##H## is just any subgroup, or ##H=G##?

We have a representation ##\rho\, : \,G\longrightarrow GL(V).##

I assume that ##V## is a finite-dimensional, complex vector space. It means that ##\rho(g)## is a square matrix of the size of the dimension of ##V.## We are looking for values ##\lambda \in \mathbb{C}## such that
$$
\rho(g)(v)=\lambda \cdot v \, \Longleftrightarrow \, \rho(g)(v)-\lambda v=(\rho(g)-\lambda \cdot \operatorname{id}_V)(v)=0
$$
The notation ##g(v)## is sloppy since ##g## has nothing to do with ##v##; ##\rho(g)## does. You can write ##g(v)##, or better ##g.v##, but you must be aware that ##(\rho(g))(v)## is meant. This is especially important if ##G## itself consists of matrices. Then we can use matrix multiplication as ##\rho.##

You really need to tell us such things. Anyway.

We are looking for a complex number ##\lambda ,## a vector ##v\in V## and have an equation, namely
$$
(\rho(g)-\lambda \cdot \operatorname{id}_V)(v)=0 \text{ or } v\in \operatorname{ker}(\rho(g)-\lambda \cdot \operatorname{id}_V)
$$
I cannot give a lecture on linear algebra here, all the more as I'm basically already gone, and knowledge of linear algebra is definitely something you need. The equation above, which defines eigenvalues and eigenvectors, is solved by finding the zeros of the characteristic polynomial. In particular we have
$$
\det(\rho(g)-\lambda \cdot \operatorname{id}_V) = 0
$$
This is where we need the complex numbers: only there is it guaranteed that all zeros of this polynomial, whose degree is the dimension of ##V##, exist. Any polynomial over the complex numbers splits into linear factors, i.e. (I write ##x## for the complex indeterminate because we want to find values ##x##)
\begin{align*}
0&=\det(\rho(g) - x \cdot \operatorname{id}_V)=(a_1-x)\cdot(a_2-x)\cdot(a_3-x) \cdot\ldots\cdot (a_n-x)\\
\end{align*}
This polynomial is zero if and only if ##x=a_k## for some ##k.## Note that the ##\{a_k\}## can have repetitions! So we actually have something like ##0=(b_1-x)^{m_1} \cdot\ldots \cdot (b_k -x)^{m_k}## and the polynomial is zero if and only if ##x=b_r## for some ##r.##

Hence ##\lambda \in \{b_1,\ldots,b_k\}.## To find a corresponding vector, you have to solve
$$
\rho(g)(v)=\lambda \cdot v
$$
Note that there is, in general, more than one vector that fulfills the equation!
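If a concrete toy example helps (not part of your exercise, just the mechanics): take
$$
\rho(g)=\begin{pmatrix}\cos\theta &-\sin\theta \\ \sin\theta &\cos\theta \end{pmatrix}\in GL(\mathbb{C}^2).
$$
Then ##\det(\rho(g)-x\cdot \operatorname{id})=x^2-2\cos\theta \,x+1=(e^{i\theta }-x)(e^{-i\theta }-x),## so ##\lambda \in \{e^{i\theta },e^{-i\theta }\},## and for ##\lambda =e^{i\theta }## the vector ##v=(1,-i)^\top## solves ##\rho(g)(v)=\lambda \cdot v.## Over ##\mathbb{R}## these eigenvalues do not exist for ##\theta \neq 0,\pi ,## which is exactly why we pass to ##\mathbb{C}.##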

aalma said:
I tried to show that the given set is a subrep but I got stuck:
Given ##g\in G## and ##w\in V## with ##g(w)=\lambda w##, why is ##\rho(g)(w)## in this set, i.e. why is
##g(\rho(g)(w))=\lambda(\rho(g)(w))##?
And how does this then lead to what they ask?

Next, we have the eigenspace
$$
W(\lambda )=\left\{v\in V\,|\,\rho(g)(v)=\lambda \cdot v\;\textbf{ for all }\;g\in G\right\}.
$$
The forall quantifier is essential here as we will soon see, at (*)!

We need to show that ##W(\lambda )## is a vector subspace in ##V,## i.e. that
\begin{align*}
\alpha v+\beta w &\in W(\lambda )\text{ for all }\alpha,\beta \in \mathbb{C} \text{ and } v,w\in W(\lambda )\\
&\Longleftrightarrow \\
\rho(g)(\alpha v+\beta w)&=\ldots =\lambda \cdot (\alpha v+\beta w)
\end{align*}
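Spelled out, the omitted middle step is just linearity of ##\rho(g)## together with ##v,w\in W(\lambda ):##
$$
\rho(g)(\alpha v+\beta w)=\alpha \,\rho(g)(v)+\beta \,\rho(g)(w)=\alpha \lambda v+\beta \lambda w=\lambda \cdot (\alpha v+\beta w).
$$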

This proves that ##W(\lambda ) \subseteq V## is a subspace.

At last, we want to show that ##\left. \rho\right|_{W(\lambda )}\, : \,G\longrightarrow GL(W(\lambda ))## is again a representation. This means we have to show that ##\left. \rho\right|_{W(\lambda )}## is a group homomorphism, i.e. that
$$
\left. \rho\right|_{W(\lambda )}(g\cdot h)(v)=(\left. \rho\right|_{W(\lambda )}(g)\circ \left. \rho\right|_{W(\lambda )}(h))(v)
$$
or since ##\left. \rho\right|_{W(\lambda )}(g)## and ##\left. \rho\right|_{W(\lambda )}(h)## are matrices
$$
\left. \rho\right|_{W(\lambda )}(g\cdot h)=\left. \rho\right|_{W(\lambda )}(g) \, \cdot \,\left. \rho\right|_{W(\lambda )}(h)
$$
Fortunately, there is nothing to do here because ##\left. \rho\right|_{W(\lambda )}## inherits this property from ##\rho .## The restriction to ##W(\lambda )## does not change this. However, and this is essential, we do not know whether
$$
\left. \rho\right|_{W(\lambda )}(g)\,(W(\lambda )) \subseteq W(\lambda )\;\text{ for all }\;g\in G
$$
That means we have to show it. Let ##w\in W(\lambda ).## Then by definition of ##W(\lambda )##
\begin{align*}
(\,\left. \rho\right|_{W(\lambda )}(g)\,)\,(\left. \rho\right|_{W(\lambda )}(h))\,(w)&=(\rho(g))(\rho(h))(w)\\
&=\rho(g\cdot h)(w)\stackrel{(*)}{=}\lambda \cdot w
\end{align*}
So ##\left. \rho\right|_{W(\lambda )}(h)(w)\in W(\lambda )## or generally
$$
\left. \rho\right|_{W(\lambda )}(g)(W(\lambda ))\subseteq W(\lambda )\;\text{ for all }\;g\in G
$$
and the restriction of ##\rho## on the subspace ##W(\lambda )\subseteq V## is a subrepresentation.

Hint: Don't be so sloppy. Use your definitions, separate both sides of ##A\Rightarrow B## clearly, and trace your dependencies. Maybe your book or professor writes ##W_\lambda ## instead of ##W(\lambda )## but it should be noted: different eigenvalues ##\lambda ## give different eigenspaces ##W(\lambda ).##

Maybe you want to have a look at:
https://www.physicsforums.com/insights/10-math-tips-save-time-avoid-mistakes/
or
https://www.physicsforums.com/insights/how-most-proofs-are-structured-and-how-to-write-them/
 
I think I misunderstood you and chose the wrong definition of ##W.##

Let's define it as ##W_g(\lambda )=\{v\in V\,|\,\rho(g)(v)=\lambda \cdot v\}## for a specific ##g\in G## and not all group elements. What we need is
$$
\rho(G)(W_g(\lambda )) \subseteq W_g(\lambda )
$$
So for a given element ##w\in W_g(\lambda )## we need to show that ##\rho(g)(\rho(h)(w))=\lambda \rho(h)(w).## It is ##\rho(g)(\rho(h)(w))=\rho(g\cdot h)(w)## by the definition of a representation. Since ##w\in W_g(\lambda )## we know that ##\rho(g)(w)=\lambda \cdot w## and we have no idea how ##g## gets from left of ##h## to the right of ##h##.

I thought erroneously that the forall quantifier solves the problem, but that cannot be achieved in general.

Here is where your commutativity comes into play. If ##gh=hg## then
\begin{align*}
\rho(g)(\rho(h)(w))&=\rho(g\cdot h)(w)=\rho(h\cdot g)(w)=\rho(h)(\rho(g)(w))=\rho(h)(\lambda w)=\lambda \rho(h)(w)
\end{align*}
so ##\rho(G)(W_g(\lambda )) \subseteq W_g(\lambda )## for a single fixed ##g##, not all of them, if ##G## is commutative.
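To make this concrete for ##G=\mathbb{R}/\mathbb{Z}## (just an illustration with the obvious two-dimensional rotation action, not necessarily your ##V##): let ##\rho(t)## be rotation by ##2\pi t## on ##\mathbb{C}^2## and fix ##g=t_0.## Then ##v=(1,-i)^\top## is an eigenvector with ##\rho(t_0)(v)=e^{2\pi i t_0}v,## so ##v\in W_g(e^{2\pi i t_0}),## and for any other element ##h=t## we get
$$
\rho(t)(v)=e^{2\pi i t}\cdot v ,
$$
which, being a scalar multiple of ##v,## again lies in ##W_g(e^{2\pi i t_0}).## So the eigenspace of the single fixed ##g## is indeed preserved by every group element, exactly as the commutativity computation above predicts.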
 
This seems like a very long discussion for this problem. I think it is partly because of the way it was written.

Problem: Let ##G## be a commutative group and ##V## an irreducible representation. Show that it is one-dimensional.

Solution: Following the hint, pick an element ##g_0 \in G## and let ##v## be an eigenvector of the corresponding operator with eigenvalue ##\lambda##. Then the eigenspace is invariant because for any ##g\in G## we have ##g_0\cdot (g\cdot v) = g\cdot (g_0\cdot v)=\lambda (g\cdot v)##. Because the representation is irreducible, the eigenspace is all of ##V##. This means that ##g_0## acts by multiplication by a number on all vectors. But ##g_0## was an arbitrary fixed element of the group, so every element acts by multiplication by a scalar. Then any one-dimensional subspace of ##V## is invariant, and since ##V## is irreducible it has to be the whole space, so ##\dim V = 1##.
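As a side remark (not needed for the problem itself): for ##G=\mathbb{R}/\mathbb{Z}## this says every irreducible representation is a homomorphism ##\chi :\mathbb{R}/\mathbb{Z}\longrightarrow GL(1,\mathbb{C})=\mathbb{C}^\times ,## and the continuous ones are exactly the characters
$$
\chi_n(t)=e^{2\pi i n t},\qquad n\in \mathbb{Z}.
$$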
 
