# Finding the kernel of an action


## Homework Statement

Let ##\mathbb{F}_3## denote the field with 3 elements and let ##V = \mathbb{F}_3^2##. Let ##\alpha, \beta, \gamma, \delta## be the four one-dimensional subspaces of ##V## spanned by ##(1,0), (0,1), (1,1)## and ##(1,-1)## respectively. Let ##\operatorname{GL}_2 (\mathbb{F}_3)## act on ##\{ \alpha, \beta, \gamma, \delta \}## by matrix multiplication.

Find the kernel of the homomorphism corresponding to this action.

## The Attempt at a Solution

So I don't think this problem will be very difficult, but I have a confusion. How exactly does the general linear group act on those subspaces? If they were particular elements, it would be clear how it acts on them by matrix multiplication. But I'm not seeing how this is done when they are subspaces.

Mr Davis 97 said:

## Homework Statement

Let ##\mathbb{F}_3## denote the field with 3 elements and let ##V = \mathbb{F}_3^2##. Let ##\alpha, \beta, \gamma, \delta## be the four one-dimensional subspaces of ##V## spanned by ##(1,0), (0,1), (1,1)## and ##(1,-1)## respectively. Let ##\operatorname{GL}_2 (\mathbb{F}_3)## act on ##\{ \alpha, \beta, \gamma, \delta \}## by matrix multiplication.

Find the kernel of the homomorphism corresponding to this action.

## The Attempt at a Solution

So I don't think this problem will be very difficult, but I have a confusion. How exactly does the general linear group act on those subspaces? If they were particular elements, it would be clear how it acts on them by matrix multiplication. But I'm not seeing how this is done when they are subspaces.
I also had to do some scribbling to understand it. Let's forget about the field and the vector space for a moment. A group action of ##G## on a set ##X## is a homomorphism ##\varphi\, : \,G \longrightarrow GL(X)## defined by ##\varphi(g)(x)=g.x##. Now we have ##G=GL_2(\mathbb{F}_3)## and ##X## the set of all proper, nontrivial subspaces of ##V=\mathbb{F}_3^2##. Since a regular matrix doesn't change dimension, and ##X## contains all one-dimensional subspaces, we have ##g.x \in X## for all ##g\in GL_2(\mathbb{F}_3)\, , \,x \in X =\{\,\alpha,\beta,\gamma,\delta \,\}##.

Now ##g\in \operatorname{ker}\varphi## means ##\varphi(g)(x)=g\cdot x = 1.x = x## for all ##x \in \{\,(1,0),(0,1),(1,1),(1,2)\,\}## (note that ##(1,2)=(1,-1)## in ##\mathbb{F}_3##).

Alternatively: Look up Schur's Lemma.
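Since everything here is finite, the kernel can also be found by brute force. A minimal Python sketch (all helper names are mine), representing each of the four subspaces as the set of its three vectors:

```python
from itertools import product

P = 3  # working over F_3

def mat_vec(m, v):
    # multiply a 2x2 matrix (tuple of rows) by a vector, mod 3
    return ((m[0][0]*v[0] + m[0][1]*v[1]) % P,
            (m[1][0]*v[0] + m[1][1]*v[1]) % P)

def det(m):
    return (m[0][0]*m[1][1] - m[0][1]*m[1][0]) % P

# the four lines alpha, beta, gamma, delta as sets of vectors
gens = [(1, 0), (0, 1), (1, 1), (1, 2)]  # (1,2) = (1,-1) in F_3
lines = [frozenset(((s*x) % P, (s*y) % P) for s in range(P)) for x, y in gens]

def fixes_all_lines(m):
    # m is in the kernel iff it maps every line onto itself (setwise)
    return all(frozenset(mat_vec(m, v) for v in line) == line for line in lines)

all_matrices = [((a, b), (c, d)) for a, b, c, d in product(range(P), repeat=4)]
GL2 = [m for m in all_matrices if det(m) != 0]

kernel = [m for m in GL2 if fixes_all_lines(m)]
print(len(GL2))  # 48
print(kernel)    # I and 2I (= -I in F_3)
```

This confirms that the kernel is exactly ##\{\pm I_2\}##, though of course a proof still has to argue why, not just enumerate.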

Mr Davis 97 said:

## Homework Statement

Let ##\mathbb{F}_3## denote the field with 3 elements and let ##V = \mathbb{F}_3^2##. Let ##\alpha, \beta, \gamma, \delta## be the four one-dimensional subspaces of ##V## spanned by ##(1,0), (0,1), (1,1)## and ##(1,-1)## respectively. Let ##\operatorname{GL}_2 (\mathbb{F}_3)## act on ##\{ \alpha, \beta, \gamma, \delta \}## by matrix multiplication.

Find the kernel of the homomorphism corresponding to this action.

## The Attempt at a Solution

So I don't think this problem will be very difficult, but I have a confusion. How exactly does the general linear group act on those subspaces? If they were particular elements, it would be clear how it acts on them by matrix multiplication. But I'm not seeing how this is done when they are subspaces.
I don't have an answer as far as how the general linear group acts on those subspaces, but here's a thought on how you might get started. Your first subspace, the set of elements of ##V## spanned by (1,0), consists of {(1, 0), (0, 0), (-1, 0)}; i.e., the elements that are scalar multiples of (1, 0). The transformation that maps these elements to (0, 0) is the projection onto the second coordinate axis. Possibly it would help to analyze each of the other subspaces to see what kind of transformation maps them to the zero element in ##\mathbb F_3 \times \mathbb F_3##.

Okay. So ##\alpha = \{(1,0),(0,0),(-1,0)\}##, ##\beta = \{(0,1),(0,0),(0,-1)\}##, ##\gamma = \{(1,1),(0,0),(-1,-1)\}## and ##\delta = \{(1,-1),(0,0),(-1,1)\}##. Now, for an invertible matrix to fix all of these sets, it seems clear that it must be either the identity matrix or the negative of the identity matrix. So it seems that if ##\varphi## is the homomorphism corresponding to this action, then ##\ker (\varphi) = \{\pm I_2\}##. But is what I've stated sufficient for a proof? It's just that it seems kind of obvious...

Mr Davis 97 said:
But is what I've stated sufficient for proof?
It is not sufficient and not really obvious. If you quote Schur's Lemma, you must find a formulation which fits here and explain why it is applicable. E.g. some books assume ##\operatorname{char}\mathbb{F}=0##, algebraic closure, or irreducibility of the action, so it has to be checked whether the quoted statement is applicable.

I think, and from what I've read you do as well, that we have to consider ##g.\alpha = \alpha\, , \,g.\beta=\beta\, , \,## etc. as one-dimensional spaces, i.e. not elementwise but subspacewise. I'm afraid you will have to solve ##\begin{bmatrix}a&b\\c&d\end{bmatrix}\cdot \begin{bmatrix}1\\0 \end{bmatrix} =\lambda \begin{bmatrix}1\\0\end{bmatrix}## and so on for every direction.
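Carried out, these eigenvector conditions are quick (a sketch, with ##\lambda_i## denoting the respective eigenvalues):

\begin{align*} \begin{bmatrix}a&b\\c&d\end{bmatrix} \begin{bmatrix}1\\0\end{bmatrix} &= \lambda_1 \begin{bmatrix}1\\0\end{bmatrix} &&\Longrightarrow\ a=\lambda_1\, , \ c=0\\ \begin{bmatrix}a&b\\c&d\end{bmatrix} \begin{bmatrix}0\\1\end{bmatrix} &= \lambda_2 \begin{bmatrix}0\\1\end{bmatrix} &&\Longrightarrow\ b=0\, , \ d=\lambda_2\\ \begin{bmatrix}a&b\\c&d\end{bmatrix} \begin{bmatrix}1\\1\end{bmatrix} &= \lambda_3 \begin{bmatrix}1\\1\end{bmatrix} &&\Longrightarrow\ a=\lambda_3=d \quad (\text{using } b=c=0) \end{align*}

so any matrix in the kernel is ##\lambda I## with ##\lambda \in \mathbb{F}_3^* = \{1,2\} = \{\pm 1\}##; the fourth direction adds no new constraint.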

fresh_42 said:
It is not sufficient and not really obvious. If you quote Schur's Lemma, you must find a formulation which fits here and explain why it is applicable. E.g. some books assume ##\operatorname{char}\mathbb{F}=0##, algebraic closure, or irreducibility of the action, so it has to be checked whether the quoted statement is applicable.

I think, and from what I've read you do as well, that we have to consider ##g.\alpha = \alpha\, , \,g.\beta=\beta\, , \,## etc. as one-dimensional spaces, i.e. not elementwise but subspacewise. I'm afraid you will have to solve ##\begin{bmatrix}a&b\\c&d\end{bmatrix}\cdot \begin{bmatrix}1\\0 \end{bmatrix} =\lambda \begin{bmatrix}1\\0\end{bmatrix}## and so on for every direction.
Suppose that ##A\in\ker (\varphi)##. Let ##A = \begin{bmatrix}a&b\\c&d\end{bmatrix}##. Then by applying ##A## to each of the 4 subspaces and insisting that the subspaces be fixed, we find that

\begin{align*} \left\{\begin{bmatrix}a\\c \end{bmatrix}, \begin{bmatrix}0\\0 \end{bmatrix}, \begin{bmatrix}-a\\-c \end{bmatrix}\right\} &= \left\{\begin{bmatrix}1\\0 \end{bmatrix}, \begin{bmatrix}0\\0 \end{bmatrix}, \begin{bmatrix}-1\\0 \end{bmatrix}\right\}\\ \left\{\begin{bmatrix}b\\d \end{bmatrix}, \begin{bmatrix}0\\0 \end{bmatrix}, \begin{bmatrix}-b\\-d \end{bmatrix}\right\} &= \left\{\begin{bmatrix}0\\1 \end{bmatrix}, \begin{bmatrix}0\\0 \end{bmatrix}, \begin{bmatrix}0\\-1 \end{bmatrix}\right\}\\ \left\{\begin{bmatrix}a+b\\c+d \end{bmatrix}, \begin{bmatrix}0\\0 \end{bmatrix}, \begin{bmatrix}-a-b\\-c-d \end{bmatrix}\right\} &= \left\{\begin{bmatrix}1\\1 \end{bmatrix}, \begin{bmatrix}0\\0 \end{bmatrix}, \begin{bmatrix}-1\\-1 \end{bmatrix}\right\}\\ \left\{\begin{bmatrix}a-b\\c-d \end{bmatrix}, \begin{bmatrix}0\\0 \end{bmatrix}, \begin{bmatrix}-a+b\\-c+d \end{bmatrix}\right\} &= \left\{\begin{bmatrix}1\\-1 \end{bmatrix}, \begin{bmatrix}0\\0 \end{bmatrix}, \begin{bmatrix}-1\\1 \end{bmatrix}\right\}\\ \end{align*}

The first condition implies that ##c=0,a=\pm 1##. The second condition implies that ##b=0,d=\pm 1##. The third condition implies that ##a=1\text{ and }d=1## or ##a=-1\text{ and }d=-1##, which is also what the fourth condition implies.

So we see that the only values satisfying all four conditions are ##a=1, d=1, b=0, c=0## or ##a=-1, d=-1, b=0, c=0##, which correspond to the identity matrix and its additive inverse.

Mr Davis 97 said:
... this corresponds to the identity matrix and its additive inverse.
The usual wording is: all nonzero scalar multiples of the identity matrix, i.e. ##Z(GL_n(\mathbb{F}))=\mathbb{F}^*\cdot \operatorname{id}\,.##

fresh_42 said:
The usual wording is: all nonzero scalar multiples of the identity matrix, i.e. ##Z(GL_n(\mathbb{F}))=\mathbb{F}^*\cdot \operatorname{id}\,.##
So does this argument work? It seems very ad hoc but seems to work. Is there a better way?

If not, then one more question. How would I show that this map ##\varphi## is surjective? Any initial tip would help, since I tried to get started but can't seem to find an avenue using the definition of surjectivity.

Mr Davis 97 said:
So does this argument work? It seems very ad hoc but seems to work. Is there a better way?
Which argument? I haven't checked your calculations since the result was correct, and I haven't checked to what extent the statement of the exercise is indeed equivalent to Schur's Lemma or the statement about the center, but they all play in the same league with varying setups. I even know of a formulation for Lie algebras and their representations. Multiples of the identity are automatically central, and ##GL_n## is large enough to exclude other possibilities. That's the core of Schur's Lemma.
If not, then one more question. How would I show that this map ##\varphi## is surjective? Any initial tip would help, since I tried to get started but can't seem to find an avenue using the definition of surjectivity.
##\varphi## doesn't have to be surjective (for an arbitrary action), but I guess here it is. You can always find a regular matrix sending one given vector onto another, so ##\varphi## is surjective on the generators of ##\{\,\alpha,\beta,\gamma,\delta\,\}## and thus also on their multiples.

fresh_42 said:
Which argument? I haven't checked your calculations since the result was correct, and I haven't checked to what extent the statement of the exercise is indeed equivalent to Schur's Lemma or the statement about the center, but they all play in the same league with varying setups. I even know of a formulation for Lie algebras and their representations. Multiples of the identity are automatically central, and ##GL_n## is large enough to exclude other possibilities. That's the core of Schur's Lemma.

##\varphi## doesn't have to be surjective (for an arbitrary action), but I guess here it is. You can always find a regular matrix sending one given vector onto another, so ##\varphi## is surjective on the generators of ##\{\,\alpha,\beta,\gamma,\delta\,\}## and thus also on their multiples.
Well, I'm trying in particular to show that the homomorphism ##\varphi : \operatorname{GL}_2(\mathbb{F}_3) \to S_4## is surjective, so that we can conclude that ##\operatorname{GL}_2(\mathbb{F}_3)/\{\pm I_2\} \cong S_4##. So in your argument for surjectivity, I'm not seeing where the symmetric group fits in. Basically, I'm not sure how to associate with every element of ##S_4## a matrix from ##\operatorname{GL}_2(\mathbb{F}_3)##.

Mr Davis 97 said:
Well, I'm trying in particular to show that the homomorphism ##\varphi : \operatorname{GL}_2(\mathbb{F}_3) \to S_4## is surjective, so that we can conclude that ##\operatorname{GL}_2(\mathbb{F}_3)/\{\pm I_2\} \cong S_4##. So in your argument for surjectivity, I'm not seeing where the symmetric group fits in. Basically, I'm not sure how to associate with every element of ##S_4## a matrix from ##\operatorname{GL}_2(\mathbb{F}_3)##.
I get to a different conclusion, but maybe I've overlooked something:

We have ##A_4=\operatorname{Sym}(\{\,\alpha,\beta,\gamma,\delta\,\})/\mathbb{Z}_2=\operatorname{Sym}(X)/\mathbb{Z}_2\stackrel{(*)}{=}GL(X)\,.##
Now ##\varphi \, : \,GL_2(\mathbb{F}_3) \longrightarrow GL(X)## is surjective ##(*)## with kernel ##\mathbb{F}^*_3 \cdot I##.
So ##GL_2(\mathbb{F}_3)/\mathbb{F}^*_3 \cdot I \cong GL(X) \cong A_4\,.##

(*) ##(\alpha,\beta)## can be mapped on every other pair of distinct subspaces, which are ##2 \cdot \binom{4}{2}=12## ordered pairs all in all. Once their images are determined, the images of ##\gamma =\alpha+\beta## and ##\delta=\alpha+2\beta## will be as well. That's why only half of the permutations in ##S_4## count: We cannot have ##(\alpha,\beta) \longmapsto (\alpha,\beta)## and simultaneously map ##(\gamma, \delta) \longmapsto (\delta, \gamma)\,.##

fresh_42 said:
Now ##g\in \operatorname{ker}\varphi## means ##\varphi(g)(x)=g\cdot x = 1.x = x## for all ##x \in \{\,(1,0),(0,1),(1,1),(1,2)\,\}##.
I admit it's been many years since I studied modern algebra, but the use of the term "kernel" here seems odd. In other contexts, the kernel of a transformation is the set of things in the domain space that are mapped to whatever passes for the zero vector in the range space. As described above, ##\varphi(g)(x)=g\cdot x = 1.x = x## seems to be an identity transformation. I'm not sure how to parse ##\varphi(g)(x)##.

Mark44 said:
I admit it's been many years since I studied modern algebra, but the use of the term "kernel" here seems odd. In other contexts, the kernel of a transformation is the set of things in the domain space that are mapped to whatever passes for the zero vector in the range space. As described above, ##\varphi(g)(x)=g\cdot x = 1.x = x## seems to be an identity transformation. I'm not sure how to parse ##\varphi(g)(x)##.
The kernel consists of all elements which are mapped to the neutral element. That's ##0## for vector spaces and additive groups, and ##1## for multiplicative groups. An action of a group ##G## on a set ##X## is a map ##G \times X \longrightarrow X## for which ##1.x = x## and ##g.(h.x) = (g\cdot h).x## hold. But this is exactly what defines a homomorphism ##\varphi## between groups, namely from ##G## to ##GL(X)##, the group of bijections on ##X##. The linearity suggested by ##GL## is misleading here; strictly it should be ##\operatorname{Sym}(X)##. But the set ##X## is often a linear space, so ##GL(X)## is a common notation. In the above example the set ##X## is more a lattice than a linear space, but as it consists of linear subspaces, which are determined by their generating vectors, and ##G## operates via matrix multiplication, the distinction is more of a logical nature.

So instead of writing ##g.x## one can just as well write ##\varphi(g)(x)##. This has the advantage that the rules for ##'.'## are encoded in ##\varphi##: ##\varphi(g)## is a bijection on ##X##, so we have ##\varphi(g)(x) = g.x \in X##.

fresh_42 said:
I get to a different conclusion, but maybe I've overlooked something:

We have ##A_4=\operatorname{Sym}(\{\,\alpha,\beta,\gamma,\delta\,\})/\mathbb{Z}_2=\operatorname{Sym}(X)/\mathbb{Z}_2\stackrel{(*)}{=}GL(X)\,.##
Now ##\varphi \, : \,GL_2(\mathbb{F}_3) \longrightarrow GL(X)## is surjective ##(*)## with kernel ##\mathbb{F}^*_3 \cdot I##.
So ##GL_2(\mathbb{F}_3)/\mathbb{F}^*_3 \cdot I \cong GL(X) \cong A_4\,.##

(*) ##(\alpha,\beta)## can be mapped on every other pair of distinct subspaces, which are ##2 \cdot \binom{4}{2}=12## ordered pairs all in all. Once their images are determined, the images of ##\gamma =\alpha+\beta## and ##\delta=\alpha+2\beta## will be as well. That's why only half of the permutations in ##S_4## count: We cannot have ##(\alpha,\beta) \longmapsto (\alpha,\beta)## and simultaneously map ##(\gamma, \delta) \longmapsto (\delta, \gamma)\,.##
The problem is number (3) here: https://www.math.unl.edu/graduate/exams/quals/algebra/817-818January_2001.pdf

So I'm pretty sure that it must be the case that ##\operatorname{GL}_2(\mathbb{F}_3)/\{\pm I_2\} \cong S_4##. I just don't know how to show that ##\varphi## is surjective.

Mr Davis 97 said:
The problem is number (3) here: https://www.math.unl.edu/graduate/exams/quals/algebra/817-818January_2001.pdf

So I'm pretty sure that it must be the case that ##\operatorname{GL}_2(\mathbb{F}_3)/\{\pm I_2\} \cong S_4##. I just don't know how to show that ##\varphi## is surjective.
Unfortunately, it is not explained how ##G## actually operates. Matrix multiplication means that the elements of ##G## transform vectors. Now how does it transform subspaces, if not via ##g.\alpha = g \cdot (1,0) \cdot \mathbb{F}\,?## But then ##g.\alpha\, , \,g.\beta## already determine ##g##, as they span the entire vector space; likewise ##g.\gamma = g.\alpha +g.\beta## and ##g.\delta = g.\alpha -g.\beta\,.## However, this means that ##(\alpha,\beta,\gamma,\delta) \longmapsto (\alpha,\beta,\delta,\gamma)## is an impossible permutation, i.e. although an element of ##S_4## it is not in the image of ##\varphi##. Try to find a matrix which leaves ##\alpha## and ##\beta## invariant but not ##\gamma## and ##\delta##, and still represents this mapping by multiplication by a regular ##2 \times 2## matrix.

Perhaps it is a trap. If nothing else works, count the elements in ##GL_2(\mathbb{F}_3)\, : \, 24 \text{ or } 48\,?##

Edit: I counted them, and you are right. There are ##3^4=81## possible matrices and ##5^2+4\cdot 2=33## singular ones, which means ##|GL_2(\mathbb{F}_3)|=48## and ##GL_2(\mathbb{F}_3)/\mathbb{F}^*_3 \cong S_4##. So where was I wrong? Which matrix transforms ##(\alpha,\beta,\gamma,\delta) \longmapsto (\alpha,\beta,\delta,\gamma)\,?##

Edit 2: I found my mistake. Did you?
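For anyone checking along, the whole picture can be verified by enumeration. A small Python sketch (helper names are mine) that counts ##GL_2(\mathbb{F}_3)##, computes the image of ##\varphi## inside ##S_4##, and exhibits a matrix realizing the supposedly impossible permutation:

```python
from itertools import product

P = 3  # working over F_3

def mat_vec(m, v):
    # multiply a 2x2 matrix (tuple of rows) by a vector, mod 3
    return ((m[0][0]*v[0] + m[0][1]*v[1]) % P,
            (m[1][0]*v[0] + m[1][1]*v[1]) % P)

# the four lines alpha, beta, gamma, delta as sets of vectors
gens = [(1, 0), (0, 1), (1, 1), (1, 2)]  # (1,2) = (1,-1) in F_3
lines = [frozenset(((s*x) % P, (s*y) % P) for s in range(P)) for x, y in gens]

def perm_of(m):
    # the permutation of (alpha, beta, gamma, delta) induced by m
    return tuple(lines.index(frozenset(mat_vec(m, v) for v in line))
                 for line in lines)

GL2 = [((a, b), (c, d)) for a, b, c, d in product(range(P), repeat=4)
       if (a*d - b*c) % P != 0]

image = {perm_of(m) for m in GL2}
print(len(GL2))    # 48
print(len(image))  # 24, i.e. all of S_4
# diag(1,2) fixes alpha and beta but swaps gamma and delta:
print(perm_of(((1, 0), (0, 2))))  # (0, 1, 3, 2)
```

The point is that ##\operatorname{diag}(1,2)## fixes the lines ##\alpha## and ##\beta## but with different eigenvalues, so the image of ##\gamma = \langle (1,1)\rangle## is not determined by the images of ##\alpha## and ##\beta## as subspaces: ##(1,1) \mapsto (1,2)##, which spans ##\delta##.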
