Understanding Group Orders: A Take Home Problem

Wrench
Ok, here is the situation. I am working on a take-home exam that has a problem involving the order of a group.

Let G be the group of all nonsingular 2×2 matrices (matrices that you can invert), with the operation being ordinary matrix multiplication.

a = the matrix:
|0 -1|
|1 0|

b = the matrix:
|0 1|
|-1 -1|

Now, a has order four. This means that if you keep multiplying a by itself, the powers repeat: a^4 is the identity, so you only ever see the same four elements over and over again. b has order three. However, ab has infinite order.
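For concreteness, a quick check by direct multiplication confirms the first two claims:
$$a^2 = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}, \quad a^4 = I, \qquad b^2 = \begin{pmatrix} -1 & -1 \\ 1 & 0 \end{pmatrix}, \quad b^3 = I.$$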

This is a concept that I have a little bit of trouble getting my head around. If we are talking about cyclic groups (I assume that we are, but if we aren't, tell me how), then an element with infinite order seems rather ridiculous, because there is no way that you can keep multiplying till you get tired, your mother calls you downstairs for dinner, or you reach infinity (in which case you have lost it).

All in all, I am a little confused by this problem and I wanted to know if anyone could point me in the right direction. Don't do the whole thing, because I need the mental practice. I thank you in advance for your help.
 
The order being infinite does not mean that if you multiply the element by itself an infinite number of times (something which does not make sense anyway, at this stage) you get the identity. It means that the order is not finite, and that is all.

And you aren't necessarily talking about cyclic groups. A cyclic group is generated by a single element, so it is abelian; the group of invertible matrices isn't even abelian, so it cannot be cyclic.
 
Oh... I think I sort of missed something. What I am trying to do is show that ab has infinite order. The orders of a and b are easy enough to show through regular computation. I am having trouble with showing that ab has infinite order. In all honesty, it feels embarrassing to even post this here, because there is a big part of me that hates being stumped...

Actually, let me rephrase that.
I hate being stumped and not having any idea whether I will find an answer. There is a difference, for me. So yeah, once again, I appreciate it if anyone can help me get my head on straight.
 
I was wondering if you were going to say what the question on the take home was.

Suppose that a matrix has finite order, that is, X^n = I for some n. What must you then know about its minimal/characteristic polynomial? What about the eigenvalues of X? What about diagonalizability?
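A sketch of how those pieces fit together (just expanding the hint, working over ##\mathbb{C}##): if ##X^n = I##, then the minimal polynomial of ##X## divides
$$x^n - 1,$$
which has ##n## distinct roots, the ##n##th roots of unity. A matrix whose minimal polynomial has distinct roots is diagonalizable, so ##X## is diagonalizable and every eigenvalue of ##X## is a root of unity.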

Edit: having just worked out what ab is, why don't you just work out what the nth power of it is and show it can never be the identity? You know about Jordan normal form, right? Anyway, just look at ab, now square it (the power two), now cube it (the power three); notice a pattern between the answer and the word following "power" in the brackets? I reckon you can prove something there now (induction, right?).
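If you want a quick numerical sanity check of that pattern before writing the proof, here is a minimal sketch using NumPy, with a and b as given above:

import numpy as np

# the two matrices from the problem
a = np.array([[0, -1], [1, 0]])
b = np.array([[0, 1], [-1, -1]])

ab = a @ b  # work out the product first

# print the first few powers of ab and watch one particular entry
for n in range(1, 5):
    print(np.linalg.matrix_power(ab, n))

One entry keeps changing with n in a very regular way, which is exactly the pattern the induction should capture.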
 
Thanks, matt grime. I sort of kicked that idea around in my head, but I kept coming back to it and trashing it over and over again in my mind. I finally wrote something that made sense, and it is thanks to you. I hope someday I can return the favor.
 