
- Thread starter Treadstone 71

- #1


I got the => direction, but I'm having trouble with the backwards direction. Any hints?

- #2

AKG

Science Advisor

Homework Helper


EDIT: I originally made three posts, but I'll put them all in one:

----------------------------------------------------------------

POST 1:

Suppose A is diagonalizable but its order does not divide p-1. Let D = (d_{ij}) be the corresponding diagonal matrix. Then use:

- Fermat's Little Theorem

- the fact that D is diagonal

- the fact that A and D are similar

- then use the division algorithm, together with the assumption that the order of A does not divide p-1, to derive a contradiction which essentially says "if t is the order of A, i.e. if t is the least positive natural number such that A^{t} = 1, then there exists a t' such that 0 < t' < t but A^{t'} = 1"

EDIT TO POST 1: Oops, I guess that's the direction you already proved. I'll have to think some more.
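As a numerical sketch of the Fermat's Little Theorem step above (the prime p = 7 and the diagonal entries below are my own illustrative choices, not from the thread):

```python
# Sketch: for an invertible diagonal matrix D over Z_p, Fermat's Little
# Theorem gives d^(p-1) = 1 (mod p) for each nonzero diagonal entry d,
# hence D^(p-1) = I, i.e. the order of D divides p-1.
p = 7  # illustrative prime; any prime works

def diag_pow(diag, k, p):
    """Entrywise k-th power mod p of a diagonal matrix,
    represented by the list of its diagonal entries."""
    return [pow(d, k, p) for d in diag]

D = [2, 3, 5]  # nonzero diagonal entries mod 7
print(diag_pow(D, p - 1, p))  # -> [1, 1, 1], so D^(p-1) = I
```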

----------------------------------------------------------------

POST 2:

Just throwing out some ideas:

1) the characteristic polynomial of a diagonal matrix splits (its irreducible factors are all linear)

2) matrices whose order divides p-1 form a normal subgroup of GL_{n}(Z_{p}) - maybe the orbit-stabilizer theorem or the class equation can be used here (you want to show that every matrix whose order divides p-1 has a diagonal matrix in its conjugacy class).
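As a hedged sanity check of idea 2 (an example, not a proof): here is a matrix over Z_7 whose order divides p-1 = 6 and which is conjugate to a diagonal matrix. The matrices A and P are illustrative choices of mine, not from the thread:

```python
# Sketch over Z_7 (illustrative prime): A has order dividing p-1, and
# conjugating by P exhibits a diagonal matrix in its conjugacy class.
p = 7

def matmul(A, B, p):
    """Multiply two square matrices mod p."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p
             for j in range(n)] for i in range(n)]

def mat_pow(A, k, p):
    """k-th power of a square matrix mod p (repeated multiplication)."""
    n = len(A)
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    for _ in range(k):
        R = matmul(R, A, p)
    return R

A = [[2, 1], [0, 3]]      # illustrative matrix over Z_7
P = [[1, 1], [0, 1]]
Pinv = [[1, 6], [0, 1]]   # inverse of P mod 7

print(mat_pow(A, p - 1, p))              # -> identity, so ord(A) | p-1
print(matmul(matmul(Pinv, A, p), P, p))  # -> [[2, 0], [0, 3]], diagonal
```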

----------------------------------------------------------------

POST 3:

I'm rusty on the linear algebra, but how about this:

The order of A divides p-1

implies

The minimal polynomial of A divides x^{t} - 1, where t is the order of A

implies

The minimal polynomial of A splits with distinct roots (x^{t} - 1 has t distinct roots in Z_{p} iff t | p-1, since the multiplicative group Z_{p}* is cyclic of order p-1)

implies

The char poly of A splits

implies

A is diagonalizable (I think the relevant theorem is: A is diagonalizable iff its minimal polynomial splits into distinct linear factors).

EDIT TO POST 3: Actually, it wouldn't surprise me if the "implies"s can be changed to "iff"s, but at the same time, it wouldn't surprise me if some of the "implies"s were wrong altogether. It's been well over a year since I did any linear algebra, especially anything to do with diagonalization. And I've never really done any linear algebra over finite fields. So check your theorems in your book, and see if the above proof a) is correct, and b) can be strengthened so the "implies"s can become "iff"s, which would then prove both directions of the theorem simultaneously, and then get back to me about it.
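The splitting step in the chain can be checked numerically. The sketch below (with p = 7 as an illustrative prime of my choosing) counts the roots of x^t - 1 in Z_p: the count equals t exactly when t divides p-1, matching the fact that Z_p* is cyclic of order p-1:

```python
# Sketch: x^t - 1 has exactly gcd(t, p-1) roots in Z_p, so it splits
# into t distinct linear factors precisely when t divides p-1.
p = 7  # illustrative prime

def num_roots(t, p):
    """Count solutions of x^t = 1 among the nonzero residues mod p."""
    return sum(1 for x in range(1, p) if pow(x, t, p) == 1)

print([(t, num_roots(t, p)) for t in range(1, 8)])
# -> [(1, 1), (2, 2), (3, 3), (4, 2), (5, 1), (6, 6), (7, 1)]
# t = 1, 2, 3, 6 divide p-1 = 6 and give t roots; t = 4, 5, 7 give fewer
```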


Last edited:

- #3


- #4

matt grime

Science Advisor

Homework Helper


This now tells us everything we know about the minimal polynomial. Think about what the multiplicities of the eigenvalues can be (think Fermat's Little Theorem). Think back, too, to the other question you posted about multiplicity-one eigenvalues.
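To illustrate the eigenvalue remark (with an example matrix and prime of my own choosing, not from the thread): any eigenvalue of an invertible matrix that lies in Z_p is nonzero, so by Fermat's Little Theorem it satisfies x^{p-1} = 1:

```python
# Sketch: brute-force the eigenvalues of a 2x2 matrix over Z_p by
# solving det(A - x*I) = 0, then verify each satisfies x^(p-1) = 1.
p = 7  # illustrative prime

def eigenvalues_mod_p(A, p):
    """Eigenvalues in Z_p of a 2x2 matrix A, by testing every residue
    against the characteristic equation det(A - x*I) = 0 mod p."""
    (a, b), (c, d) = A
    return [x for x in range(p) if ((a - x) * (d - x) - b * c) % p == 0]

A = [[2, 1], [0, 3]]  # illustrative matrix over Z_7
lams = eigenvalues_mod_p(A, p)
print(lams)                              # -> [2, 3]
print([pow(l, p - 1, p) for l in lams])  # -> [1, 1]
```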
