# Homework Help: Powers of matrix

1. Jul 21, 2015

### Incand

1. The problem statement, all variables and given/known data
A square $n\times n$ matrix $A$ that isn't the zero matrix has a power $A^{k-1}$ that isn't the zero matrix, while $A^k$ is the zero matrix. What are the possible values of $k$?

2. Relevant equations
N/A

3. The attempt at a solution
I'm a bit lost here, but I figure that maybe I could write the matrix as an eigenvalue decomposition
$A = PDP^{-1}$, which gives $A^k = PD^kP^{-1}$.
Since $D^k$ is diagonal we could say that it's weighting the columns of $P$. So what happens when we get $A^k = \mathbf 0$ is that some rescaling of the columns of $P$ makes $PD^kP^{-1} = \mathbf{0}$.

That's the only thing I manage to reason out, as far as actually calculating $k$ I'm clueless where to even begin. Any hints on where to start?
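The setup in the problem can be checked numerically. Below is a minimal sketch (the shift matrix is my own illustrative example, not from the thread) of a matrix where $A^{k-1}\ne\mathbf 0$ but $A^k=\mathbf 0$, here with $k=3$:

```python
import numpy as np

# Hypothetical example: the 3x3 "shift" matrix is nilpotent.
# A^2 is nonzero but A^3 is the zero matrix, so k = 3 for this A.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])

A2 = A @ A   # still nonzero
A3 = A2 @ A  # the zero matrix

print(np.any(A2))  # True  (A^2 != 0)
print(np.any(A3))  # False (A^3 == 0)
```

Note this matrix is not diagonalizable, which is why the eigendecomposition approach above runs into trouble.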

2. Jul 21, 2015

### micromass

Have you seen the Cayley-Hamilton theorem?

3. Jul 21, 2015

### micromass

The matrix in your OP is not necessarily diagonalizable (and usually isn't except for special cases).

4. Jul 21, 2015

### Incand

I haven't, but I checked it up on Wikipedia now. Assuming I understand this right, I would have a characteristic equation
$p(\lambda) = \prod_{i=1}^n (\lambda - c_i)$ where the $c_i$ are some complex constants. The theorem would then give me
$\mathbf 0 = \prod_{i=1}^n (A - c_i I)$.
If $A^k=\mathbf 0$ then $\mathbf 0 = c_k\prod_{i\neq k} (A - c_i I)$.

Edit: Adding to the above, I realise that if $A^k=\mathbf 0$ then so is every higher power of $A$. And I could write the above as
$\mathbf 0 = \prod_{i=k}^n c_i \cdot \prod_{i=1}^{k-1} (A - c_i I)$, and to get the zero matrix either one of $c_k, \dots, c_n$ has to be zero or $\prod_{i=1}^{k-1} (A - c_i I)$ does?

Not sure if I'm getting anywhere here; am I somewhat on the right track? The question is from an old exam for an introductory linear algebra course covering the basics, so maybe there's a simpler way to look at it.

Last edited: Jul 21, 2015
5. Jul 21, 2015

### micromass

I guess the first thing to do is to find the eigenvalues of $A$. Show that $0$ is the only eigenvalue.
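This hint can be verified numerically. A sketch (using my own example matrix, not one from the thread): since $A^k v = \lambda^k v = \mathbf 0$ for an eigenpair, every eigenvalue of such a matrix must be $0$.

```python
import numpy as np

# Hypothetical example: a nilpotent (upper-triangular shift) matrix.
# All of its eigenvalues should be 0.
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])

eigvals = np.linalg.eigvals(A)
print(np.allclose(eigvals, 0))  # True
```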

6. Jul 21, 2015

### Incand

Not sure I follow. If I have
$Av = \lambda v$ and left multiply each side by $A^{k-1}$ I get
$\mathbf 0 = A^kv = A^{k-1}\lambda v$
We know that $A^{k-1} \ne \mathbf 0$ so either $\lambda = 0$ or I have $A^{k-1}v = \mathbf 0$?

By the way, eigenvalues aren't part of the course this question is from, although I don't mind using them to solve the problem (I've taken a later course covering them; I'm just going back to questions I never managed to solve at the time).

7. Jul 21, 2015

### micromass

OK, without eigenvalues then. Can you show that you have the following decreasing sequence of sets:
$$V\supseteq \text{Im}(A)\supseteq \text{Im}(A^2)\supseteq \text{Im}(A^3) \supseteq ...\supseteq \text{Im}(A^k) = \{0\}$$
Can you show that all these inclusions are strict? Given that $V$ has $n$ dimensions, what are the possibilities for $k$?
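The strictly decreasing chain of images can be seen numerically by tracking $\dim\text{Im}(A^m) = \text{rank}(A^m)$. A sketch with a made-up $4\times 4$ nilpotent example (not from the thread): the ranks drop by at least one at each step until they hit $0$, which is what forces $k \le n$.

```python
import numpy as np

# Hypothetical 4x4 shift matrix (ones on the superdiagonal).
# dim Im(A^m) = rank(A^m) should strictly decrease: 3, 2, 1, 0.
A = np.diag([1., 1., 1.], k=1)

ranks = [int(np.linalg.matrix_rank(np.linalg.matrix_power(A, m)))
         for m in range(1, 5)]
print(ranks)  # [3, 2, 1, 0]
```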

8. Jul 22, 2015

### Incand

When you write Im (image?), that's the same as the column space, right?
Doesn't the statement follow from $AA = \bigg| Aa_1 \; \; Aa_2 \; \; \dots \; \;Aa_n\bigg|$, i.e. the columns of $A^2$ are linear combinations of the columns of $A$, and therefore $C(A^2) \subseteq C(A)$, with the other inclusions following the same way? And of course $C(A)$ is contained in the vector space it's embedded in.
(Sad to say, our course didn't cover vector spaces either; it was just a 3-week introduction to vectors, matrices, linear transforms, determinants, subspaces, least squares etc. To me this old exam question is nightmarish compared to the exercises in our textbook.)

When you say I should show that they're strict, you mean show $\text{Im}(A^2) \subset \text{Im}(A) \dots$ (but not equal), right? I don't believe that's true. For example, $\text{rank}(A^2)$ could be equal to $\text{rank}(A)$ (or less). Isn't it the same for the column spaces themselves, that they could be equal?

9. Jul 22, 2015

### geoffrey159

Let $f$ be the endomorphism represented by the matrix $A$ in the basis ${\cal B} = (e_1,\dots,e_n)$.

What Micromass is telling you is that in order to reach your constraint, you have to remove at least one basis vector every time you compose with $f$.

Assume that there exists a first index $1 \le \ell \le n$ at which composing with $f$ does not remove at least one vector from the previous image subspace:

$f( \text{span}(e_{i_1},\dots,e_{i_\ell}) )= \text{span}(e_{i_1},\dots,e_{i_\ell})$

You can show by induction that you will then never be able to remove any other vector, and therefore you will never be able to find any $k$ such that $( f \circ \dots \circ f ) (V) = \text{span}(\emptyset) = \{0\}$ ($k$ times).

10. Jul 23, 2015

### Incand

Thanks for responding, geoffrey. I'm afraid both you and micromass are a lot more mathematically knowledgeable than me, so while it may seem obvious to you, it really isn't to me.

So what you're saying (perhaps?) is that $f$ is a linear map and that somehow the only thing that matters is the span. So if I take $AA$, the only thing that matters is the number of basis vectors that span the column space of $A$. Either they get reduced or stay the same in number. The actual coefficients in the matrix don't matter, only the column space?

The thing I don't understand is how I know that $f$ only depends on the span (it's defined that way, but why is that true for any/my matrix $A$). Is this because it's a linear map, so $f(a_1e_1 + \dots + a_ne_n) = a_1f(e_1) + \dots + a_nf(e_n)$ with $f(ce_i) = cf(e_i)$, and I only "lose" a basis vector if $f(e_i) = \mathbf 0$?
So $k \le n$ is required to get the zero matrix.
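The dimension count bounds $k$ by $n$, but $k$ can also be smaller than $n$. A sketch with a made-up example (not from the thread): a $4\times 4$ matrix built from two nilpotent blocks already satisfies $A^2 = \mathbf 0$, so $k = 2$ even though $n = 4$.

```python
import numpy as np

# Hypothetical 4x4 example: two independent 2x2 nilpotent blocks.
# A is nonzero, but A^2 is already the zero matrix, so k = 2 < n = 4.
A = np.zeros((4, 4))
A[0, 1] = 1.0
A[2, 3] = 1.0

print(np.any(A))      # True  (A != 0)
print(np.any(A @ A))  # False (A^2 == 0)
```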

11. Jul 23, 2015

### geoffrey159

Thanks! But Micromass is light-years more knowledgeable and capable than I am.

Any $n\times n$ square matrix can be interpreted as a linear map from a vector space $V$ of dimension $n$ to itself.
Here, $f$ is a linear map $V\rightarrow V$, where $V = \text{span}(e_1,\dots,e_n)$.

You know that the matrix of $f^{(m)} = f\circ ... \circ f$ ($m$ times) is $A^m$.

As Micromass said, you have a decreasing sequence of subspaces of $V$: $f^{(m)}(V) \subseteq f^{(m-1)}(V) \subseteq \dots \subseteq f(V) \subseteq V$ (prove it).

The question asks you to find the smallest $m$ such that $A^m = 0_n \iff f^{(m)}(V) = \{0\}$ (prove it).

Micromass told you that these inclusions have to be strict in order to prove your problem, but you did not agree. So in order to convince you, I told you to consider the hypothesis where some inclusion is not strict, i.e. there exists a first $m_0$ for which $f^{(m_0)}(V) = f^{(m_0-1)}(V)$, i.e. the first time you don't remove vectors from $f^{(m_0-1)}(V) := \text{span}(e_{i_1},\dots,e_{i_\ell})$.

In that case, you can show by induction that $f^{(k+m_0)}(V) = f^{(m_0-1)}(V) := \text{span}(e_{i_1},\dots,e_{i_\ell}) \neq \{0\}$ for all $k\ge 1$.

Therefore, you will never achieve $A^m = 0_n \iff f^{(m)}(V) = \{0\}$

Maybe someone better than me can give you a better explanation.

12. Jul 24, 2015

### vela

Staff Emeritus
Are you talking about some generic matrix $A$, or a matrix $A$ that satisfies $A^k=0$? It's true that there are matrices for which $\text{rank}(A) = \text{rank}(A^2)$, but such a matrix wouldn't satisfy $A^k=0$.
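This point can be illustrated with a made-up example (mine, not from the thread): a projection $P$ satisfies $P^2 = P$, so $\text{rank}(P) = \text{rank}(P^2)$, and then every higher power equals $P$ itself, so $P^k$ can never be the zero matrix unless $P$ was zero to begin with.

```python
import numpy as np

# Hypothetical example: a rank-1 projection onto the x-axis.
# rank(P) == rank(P^2) since P^2 = P, and then P^m = P for all m >= 1.
P = np.array([[1., 0.],
              [0., 0.]])

print(np.linalg.matrix_rank(P) == np.linalg.matrix_rank(P @ P))  # True
print(np.array_equal(np.linalg.matrix_power(P, 5), P))           # True
```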
