Possible Values for K in Powers of Matrix

In summary, the conversation discusses the possible values of ##k## for an ##n\times n## square matrix ##A## that is not the zero matrix, where ##A^{k-1}## is not the zero matrix but ##A^k## is. The thread explores using an eigenvalue decomposition and the Cayley-Hamilton theorem, and then argues via a decreasing chain of column spaces that ##k## can be at most ##n##. It also discusses linear maps and their relationship to column spaces.
  • #1
Incand

Homework Statement


A square ##n\times n## matrix ##A##, which isn't the zero matrix, has a power ##A^{k-1}## that isn't the zero matrix, while ##A^k## is the zero matrix. What are the possible values of ##k##?

Homework Equations


N/A

The Attempt at a Solution


I'm a bit lost here but I figure that maybe I could write the matrix as an eigenvalue decomposition.
##A = PDP^{-1}## and then I get ##A^k = PD^kP^{-1}##.
Since ##D^k## is diagonal we could say that it's weighting the columns of ##P##. So what happens when we get ##A^k = \mathbf 0## is that some rescaling of the columns in ##P## makes ##PP^{-1} = \mathbf{0}##.

That's the only thing I managed to reason out; as far as actually calculating ##k## goes, I'm clueless where to even begin. Any hints on where to start?
 
Physics news on Phys.org
  • #2
Have you seen the Cayley-Hamilton theorem?
 
  • #3
Incand said:
I'm a bit lost here but I figure that maybe I could write the matrix as an eigenvalue decomposition.
##A = PDP^{-1}## and then I get ##A^k = PD^kP^{-1}##.

The matrix in your OP is not necessarily diagonalizable (and usually isn't except for special cases).
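(Side note: a minimal numerical check of this point, using the standard ##2\times2## nilpotent matrix as a hypothetical example rather than anything from the thread. It is nonzero, squares to zero, and has too few independent eigenvectors for a decomposition ##A = PDP^{-1}## to exist.)

[code]
import numpy as np

# Hypothetical example: the standard 2x2 nilpotent matrix.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

print(A @ A)                     # zero matrix: A^2 = 0 although A != 0 (so k = 2 = n here)
print(np.linalg.eigvals(A))      # [0. 0.] -- 0 is the only eigenvalue
print(np.linalg.matrix_rank(A))  # 1, so the null space (the 0-eigenspace) is 1-dimensional;
                                 # a diagonalizable 2x2 matrix would need 2 independent
                                 # eigenvectors, so A = PDP^{-1} is impossible
[/code]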
 
  • #4
micromass said:
Have you seen the Cayley-Hamilton theorem?
I haven't, but I looked it up on Wikipedia now. Assuming I understand this right, I would have a characteristic equation
##p(\lambda) = \prod_{i=1}^n (\lambda-c_i)## where ##c_i## are some complex constants. The theorem would then give me
##\mathbf 0 = \prod_{i=1}^n (A-c_i I)##
If ##A^k=\mathbf 0## then ##\mathbf 0 = c_k\prod_{i\neq k} (A-c_i I)##.

Edit: Adding to the above, I realize that if ##A^k=\mathbf 0## then so is every higher power of ##A##. And I could write the above as
##\mathbf 0 = \prod_{i=k}^n c_i \cdot \prod_{i=1}^{k-1} (A-c_i I)## and to get the zero matrix, one of ##c_k,\dots,c_n## has to be zero, or ##\prod_{i=1}^{k-1} (A-c_i I)## does?

Not sure if I'm getting anywhere here, am I somewhat on the right track? The question is from an old exam for an introductory linear algebra course covering the basics, so maybe there's a simpler way to look at it.
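(Side note, drawing on facts established later in the thread: once it's known that ##0## is the only eigenvalue of ##A##, the characteristic polynomial is ##p(\lambda) = \lambda^n##, and Cayley-Hamilton gives
[tex]p(A) = A^n = \mathbf 0 ,[/tex]
so ##A^n## is always the zero matrix and hence ##k \le n##.)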
 
  • #5
I guess the first thing to do is to find the eigenvalues of ##A##. Show that ##0## is the only eigenvalue.
 
  • #6
Not sure I follow. If I have
##Av = \lambda v## and left-multiply each side by ##A^{k-1}##, I get
##\mathbf 0 = A^kv = A^{k-1}\lambda v##
We know that ##A^{k-1} \ne \mathbf 0## so either ##\lambda = 0## or I have ##A^{k-1}v = \mathbf 0##?
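
(Side note, a short worked version of this step rather than anything from the thread: if ##Av = \lambda v## with ##v \neq 0##, then
[tex]\mathbf 0 = A^k v = \lambda^k v ,[/tex]
and since ##v \neq 0## this forces ##\lambda^k = 0##, i.e. ##\lambda = 0##. So ##0## is indeed the only eigenvalue of ##A##.)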

Btw, eigenvalues aren't part of the course this is from, although I don't mind using them to solve the problem (I've had a course since then that covered them; I'm just going back to questions I never managed to solve when I took it).
 
  • #7
OK, without eigenvalues then. Can you show that you have the following decreasing sequence of sets:
[tex]V\supseteq \text{Im}(A)\supseteq \text{Im}(A^2)\supseteq \text{Im}(A^3) \supseteq ...\supseteq \text{Im}(A^k) = \{0\}[/tex]
Can you show that all these inclusions are strict? Given that ##V## has ##n## dimensions, what are the possibilities for ##k##?
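
(Side note: a quick numerical illustration of this chain, using the ##4\times4## "shift" matrix as a hypothetical example. The ranks of the powers, i.e. the dimensions of the images, drop by one at each step until they reach zero, so here ##k = n = 4##.)

[code]
import numpy as np

# Hypothetical example: the 4x4 "shift" matrix, which sends
# e2 -> e1, e3 -> e2, e4 -> e3 and e1 -> 0.
n = 4
A = np.diag(np.ones(n - 1), k=1)   # ones on the first superdiagonal

# dim Im(A^j) = rank(A^j); the chain of images shrinks strictly until it hits {0}.
for j in range(1, n + 1):
    print(j, np.linalg.matrix_rank(np.linalg.matrix_power(A, j)))
# prints ranks 3, 2, 1, 0 -- so A^4 = 0 while A^3 != 0, i.e. k = 4 = n
[/code]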
 
  • #8
When you write Im (image?), that's the same as the column space, right?
Doesn't the statement follow from ##AA = \bigg| Aa_1 \; \; Aa_2 \; \; \dots \; \;Aa_n\bigg|##, i.e. each column of ##AA## is a linear combination of the columns of ##A##, and therefore ##C(A^2) \subseteq C(A)##, with the other statements following the same way? And of course ##C(A)## is contained in the vector space it's embedded in.
(Sad to say, our course didn't cover vector spaces either; it was just a 3-week introduction to vectors, matrices, linear transforms, determinants, subspaces, least squares, etc. To me this old exam question is nightmarish compared to the exercises we had in our textbook.)

When you say I should show that they're strict, you mean show ##\text{Im}(A^2) \subset \text{Im}(A) \dots## (but not equal), right? I don't believe that's true. For example, ##\text{rank}(A^2)## could be equal to ##\text{rank}(A)## (or less). Isn't it similar for the column spaces themselves, that they could always be equal?
 
  • #9
Let ##f## be the endomorphism represented by the matrix ##A## in the basis ##{\cal B} = (e_1,...,e_n)##.

What Micromass is telling you is that in order to reach your constraint, you have to remove at least one basis vector every time you compose with ##f##.

Assume that there exists a first index ##1 \le \ell \le n## at which composing with ##f## does not remove at least one vector from the previous image subspace:

## f( \text{span}(e_{i_1},...,e_{i_\ell}) )= \text{span}(e_{i_1},...,e_{i_\ell}) ##

You can show by induction that you will never be able to remove any other vector, and therefore, you will never be able to find any ##k## such that ##( f \circ ... \circ f ) (V) = \text{span}(\emptyset) = \{0\} ## (##k## times)
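
(Side note, a sketch of that induction rather than anything from the thread: write ##W := \text{span}(e_{i_1},...,e_{i_\ell})## and suppose ##f(W) = W## with ##W \neq \{0\}##. Then ##f^{(2)}(W) = f(f(W)) = f(W) = W##, and repeating this gives
[tex]f^{(m)}(W) = W \neq \{0\}\quad\text{for every } m \ge 1 ,[/tex]
so once the image stops shrinking it stays fixed forever and no power of ##A## can be the zero matrix.)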
 
  • #10
Thanks for responding, geoffrey. I'm afraid both you and micromass are a lot more mathematically knowledgeable than me, so while it may seem obvious to you it really isn't to me.

So what you're saying (perhaps?) is that ##f## is a linear map and that somehow the only thing that matters is the span. So if I take ##AA##, the only thing that matters is the number of basis vectors that span the column space of ##A##. And either they will get reduced or stay the same in number. The actual coefficients in the matrix don't matter, only the column space?

The thing I don't understand is how I know that ##f## is only a function of the span (it's defined that way, but why is that true for any/my matrix ##A##)? Is this because it's a linear map, and since ##f(a_1e_1 + \dots + a_ne_n) = a_1f(e_1) + \dots + a_nf(e_n)## with ##f(e_i) = c_i e_i##, I only "lose" a basis vector if I get ##c_i = 0##?
So to get the zero matrix we need at most ##n## steps, i.e. ##k \le n##.
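
(Side note: a small check, with a hypothetical matrix not from the thread, showing that the image can lose more than one dimension in a single step, so ##k## does not have to equal ##n##; the argument only bounds it by ##k \le n##.)

[code]
import numpy as np

# Hypothetical 3x3 example where the image loses two dimensions at once:
# A is nonzero but already A^2 = 0, so k = 2 even though n = 3.
A = np.array([[0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

print(np.linalg.matrix_rank(A))  # 1
print(A @ A)                     # zero matrix, so k = 2 < n = 3
[/code]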
 
  • #11
Incand said:
Thanks for responding, geoffrey. I'm afraid both you and micromass are a lot more mathematically knowledgeable than me, so while it may seem obvious to you it really isn't to me.

Thanks! But Micromass is light-years more knowledgeable and capable than me.

Incand said:
So what you're saying (perhaps?) is that ##f## is a linear map and that somehow the only thing that matters is the span.

Any ##n\times n## square matrix can be interpreted as a linear map from a vector space ##V## of dimension ##n## to itself.
Here, ##f## is a linear map from ##V\rightarrow V##, where ##V = \text{span}(e_1,...,e_n)##.

You know that the matrix of ##f^{(m)} = f\circ ... \circ f## (##m## times) is ## A^m##.

As Micromass said, you have a decreasing sequence of subspaces of ##V## : ## f^{(m)}(V) \subset f^{(m-1)}(V) \subset ... \subset f(V) \subset V ## (prove it).

The question asks you to find the smallest ##m## such that ##A^m = 0_n \iff f^{(m)}(V) = \{0\} ## (prove it).

Micromass told you that these inclusions have to be strict in order to prove your claim. But you did not agree. So in order to convince you, I told you to consider the hypothesis where an inclusion is not strict, i.e. there exists a first ##m_0## for which ##f^{(m_0)}(V) = f^{(m_0-1)}(V)##, i.e. the first time you don't remove vectors from ##f^{(m_0-1)}(V) := \text{span}(e_{i_1},...,e_{i_l})##.

In that case, you can show by induction that ##f^{(k+m_0)}(V) = f^{(m_0-1)}(V) := \text{span}(e_{i_1}...e_{i_l}) \neq \{0\} , \ \forall k\ge 1 ##

Therefore, you will never achieve ##A^m = 0_n \iff f^{(m)}(V) = \{0\} ##

Maybe someone better than me could give you a better explanation
 
  • #12
Incand said:
When you say I should show that they're strict, you mean show ##\text{Im}(A^2) \subset \text{Im}(A) \dots## (but not equal), right? I don't believe that's true. For example, ##\text{rank}(A^2)## could be equal to ##\text{rank}(A)## (or less). Isn't it similar for the column spaces themselves, that they could always be equal?
Are you talking about some generic matrix ##A## or a matrix ##A## that satisfies ##A^k=0##? It's true that there are matrices for which ##\text{rank}(A) = \text{rank}(A^2)##, but then it wouldn't satisfy ##A^k=0##.
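
(Side note filling in why: ##\text{Im}(A^2) \subseteq \text{Im}(A)##, so ##\text{rank}(A) = \text{rank}(A^2)## would force ##\text{Im}(A^2) = \text{Im}(A)##. Applying ##A## again and again then gives
[tex]\text{Im}(A^m) = \text{Im}(A)\quad\text{for every } m \ge 1 ,[/tex]
so ##A^k = \mathbf 0## would force ##\text{Im}(A) = \{0\}##, i.e. ##A = \mathbf 0##.)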
 

