How can the Jordan form help with finding high powers of a matrix?


Homework Help Overview

The discussion revolves around the computation of high powers of a matrix, specifically exploring how the Jordan form can facilitate this process. The original poster presents a matrix A and inquires about expressing A^n in terms of its entries, raising questions about patterns in the resulting expressions.

Discussion Character

  • Exploratory, Conceptual clarification, Mathematical reasoning

Approaches and Questions Raised

  • Participants discuss the simplicity of finding high powers of matrices when they are diagonal or triangular. There are inquiries about how to express a matrix in diagonal form and the implications of diagonalization. Some participants introduce recursion relations for the entries of A^n and relate them to the theory of difference equations. Others mention the significance of eigenvalues in this context.

Discussion Status

The conversation is active, with various approaches being explored, including the use of recursion and the Jordan form. Some participants suggest that while not all matrices can be diagonalized, they can often be transformed into Jordan Normal Form, which may simplify the computation of high powers. There is no explicit consensus, but several participants provide insights and guidance on the topic.

Contextual Notes

Participants note that not every matrix can be diagonalized, which leads to the necessity of considering Jordan Normal Form. There are indications that some participants may lack familiarity with diagonalization, which raises questions about the expectations for understanding these concepts in the context of the problem.

UrbanXrisis
I was just wondering if there is a way to show, using variables, this:

say:

A = \left(\begin{array}{cc}a&b \\ c&d \end{array}\right)

A^n

I was wondering if there is any way to show A^n as a formula of entries? just curious..

A^2 = \left(\begin{array}{cc}a^2+bc&ab+bd \\ ca+dc&cb+d^2 \end{array}\right)

the next top-left entry (in A^3) is a^3+cab+bca+cdb = a^3+2abc+bcd, which doesn't seem to relate to the previous matrix in any obvious way... is there a theme to all this?
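As a quick numeric sanity check of those entry formulas (a sketch, not from the thread itself; the sample entries a=2, b=3, c=5, d=7 are arbitrary):

```python
def matmul(X, Y):
    """2x2 matrix product."""
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

a, b, c, d = 2, 3, 5, 7              # arbitrary sample entries
A = [[a, b], [c, d]]

A2 = matmul(A, A)
# entries of A^2 as given in the post
assert A2 == [[a*a + b*c, a*b + b*d], [c*a + d*c, c*b + d*d]]

A3 = matmul(A2, A)
# top-left entry of A^3: a^3 + 2abc + bcd
assert A3[0][0] == a**3 + 2*a*b*c + b*c*d
```

Both assertions pass, so the formulas are right; the point of the thread is that these expressions grow messier with each power unless the matrix has special structure.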
 
Nothing at all nice in general, but it's extremely simple if A is diagonal (triangular will be nicer as well). This is one reason to like diagonalizable matrices- you can express high powers easily.
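For instance, if D is diagonal, D^n just raises each diagonal entry to the n-th power. A minimal Python sketch (the diagonal entries 3 and 5 are example values):

```python
def diag_power(diag, n):
    """n-th power of a diagonal matrix, given its diagonal entries."""
    return [x**n for x in diag]

def matmul2(X, Y):
    """2x2 matrix product, for the slow comparison path."""
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

D = [[3, 0], [0, 5]]
P = D
for _ in range(9):                   # compute D^10 by repeated multiplication
    P = matmul2(P, D)

# the fast path agrees with the slow path
assert [P[0][0], P[1][1]] == diag_power([3, 5], 10)
```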
 
how can the matrix be expressed as a diagonal matrix?
 
Let

A^n = \left (\begin{array}{cc}a_n&b_n\\c_n&d_n\end{array}\right )

So a_1 = a, b_1 = b, c_1 = c, d_1 = d. We get four simultaneous recursions:

a_n = a\,a_{n-1} + b\,c_{n-1}
b_n = a\,b_{n-1} + b\,d_{n-1}
c_n = c\,a_{n-1} + d\,c_{n-1}
d_n = c\,b_{n-1} + d\,d_{n-1}
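The four recursions can be iterated directly; here is a short Python sketch (with arbitrary sample entries, not from the thread) checking them against plain matrix multiplication:

```python
a, b, c, d = 1, 2, 3, 4                 # arbitrary sample entries
an, bn, cn, dn = a, b, c, d             # n = 1
for _ in range(4):                      # advance the recursions to n = 5
    an, bn, cn, dn = (a*an + b*cn, a*bn + b*dn,
                      c*an + d*cn, c*bn + d*dn)

# direct computation of A^5 for comparison
A = [[a, b], [c, d]]
P = A
for _ in range(4):
    P = [[sum(P[i][k]*A[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

assert P == [[an, bn], [cn, dn]]
```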

We can view these four recursions as two pairs of recursion relations. Let E be the shift operator on sequences, defined by E(a_n) = a_{n+1}. Then we get:

(E-a)a_n + (-b)c_n = 0
(E-d)c_n + (-c)a_n = 0

(E-a)b_n + (-b)d_n = 0
(E-d)d_n + (-c)b_n = 0

Let's look at the first pair. We let -c operate on both sides of the first equation, and let (E-a) operate on both sides of the second, giving:

(-c)(E-a)a_n + bc\,c_n = 0
(E-a)(E-d)c_n + (E-a)(-c)a_n = 0

(Here 0 means the zero sequence, and (E-x)0 is again 0.) Subtracting the first equation from the second gives:

(E-a)(E-d)c_n - bc\,c_n = 0
(E^2 - \mathrm{tr}(A)E + \det(A))c_n = 0

Let T = tr(A)/2, D = det(A). Then we get:

c_n = \alpha _c (T + \sqrt{T^2 - D})^n + \beta _c (T - \sqrt{T^2 - D})^n

where \alpha _c,\, \beta _c are undetermined coefficients, which can be determined since we can compute by hand c_1 = c and c_2 = c(a+d) = 2cT. This is just like solving a second-order linear homogeneous constant-coefficient differential equation, where you end up with two undetermined coefficients fixed by the initial/boundary values. In fact, this is part of what is called the theory of difference equations, which has strong similarities in many respects to the theory of differential equations (at least in what I've seen of it). Look up the theory of difference equations if you need to know more.

a_n, b_n, and d_n can all be determined in a similar manner.
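To see the closed form in action, here is a Python sketch (assuming distinct real eigenvalues, i.e. T^2 > D; the entries a=1, b=2, c=3, d=4 are arbitrary sample values) that solves for \alpha_c, \beta_c from c_1 and c_2 and checks c_5 against the recursion:

```python
import math

a, b, c, d = 1.0, 2.0, 3.0, 4.0    # arbitrary sample entries with T^2 > D
T = (a + d) / 2                     # half the trace
D = a*d - b*c                       # determinant
s = math.sqrt(T*T - D)
lam1, lam2 = T + s, T - s           # the two eigenvalues

# Solve the 2x2 linear system for the undetermined coefficients:
#   alpha*lam1   + beta*lam2   = c        (c_1 = c)
#   alpha*lam1^2 + beta*lam2^2 = 2*c*T    (c_2 = c(a+d))
det = lam1*lam2**2 - lam2*lam1**2
alpha = (c*lam2**2 - lam2*2*c*T) / det
beta = (lam1*2*c*T - c*lam1**2) / det

def c_closed(n):
    """Closed form c_n = alpha*lam1^n + beta*lam2^n."""
    return alpha*lam1**n + beta*lam2**n

# Check against the recursion c_n = c*a_{n-1} + d*c_{n-1}
an, cn = a, c
for _ in range(4):                  # advance from n = 1 to n = 5
    an, cn = a*an + b*cn, c*an + d*cn
assert abs(c_closed(5) - cn) < 1e-6
```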
 
By the way, observe that T \pm \sqrt{T^2-D} are just the eigenvalues of the matrix A.
 
UrbanXrisis said:
how can the matrix be expressed as a diagonal matrix?

Matrix diagonalization will be in any linear algebra text.
 
If M = SDS^{-1}, then M^2 = (SDS^{-1})(SDS^{-1}) = SD(S^{-1}S)DS^{-1} = SDIDS^{-1} = SD^2S^{-1}. In general, if M = SDS^{-1}, then M^n = SD^nS^{-1}.
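A small Python sketch of this identity (the matrix M = [[2,1],[1,2]] and its eigenvector matrix S are example values worked out by hand, not from the thread):

```python
def matmul(X, Y):
    """Square matrix product."""
    n = len(X)
    return [[sum(X[i][k]*Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

M = [[2, 1], [1, 2]]                  # eigenvalues 1 and 3
S = [[1, 1], [-1, 1]]                 # columns are eigenvectors for 1 and 3
Sinv = [[0.5, -0.5], [0.5, 0.5]]
n = 6
Dn = [[1**n, 0], [0, 3**n]]           # D^n: just power the eigenvalues

Mn_fast = matmul(matmul(S, Dn), Sinv)  # M^n = S D^n S^{-1}

Mn_slow = M                            # repeated multiplication for comparison
for _ in range(n - 1):
    Mn_slow = matmul(Mn_slow, M)

assert all(abs(Mn_fast[i][j] - Mn_slow[i][j]) < 1e-9
           for i in range(2) for j in range(2))
```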

Note, not every matrix can be diagonalized, so for an arbitrary matrix you will have to use the formulas I derived. However, if you haven't done any combinatorics, I don't see how they can expect you to derive such formulas. In fact, posts 3 and 7 suggest you aren't even familiar with diagonalization, so I don't see how they can ask such a question of you.

By the way, your work for determining S is very hard to follow. I think v2 is supposed to be an eigenvector corresponding to the eigenvalue 6: you write down a matrix and then an equation for v2, so my guess is that you want v2 to be a vector that the matrix sends to 0. You didn't say any of this, so I had to scratch my head to figure out what you meant. In general, if you want help, it's best to make it easy on those helping you, and you can do this by being clear. Anyway, the problem is that the matrix you're looking at is wrong. You essentially want M - 6I, but you made a minor computation error: 4 - 6 = -2, not +2 as you have it (the top-left entry of the matrix).
 
Not every matrix can be "diagonalized". In general the best you can do is convert to "Jordan Normal Form" which has all 0's below the main diagonal, eigenvalues of the matrix on the main diagonal with repetitions grouped together, and 1 directly above each eigenvalue except the first in each group of repetitions.
For example, if a 4 by 4 matrix M has only two eigenvalues, 3 and 4, and the eigenspace of each has dimension 1 (so we cannot have a basis for R^4 consisting of eigenvectors of M), then it can be converted (J = SMS^{-1} for some S) to
J = \left (\begin{array}{cccc}3&1&0&0\\0&3&0&0\\0&0&4&1\\0&0&0&4\end{array}\right )
 
The Jordan form makes it much easier to find high powers as well. From Hall's example:

J = \left (\begin{array}{cccc}3&0&0&0\\0&3&0&0\\0&0&4&0\\0&0&0&4\end{array}\right )+\left (\begin{array}{cccc}0&1&0&0\\0&0&0&0\\0&0&0&1\\0&0&0&0\end{array}\right )=D+N

where D and N commute and are diagonal and nilpotent respectively. High powers of such a D+N are relatively simple: use the binomial theorem, and high enough powers of N are zero (in this example, N^2 = 0).
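A Python sketch of this for Hall's 4 by 4 example (the exponent n = 7 is an arbitrary choice): since N^2 = 0 here, the binomial theorem gives J^n = D^n + n D^{n-1} N, so each 2x2 Jordan block with eigenvalue lam has n-th power [[lam^n, n*lam^(n-1)], [0, lam^n]].

```python
def matmul(X, Y):
    """Square matrix product."""
    m = len(X)
    return [[sum(X[i][k]*Y[k][j] for k in range(m)) for j in range(m)] for i in range(m)]

J = [[3, 1, 0, 0],
     [0, 3, 0, 0],
     [0, 0, 4, 1],
     [0, 0, 0, 4]]

n = 7
# Fast path: D^n + n*D^(n-1)*N, written out block by block
Jn_fast = [[3**n, n*3**(n-1), 0, 0],
           [0, 3**n, 0, 0],
           [0, 0, 4**n, n*4**(n-1)],
           [0, 0, 0, 4**n]]

# Slow path: repeated multiplication
Jn_slow = J
for _ in range(n - 1):
    Jn_slow = matmul(Jn_slow, J)

assert Jn_slow == Jn_fast
```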
 
