# Just wondering about matrices

1. May 2, 2006

### UrbanXrisis

I was just wondering if there is a way to show, using variables, this:

say:

$$A = \left(\begin{array}{cc}a&b \\ c&d \end{array}\right)$$

$$A^n$$

I was wondering if there is any way to express A^n as a formula in the entries? Just curious..

$$A^2 = \left(\begin{array}{cc}a^2+bc&ab+bd \\ ca+dc&cb+d^2 \end{array}\right)$$

the next entry for a (the top-left of A^3) is a^3 + abc + abc + bcd, which has no obvious connection to the previous matrix... is there a pattern to all this?
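A quick symbolic check of the multiplication (a sketch, assuming sympy is available):

```python
# Symbolic check of the entries of A^2 and A^3 with sympy (assumed available).
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
A = sp.Matrix([[a, b], [c, d]])

A2 = A * A
print(sp.expand(A2[0, 0]))      # a**2 + b*c, the top-left entry of A^2
print(sp.expand((A**3)[0, 0]))  # a**3 + 2*a*b*c + b*c*d, top-left of A^3
```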

Last edited: May 2, 2006
2. May 2, 2006

### shmoe

Nothing at all nice in general, but it's extremely simple if A is diagonal (triangular will be nicer as well). This is one reason to like diagonalizable matrices- you can express high powers easily.
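For example, for a diagonal 2 by 2 matrix the power just lands entrywise on the diagonal:

$$\left(\begin{array}{cc}\lambda_1&0 \\ 0&\lambda_2 \end{array}\right)^n = \left(\begin{array}{cc}\lambda_1^n&0 \\ 0&\lambda_2^n \end{array}\right)$$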

3. May 3, 2006

### UrbanXrisis

how can the matrix be expressed as a diagonal matrix?

4. May 3, 2006

### AKG

Let

$$A^n = \left (\begin{array}{cc}a_n&b_n\\c_n&d_n\end{array}\right )$$

So $a_1 = a$, $b_1 = b$, $c_1 = c$, $d_1 = d$. Since $A^n = A\,A^{n-1}$, we get four simultaneous recursions:

$$a_n = a\,a_{n-1} + b\,c_{n-1}$$
$$b_n = a\,b_{n-1} + b\,d_{n-1}$$
$$c_n = c\,a_{n-1} + d\,c_{n-1}$$
$$d_n = c\,b_{n-1} + d\,d_{n-1}$$

We can look at the above as two pairs of recursion relations. Let $E$ be the shift operator on sequences, defined by $E(a_n) = a_{n+1}$. Then we get:

$$(E-a)a_n + (-b)c_n = 0$$
$$(E-d)c_n + (-c)a_n = 0$$

$$(E-a)b_n + (-b)d_n = 0$$
$$(E-d)d_n + (-c)b_n = 0$$

Let's look at the first pair. Apply $-c$ to both sides of the first equation, and $(E-a)$ to both sides of the second, giving:

$$(-c)(E-a)a_n + bc\,c_n = 0$$
$$(E-a)(E-d)c_n + (E-a)(-c)a_n = 0$$

Here 0 is just the zero sequence, and $(E-x)0$ is again 0; also $(E-a)(-c) = (-c)(E-a)$, since $c$ is a constant. Subtracting the first equation from the second gives:

$$(E-a)(E-d)c_n - bc\,c_n = 0$$
$$(E^2 - \mathrm{tr}(A)E + \det(A))c_n = 0$$

Let T = tr(A)/2, D = det(A). Then we get:

$$c_n = \alpha _c (T + \sqrt{T^2 - D})^n + \beta _c (T - \sqrt{T^2 - D})^n$$

where $\alpha _c,\, \beta _c$ are undetermined coefficients, which can be determined since we can compute, by hand, $c_1 = c$ and $c_2 = c(a+d) = 2cT$. This whole thing is just like what you do when solving a second-order linear homogeneous constant-coefficient differential equation, where you end up with two undetermined coefficients which you determine from the initial/boundary values. In fact, this is part of what is called the theory of difference equations, which has strong similarities in many respects to the theory of differential equations (at least what I've seen of it). Look up the theory of difference equations if you need to know more.

$a_n$, $b_n$, and $d_n$ can all be determined in a similar manner.
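A numerical sketch of this closed form for $c_n$ (numpy assumed; the matrix entries below are arbitrary illustrative values):

```python
# Check the closed form c_n = alpha*(T+s)^n + beta*(T-s)^n against A^n directly.
import numpy as np

a, b, c, d = 2.0, 1.0, 3.0, 1.0
A = np.array([[a, b], [c, d]])

T = np.trace(A) / 2           # T = tr(A)/2
D = np.linalg.det(A)          # D = det(A)
s = np.sqrt(T**2 - D)         # assumes distinct real eigenvalues
lam_plus, lam_minus = T + s, T - s

# Fit alpha_c, beta_c from the hand-computed values c_1 = c, c_2 = 2cT.
M = np.array([[lam_plus, lam_minus],
              [lam_plus**2, lam_minus**2]])
alpha, beta = np.linalg.solve(M, np.array([c, 2 * c * T]))

n = 6
c_n = alpha * lam_plus**n + beta * lam_minus**n
# Compare with the (2,1) entry of A^6 computed by repeated multiplication.
print(c_n, np.linalg.matrix_power(A, n)[1, 0])  # both ≈ 1080
```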

Last edited: May 3, 2006
5. May 3, 2006

### AKG

By the way, observe that $T \pm \sqrt{T^2-D}$ are just the eigenvalues of the matrix A.

6. May 3, 2006

### shmoe

Matrix diagonalization will be in any linear algebra text.

7. May 4, 2006

### AKG

If $M = SDS^{-1}$, then $M^2 = (SDS^{-1})(SDS^{-1}) = SD(S^{-1}S)DS^{-1} = SDIDS^{-1} = SD^2S^{-1}$. In general, if $M = SDS^{-1}$, then $M^n = SD^nS^{-1}$.
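A small numerical illustration of $M^n = SD^nS^{-1}$ (numpy assumed; the example matrix is arbitrary but has distinct real eigenvalues, so it is diagonalizable):

```python
# Compute M^5 via diagonalization and compare with direct repeated multiplication.
import numpy as np

M = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, S = np.linalg.eig(M)   # columns of S are eigenvectors of M
Dn = np.diag(eigvals**5)        # D^5 is just entrywise powers of the eigenvalues
M5 = S @ Dn @ np.linalg.inv(S)

print(np.round(M5, 6))
print(np.linalg.matrix_power(M, 5))
```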

Note, not every matrix can be diagonalized, so for an arbitrary matrix you will have to use the formulas I derived. However, if you haven't done any combinatorics, I don't see how they could expect you to derive such formulas. In fact, posts 3 and 7 suggest you aren't even familiar with diagonalization, so I don't see how they could ask such a question of you.

By the way, your work for determining S is very hard to follow. I think $v_2$ is supposed to be an eigenvector corresponding to the eigenvalue 6: you write down a matrix and then an equation for $v_2$, so my guess is that you want $v_2$ to be a vector which that matrix sends to 0. You didn't say any of this, so I had to scratch my head to figure out what you meant. In general, if you want help, it's best to make it easy on those helping you, and you can do this by being clear. Anyway, the problem is that the matrix you're looking at is wrong. You essentially want to look at $M - 6I$, but you made a minor computation error: $4 - 6 = -2$, not $+2$ as you have it (the top-left entry of the matrix).

8. May 5, 2006

### HallsofIvy

Staff Emeritus
Not every matrix can be "diagonalized". In general the best you can do is convert to "Jordan Normal Form", which has all 0's below the main diagonal, the eigenvalues of the matrix on the main diagonal with repetitions grouped together, and a 1 directly above each eigenvalue except the first in each group of repetitions.
For example, if a 4 by 4 matrix $M$ has only two eigenvalues, 3 and 4, and the eigenspace of each has dimension 1 (so we cannot have a basis for $R^4$ consisting of eigenvectors of $M$), then it can be converted ($J = SMS^{-1}$ for some $S$) to
$$J = \left (\begin{array}{cccc}3&1&0&0\\0&3&0&0\\0&0&4&1\\0&0&0&4\end{array}\right )$$

9. May 5, 2006

### shmoe

The Jordan form makes it much easier to find high powers as well. From Hall's example:

$$J = \left (\begin{array}{cccc}3&0&0&0\\0&3&0&0\\0&0&4&0\\0&0 &0&4\end{array}\right )+\left (\begin{array}{cccc}0&1&0&0\\0&0&0&0\\0&0&0&1\\0&0 &0&0\end{array}\right )=D+N$$

where $D$ and $N$ commute and are diagonal and nilpotent, respectively. High powers of such a $D+N$ are relatively simple: use the binomial theorem, and note that high enough powers of $N$ are zero (in this example $N^2 = 0$).
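A quick numerical sketch of this trick on the example above (numpy assumed):

```python
# Since D and N commute and N^2 = 0, the binomial theorem truncates:
# (D + N)^n = D^n + n * D^(n-1) @ N.
import numpy as np

D = np.diag([3.0, 3.0, 4.0, 4.0])
N = np.zeros((4, 4))
N[0, 1] = 1.0
N[2, 3] = 1.0

n = 10
Jn_binomial = np.linalg.matrix_power(D, n) + n * np.linalg.matrix_power(D, n - 1) @ N
Jn_direct = np.linalg.matrix_power(D + N, n)

print(np.allclose(Jn_binomial, Jn_direct))  # True
```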