Using Cayley-Hamilton Theorem to Calculate Matrix Powers

AcidRainLiTE
Given a matrix A (for simplicity assume 2×2), we can use the Cayley-Hamilton theorem to write

A^k = a_0 I + a_1 A

for k ≥ 2, where I is the identity matrix.

So suppose we have a given k and want to find the coefficients a_0, a_1. We can use the fact that the eigenvalues of A satisfy the scalar version of the same equation. That is, for any eigenvalue \lambda we have

\text{(1) } \quad \lambda^k=a_0+a_1\lambda.

If A has 2 distinct eigenvalues, we can plug them into this equation and get 2 equations which allow us to solve for a_0, a_1. And we're done.
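As a concrete illustration of the distinct-eigenvalue case (this is a minimal NumPy sketch, not code from the thread; the matrix A and the exponent k are my own illustrative choices):

```python
# Sketch: compute A^k for a 2x2 matrix with distinct eigenvalues by
# solving the two equations lambda_i^k = a0 + a1*lambda_i for (a0, a1).
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])            # eigenvalues 2 and 3 (distinct)
k = 5

lam = np.linalg.eigvals(A)
# One equation lam_i^k = a0 + a1*lam_i per eigenvalue:
V = np.array([[1.0, lam[0]],
              [1.0, lam[1]]])
rhs = lam**k
a0, a1 = np.linalg.solve(V, rhs)

Ak = a0 * np.eye(2) + a1 * A          # Cayley-Hamilton reduction
assert np.allclose(Ak, np.linalg.matrix_power(A, k))
```

The assertion at the end just checks the reduction against a direct computation of A^k.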

If, on the other hand, A has a repeated eigenvalue \lambda_1, then the above procedure yields only one equation. To get a second equation we can differentiate (1) with respect to \lambda:
\text{(2) } \quad k \lambda^{k-1} = a_1.
We then plug \lambda_1 into (2) to get our second equation. Now we can solve for a_0, a_1.
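The repeated-eigenvalue procedure looks like this in practice (again a hedged sketch with an illustrative defective matrix of my own choosing):

```python
# Sketch of the repeated-eigenvalue case: lambda_1 = 2 is a double root,
# so the second equation comes from differentiating (1):
#   equation (2):  k * lambda_1^(k-1) = a1.
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])            # defective; repeated eigenvalue 2
k = 5
lam1 = 2.0

a1 = k * lam1**(k - 1)                # from equation (2)
a0 = lam1**k - a1 * lam1              # back-substitute into equation (1)

Ak = a0 * np.eye(2) + a1 * A
assert np.allclose(Ak, np.linalg.matrix_power(A, k))
```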

My question is how we know we can differentiate (1) and still get a valid equation. In order to differentiate, (1) must hold for all \lambda (or at least over some interval). But we only know that it holds for particular discrete values of \lambda (i.e. for the eigenvalues of A).
 
Ugh... first, huge thanks for your question; I solved my own problem with this method. I'm not sure the question is still open since you posted almost 3 years ago, but here goes.

Think about the characteristic polynomial f(t). Dividing t^k by f(t) gives t^k = q(t) f(t) + r(t), where the remainder r(t) = a_0 + a_1 t has degree less than 2. This is a polynomial identity that holds for all t, so it is legitimate to differentiate it: k t^{k-1} = q'(t) f(t) + q(t) f'(t) + a_1. A repeated eigenvalue \lambda_1 is a double root of f, so f(\lambda_1) = f'(\lambda_1) = 0, and substituting t = \lambda_1 leaves exactly equation (2). It's a little bit tricky, but still no problem... I guess.
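The division-by-the-characteristic-polynomial argument can be checked numerically (a sketch of my own, using the same illustrative double root \lambda_1 = 2 and k = 5; not code from the thread):

```python
# Divide t^k by the characteristic polynomial f(t) = (t - 2)^2.
# The remainder r(t) = a0 + a1*t is an identity in t, so it may be
# differentiated before substituting the repeated eigenvalue.
import numpy as np

k = 5
lam1 = 2.0
f = np.array([1.0, -4.0, 4.0])        # f(t) = (t - 2)^2, highest degree first
tk = np.zeros(k + 1); tk[0] = 1.0     # coefficients of t^5

q, r = np.polydiv(tk, f)              # t^k = q(t) f(t) + r(t)
a0 = np.polyval(r, 0.0)               # constant term of the remainder
a1 = np.polyval(r, 1.0) - a0          # linear coefficient (deg r <= 1)

# lambda_1 is a double root of f, so both f and f' vanish there:
df = np.polyder(f)
assert np.isclose(np.polyval(f, lam1), 0.0)
assert np.isclose(np.polyval(df, lam1), 0.0)

# The remainder coefficients match equations (1) and (2):
assert np.isclose(a1, k * lam1**(k - 1))
assert np.isclose(a0, lam1**k - a1 * lam1)
```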
 
