How to Compute the Exponential of a Matrix?

  • Context: Undergrad 
  • Thread starter: Sherin
  • Tags: Exponential, Matrix

Discussion Overview

The discussion centers around the method of solving the exponential of a matrix, particularly through the use of Taylor series and the Cayley-Hamilton theorem. Participants explore the mathematical representation and implications of these concepts in the context of matrix exponentiation.

Discussion Character

  • Technical explanation
  • Mathematical reasoning

Main Points Raised

  • One participant explains that the exponential of a matrix can be defined using a Taylor series, similar to the case of real numbers.
  • Another participant introduces the Cayley-Hamilton theorem, suggesting that a matrix satisfies its own characteristic equation, which can help in expressing powers of the matrix as linear combinations of the identity matrix and the matrix itself.
  • There is a mention of truncating the Taylor series for small values of time, leading to an approximate expression for the matrix exponential.
  • One participant seeks clarification on the origin of time-dependent functions mentioned in the context of the matrix exponential.

Areas of Agreement / Disagreement

Participants present different approaches to understanding the exponential of a matrix, with some agreeing on the use of Taylor series and the Cayley-Hamilton theorem, while others raise questions about specific components of the equations, indicating that the discussion remains unresolved.

Contextual Notes

The discussion does not resolve the specific context or applications of the time-dependent functions mentioned, nor does it clarify the assumptions behind the truncation of the series.

Sherin
Please help me understand the following step

[Attached image: upload_2015-10-29_18-48-12.png — the step in question]
When ## a ## is just a real number, one can use a Taylor series to represent ## e^{at} ## as

## e^{at} = \sum_{n=0}^{\infty} \frac{(at)^n}{n!} = 1 + at + \frac{1}{2}(at)^2 + \ldots ##

By analogy, one can define the exponential of ## \mathbf{A}t ##, where ## \mathbf{A} ## is now a matrix, as

## e^{\mathbf{A}t} = \sum_{n=0}^{\infty} \frac{(\mathbf{A}t)^n}{n!} ##.

Because multiplying a matrix by itself is perfectly well defined, the above sum makes sense. Now if ## t ## is not too large, we can truncate the series and write

## e^{\mathbf{A}t} \approx (\mathbf{A}t)^0 + \mathbf{A}t \equiv \mathbf{I} + \mathbf{A}t ##,

where ##\mathbf{I}## is the identity matrix. Using this truncated sum, your formula can be derived, except for the time dependent functions ##\alpha_1(t)## and ##\alpha_2(t)##. Perhaps someone else can shed some light on where those might be coming from. What is the context in which you are seeing this equation?
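The Taylor-series definition above is easy to check numerically. The sketch below (in Python with NumPy; the matrix ##\mathbf{A}## and the value of ##t## are arbitrary illustrations, not taken from the thread) sums the first several terms of ## \sum_n (\mathbf{A}t)^n / n! ## and verifies that for small ##t## the first-order truncation ##\mathbf{I} + \mathbf{A}t## is already close:

```python
# Hedged sketch: partial sums of the matrix-exponential Taylor series,
# compared against the first-order truncation I + A t for small t.
import numpy as np

def expm_taylor(A, t, terms):
    """Partial sum of e^{At} = sum_{n=0}^{terms-1} (A t)^n / n!."""
    result = np.zeros_like(A, dtype=float)
    term = np.eye(A.shape[0])            # (A t)^0 / 0! = I
    for n in range(terms):
        result += term
        term = term @ (A * t) / (n + 1)  # next term: multiply by A t / (n+1)
    return result

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # arbitrary example matrix
t = 0.01                       # small t, so truncation is justified

truncated = np.eye(2) + A * t          # first-order truncation I + A t
many_terms = expm_taylor(A, t, 20)     # effectively converged for this t

print(np.max(np.abs(many_terms - truncated)))  # small, of order t^2
```

The leftover error is dominated by the first dropped term, ##\frac{1}{2}(\mathbf{A}t)^2##, which is why the truncation is only justified when ##t## is not too large.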
 
Geofleur's infinite series is the place to start. The trick is to use the Cayley-Hamilton theorem, which tells you that a matrix satisfies its own characteristic equation. Since you have a 2x2 matrix, the characteristic equation is a second-order polynomial, so you can write ##\mathbf{A}^2=\alpha_0 \mathbf{I} + \alpha_1 \mathbf{A}## for some ##\alpha_0## and ##\alpha_1##. It follows that any power of your matrix can also be represented by a linear combination of ##\mathbf{I}## and ##\mathbf{A}##. Hopefully that shows you where the result comes from.

This is a common approach used in electrical engineering.

Jason
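For a 2x2 matrix the Cayley-Hamilton step can be made completely explicit: the characteristic polynomial is ## \lambda^2 - \mathrm{tr}(\mathbf{A})\,\lambda + \det(\mathbf{A}) ##, so ## \mathbf{A}^2 = \mathrm{tr}(\mathbf{A})\,\mathbf{A} - \det(\mathbf{A})\,\mathbf{I} ##, i.e. ##\alpha_1 = \mathrm{tr}(\mathbf{A})## and ##\alpha_0 = -\det(\mathbf{A})## in the notation of the post above. A minimal numerical check (the matrix is an arbitrary illustration):

```python
# Hedged sketch: verify Cayley-Hamilton for a 2x2 matrix, where
# A^2 = tr(A) A - det(A) I follows from the characteristic polynomial.
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])    # arbitrary example matrix

alpha1 = np.trace(A)          # coefficient of A:  tr(A) = 5
alpha0 = -np.linalg.det(A)    # coefficient of I: -det(A) = 2

lhs = A @ A                               # A^2 computed directly
rhs = alpha0 * np.eye(2) + alpha1 * A     # alpha_0 I + alpha_1 A

print(np.allclose(lhs, rhs))  # True
```

Repeatedly substituting this relation into the Taylor series collapses every power ##\mathbf{A}^n## into a combination of ##\mathbf{I}## and ##\mathbf{A}##, which is where time-dependent coefficients like ##\alpha_1(t)## and ##\alpha_2(t)## can come from.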
 
Thank you so much for your help!
 
