Explanation of exponential operator proof

In summary, the conversation discusses how to prove the equality ##e^{A+B} = e^{A}e^{B}## (which holds when ##A## and ##B## commute) using power series expansions. The method involves distributing and grouping terms in the product of the series for ##e^{A}## and ##e^{B}##, keeping track of operator ordering via commutators. The discussion then turns to the steps for computing the exponential of a matrix: finding eigenvalues and eigenvectors, constructing a similarity transform ##B = P^{-1}AP##, and evaluating ##e^{A} = Pe^{B}P^{-1}## using the Taylor expansion. The conversation also addresses the case when a matrix cannot be diagonalized and must instead be matched to a known form.
  • #1
gkirkland
Can someone please explain the below proof in more detail?
[attached image: Capture_zpsb4f8f1f9.jpg]


The part in particular which is confusing me is
[attached image: Capture2_zpsf444f5b1.jpg]


Thanks in advance!
 
  • #2
gkirkland said:
Can someone please explain the below proof in more detail?
[attached image: Capture_zpsb4f8f1f9.jpg]


The part in particular which is confusing me is
[attached image: Capture2_zpsf444f5b1.jpg]


Thanks in advance!

What don't you understand? That seemed pretty straightforward. Do you know about power series expansions?

The idea is this:

We want to show that ##e^{A+B} = e^{A}e^{B}## when ##A## and ##B## commute. Equivalently, we can show that the difference ##e^{A+B} - e^{A}e^{B}## is zero. To demonstrate this, we use power series expansions.

The power series for our exponential function is ##e^{A} = \frac{1}{0!}I + \frac{1}{1!}A + \frac{1}{2!}A^2 + \cdots##, implying that ##e^{A}e^{B} = \left(\frac{1}{0!}I + \frac{1}{1!}A + \frac{1}{2!}A^2 + \cdots\right)\left(\frac{1}{0!}I + \frac{1}{1!}B + \frac{1}{2!}B^2 + \cdots\right)##. Matrix multiplication is distributive over addition. In the proof, they shorten this to save space, so they don't show the intermediate step between distributing and grouping the terms.

Is that fairly clear for you?
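If it helps, you can also check the identity numerically. A minimal sketch with NumPy/SciPy (the matrices here are my own illustrative choices, not the ones from the proof):

```python
import numpy as np
from scipy.linalg import expm

# A and B commute here (B is a scalar multiple of A),
# so e^(A+B) should equal e^A e^B.
A = np.array([[1.0, 2.0], [0.0, 3.0]])
B = 2.0 * A

lhs = expm(A + B)
rhs = expm(A) @ expm(B)
print(np.allclose(lhs, rhs))  # True

# For non-commuting matrices the identity fails in general.
C = np.array([[0.0, 1.0], [0.0, 0.0]])
D = np.array([[0.0, 0.0], [1.0, 0.0]])
print(np.allclose(expm(C + D), expm(C) @ expm(D)))  # False
```

The second pair is the standard example of non-commuting matrices: ##CD \neq DC##, so the cross terms in the two series no longer match up.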
 
  • #3
Oh ok! So they show the first two terms and "FOIL" it out for simplicity's sake. I've been staring at this thing for 20 minutes and can't believe I didn't realize that.

That was a great explanation, thanks!

Could you also explain the ##\frac{1}{2!}(AB+BA)## portion in the last line?
 
  • #4
gkirkland said:
Oh ok! So they show the first two terms and "FOIL" it out for simplicity's sake. I've been staring at this thing for 20 minutes and can't believe I didn't realize that.

That was a great explanation, thanks!
Well, the technical term is "distribute," but yes. Sometimes math is silly like that, though, so I wouldn't be too irritated that you didn't see it.

You're most certainly welcome. :biggrin:
 
  • #5
gkirkland said:
Could you also explain the ##\frac{1}{2!}(AB+BA)## portion in the last line?
##(A+B)^2 = A^2 + AB + BA + B^2##
which, if ##A## and ##B## commute, can be written
##(A+B)^2 = A^2 + 2AB + B^2##
In general we cannot assume this, so we either include a term for each ordering (e.g. at third order: ##BAA, ABA, AAB##) or keep a single ordering together with the appropriate commutators,
##[A,B] = AB - BA##
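To see this concretely, here is a small NumPy check with the orderings kept separate (the matrices are illustrative choices, not from the thread):

```python
import numpy as np

# Two matrices that do not commute: [A,B] = AB - BA != 0.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])

S = A + B
lhs = S @ S                          # (A+B)^2
full = A@A + A@B + B@A + B@B         # keeps both orderings AB and BA
naive = A@A + 2*(A @ B) + B@B        # assumes AB = BA

print(np.allclose(lhs, full))   # True
print(np.allclose(lhs, naive))  # False, because A and B don't commute
```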
 
  • #6
So I still don't quite understand how they got what they got. Here's my attempt:
http://i4.photobucket.com/albums/y117/The0wnage/Capture_zps52a2608b.jpg

I get 7 terms from [tex]e^{a+b}[/tex] but 9 terms from [tex]e^ae^b[/tex] after I distribute and I don't see a way to cancel them all?
 
  • #7
You need to expand ##e^{S+T}## further. For instance, a term in ##S^2 T^2## appears only when expanding ##(S+T)^4##.
 
  • #8
Won't you end up with differing coefficients even with further expansion?
such as [tex]\frac{1}{4}S^2T^2 - \frac{1}{2}S^2T^2[/tex]
 
  • #9
gkirkland said:
Won't you end up with differing coefficients even with further expansion?
such as [tex]\frac{1}{4}S^2T^2 - \frac{1}{2}S^2T^2[/tex]

You forgot a factor of ##1/2!## in the last term you wrote for ##e^S e^T##:
$$
\frac{1}{2!} S^2 \times \frac{1}{2!} T^2 = \frac{1}{4} S^2T^2
$$
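You can verify the coefficient matching symbolically. A sketch with SymPy, using commuting symbols to stand in for commuting matrices (the truncation order is an arbitrary choice of mine):

```python
import sympy as sp

S, T = sp.symbols('S T')  # commuting symbols model commuting matrices
N = 5  # truncate both series at order N

lhs = sp.expand(sum((S + T)**n / sp.factorial(n) for n in range(N)))
rhs = sp.expand(sum(S**j / sp.factorial(j) for j in range(N))
                * sum(T**k / sp.factorial(k) for k in range(N)))

# The S^2*T^2 term on the left comes only from (S+T)^4/4!:
# binomial(4,2)/4! = 6/24 = 1/4, matching (1/2!)*(1/2!) on the right.
print(lhs.coeff(S, 2).coeff(T, 2))  # 1/4
print(rhs.coeff(S, 2).coeff(T, 2))  # 1/4
```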
 
  • #10
Ok, I'll keep working on the proof, but in the meantime I'd like some instruction on how these exponentials are used.

For example, if I'm given a matrix A and asked to find the exponential of A these are the steps I take:
1) Find eigenvalues and then eigenvectors of A
2) Form a matrix P consisting of eigenvectors of A
3) Find a matrix B such that [tex]B=PAP^{-1}[/tex]
4) Match B to a known form
5) Then [tex]e^{At}=Pe^{Bt}P^{-1}[/tex]

Is that correct? Here's a screenshot of the notes I'm forming my steps from:
http://i4.photobucket.com/albums/y117/The0wnage/Capture_zpse86364b1.jpg
 
  • #11
gkirkland said:
For example, if I'm given a matrix A and asked to find the exponential of A these are the steps I take:
1) Find eigenvalues and then eigenvectors of A
2) Form a matrix P consisting of eigenvectors of A
Strictly speaking, you can only do step 2 as you wrote it if ##A## is diagonalizable. For instance, in the case where you get
$$
B = \left[ \begin{array}{cc} \lambda & 1 \\ 0 & \lambda \end{array}\right]
$$
then ##P## was not constructed from the eigenvectors of ##A##, since in that case ##A## didn't have two linearly independent eigenvectors. I imagine that in your notes you will have a description of how to construct the three possible matrix forms for ##B##.

gkirkland said:
3) Find a matrix B such that [tex]B=PAP^{-1}[/tex]
That is ##B=P^{-1}AP##, and see my comment above.

gkirkland said:
4) Match B to a known form
5) Then [tex]e^{At}=Pe^{Bt}P^{-1}[/tex]
These steps are actually inverted. Once you have ##B=P^{-1}AP##, using the Taylor expansion you get directly that
$$
e^{A} = e^{P B P^{-1}} = P e^B P^{-1}
$$
Then, you can evaluate the exponential by matching the correct result depending on the form of ##B##.
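Those steps can be sketched numerically. A minimal example with NumPy/SciPy, assuming a diagonalizable ##A## (the particular matrix is an illustrative choice):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[4.0, 1.0], [2.0, 3.0]])  # eigenvalues 5 and 2

# Steps 1-2: eigenvalues and a matrix P whose columns are eigenvectors.
evals, P = np.linalg.eig(A)

# Step 3: B = P^{-1} A P is diagonal, so e^B is just exp of the diagonal.
B = np.linalg.inv(P) @ A @ P
eB = np.diag(np.exp(np.diag(B)))

# e^A = P e^B P^{-1}, compared against SciPy's direct computation.
eA = P @ eB @ np.linalg.inv(P)
print(np.allclose(eA, expm(A)))  # True
```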
 
  • #12
Ok, so check my logic on this one:

If you can form a matrix [itex]P[/itex] (i.e. [itex]A[/itex] is an [itex]n \times n[/itex] matrix and has [itex]n[/itex] eigenvalues with [itex]n[/itex] independent eigenvectors), [itex]B=P^{-1}AP[/itex] will be a diagonalized matrix and then [itex]e^A=P^{-1}e^BP[/itex], so you reach a solution fairly easily.

If [itex]A[/itex] is an [itex]n \times n[/itex] matrix with [itex]n[/itex] eigenvalues but fewer than [itex]n[/itex] independent eigenvectors, then [itex]B[/itex] won't be diagonal and you must match [itex]A[/itex] to a known form of [itex]B[/itex]?
I'm still hazy on what to do when you can't diagonalize [itex]A[/itex]

As an example, how would you find the exponential of the matrix [tex]\begin{pmatrix} 1 & 2 \\ 0 & -1 \end{pmatrix}[/tex]? I believe it only has 1 independent eigenvector.

Sorry for the format, I don't know how to do tex code inline
 
  • #13
gkirkland said:
Ok, so check my logic on this one:

If you can form a matrix [itex]P[/itex] (i.e. [itex]A[/itex] is an [itex]n \times n[/itex] matrix and has [itex]n[/itex] eigenvalues with [itex]n[/itex] independent eigenvectors), [itex]B=P^{-1}AP[/itex] will be a diagonalized matrix and then [itex]e^A=P^{-1}e^BP[/itex], so you reach a solution fairly easily.
Again, be careful that since [itex]B=P^{-1}AP[/itex], you get [itex]e^A=Pe^BP^{-1}[/itex] (note the order of ##P## and ##P^{-1}##).

gkirkland said:
If [itex]A[/itex] is an [itex]n \times n[/itex] matrix with [itex]n[/itex] eigenvalues but fewer than [itex]n[/itex] independent eigenvectors, then [itex]B[/itex] won't be diagonal and you must match [itex]A[/itex] to a known form of [itex]B[/itex]?
I'm still hazy on what to do when you can't diagonalize [itex]A[/itex]
In one of your posts, your notes read
As shown earlier, an invertible 2x2 matrix ##P## [...] such that the matrix ##B## has one of the following forms
so I guess that the answer to that is shown earlier.

gkirkland said:
As an example, how would you solve the matrix [itex]\begin{matrix} 1 & 2 \\ 0 & -1 \end{matrix}[/itex] as I believe it only has 1 independent eigenvector.
That one is actually diagonalizable: its eigenvalues are ##1## and ##-1##, each with its own eigenvector. But if you change the -1 to 1, you get a single eigenvector ##( 1\ 0)^T##, and you construct ##P## using ##(0\ 1)^T## as the second vector, such that
$$
P = \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right)
$$
so obviously ##P = I## and therefore ##B = P^{-1} A P = A##. Rescaling the second basis vector to ##(0\ \tfrac{1}{2})^T## turns the off-diagonal entry into 1, which is exactly the second form in your notes.
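For that non-diagonalizable example, the exponential can also be checked directly, since ##A = \lambda I + N## with ##N## nilpotent, so the series for ##e^N## truncates. A quick NumPy/SciPy sketch:

```python
import numpy as np
from scipy.linalg import expm

# A = lambda*I + N with N nilpotent (N^2 = 0): the non-diagonalizable case.
lam = 1.0
A = np.array([[1.0, 2.0], [0.0, 1.0]])  # single eigenvector (1, 0)^T
N = A - lam * np.eye(2)

# lambda*I commutes with N and N^2 = 0, so e^A = e^lambda * (I + N).
eA = np.exp(lam) * (np.eye(2) + N)
print(np.allclose(eA, expm(A)))  # True
```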

gkirkland said:
Sorry for the format, I don't know how to do tex code inline
Use itex instead of tex.
 

1. What is an exponential operator?

An exponential operator is the exponential of an operator or matrix ##A##, written ##e^{A}## and defined by the power series ##e^{A} = I + A + \frac{1}{2!}A^2 + \frac{1}{3!}A^3 + \cdots##. For ordinary numbers this reduces to the familiar exponential function; for a matrix, each term in the series is a matrix power.

2. What is the proof for the exponential operator identity?

The identity ##e^{A+B} = e^{A}e^{B}## is proved by expanding both sides as power series and matching terms order by order: on the left, ##(A+B)^n## is expanded with the binomial theorem; on the right, the two series are multiplied out and terms of equal total degree are collected. The matching works only when ##A## and ##B## commute, because the binomial expansion of ##(A+B)^n## requires ##AB = BA##; otherwise correction terms involving the commutator ##[A,B] = AB - BA## appear.

3. Why is the exponential operator useful?

The matrix exponential solves linear systems of differential equations: the solution of ##\dot{x} = Ax## is ##x(t) = e^{At}x(0)##. It also plays a key role in many scientific and engineering applications, such as time evolution in quantum mechanics and the analysis of linear systems in engineering and computer science.

4. What is the difference between the exponential operator and the caret (^) symbol?

The caret is just notation for raising to a power (and, in many programming languages, for the bitwise XOR operation), whereas the exponential operator ##e^{A}## is a specific function of ##A## defined through its power series.

5. Can the exponential operator be applied to non-numeric values?

Yes. The power series definition makes sense for any object that can be added and multiplied with itself, such as square matrices and linear operators. Many, but not all, properties of the scalar exponential carry over; in particular, ##e^{A+B} = e^{A}e^{B}## holds only when ##A## and ##B## commute.
