Explanation of exponential operator proof

gkirkland
Can someone please explain the below proof in more detail?
[attached image: Capture_zpsb4f8f1f9.jpg]


The part in particular which is confusing me is
[attached image: Capture2_zpsf444f5b1.jpg]


Thanks in advance!
 
gkirkland said:
Can someone please explain the below proof in more detail? [...] Thanks in advance!

What don't you understand? That seemed pretty straightforward. Do you know about power series expansions?

The idea is this:

We want to show that ##e^{A+B} = e^{A}e^{B}## (which holds when ##A## and ##B## commute). Equivalently, we can show that the difference ##e^{A+B} - e^{A}e^{B}## is zero. To demonstrate this, we use a power series expansion.

The power series for our exponential function is ##e^{A} = \frac{1}{0!}I + \frac{1}{1!}A + \frac{1}{2!}A^2 + \cdots##, implying that ##e^{A}e^{B} = \left(\frac{1}{0!}I + \frac{1}{1!}A + \frac{1}{2!}A^2 + \cdots\right)\left(\frac{1}{0!}I + \frac{1}{1!}B + \frac{1}{2!}B^2 + \cdots\right)##. Matrix multiplication is distributive over addition. In the proof, they shorten it to save space, so they don't show the step between distributing and grouping the terms.
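As a quick numerical sketch (my own illustration, not part of the proof; the matrices and names are made up), you can check the identity for a pair of commuting matrices by truncating both series:

```python
# Illustrative check that e^(A+B) = e^A e^B for *commuting* 2x2 matrices,
# using the truncated power series e^X = I + X + X^2/2! + ...
# Pure Python; matrices are nested lists.

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def mat_exp(X, terms=30):
    result = [[1.0, 0.0], [0.0, 1.0]]   # running sum, starts at I
    power = [[1.0, 0.0], [0.0, 1.0]]    # X^n, starts at X^0 = I
    fact = 1.0                          # n!
    for n in range(1, terms):
        power = mat_mul(power, X)
        fact *= n
        result = mat_add(result, [[e / fact for e in row] for row in power])
    return result

# A and B are both of the form aI + bN with N = [[0,1],[0,0]], so AB = BA.
A = [[0.5, 1.0], [0.0, 0.5]]
B = [[0.2, 0.3], [0.0, 0.2]]
lhs = mat_exp(mat_add(A, B))            # e^(A+B)
rhs = mat_mul(mat_exp(A), mat_exp(B))   # e^A e^B
print(all(abs(lhs[i][j] - rhs[i][j]) < 1e-9 for i in range(2) for j in range(2)))
# prints True
```

With a non-commuting pair the same check fails, which is why the commuting assumption matters.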

Is that fairly clear for you?
 
Oh ok! So they show the first two terms and "FOIL" it out for simplicity's sake. I've been staring at this thing for 20 minutes and can't believe I didn't realize that.

That was a great explanation, thanks!

Could you also explain the 1/2!(AB+BA) portion in the last line?
 
gkirkland said:
Oh ok! So they show the first two terms and "FOIL" it out for simplicity's sake. I've been staring at this thing for 20 minutes and can't believe I didn't realize that.

That was a great explanation, thanks!
Well, the technical term is "distribute," but yes. Sometimes math is silly like that, though, so I wouldn't be too irritated that you didn't see it.

You're most certainly welcome. :biggrin:
 
gkirkland said:
Could you also explain the 1/2!(AB+BA) portion in the last line?
##(A+B)^2 = A^2 + AB + BA + B^2##
which, if A and B commute, can be written
##(A+B)^2 = A^2 + 2AB + B^2##
In general we cannot assume this, so we either include a term for each ordering, i.e.
##BAA, ABA, AAB##
or we include one ordering and the appropriate commutators:
##[A,B] = AB - BA##
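To make the ordering issue concrete, here is a small check of my own (the specific matrices are just an illustration):

```python
# For non-commuting 2x2 matrices, (A+B)^2 = A^2 + AB + BA + B^2,
# which is NOT the same as A^2 + 2AB + B^2.

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

A = [[0, 1], [0, 0]]
B = [[0, 0], [1, 0]]   # AB = [[1,0],[0,0]] but BA = [[0,0],[0,1]]

S = add(A, B)
lhs = mul(S, S)                                            # (A+B)^2
rhs = add(add(mul(A, A), mul(A, B)), add(mul(B, A), mul(B, B)))
naive = add(add(mul(A, A), [[2 * e for e in row] for row in mul(A, B)]),
            mul(B, B))                                     # A^2 + 2AB + B^2
print(lhs == rhs)    # True
print(lhs == naive)  # False
```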
 
So I still don't quite understand how they got what they got. Here's my attempt:
http://i4.photobucket.com/albums/y117/The0wnage/Capture_zps52a2608b.jpg

I get 7 terms from ##e^{S+T}## but 9 terms from ##e^{S}e^{T}## after I distribute, and I don't see a way to cancel them all?
 
You need to expand ##e^{S+T}## further. For instance, a term in ##S^2 T^2## appears only when expanding ##(S+T)^4##.
 
Won't you end up with differing coefficients even with further expansion?
such as ##\frac{1}{4}S^2T^2 - \frac{1}{2}S^2T^2##
 
gkirkland said:
Won't you end up with differing coefficients even with further expansion?
such as ##\frac{1}{4}S^2T^2 - \frac{1}{2}S^2T^2##

You forgot a factor of ##1/2!## in the last term you wrote for ##e^S e^T##:
$$
\frac{1}{2!} S^2 \times \frac{1}{2!} T^2 = \frac{1}{4} S^2T^2
$$
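And on the ##e^{S+T}## side, assuming ##S## and ##T## commute so that all ##\binom{4}{2} = 6## orderings of two ##S##'s and two ##T##'s collapse into ##S^2T^2##, the matching term comes from ##n = 4## in the series:
$$
\frac{1}{4!}(S+T)^4 \;\longrightarrow\; \frac{1}{4!}\binom{4}{2}\,S^2T^2 = \frac{6}{24}\,S^2T^2 = \frac{1}{4}\,S^2T^2
$$
so the coefficients agree.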
 
Ok, I'll keep working on the proof, but in the meantime I'd like to get some instruction on how these exponentials are used.

For example, if I'm given a matrix A and asked to find the exponential of A these are the steps I take:
1) Find eigenvalues and then eigenvectors of A
2) Form a matrix P consisting of eigenvectors of A
3) Find a matrix B such that B=PAP^{-1}
4) Match B to a known form
5) Then e^{At}=Pe^{Bt}P^{-1}

Is that correct? Here's a screenshot of the notes I'm forming my steps from:
http://i4.photobucket.com/albums/y117/The0wnage/Capture_zpse86364b1.jpg
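The steps above can be sketched numerically (an illustration of mine, with the eigen-decomposition worked out by hand, and using ##B = P^{-1}AP## so that ##e^{At} = P e^{Bt} P^{-1}##):

```python
# Sketch of the diagonalization recipe: e^(At) = P e^(Bt) P^(-1)
# for the diagonalizable matrix A = [[1, 2], [0, -1]].
import math

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t = 1.0
# Steps 1-2: eigenvalues 1 and -1, with eigenvectors (1, 0)^T and (1, -1)^T.
P = [[1.0, 1.0], [0.0, -1.0]]
P_inv = [[1.0, 1.0], [0.0, -1.0]]   # this particular P is its own inverse
# Steps 3-4: B = P^(-1) A P = diag(1, -1), so e^(Bt) = diag(e^t, e^(-t)).
exp_Bt = [[math.exp(t), 0.0], [0.0, math.exp(-t)]]
# Step 5: undo the change of basis.
exp_At = mul(mul(P, exp_Bt), P_inv)
print(exp_At)   # [[e^t, e^t - e^(-t)], [0, e^(-t)]]
```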
 
gkirkland said:
For example, if I'm given a matrix A and asked to find the exponential of A these are the steps I take:
1) Find eigenvalues and then eigenvectors of A
2) Form a matrix P consisting of eigenvectors of A
Strictly speaking, you can only do step 2 as you wrote it if ##A## is diagonalizable. For instance, in the case where you get
$$
B = \left[ \begin{array}{cc} \lambda & 1 \\ 0 & \lambda \end{array}\right]
$$
then ##P## was not constructed from the eigenvectors of ##A##, since in that case ##A## didn't have two linearly independent eigenvectors. I imagine that in your notes you will have a description of how to construct the three possible matrix forms for ##B##.

gkirkland said:
3) Find a matrix B such that B=PAP^{-1}
That is ##B=P^{-1}AP##, and see my comment above.

gkirkland said:
4) Match B to a known form
5) Then e^{At}=Pe^{Bt}P^{-1}
These steps are actually inverted. Once you have ##B=P^{-1}AP##, using the Taylor expansion you get directly that
$$
e^{A} = e^{P B P^{-1}} = P e^B P^{-1}
$$
Then, you can solve the exponential by matching the correct result depending on the form of ##B##.
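To see why the middle equality holds: the inner ##P^{-1}P## factors cancel in every power, so
$$
(PBP^{-1})^k = PB(P^{-1}P)B(P^{-1}P)\cdots BP^{-1} = PB^kP^{-1},
\qquad
e^{PBP^{-1}} = \sum_{k=0}^{\infty}\frac{(PBP^{-1})^k}{k!} = P\left(\sum_{k=0}^{\infty}\frac{B^k}{k!}\right)P^{-1} = Pe^{B}P^{-1}.
$$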
 
Ok, so check my logic on this one:

If you can form a matrix P (i.e. A is an n x n matrix and has n eigenvalues with n independent eigenvectors), B=P^{-1}AP will form a diagonalized matrix and then e^A=P^{-1}e^BP so you reach a solution fairly easily.

If A is an n x n matrix and has n eigenvalues and less than n independent eigenvectors then P won't form a diagonalized matrix and you must match A to a known form of B?
I'm still hazy on what to do when you can't form a diagonalized matrix P

As an example, how would you solve the matrix \begin{matrix} 1 & 2 \\ 0 & -1 \end{matrix} as I believe it only has 1 independent eigenvector.

Sorry for the format, I don't know how to do tex code inline
 
gkirkland said:
Ok, so check my logic on this one:

If you can form a matrix P (i.e. A is an n x n matrix and has n eigenvalues with n independent eigenvectors), B=P^{-1}AP will form a diagonalized matrix and then e^A=P^{-1}e^BP so you reach a solution fairly easily.
Again, be careful that since B=P^{-1}AP, you get e^A=Pe^BP^{-1} (note the order of ##P## and ##P^{-1}##).

gkirkland said:
If A is an n x n matrix and has n eigenvalues and less than n independent eigenvectors then P won't form a diagonalized matrix and you must match A to a known form of B?
I'm still hazy on what to do when you can't form a diagonalized matrix P
In one of your posts, your notes read
As shown earlier, an invertible 2x2 matrix ##P## [...] such that the matrix ##B## has one of the following forms
so I guess that the answer to that is shown earlier.

gkirkland said:
As an example, how would you solve the matrix \begin{matrix} 1 & 2 \\ 0 & -1 \end{matrix} as I believe it only has 1 independent eigenvector.
That one is actually diagonalizable. But if you change the -1 to 1, you get a single eigenvector ##( 1\ 0)^T##, and you construct ##P## using ##(0\ 1)^T## as the second vector, such that
$$
P = \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right)
$$
so obviously ##P = I## and therefore ##B = P^{-1} A P = A##, which is exactly the second form in your notes.
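As a numeric sketch of that modified example (mine, not from the notes): with ##A = I + N## where ##N = \left[\begin{smallmatrix}0 & 2\\ 0 & 0\end{smallmatrix}\right]## is nilpotent (##N^2 = 0##), the series terminates and ##e^{At} = e^t(I + Nt)##, which we can check against the raw power series:

```python
# For the non-diagonalizable A = [[1, 2], [0, 1]] = I + N with
# N = [[0, 2], [0, 0]] nilpotent (N^2 = 0), the exponential is
# e^(At) = e^t (I + N t). We check it against a truncated power series.
import math

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def series_exp(X, terms=30):
    # truncated e^X = sum over n of X^n / n!
    result = [[1.0, 0.0], [0.0, 1.0]]
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for n in range(1, terms):
        power = mul(power, X)
        fact *= n
        result = [[result[i][j] + power[i][j] / fact for j in range(2)]
                  for i in range(2)]
    return result

t = 0.5
At = [[1.0 * t, 2.0 * t], [0.0, 1.0 * t]]
closed = [[math.exp(t), 2.0 * t * math.exp(t)], [0.0, math.exp(t)]]
approx = series_exp(At)
print(all(abs(closed[i][j] - approx[i][j]) < 1e-9 for i in range(2) for j in range(2)))
# prints True
```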

gkirkland said:
Sorry for the format, I don't know how to do tex code inline
Use itex instead of tex.
 