# Prove e^(A+B)=e^A*e^B*e^-k/2 if [A,B]=k

1. Jul 5, 2010

### Cruikshank

1. The problem statement, all variables and given/known data

Self study: Bransden and Joachain, Quantum Mechanics, problem 5.8, as written in the title above, with c a complex number and A and B matrices. I found the statement itself on Wikipedia, but no proof.

2. Relevant equations

I've used power series to prove e^(A+c)=e^A*e^c, and I checked [A-a,B-b]=[A,B]
I've written a lot of products of power series.

3. The attempt at a solution

Notation I use: (A+B)^n is the ordinary power, respecting order; (a+b)^n denotes the same expansion but with all A's written before all B's (normal ordering).
e.g.: (A+B)^2 = AA + AB +BA + BB, and (a+b)^2 = AA + 2AB + BB
(A+B)^0 = I
(A+B)^1 = A+B
(A+B)^2 = (a+b)^2 - k
(A+B)^3 = (a+b)^3 - 3k(A+B)
(A+B)^4 =(a+b)^4 - 6k(a+b)^2 + 3k^2
(A+B)^5 =(a+b)^5 - 10k(a+b)^3 + 15k^2(a+b)
(A+B)^6=(a+b)^6 - 15k(a+b)^4 + 45k^2(a+b)^2 - 15k^3

I have more, but error checking takes forever. I can tell the second coefficient is (n choose 2), and I have further partial patterns, but not the entire pattern figured out. I've tried writing the power series in lots of ways and made no progress. Since this was just one problem at the end of a chapter, I expect there is a simple solution I am missing, but I have hunted for a very long time and don't see how to make this happen, although the terms I have worked out make the theorem look fairly plausible. Any tips or pointers welcome.
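
As a sanity check before chasing the general coefficient pattern, the target identity can be tested numerically. No finite matrices satisfy [A,B] = kI with k ≠ 0 (a commutator is traceless), but the proof only needs the commutator to commute with A and B, and 3×3 strictly upper-triangular matrices provide exactly that. A minimal sketch (the entries 0.7 and 1.3 are arbitrary choices, and the commutator here is a central matrix K rather than a scalar):

```python
import numpy as np

def expm_nil(M):
    # exact exponential of a 3x3 strictly upper-triangular matrix:
    # M @ M @ M = 0, so the exponential series stops at the quadratic term
    return np.eye(3) + M + M @ M / 2

# [A, B] lands on the (1,3) entry and commutes with both A and B,
# so it plays the role of the central "constant" k.
A = 0.7 * np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])
B = 1.3 * np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 0.]])
K = A @ B - B @ A

lhs = expm_nil(A + B)
rhs = expm_nil(A) @ expm_nil(B) @ expm_nil(-K / 2)
print(np.allclose(lhs, rhs))  # True
```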

2. Jul 5, 2010

### Dickfore

Is the commutator k a constant (proportional to the identity matrix) or a general operator in itself?

3. Jul 5, 2010

### Cruikshank

k is a complex number, so equivalently the commutator is kI. I mistyped it as c above.

4. Jul 6, 2010

### Cruikshank

To add detail: I have tried summing over every "binary string" of A's and B's of length n. I have found BA^n=A^n*B-nkA^(n-1). I have attempted to create recurrence relations for the coefficients of (A+B)^n. I have not figured out how to group the k terms as a factor rather than a sum. I have tried writing a triple sum of 3 exponential power series, and have been unable to convert it to a single exponential. I have found (A+B)(a+b)^n=(a+b)^(n+1) - nk(a+b)^(n-1). The calculations are all very long.
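
The identity BA^n = A^n*B - nkA^(n-1) lends itself to a machine check: represent sums of words in A and B as {word: coefficient} dictionaries and apply the rewrite BA → AB − k until everything is normally ordered. A small sketch of that idea (the dict representation is just one convenient choice):

```python
import sympy as sp

k = sp.symbols('k')

def normal_order(expr):
    # expr is a {word: coefficient} dict over the alphabet {'A', 'B'}.
    # Repeatedly rewrite BA -> AB - k until every word is normally
    # ordered (all A's to the left of all B's).
    expr = dict(expr)
    while True:
        w = next((w for w in expr if 'BA' in w), None)
        if w is None:
            return {w: c for w, c in expr.items() if sp.simplify(c) != 0}
        c = expr.pop(w)
        i = w.index('BA')
        swapped = w[:i] + 'AB' + w[i + 2:]
        dropped = w[:i] + w[i + 2:]
        expr[swapped] = expr.get(swapped, 0) + c
        expr[dropped] = expr.get(dropped, 0) - k * c

# check B A^n = A^n B - n k A^(n-1) for small n
for n in range(1, 7):
    got = normal_order({'B' + 'A' * n: 1})
    assert got == {'A' * n + 'B': 1, 'A' * (n - 1): -n * k}, n
print("B A^n = A^n B - n k A^(n-1) holds for n = 1..6")
```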

5. Jul 6, 2010

### Cruikshank

Further: From Wikipedia, the Baker-Campbell-Hausdorff formula has this problem as a special case, but it is not proven there, and I do not have the references, unless this is buried in the depths of Arfken and Weber somewhere. If anyone has any suggestions, I would be grateful.

6. Jul 8, 2010

### weejee

Make replacements $$A, B \ \rightarrow \ \lambda A, \lambda B$$.
Then you can prove the theorem by taking successive derivatives with respect to $$\lambda$$.

7. Jul 8, 2010

### Cruikshank

Thank you for the hint, weejee. I've taken a derivative and don't yet see how it helps, but I am happy to have learned a new trick: "insert a new variable and differentiate with respect to it" is not something I would ever have thought of on my own. I'll keep staring at it and try to see how multiple derivatives could help any more than the first one. I'm not actually sure what I'm supposed to be taking the derivative of, unless it is the left and right sides of what I am trying to prove. I suppose one could demonstrate separately that the constant terms match, and get the result if one could prove, for example, that the 2nd derivatives of both sides match, but I still don't see the plan. I'll keep working on it. At least it is something new to try!

8. Jul 8, 2010

### Dickfore

Take the operator depending on two parameters:

$$f(s, t) \equiv \exp[s A + t B]$$

What are its partial derivatives with respect to $s$ and $t$?

EDIT:

Scratch that for now. One way of solving it is through the use of Feynman's operator-ordering labels (R. P. Feynman, Phys. Rev. 84, 108 (1951)). If you go through that paper, the following steps ought to make sense:

$$\exp(A + B) = \exp\left(\int_{0}^{1}{(A_{s} + B_{s}) \, ds}\right) = \exp\left(\int_{0}^{1}{A_{s } \, ds}\right) \, \exp\left(\int_{0}^{1}{B_{s} \, ds}\right)$$

Then, for the second exponential, we can perform Taylor expansion:

$$\exp\left(\int_{0}^{1}{B_{s} \, ds}\right) = 1 + \sum_{n = 1}^{\infty}{\frac{1}{n!} \, \left(\int_{0}^{1}{B_{s} \, ds}\right)^{n}} = 1 + \sum_{n = 1}^{\infty}{\frac{1}{n!} \, \int_{0}^{1}{\int_{0}^{1}{\ldots \int_{0}^{1}{dt_{n} \, dt_{n - 1} \, \ldots \, dt_{1} \, B_{t_{n}} \, B_{t_{n - 1}} \, \ldots \, B_{t_{1}}}}}}$$

We can always permute the dummy variables $\{t_{1}, \ldots, t_{n - 1}, t_{n}\}$ so that $t_{n} \ge t_{n - 1} \ge \ldots \ge t_{1}$, and then the order in which the operators are written is automatically the proper one. But then the interval of integration for the dummy variable $t_{k - 1}$, $2 \le k \le n$, is $[0, t_{k}]$, and $[0, 1]$ for $t_{n}$. There are $n!$ such permutations, which cancels the factor $1/n!$. Thus we can write:

$$\exp\left(\int_{0}^{1}{B_{s} \, ds}\right) = 1 + \sum_{n = 1}^{\infty}{\int_{0}^{1}{\int_{0}^{t_{n}}{\ldots \int_{0}^{t_{2}}{dt_{n} \, dt_{n - 1} \, \ldots \, dt_{1} \, B_{t_{n}} \, B_{t_{n - 1}} \, \ldots \, B_{t_{1}}}}}}$$

As for the factor $\exp\left(\int_{0}^{1}{A_{s} \, ds}\right)$, we divide it into $n + 1$ segments:

$$\exp\left(\int_{0}^{1}{A_{s} \, ds}\right) = \exp\left(\int_{t_{n}}^{1}{A_{s} \, ds}\right) \, \exp\left(\int_{t_{n - 1}}^{t_{n}}{A_{s} \, ds}\right) \, \ldots \, \exp\left(\int_{0}^{t_{1}}{A_{s} \, ds}\right)$$

and insert each factor in the proper place to ensure normal ordering. We have:

$$\exp\left(\int_{0}^{1}{(A_{s} + B_{s}) \, ds}\right) = \exp\left(\int_{0}^{1}{A_{s} \, ds}\right) + \sum_{n = 1}^{\infty}{\int_{0}^{1}{\int_{0}^{t_{n}}{\ldots \int_{0}^{t_{2}}{dt_{n} \, dt_{n - 1} \, \ldots \, dt_{1} \, \exp\left(\int_{t_{n}}^{1}{A_{s} \, ds}\right) \, B_{t_{n}} \, \exp\left(\int_{t_{n - 1}}^{t_{n}}{A_{s} \, ds}\right) \, B_{t_{n - 1}} \, \ldots}}}}$$
$$\times \, \exp\left(\int_{t_{1}}^{t_{2}}{A_{s} \, ds}\right) \, B_{t_{1}} \, \exp\left(\int_{0}^{t_{1}}{A_{s} \, ds}\right)$$

Everything is normal-ordered in the expression on the rhs and we can take away the labels on the operators. It is convenient to introduce:

$$f(t) \equiv e^{-t \, A} \, B \, e^{t \, A}$$

The expression can be rewritten as:

$$\exp(A + B) = e^{A} \, \left( 1 + \sum_{n = 1}^{\infty}{\int_{0}^{1}{\int_{0}^{t_{n}}{\ldots \int_{0}^{t_{2}}{dt_{n} \, dt_{n - 1} \, \ldots \, dt_{1} \, f(t_{n}) \, f(t_{n - 1}) \, \ldots \, f(t_{1})}}}}\right)$$

Next, use a corollary of the Baker-Campbell-Hausdorff theorem, as well as the fact that the commutator of A and B is a constant k, to prove:

$$f(t) = B - t \, k$$

Notice that:

$$\int_{0}^{t_{2}}{dt_{1} \, f(t_{1})} = B \, t_{2} - \frac{k}{2} \, t^{2}_{2} = t_{2} \, \left( B - \frac{k}{2} \, t_{2} \right)$$

$$\int_{0}^{t_{3}}{dt_{2} \, (B - t_{2} k) \, (B \, t_{2} - \frac{k}{2} \, t^{2}_{2})} = B^{2} \, \frac{t^{2}_{3}}{2} - \frac{k}{2} \, B \, t^{3}_{3} + \frac{k^{2}}{2 \cdot 4} \, t^{4}_{3} = \frac{t^{2}_{3}}{2} \, \left( B - \frac{k}{2} \, t_{3} \right)^{2}$$

Continue in this way until you see the pattern, prove it by mathematical induction, and then do some algebraic simplification to get the final result.
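
The pattern can be checked symbolically: since $k$ is a scalar, the factors $f(t_i) = B - t_i k$ all commute with one another, so ordinary (commuting) symbols suffice. The $n$-th nested integral comes out as $\frac{t^n}{n!} \left(B - \frac{k}{2} t\right)^n$; at $t = 1$ the $n$-th term is $\frac{1}{n!}(B - k/2)^n$, so the series sums to $e^{B - k/2}$ and hence $\exp(A+B) = e^A e^{B - k/2} = e^A e^B e^{-k/2}$. A sketch of the induction check in sympy:

```python
import sympy as sp

t, s, B, k = sp.symbols('t s B k')

# f(u) = e^{-uA} B e^{uA} = B - u k.  Because k is a scalar, the
# factors f(t_i) commute with one another, so commuting symbols are
# enough to verify the coefficient pattern.
f = lambda u: B - u * k

term = sp.Integer(1)                     # the n = 0 term
for n in range(1, 6):
    term = sp.integrate(f(s) * term.subs(t, s), (s, 0, t))
    pattern = t**n / sp.factorial(n) * (B - k * t / 2)**n
    assert sp.simplify(term - pattern) == 0, n
print("nested integral I_n(t) = t^n/n! (B - k t/2)^n for n = 1..5")
```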

Last edited: Jul 8, 2010
9. Jul 9, 2010

### Cruikshank

Excellent question; what IS (∂/∂m)e^(mA+nB)? One would think A*e^(mA+nB), but that would be wrong. I don't see how to find out what it actually is without resorting to power series, and I don't see an easy way to figure out how to convert A*(A+B)^c into (A+B)^c * A... that is essentially what I've been slogging through, one exponent at a time, trying to find the pattern. For example, (∂/∂m)(mA+nB)^2 = 2A*(mA+nB) - nk = 2(mA+nB)*A + nk, and that's one of the very easiest.

I'll try to get hold of a copy of that paper, but I'm not holding my breath on being able to understand it.

10. Jul 10, 2010

### Cruikshank

Update: By writing C=A+B, I found to my surprise that [A,C]=[C,B]=[A,B]=k.
From there, I used B*A^n=(A^n)*B - knA^(n-1) and its analogues
(all the work copies over because they have the same commutator!)
I got A*e^C=(e^C)*(A+k), and then (A^m)*(e^C) = (e^C)*(A+k)^m,
and using a power series again, I found (e^A)*(e^B) = (e^B)*(e^A) * e^k,
which is clearly very close to what I am trying to show.
I keep staring at that neat partial derivative of coefficients trick, trying to see
how to apply it here, but so far it eludes me.
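
Note the sign here: with the convention [A,B] = k one has A*C^n = C^n*A + nk*C^(n-1), which gives A*e^C = (e^C)*(A+k) and consequently (e^A)*(e^B) = (e^B)*(e^A)*e^k. A quick numerical way to pin the sign down, using 3×3 strictly upper-triangular matrices whose commutator K is a central matrix standing in for the scalar k:

```python
import numpy as np

def expm_nil(M):
    # exact exponential of a 3x3 strictly upper-triangular matrix
    # (M @ M @ M = 0, so the series stops at the quadratic term)
    return np.eye(3) + M + M @ M / 2

A = 0.7 * np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])
B = 1.3 * np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 0.]])
K = A @ B - B @ A              # central: commutes with A and B
C = A + B

# with the convention [A, B] = K:  A e^C = e^C (A + K)
assert np.allclose(A @ expm_nil(C), expm_nil(C) @ (A + K))
# and hence e^A e^B = e^B e^A e^K
assert np.allclose(expm_nil(A) @ expm_nil(B),
                   expm_nil(B) @ expm_nil(A) @ expm_nil(K))
print("A e^C = e^C (A + K)  and  e^A e^B = e^B e^A e^K")
```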

11. Jul 11, 2010

### Dickfore

Let

$$X = \alpha A + \beta B$$

and

$$Y = \gamma A + \delta B$$

$$[X, Y] = (\alpha \, \delta - \beta \, \gamma) \, [A, B] = k \, D, \; D = \alpha \, \delta - \beta \, \gamma$$

By induction, you can prove:
$$[X, Y^{n}] = n \, k \, D \, Y^{n - 1}, \; n \ge 1$$

from where it follows that:

$$[X, e^{Y}] = k \, D \, e^{Y}$$

Using BCH Theorem, we have:

$$e^{X} \, e^{Y} \, e^{-X} = e^{Y} + \frac{1}{1!} \, [X, e^{Y}] + \frac{1}{2!} \, [X, [X, e^{Y}]] + \ldots = \left(1 + \frac{k \, D}{1!} + \frac{(k \, D)^{2}}{2!} + \ldots \right) \, e^{Y}$$

$$e^{X} \, e^{Y} \, e^{-X} = e^{k \, D} \, e^{Y}$$

or

$$e^{X} \, e^{Y} = e^{k \, D} \, e^{Y} \, e^{X}$$
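
This commutation rule can be confirmed numerically with the same kind of strictly upper-triangular matrices (the central commutator is then a matrix $K$ rather than a scalar, and the coefficients $\alpha, \beta, \gamma, \delta$ below are arbitrary choices):

```python
import numpy as np

def expm_nil(M):
    # exact exponential of a 3x3 strictly upper-triangular matrix
    return np.eye(3) + M + M @ M / 2

A = 0.7 * np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])
B = 1.3 * np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 0.]])
K = A @ B - B @ A                    # stands in for the scalar k

alpha, beta, gamma, delta = 0.3, -1.1, 0.8, 0.5   # arbitrary
X = alpha * A + beta * B
Y = gamma * A + delta * B
D = alpha * delta - beta * gamma

lhs = expm_nil(X) @ expm_nil(Y)
rhs = expm_nil(D * K) @ expm_nil(Y) @ expm_nil(X)
print(np.allclose(lhs, rhs))  # True
```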

12. Jul 11, 2010

### weejee

$$e^{ \lambda (A + B)}=e^{\lambda A}e^{\lambda B}e^{-\lambda^2 [A,B]/2}$$
Both sides of the above are analytic functions of $$\lambda$$ in the whole complex plane. You can Taylor expand both sides around $$\lambda = 0$$ (conceptually, any point in the complex plane would work as the reference point) and compare the coefficients for each power of $$\lambda$$.
That is, if you can show that successive derivatives of both sides with respect to $$\lambda$$ evaluated at $$\lambda\,=\,0$$ are always equal to each other, the original theorem is proved.
Good luck!

13. Jul 11, 2010

### Dickfore

The operator function

$$f(\lambda) = e^{\lambda \, X}$$

is, by definition, a solution to the "initial value" problem:

$$\frac{d f(\lambda)}{d \lambda} = X \, f(\lambda) = f(\lambda) \, X, \; f(0) = 1$$

Take $X = A + B$ and rearrange to get:

$$f'(\lambda) - A \, f(\lambda) = B \, f(\lambda)$$

Now, we will use the trick of integrating factors. Multiply the both sides by some operator function $P^{-1}(\lambda)$:

$$P^{-1}(\lambda) \, f'(\lambda) - P^{-1}(\lambda) \, A \, f(\lambda) = P^{-1}(\lambda) \, B \, f(\lambda)$$

Then, we take the condition:

$$(P^{-1})'(\lambda) = -P^{-1}(\lambda) \, A, \; P^{-1}(0) = 1 \; \Rightarrow \; P^{-1}(\lambda) = e^{-\lambda \, A}$$

Then, by the product rule, the left-hand side is a total derivative:

$$\frac{d}{d \lambda} \left( P^{-1}(\lambda) \, f(\lambda) \right) = P^{-1}(\lambda) \, B \, f(\lambda)$$

Inserting a factor of $P(\lambda) \, P^{-1}(\lambda) = 1$ between the $B$ and $f(\lambda)$ on the rhs and denoting $g(\lambda) = P^{-1}(\lambda) \, f(\lambda)$:

$$g'(\lambda) = e^{-\lambda \, A} \, B \, e^{\lambda \, A} \, g(\lambda)$$

Using the BCH Lemma:

$$e^{-\lambda \, A} \, B \, e^{\lambda \, A} = B - \lambda \, k$$

and moving the $-\lambda \, k$ term to the lhs, we have:

$$g'(\lambda) + \lambda \, k \, g(\lambda) = B \, g(\lambda)$$

Use the trick with the integrating factor, to introduce:

$$Q^{-1}(\lambda) = e^{\frac{k \, \lambda^{2}}{2}}$$

Obviously, $Q^{-1}(\lambda)$ is a scalar and commutes with $B$, so:

$$\frac{d}{d \lambda} \left(Q^{-1}(\lambda) \, g(\lambda) \right) = Q^{-1}(\lambda) \, B \, g(\lambda) = B \, Q^{-1}(\lambda) \, g(\lambda)$$

$$h'(\lambda) = B \, h(\lambda), \; h(0) = 1, h(\lambda) = Q^{-1}(\lambda) \, g(\lambda) = Q^{-1}(\lambda) \, P^{-1}(\lambda) \, f(\lambda)$$

We are finally left with the same initial problem as we began with, so we can write:

$$h(\lambda) = e^{\lambda B}$$

Multiplying by $P(\lambda) \, Q(\lambda)$ from the left, we get:

$$f(\lambda) = e^{\lambda \, A} \, e^{-\frac{\lambda^{2}}{2} \, k} \, e^{\lambda B}$$

The last two factors commute, since $e^{-\frac{\lambda^{2}}{2} k}$ is a scalar, so you can finally write:

$$e^{\lambda \, (A + B)} = e^{\lambda \, A} \, e^{\lambda B} \, e^{-\frac{\lambda^{2}}{2} \, k}$$
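
Setting $\lambda = 1$ recovers the identity in the title. Both the intermediate form $e^{\lambda A} \, e^{-\lambda^2 k/2} \, e^{\lambda B}$ and the final one can be confirmed numerically for several $\lambda$, again using strictly upper-triangular matrices whose central commutator $K$ stands in for $k$:

```python
import numpy as np

def expm_nil(M):
    # exact exponential of a 3x3 strictly upper-triangular matrix
    return np.eye(3) + M + M @ M / 2

A = 0.7 * np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])
B = 1.3 * np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 0.]])
K = A @ B - B @ A              # central, stands in for the scalar k

for lam in (0.25, 1.0, -2.0):
    lhs = expm_nil(lam * (A + B))
    mid = expm_nil(lam * A) @ expm_nil(-lam**2 * K / 2) @ expm_nil(lam * B)
    rhs = expm_nil(lam * A) @ expm_nil(lam * B) @ expm_nil(-lam**2 * K / 2)
    assert np.allclose(lhs, mid) and np.allclose(lhs, rhs), lam
print("e^{lam(A+B)} = e^{lam A} e^{lam B} e^{-lam^2 K/2} for all tested lam")
```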

14. Jul 12, 2010

### ross_tang

@Cruikshank
I used your approach, that is power series expansion, and proved the statement.

http://www.voofie.com/content/102/how-to-prove-eab-e-lambda2-ea-eb/

$$(A+B)^n=\sum _{k=0}^n S_k^n(a+b)^k$$
$$\begin{cases} S_k^n = \frac{n! (-\lambda)^\frac{n-k}{2} }{2^\frac{n-k}{2} \left(\frac{n-k}{2}\right)! k!} & \text{ if } n-k \text{ is even} \\ S_k^n = 0 & \text{ otherwise } \end{cases}$$
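
This closed form for $S_k^n$ (with $\lambda$ written for the commutator $k$ of this thread) can be checked against a brute-force normal ordering of all $2^n$ orderings in $(A+B)^n$, using the rewrite BA → AB − λ. A sketch (the {word: coefficient} representation is an arbitrary choice):

```python
import sympy as sp
from itertools import product

lam = sp.symbols('lambda')   # lam stands for the commutator [A, B] = k

def normal_order(words):
    # {word: coefficient} over {'A','B'}; rewrite BA -> AB - lam
    # until all A's are to the left of all B's
    words = dict(words)
    while True:
        w = next((w for w in words if 'BA' in w), None)
        if w is None:
            return words
        c = words.pop(w)
        i = w.index('BA')
        words[w[:i] + 'AB' + w[i+2:]] = words.get(w[:i] + 'AB' + w[i+2:], 0) + c
        words[w[:i] + w[i+2:]] = words.get(w[:i] + w[i+2:], 0) - lam * c

def S(n, kk):
    # ross_tang's closed form for S_k^n
    if (n - kk) % 2:
        return sp.Integer(0)
    m = (n - kk) // 2
    return sp.factorial(n) * (-lam)**m / (2**m * sp.factorial(m) * sp.factorial(kk))

for n in range(7):
    # expand (A+B)^n over all 2^n orderings, then normal order
    got = normal_order({''.join(w): 1 for w in product('AB', repeat=n)})
    for kk in range(n + 1):
        for j in range(kk + 1):
            word = 'A' * j + 'B' * (kk - j)              # A^j B^(kk-j)
            predicted = S(n, kk) * sp.binomial(kk, j)    # from (a+b)^kk
            assert sp.simplify(got.get(word, 0) - predicted) == 0, (n, kk, j)
print("S_k^n verified against brute-force normal ordering for n = 0..6")
```

For instance S(2, 0) = −λ and S(4, 0) = 3λ², reproducing the −k and +3k² terms found by hand earlier in the thread.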