fresh_42 said:
2. Let ##A## and ##B## be complex ##n\times n## matrices such that ##AB-BA## is a linear combination of ##A## and ##B##. Show that ##A## and ##B## must have a common eigenvector. (IR)
Thank you to @Infrared for posing this question. It made me discover some holes in my understanding of even the simpler case where ##AB=BA##. (E.g., certain conditions on the intersection of the nullspaces of ##A## and ##B## must be met.)
Anyway, I'll start with a non-general partial solution to get things moving...
For this initial simple case, assume we are given $$ AB - BA = \alpha A + \beta B ~,~~~~~ (1)$$ where the scalar coefficients ##\alpha,\beta## are both non-zero (in fact only ##\beta \neq 0## is used below).
Let ##|a\rangle## be an eigenvector of ##A## with eigenvalue ##a##, i.e., ##A |a\rangle = a |a\rangle##. Then ##|a\rangle## is also an eigenvector of ##A+k## (shorthand for ##A+kI##), with eigenvalue ##a+k##.
Now, (1) implies $$0 ~=~ AB|a\rangle - BA|a\rangle - \alpha A |a\rangle - \beta B|a\rangle ~=~ AB|a\rangle - aB |a\rangle - a \alpha|a\rangle - \beta B|a\rangle~.~~~~~ (2)$$We need to recast (2) in the form: $$0 ~=~ A(B+v) |a\rangle ~-~ w (B+v) |a\rangle ~=~ AB|a\rangle + va|a\rangle - w (B+v) |a\rangle ~,~~~~~ (3)$$ because that means ##(B+v)|a\rangle## is an eigenvector of ##A## with eigenvalue ##w##, provided it is non-zero.
Rearranging (3) and comparing the coefficients of ##B|a\rangle## and ##|a\rangle## with those in (2), we find that we need ##w=a+\beta## and ##a\alpha = v(w-a) = v\beta##, which implies ##v = a\alpha/\beta##.
Therefore, ##(B+ a\alpha/\beta) |a\rangle## is an eigenvector of ##A## with eigenvalue ##(a+\beta)##, provided that vector is non-zero.
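As a quick numerical sanity check of the above (not part of the proof), here is a small numpy script. The matrices ##A,B## in it are my own choice for illustration, not from the original problem; they satisfy (1) with ##\alpha = -1## and ##\beta = 1##:

```python
import numpy as np

# A hypothetical test pair (my own choice, not from the thread):
# these satisfy AB - BA = alpha*A + beta*B with alpha = -1, beta = 1.
A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[1.0, 1.0], [0.0, 0.0]])
alpha, beta = -1.0, 1.0
assert np.allclose(A @ B - B @ A, alpha * A + beta * B)

# |a> = (0,1)^T is an eigenvector of A with eigenvalue a = 0.
a, ket_a = 0.0, np.array([0.0, 1.0])
assert np.allclose(A @ ket_a, a * ket_a)

# The shifted vector (B + (a*alpha/beta) I)|a> should be an
# eigenvector of A with eigenvalue w = a + beta ... if non-zero.
v, w = a * alpha / beta, a + beta
candidate = (B + v * np.eye(2)) @ ket_a
assert np.linalg.norm(candidate) > 0
assert np.allclose(A @ candidate, w * candidate)   # eigenvalue 1, as claimed

# Caveat: starting instead from the other eigenvector (1,0)^T (a = 1),
# the shifted vector comes out zero -- the non-zero proviso is not vacuous.
a2, ket_a2 = 1.0, np.array([1.0, 0.0])
print((B + (a2 * alpha / beta) * np.eye(2)) @ ket_a2)   # [0. 0.]
```

The final line also shows why the proviso matters: there ##a+\beta = 2## is not an eigenvalue of this ##A##, so the shifted vector has no choice but to vanish.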
[Edit: I have struck out the following paragraph. Anyone following the proof above should now skip straight to the later posts in this thread, by PeroK and subsequently by me, which complete the answer.]
The proof then continues (in the simple case of distinct ##A##-eigenvectors) by the standard technique of realizing that the vector ##(B+ a\alpha/\beta) |a\rangle## must be a multiple of an ##A##-eigenvector, resulting (after a shift of the eigenvalue) in an eigenvector of ##B##.
A more general solution would require a condition on the intersection of the nullspaces of ##A## and ##B##, among other things (IIUC). As an illustration of the subtleties in the simpler case ##AB=BA##, consider $$A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} ~~~\mbox{and}~~ B = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} ~,$$ for which every non-zero vector is an eigenvector of ##B##, but ##\begin{pmatrix} 0 \\ 1 \end{pmatrix}## is not an eigenvector of ##A##, while ##\begin{pmatrix} 1 \\ 0 \end{pmatrix}## is an eigenvector of ##A## only for eigenvalue ##0## (it spans the nullspace of ##A##).
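For anyone who wants to poke at that example numerically, here is a tiny check (my own addition, just confirming the statements above):

```python
import numpy as np

# The commuting illustration from the text: nilpotent A, identity B.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.eye(2)
assert np.allclose(A @ B, B @ A)          # AB = BA

e1 = np.array([1.0, 0.0])                 # spans the nullspace of A
e2 = np.array([0.0, 1.0])

assert np.allclose(A @ e1, 0.0 * e1)      # eigenvector of A, eigenvalue 0
print(A @ e2)                             # [1. 0.] -- not parallel to e2,
                                          # so e2 is no eigenvector of A
```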
I'll leave it at that for now, and ask the question: how far did you [@Infrared] envisage that the answer should go? I'm guessing you'll require total completeness to qualify as a proper answer?
