strangerep
Science Advisor
PeroK said:
[...]
This generates an infinite sequence of distinct eigenvalues, unless for some ##k## we have:
$$v_{k+1} = \left[B + \frac{\lambda_k \alpha}{\beta} I\right] v_k = 0~,$$ in which case ##v_k## is a common eigenvector of ##A## and ##B + \frac{\lambda_k\, \alpha}{\beta} I##, hence also an eigenvector of ##B##.

A very minor improvement is that the equation above implies immediately that ##v_k## is an eigenvector of ##B## with eigenvalue ##-\lambda_k\, \alpha/\beta##.
I'd been thinking along related lines, but my logic was a bit different. In your terminology, we have $$A v_2 ~=~ (\lambda_1 + \beta) v_2 ~.$$ If ##A## does NOT have a (nonzero) eigenvector with that eigenvalue, then the only solution is ##v_2 = 0##, hence $$B v_1 = -\lambda_1 \alpha/\beta \; v_1$$ so ##v_1## is an eigenvector of ##B##.
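For completeness, here is a sketch of where the shifted eigenvalue ##\lambda_1 + \beta## comes from. It assumes the commutation relation ##[A,B] = \alpha A + \beta B## with ##\beta \neq 0## (the relation itself isn't quoted in this excerpt, so treat it as an assumption about the thread's setup):

```latex
% Sketch, assuming [A,B] = AB - BA = \alpha A + \beta B (not quoted above),
% with A v_1 = \lambda_1 v_1 and v_2 = (B + \lambda_1\alpha/\beta\, I)\, v_1:
\begin{align*}
A v_2 &= AB\, v_1 + \frac{\lambda_1\alpha}{\beta}\, A v_1 \\
      &= BA\, v_1 + [A,B]\, v_1 + \frac{\lambda_1^2\alpha}{\beta}\, v_1 \\
      &= \lambda_1 B v_1 + \alpha A v_1 + \beta B v_1
         + \frac{\lambda_1^2\alpha}{\beta}\, v_1 \\
      &= (\lambda_1 + \beta)\, B v_1
         + \frac{\lambda_1\alpha}{\beta}\,(\beta + \lambda_1)\, v_1 \\
      &= (\lambda_1 + \beta)\Big(B + \frac{\lambda_1\alpha}{\beta}\, I\Big) v_1
       ~=~ (\lambda_1 + \beta)\, v_2 \,.
\end{align*}
```

So each application of ##B + \lambda_k\alpha/\beta\, I## shifts the ##A##-eigenvalue by ##\beta##, which is why the sequence of eigenvalues is distinct when ##\beta \neq 0##.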
Otherwise, we act again with ##(B + \lambda_1 \alpha/\beta\, I)## on ##v_2##, giving a ##v_3## with a different constant, and we can apply the same reasoning as in the previous paragraph. This process must terminate because an ##n\times n## matrix has at most ##n## distinct eigenvalues.
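To make the termination argument concrete, here is a small numerical sketch. The underlying commutation relation is an assumption (this excerpt doesn't state it): the hypothetical ##2\times 2## pair below satisfies ##[A,B] = \alpha A + \beta B## with ##\alpha = 0##, ##\beta = 1##, and the chain terminates after a single step:

```python
import numpy as np

# Hypothetical 2x2 example; ASSUMES the relation [A, B] = alpha*A + beta*B,
# which this excerpt doesn't quote. This pair satisfies it with alpha=0, beta=1.
A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[0.0, 1.0], [0.0, 0.0]])
alpha, beta = 0.0, 1.0
assert np.allclose(A @ B - B @ A, alpha * A + beta * B)

# Start from an eigenvector of A:  A v1 = lam1 * v1  with lam1 = 0.
lam1 = 0.0
v1 = np.array([0.0, 1.0])
assert np.allclose(A @ v1, lam1 * v1)

# Act with (B + lam1*alpha/beta * I): the result v2 is either zero or an
# eigenvector of A with the shifted eigenvalue lam1 + beta.
v2 = (B + (lam1 * alpha / beta) * np.eye(2)) @ v1
assert np.allclose(A @ v2, (lam1 + beta) * v2)

# One more step: here v3 vanishes, so the chain terminates and v2 is a
# common eigenvector of A and B, with B v2 = -(lam2*alpha/beta) v2 = 0.
lam2 = lam1 + beta
v3 = (B + (lam2 * alpha / beta) * np.eye(2)) @ v2
assert np.allclose(v3, np.zeros(2))
assert np.allclose(B @ v2, -(lam2 * alpha / beta) * v2)
```

Of course the example is degenerate (##\alpha = 0## makes the shift operator just ##B##), but it shows the mechanics: each step either shifts the ##A##-eigenvalue by ##\beta## or kills the vector, and only finitely many shifts are possible.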
