Why Does (A-λ1I)(A-λ2I)x=0 Hold for Linear Combinations of Eigenvectors?

  • Thread starter: makris
  • Tags: Eigenvector
makris
Hi all,

I have the following question.

##A## = an ##n \times n## non-singular matrix
##I## = the ##n \times n## identity matrix
##\lambda_i## = the eigenvalues of ##A##, ##i = 1, 2, \dots, n##
##u_i## = the eigenvectors corresponding to these eigenvalues

It is true that

$$(A - \lambda_1 I)\, x = 0$$

is satisfied by any vector of the form ##x = a_1 u_1##, where ##a_1## is an arbitrary real number.

Lanczos, in his book Applied Analysis (p. 61), claims that the following quadratic equation in ##A##,

$$(A - \lambda_1 I)(A - \lambda_2 I)\, x = 0,$$

is satisfied by an arbitrary linear combination of the first two eigenvectors,

$$x = a_1 u_1 + a_2 u_2.$$

It is not very obvious to me why this happens.
(Extending this to include all ##n## eigenvectors and eigenvalues eventually leads to the so-called Cayley-Hamilton theorem.)

I was wondering if you could give me a hint, starting from first principles.

Thanks
 
The two factors commute:

$$(A - \lambda_1 I)(A - \lambda_2 I) = (A - \lambda_2 I)(A - \lambda_1 I).$$

So if

$$T = (A - \lambda_1 I)(A - \lambda_2 I) = (A - \lambda_2 I)(A - \lambda_1 I),$$

then ##T u_1 = (A - \lambda_2 I)(A - \lambda_1 I)\, u_1 = 0## and ##T u_2 = (A - \lambda_1 I)(A - \lambda_2 I)\, u_2 = 0##, since ##(A - \lambda_i I)\, u_i = 0##. Therefore

$$T(a_1 u_1) = a_1\, T(u_1) = 0,$$
$$T(a_2 u_2) = a_2\, T(u_2) = 0,$$

and hence, by linearity,

$$T(a_1 u_1 + a_2 u_2) = 0.$$
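
If it helps to see this numerically, here is a minimal sketch (assuming NumPy; the symmetric test matrix and the coefficients ##a_1, a_2## are arbitrary choices, not from Lanczos) that checks ##T(a_1 u_1 + a_2 u_2) = 0##:

```python
import numpy as np

# Arbitrary symmetric test matrix (eigenvalues 1, 2, 4).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eigvals, eigvecs = np.linalg.eigh(A)   # eigvecs[:, i] pairs with eigvals[i]
l1, l2 = eigvals[0], eigvals[1]
u1, u2 = eigvecs[:, 0], eigvecs[:, 1]
I = np.eye(3)

# T = (A - l1*I)(A - l2*I); the factors commute.
T = (A - l1 * I) @ (A - l2 * I)

a1, a2 = 1.7, -0.4                     # arbitrary real coefficients
x = a1 * u1 + a2 * u2
print(np.allclose(T @ x, 0))           # True (up to floating-point error)
```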
 
Thank you for sharing your question and your thoughts on the eigenvector problem. The first equation you mention is the eigenvalue equation: the matrix ##A## applied to an eigenvector ##u## gives a scalar multiple of that same eigenvector, the scalar being the eigenvalue ##\lambda##. This can be rewritten as ##(A - \lambda I)u = 0##, where ##I## is the identity matrix. This equation has a non-trivial solution only when ##\det(A - \lambda I) = 0##, which leads to the characteristic polynomial of ##A##.
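
As a quick illustration of that last point (a sketch assuming NumPy; the 2×2 matrix is just a made-up example), the roots of the characteristic polynomial coincide with the eigenvalues:

```python
import numpy as np

# Made-up 2x2 example with eigenvalues 2 and 5.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

coeffs = np.poly(A)                    # characteristic polynomial coefficients
print(np.sort(np.roots(coeffs)))       # [2. 5.]
print(np.sort(np.linalg.eigvals(A)))   # [2. 5.] -- the same values
```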

In the second equation, ##(A - \lambda_1 I)(A - \lambda_2 I)x = 0##, the key point is that the two factors commute, since both are polynomials in ##A##. Applying the product to ##u_2##, the factor ##(A - \lambda_2 I)## annihilates it; applying it to ##u_1##, we may first swap the factors so that ##(A - \lambda_1 I)## acts on ##u_1## and annihilates it. The product therefore sends both ##u_1## and ##u_2## to zero, and since matrix multiplication is linear, it sends every linear combination ##x = a_1 u_1 + a_2 u_2## to zero as well. (Note that the product is not, in general, the zero matrix; it merely annihilates the subspace spanned by ##u_1## and ##u_2##.)

Extending this to include all ##n## eigenvalues gives the product ##(A - \lambda_1 I)(A - \lambda_2 I)\cdots(A - \lambda_n I)##, which annihilates every eigenvector and hence, when the eigenvectors span the whole space, every vector. This is the Cayley-Hamilton theorem: every square matrix satisfies its own characteristic polynomial. (The theorem holds for all square matrices, even non-diagonalizable ones, but this argument gives it directly in the diagonalizable case.)
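
Here is a minimal numerical check of the Cayley-Hamilton theorem itself (a sketch assuming NumPy; the 3×3 matrix is again an arbitrary example): substituting ##A## into its own characteristic polynomial yields the zero matrix.

```python
import numpy as np

# Arbitrary test matrix (characteristic polynomial t^3 - 7t^2 + 14t - 8).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

coeffs = np.poly(A)            # coefficients c0, c1, ..., cn of the char. poly
n = A.shape[0]

# Evaluate p(A) = c0*A^n + c1*A^(n-1) + ... + cn*I via Horner's scheme.
P = np.zeros_like(A)
for c in coeffs:
    P = P @ A + c * np.eye(n)

print(np.allclose(P, 0))       # True: A satisfies its characteristic polynomial
```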

I hope this helps to clarify the relationship between the two equations and how they can be extended to include multiple eigenvectors. It is a fundamental concept in linear algebra and is essential in solving many problems in mathematics and physics. Best of luck in your studies!
 