Theorem: Let A be in [tex]M_{n \times n}(F)[/tex]. Then a scalar [tex]\lambda[/tex] is an eigenvalue of A if and only if [tex]\det(A - \lambda I_n) = 0[/tex].
Proof: A scalar lambda is an eigenvalue of A if and only if there exists a nonzero vector v in F^n such that Av = lambda*v, that is, (A - \lambda I_n)(v) = 0. By Theorem 2.5, this is true if and only if A - \lambda I_n is not invertible (since it's not 1-1 or onto). However, this result is equivalent to the statement that det(A - \lambda I_n) = 0.
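Just to keep things concrete, here is a small 2x2 example I worked out myself (not from the book) to see what the theorem is saying:
[tex]A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}, \qquad \det(A - \lambda I_2) = \det\begin{pmatrix} 2-\lambda & 1 \\ 1 & 2-\lambda \end{pmatrix} = (2-\lambda)^2 - 1 = (\lambda - 1)(\lambda - 3).[/tex]
So the eigenvalues are lambda = 1 and lambda = 3, and indeed A - 1*I_2 and A - 3*I_2 each send a nonzero vector to 0 (for instance (1, -1) and (1, 1) respectively).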
Question: Can someone please explain to me how something not being invertible implies that the determinant of such a "thing" is equal to 0? In particular: "By Theorem 2.5, this is true if and only if A - \lambda I_n is not invertible (since it's not 1-1 or onto). However, this result is equivalent to the statement that det(A - \lambda I_n) = 0."
Theorem 2.5: Let T: V --> W be a linear transformation, where V and W are vector spaces of equal finite dimension. Then T is 1-1 <==> T is onto <==> rank(T) = dim(V).
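For what it's worth, here is the concrete 2x2 case I tried while thinking about this (again my own example, not from the text): a matrix whose columns are dependent kills a nonzero vector, so it's not 1-1, and its determinant also comes out to 0.
[tex]B = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}, \qquad B\begin{pmatrix} 2 \\ -1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \qquad \det B = 1\cdot 4 - 2\cdot 2 = 0.[/tex]
So B is not 1-1, hence not invertible by Theorem 2.5, and at the same time det B = 0. I can see it happening in this example, but I don't see why it must hold in general.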
Thanks a lot,
JL