Showing that the normalized eigenvector for a distinct eigenvalue is unique

randomafk
Hey guys,

I've been trying to brush up on my linear algebra and ran into this bit of confusion.

I just went through a proof that an operator with distinct eigenvalues has a basis of linearly independent eigenvectors.

But the proof relied on a one-to-one mapping of eigenvalues to eigenvectors. Is there any particular reason why, for a distinct eigenvalue, there shouldn't be more than one (normalized) eigenvector that satisfies the eigenvalue definition?

And if so, how do I prove it? I'm not intuitively convinced, even if it is the case, as the proofs seem to indicate!

Thanks!
 
randomafk said:
Hey guys,

I've been trying to brush up on my linear algebra and ran into this bit of confusion.

I just went through a proof that an operator with distinct eigenvalues has a basis of linearly independent eigenvectors.

But the proof relied on a one-to-one mapping of eigenvalues to eigenvectors. Is there any particular reason why, for a distinct eigenvalue, there shouldn't be more than one (normalized) eigenvector that satisfies the eigenvalue definition?
No, there isn't any reason, and unless that proof was dealing with a special situation (such as an n by n matrix having n distinct eigenvalues), it isn't true that there is a "one-to-one mapping of eigenvalues to eigenvectors". For example, the diagonal matrix
$$\begin{pmatrix}2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2\end{pmatrix}$$
has the single eigenvalue 2, but every nonzero vector is an eigenvector. Even requiring normalization, every unit vector in every direction is an eigenvector.

Now, if you mean, not just "distinct eigenvalues" but "n distinct eigenvalues for an n by n matrix", yes that is true. It follows from the fact that eigenvectors corresponding to distinct eigenvalues are independent. If matrix A is n by n, it acts on an n dimensional space. If A has n distinct eigenvalues, then it has n independent eigenvectors which form a basis for the space. There is no "room" for any other eigenvectors.
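
Here is a minimal sketch of that independence fact for two eigenvectors; the general case follows by the same idea, using induction on the number of eigenvalues. Suppose ##Av_1 = \lambda_1 v_1## and ##Av_2 = \lambda_2 v_2## with ##v_1, v_2 \neq 0## and ##\lambda_1 \neq \lambda_2##, and suppose
$$c_1 v_1 + c_2 v_2 = 0.$$
Applying ##A## gives ##c_1\lambda_1 v_1 + c_2\lambda_2 v_2 = 0##. Subtracting ##\lambda_2## times the first equation leaves
$$c_1(\lambda_1 - \lambda_2)v_1 = 0,$$
so ##c_1 = 0## because ##\lambda_1 \neq \lambda_2## and ##v_1 \neq 0##, and then ##c_2 = 0## as well. Hence ##v_1## and ##v_2## are linearly independent.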

And if so, how do I prove it? I'm not intuitively convinced, even if it is the case, as the proofs seem to indicate!

Thanks!
 
HallsofIvy said:
For example the diagonal matrix
$$\begin{pmatrix}2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2\end{pmatrix}$$
has the single eigenvalue 2, but every nonzero vector is an eigenvector.

I don't think most linear algebraists would call that a "single" eigenvalue, any more than you would say that the equation ##(x - 2)^3 = 0## has only a "single" root (and of course the two statements are closely related).

I think the OP's question is about an eigenvalue with multiplicity one, which is what "distinct" means, IMO.
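
To make the analogy concrete, the characteristic polynomial of that diagonal matrix is
$$\det\begin{pmatrix}2 - x & 0 & 0 \\ 0 & 2 - x & 0 \\ 0 & 0 & 2 - x\end{pmatrix} = (2 - x)^3,$$
so the eigenvalue ##2## is a root of multiplicity three, exactly like the triple root of ##(x - 2)^3 = 0##.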
 
HallsofIvy said:
No, there isn't any reason, and unless that proof was dealing with a special situation (such as an n by n matrix having n distinct eigenvalues), it isn't true that there is a "one-to-one mapping of eigenvalues to eigenvectors". For example, the diagonal matrix
$$\begin{pmatrix}2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2\end{pmatrix}$$
has the single eigenvalue 2, but every nonzero vector is an eigenvector. Even requiring normalization, every unit vector in every direction is an eigenvector.

Now, if you mean, not just "distinct eigenvalues" but "n distinct eigenvalues for an n by n matrix", yes that is true. It follows from the fact that eigenvectors corresponding to distinct eigenvalues are independent. If matrix A is n by n, it acts on an n dimensional space. If A has n distinct eigenvalues, then it has n independent eigenvectors which form a basis for the space. There is no "room" for any other eigenvectors.

Oops. Sorry for the vague language, but when I said distinct eigenvalues I did indeed mean multiplicity of 1!

But anyway, where does that fact follow from?
My understanding of the proof that eigenvectors of distinct eigenvalues are independent is something like this (in the special case of n distinct eigenvalues):
1) The eigenvectors for each eigenvalue ##\lambda## span the null space of ##A - \lambda I##
2) There are n eigenvectors since there are n distinct eigenvalues
3) Since n equals the dimension of the space, they must all be independent and form a basis

but step 2 assumes that each eigenvalue produces a single eigenvector
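
For what it's worth, here is one standard way to see why a multiplicity-1 eigenvalue gives only one eigenvector direction, sketched for an ##n \times n## matrix ##A## with ##\lambda## an eigenvalue of algebraic multiplicity ##1##. Suppose ##\dim\ker(A - \lambda I) = k \geq 2##, pick a basis ##v_1, \dots, v_k## of that eigenspace, and extend it to a basis of the whole space. In that basis ##A## takes the block form
$$\begin{pmatrix}\lambda I_k & * \\ 0 & B\end{pmatrix},$$
so the characteristic polynomial factors as
$$\det(A - xI) = (\lambda - x)^k \det(B - xI),$$
making ##\lambda## a root of multiplicity at least ##k \geq 2##, a contradiction. So ##\ker(A - \lambda I)## is one-dimensional: every eigenvector for ##\lambda## is a scalar multiple of a single vector ##v##, and the normalized eigenvector is unique up to a sign (or a phase, over the complex numbers). That is exactly the input step 2 needs.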
 