Diagonalization, which eigenvector is found?

FredMadison
Hi!

This might be a silly question, but I can't seem to figure it out and have not found any remarks on it in the literature.

When diagonalizing an ##N \times N## matrix ##A##, we solve the characteristic equation

$$\det(A - mI) = 0,$$

which gives us the ##N## eigenvalues ##m##. Then, to find the eigenvectors ##v## of ##A##, we solve the eigenvalue problem

$$Av = mv.$$

Now, any scalar multiple of an eigenvector of A is itself an eigenvector with the same eigenvalue. So, which eigenvector do we find when solving the eigenvalue problem? It can't be totally random, can it? Is there a way of determining which of the infinitude of eigenvectors (all with the same "direction") the algorithm chooses?
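
To make this concrete, here is a small illustration (assuming NumPy as the solver; the matrix is just an example):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Any multiple of (1, 1) is an eigenvector with eigenvalue 1, and any
# multiple of (1, -1) is an eigenvector with eigenvalue -1. The solver
# nevertheless returns one specific representative from each line.
eigenvalues, V = np.linalg.eig(A)
print(eigenvalues)  # e.g. [ 1. -1.]
print(V)            # columns: the particular representatives chosen
```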
 
Usually one of three things is done:

1. Normalize the vector so that ##\|v\| = 1##.
2. Make the smallest entry other than zero equal to 1.
3. Make the largest entry equal to 1.

$$\begin{align*}
(1) \Longrightarrow v_0 &= \frac{1}{\sqrt{v^*v}}\,v\\
(2) \Longrightarrow v_0 &= \frac{1}{\min_{v_i \neq 0}(v_i)}\,v\\
(3) \Longrightarrow v_0 &= \frac{1}{\max_i(v_i)}\,v
\end{align*}$$

It is a choice... you can always come up with something else yourself.
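
As a concrete sketch of these three conventions (assuming Python/NumPy; the matrix `A` is arbitrary, and for (2) and (3) the entry is picked by magnitude to sidestep sign issues, which is one possible reading of the convention):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, V = np.linalg.eig(A)  # eigenvectors are the columns of V
v = V[:, 0]                        # representative for the first eigenvalue

# (1) unit Euclidean norm
v1 = v / np.linalg.norm(v)

# (2) smallest nonzero entry (by magnitude here) scaled to 1
nonzero = v[~np.isclose(v, 0.0)]
v2 = v / nonzero[np.argmin(np.abs(nonzero))]

# (3) largest entry (by magnitude here) scaled to 1
v3 = v / v[np.argmax(np.abs(v))]

# all three rescalings are still eigenvectors for the same eigenvalue
for w in (v1, v2, v3):
    assert np.allclose(A @ w, eigenvalues[0] * w)
```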
 
Thanks trambolin for your answer. However, that was not really my question. I'm aware that this is what one usually does once the eigenvectors have been found.

My problem is this: When we solve

$$Av = mv \qquad (1)$$

for the eigenvectors ##v##, we find a set of ##N## eigenvectors. But these are not unique. How does equation (1) "choose" which eigenvector to "return" from the one-dimensional subspace it spans, since all of them are equally valid?
 
Equation (1) doesn't "choose" any specific eigenvector. What typically happens is that you get something like ##y = 2x##, ##z = 3x##; that is, all but one of the components are expressed in terms of a single free variable (in the case that there is only one independent eigenvector corresponding to each eigenvalue), so that the eigenvector is of the form ##\langle x, 2x, 3x\rangle = x\langle 1, 2, 3\rangle##. You then choose a value of ##x##.

As far as the diagonalization is concerned, it doesn't matter which you choose: using any of the eigenvectors corresponding to the eigenvalues as the columns of ##P## will still give you a matrix such that ##P^{-1}AP## is diagonal.
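
A quick numerical check of that last point (a minimal sketch assuming NumPy; the matrix and the rescaling factors are arbitrary choices for illustration):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, P = np.linalg.eig(A)
# P^{-1} A P is diagonal, with the eigenvalues on the diagonal
assert np.allclose(np.linalg.inv(P) @ A @ P, np.diag(eigenvalues))

# Rescale each eigenvector (column of P) by an arbitrary nonzero factor:
# the result still diagonalizes A to the same diagonal matrix.
Q = P @ np.diag([3.0, -0.5])  # scales column 0 by 3, column 1 by -0.5
assert np.allclose(np.linalg.inv(Q) @ A @ Q, np.diag(eigenvalues))
```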
 
Of course! It WAS a silly question. This is what happens when you stop doing things by hand and start relying too much on Mathematica to do the thinking for you.

Thanks guys!
 