What Defines Mainvectors in the Context of Eigenvalue Multiplicities?

  • Thread starter: yeus
  • Tags: Definition
yeus
When there is a difference between the algebraic and geometric multiplicity of an eigenvalue, mainvectors are used to handle that difference. A mainvector is defined as a solution ##v## of the equation ##(A-\lambda E)^k v = 0##, where ##k## is the multiplicity of the eigenvalue ##\lambda##. Now my question is: why do you use the ##k##-th power of the defining equation of an eigenvector to search for a mainvector? How do you get from the fact that there is a difference between algebraic and geometric multiplicity to that equation? Thanks for any answers.

(I apologize for not knowing many mathematical expressions in English, but I hope you understand my problem.)
 
So sad... no one wants to answer my question ;) Please, guys.
 
Perhaps I can explain why there have been no answers:

Google for "mainvector" and see how many hits you get, or at least how many have to do with mathematics.

One confusing point is that you say this has to do with the difference between algebraic and geometric multiplicities, but then you state that k is "the multiplicity". Well, which is it? Algebraic or geometric?

Why not run through it with the example of the matrix

$$\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$$

so that the algebraic multiplicity of the eigenvalue 1 is 2 but its geometric multiplicity is 1.

Should k be one or two?
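For a concrete check, here is a minimal numpy sketch (my own addition, not from the original thread) comparing the two kernel dimensions for this matrix:

```python
import numpy as np

# The example matrix: eigenvalue 1 has algebraic multiplicity 2.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
I = np.eye(2)

# Geometric multiplicity: dimension of ker(A - 1*I).
nullity_1 = 2 - np.linalg.matrix_rank(A - I)

# Dimension of ker((A - 1*I)^2): here (A - I)^2 is the zero
# matrix, so every vector is a "generalized" solution.
nullity_2 = 2 - np.linalg.matrix_rank((A - I) @ (A - I))

print(nullity_1)  # 1  (only one independent eigenvector)
print(nullity_2)  # 2  (matches the algebraic multiplicity)
```

So with k = 2, the algebraic multiplicity, the solution space of (A - I)^k v = 0 has the full dimension 2, which the ordinary eigenvectors alone cannot supply.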

I think you may need the word "principal" instead of "main", and that what you're getting at is the difference between an eigenspace and a generalized eigenspace.

If A is a linear map and t an eigenvalue, then an eigenvector is a nonzero vector v such that (A-t)v=0. Sometimes we can find a basis of eigenvectors, but usually not. The next best thing we can do is, instead of diagonalizing the matrix, put it into Jordan form, say by choosing a breakdown of the vector space into *generalized eigenspaces*, that is, sets of vectors on which (A-t)^k vanishes for some k. We can then write V as a direct sum of subspaces on which A acts as


$$\begin{pmatrix} t & 1 & 0 & 0 & \cdots \\ 0 & t & 1 & 0 & \cdots \\ 0 & 0 & t & 1 & \cdots \\ 0 & 0 & 0 & t & \cdots \\ \vdots & & & & \ddots \end{pmatrix}$$
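To make the chain construction concrete, here is a small numpy sketch (my own addition; the 2×2 matrix is a made-up example with a double eigenvalue 3, not taken from the thread) that pairs one ordinary eigenvector with one generalized eigenvector and conjugates A into a Jordan block of the shape above:

```python
import numpy as np

# Made-up example: characteristic polynomial (x - 3)^2, so t = 3
# is a double eigenvalue, but ker(A - 3I) is only 1-dimensional.
A = np.array([[2.0, 1.0],
              [-1.0, 4.0]])
t = 3.0
N = A - t * np.eye(2)       # nilpotent on the generalized eigenspace

v = np.array([1.0, 1.0])    # ordinary eigenvector:     N @ v = 0
w = np.array([0.0, 1.0])    # generalized eigenvector:  N @ w = v

# Change of basis to the Jordan chain (v, w).
P = np.column_stack([v, w])
J = np.linalg.inv(P) @ A @ P
print(J)                    # [[3. 1.]
                            #  [0. 3.]]  -- a 2x2 Jordan block
```

The pair (v, w) is a basis of the generalized eigenspace even though only v is an eigenvector in the ordinary sense; this is the sense in which Jordan form is the "next best thing" to diagonalization.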
 
matt grime said:
The next best thing we can do is, instead of diagonalizing the matrix, put it into Jordan form, say by choosing a breakdown of the vector space into *generalized eigenspaces*, that is, sets of vectors on which (A-t)^k vanishes for some k. We can then write V as a direct sum of subspaces on which A acts as


$$\begin{pmatrix} t & 1 & 0 & 0 & \cdots \\ 0 & t & 1 & 0 & \cdots \\ 0 & 0 & t & 1 & \cdots \\ 0 & 0 & 0 & t & \cdots \\ \vdots & & & & \ddots \end{pmatrix}$$

Well... first of all, thanks for your answer. It helped me a bit, but the quoted part, that's exactly the part I don't understand: why can you find a basis of eigenvectors in the generalized eigenspace? I may have a bug in my brain regarding this, forgive me ;)
 