Reduction of endomorphisms

In summary, the theorem states that an endomorphism of a finite dimensional vector space over a subfield of the complex numbers is diagonalizable if and only if it is annihilated by a polynomial that can be written as a product of degree-1 factors with pairwise distinct roots. The direction discussed in the thread, that the existence of such a polynomial implies diagonalizability, is proved by showing that the vector space is the direct sum of the eigenspaces of the endomorphism; this is achieved by constructing suitable polynomials.
  • #1
geoffrey159
Hello, I am studying reduction of endomorphisms and I came across a theorem that I can't understand completely. It states that:

Theorem: Let ##E## be a finite dimensional vector space over ##K##, where ##K## is a subfield of ##\mathbb{C}##, and let ##f## be an endomorphism of ##E##. Then ##f## is diagonalizable if and only if there exists a polynomial ##P \in K[X]## such that ##P(f) = 0## and ##P## can be written as a product of polynomials of degree 1 whose roots all have multiplicity 1.

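To make the statement concrete, here is a small Python/sympy sketch (a toy example of my own, not from the course) checking both directions on ##2\times 2## matrices: a matrix annihilated by ##(X-1)(X-2)## is diagonalizable, while a Jordan block, whose annihilating polynomials all have ##1## as a repeated root, is not.

```python
from sympy import Matrix, eye

# A is annihilated by P(X) = (X - 1)(X - 2): split, with simple roots.
A = Matrix([[2, 1],
            [0, 1]])
I2 = eye(2)
print((A - I2) * (A - 2 * I2))   # zero matrix, i.e. P(A) = 0
print(A.is_diagonalizable())     # True, as the theorem predicts

# B is a Jordan block: (B - I)^2 = 0 but B - I != 0, so any polynomial
# annihilating B has 1 as a root of multiplicity >= 2, and B is not diagonalizable.
B = Matrix([[1, 1],
            [0, 1]])
print((B - I2) ** 2)             # zero matrix
print(B.is_diagonalizable())     # False
```
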
I understand the proof I have for one direction, but the proof of the converse (that the existence of such a polynomial implies that ##f## is diagonalizable) is hard for me to follow, so I tried an alternative approach. Could you please tell me whether this is correct:

##\Leftarrow ## ) Assume that there exist distinct ##\lambda_i##'s for ##i = 1...p##, and a polynomial ##P## of the form ##P = a \prod_{i=1}^p (X - \lambda_i) ## such that ##P(f) = 0##.
I want to show that ##E = \bigoplus_{\lambda \in \text{Sp}(f) } E_{f,\lambda}##, which is a necessary and sufficient condition for diagonalizability.
We have that for any ##x\in E-\{0\}##, ## P(f)(x) = 0##. So there exists ##i \in \{ 1...p \}## such that ##f(x) = \lambda_i x##, and ##x## belongs to the eigenspace ##E_{f,\lambda_i}##. Therefore ##x\in \bigoplus_{\lambda \in \text{Sp}(f) } E_{f,\lambda}##, and ##E \subset \bigoplus_{\lambda \in \text{Sp}(f) } E_{f,\lambda}##. The other inclusion is trivial. So ##f## is diagonalizable.
 
  • #2
geoffrey159 said:
We have that for any ##x\in E-\{0\}##, ## P(f)(x) = 0##. So there exists ##i \in \{ 1...p \}## such that ##f(x) = \lambda_i x##, and ##x## belongs to the eigenspace ##E_{f,\lambda_i}##.
I may be missing the obvious, but I don't understand this step. How can it be that each element of E is an eigenvector?
 
  • #3
If ##P(f) = 0##, then ##P(f)(x) = 0 ## for all ##x\in E##.
  1. If ##x = 0## then ##x \in E \cap \bigoplus_{\lambda \in \text{Sp}(f) } E_{f,\lambda}##
  2. If ##x\neq 0##, then ## 0 = P(f)(x) = a\prod_{i = 1}^p (f(x) - \lambda_i x)##. So at least one term in the product is equal to 0. Therefore, there exists ##i## such that ##f(x) - \lambda_i x = 0##, which means that ##x## belongs to the eigenspace associated to ##f## and eigenvalue ##\lambda_i##.
 
  • #4
I'm very confused, and probably wasting your time, but let us take a trivial example.
##E=\mathbb{C}^2##, and ##f## the endomorphism represented (with the canonical basis) by the matrix ##\begin{pmatrix}1 & 0 \\ 0 & 2 \end{pmatrix}##.
So ##f(a,b)=(a,2b)##, and ##P(X)=(X-1)(X-2)##.
Take ##x=(1,1)##. Then ##f(x)=(1,2)##, so ##x## is not an eigenvector of ##f##.
But ##P(f)(x)=f(f(x))-3f(x)+2x=f(1,2)-3(1,2)+2(1,1)=(1,4)-(3,6)+(2,2)=(0,0)##.
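A quick sympy sketch of this computation (sympy is just my choice of tool here, not something the thread relies on):

```python
from sympy import Matrix

# f(a, b) = (a, 2b) in the canonical basis, P(X) = (X - 1)(X - 2), x = (1, 1).
F = Matrix([[1, 0],
            [0, 2]])
x = Matrix([1, 1])

fx = F * x
print(fx.T)                         # (1, 2): not proportional to x, so x is not an eigenvector
print((F * fx - 3 * fx + 2 * x).T)  # (0, 0): yet P(f)(x) = f(f(x)) - 3 f(x) + 2 x = 0
```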
 
  • #5
Oops, sorry, it's me who's wasting your time; I realize I am completely confused by the notation. I need to redo this. Sorry
 
  • #6
proofreading
 
  • #7
geoffrey159 said:
Developing the polynomial ##Q(f)## as a sum, and setting ##n = \text{Card}(\text{Sp}(f))##, we see that for any vector ##x\in E##, the family ## (x,f(x),f^2(x),...,f^n(x))## is linearly dependent in ##E##. Therefore ## \text{dim}(E) \le n ##.
I'm somewhat troubled by this step, but maybe I'm wrong about the notation.
If I understood it correctly, ##\text{Sp}(f)## is the set of all eigenvalues of ##f##.

If that is indeed the case, ##n = \text{Card}(\text{Sp}(f))## is the number of distinct eigenvalues of ##f##. You claim that ## \text{dim}(E) \le n ##. But that can't be true in general, because an endomorphism can have fewer distinct eigenvalues than the dimension of the vector space, and still be diagonalizable.
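The simplest instance of this is probably the identity map, sketched below with sympy (my own illustration): it is already diagonal, yet it has only one distinct eigenvalue on a 2-dimensional space.

```python
from sympy import eye

# The identity on a 2-dimensional space: Card(Sp(f)) = 1 while dim(E) = 2,
# and it is of course diagonalizable (it is already diagonal).
f = eye(2)
print(f.eigenvals())          # {1: 2}: one distinct eigenvalue, with multiplicity 2
print(f.is_diagonalizable())  # True
```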

I also don't immediately see why the linear dependence of the family ## (x,f(x),f^2(x),...,f^n(x))## for all ##x## implies that ## \text{dim}(E) \le n ##.
 
  • #8
Lol, this is exactly the reason why I deleted my post :-) I need to work more on this. Thank you for your help
 
  • #9
geoffrey159 said:
Lol, this is exactly the reason why I deleted my post :-) I need to work more on this. Thank you for your help
Ok, no problem. :)
 
  • #10
But I'm not giving up on this, I want to find the solution :-)
 
  • #11
Think I have it now.

In a previous post, we said that ##P(f) = 0## implies that the eigenvalues of ##f## are among the zeros of ##P##.
It then follows that ##Q(f) = 0##, where ##Q = \prod_{\lambda \in \text{Sp}(f)} (X - \lambda) ##: if ##\lambda_j## is a root of ##P## that is not an eigenvalue, then ##f - \lambda_j e## is injective, hence invertible since ##E## is finite dimensional, so that factor can be cancelled from ##P(f) = 0##; cancelling all such factors (and the nonzero constant ##a##) leaves ##Q(f) = 0##.

Now I want to show that ## E = \bigoplus_{\lambda\in\text{Sp}(f)} \text{Ker}(f-\lambda e) ##. The inclusion ##\supset## is trivial, and we now want to show ##\subset##. We need to prove that any ##x\in E## has a decomposition over the eigenspaces of ##f## in the form ##x = x_1 + ... + x_n ##, where ##x_i \in \text{Ker}(f-\lambda_i e) ##.

That would be done if we could find ##n## polynomials ##Q_k## such that the endomorphism ##Q_k(f)## sends any ##x\in E## into the ##k##-th eigenspace and, for some nonzero constant ##\beta##,
## 0 = Q(f) = \beta \big( e - \sum_{k=1}^n Q_k(f) \big), ##
so that ##e = \sum_{k=1}^n Q_k(f)## and every ##x## decomposes as ##x = \sum_{k=1}^n Q_k(f)(x)##.

So the following constraints must be satisfied:
  1. ##\text{Im}(Q_k(f)) \subset \text{Ker}(f-\lambda_k e) \iff f \circ Q_k(f) = \lambda_k \, Q_k(f) \iff ((X-\lambda_k ) Q_k)(f) =0 ##
  2. Existence of a non-zero constant ##\beta## such that ## Q = \beta \ (1 - \sum_{k=1}^n Q_k)##, and the ##Q_k##'s have the same degree as ##Q##.
So if ##Q_k## has the form ##Q_k = \alpha (X-\mu) \prod_{i\neq k} (X-\lambda_i)##, where ##\alpha,\mu## are constants to be determined, then the first constraint is satisfied, and ##Q_k## has the same degree as ##Q##. For the second constraint, we must ensure that ## 1 - \sum_{k=1}^n Q_k ## has the same roots as ##Q##. So it seems logical to determine ##\alpha,\mu## such that ##Q_k(\lambda_i) = \delta_{ik} \iff \alpha = \frac{1}{(\lambda_k-\mu) \prod_{i\neq k} (\lambda_k - \lambda_i)}##, with ##\mu## not an eigenvalue of ##f##.

Finally, there exists a constant ##\beta \neq 0## such that ## Q = \beta (1 - \sum_{k=1}^n \frac{X-\mu}{\lambda_k -\mu} \prod_{i\neq k} \frac{ X-\lambda_i}{ \lambda_k - \lambda_i})##, and for any ##x\in E##, ##P(f)(x) = 0 \iff Q(f)(x) = 0 \Rightarrow x \in \bigoplus_{\lambda\in \text{Sp}(f)} \text{Ker}(f-\lambda e) ##, which proves that ##f## is diagonalizable.
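As a sanity check of this construction, here is a small sympy sketch (the matrix, the eigenvalues ##1, 2## and the choice ##\mu = 0## are mine, not part of the proof): the operators ##Q_k(f)## sum to the identity and their images lie in the corresponding eigenspaces.

```python
from sympy import Matrix, eye, zeros

# Toy example: f has eigenvalues 1 and 2; mu = 0 is not an eigenvalue.
F = Matrix([[2, 1],
            [0, 1]])
I2 = eye(2)
lams = [1, 2]
mu = 0

def Qk_of_f(k):
    # Q_k(f) = (f - mu e)/(lambda_k - mu) * prod_{i != k} (f - lambda_i e)/(lambda_k - lambda_i)
    M = (F - mu * I2) / (lams[k] - mu)
    for i, lam in enumerate(lams):
        if i != k:
            M = M * (F - lam * I2) / (lams[k] - lam)
    return M

projectors = [Qk_of_f(k) for k in range(len(lams))]

print(sum(projectors, zeros(2, 2)))   # identity matrix: sum_k Q_k(f) = e
for lam, Pk in zip(lams, projectors):
    print((F - lam * I2) * Pk)        # zero matrix: the image of Q_k(f) lies in Ker(f - lam e)
```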

Is it correct now?
 
  • #12
Looks correct to me. I'm a little lost in your point 1) concerning the polynomials ##Q_k##, but the formula for these polynomials fits the bill.
 
  • #13
Finally! That made me sweat ;-) Thank you very much Samy for your patience
 

What is the concept of reduction of endomorphisms?

Reduction of endomorphisms is the part of linear algebra concerned with simplifying a linear map from a finite dimensional vector space to itself, by finding a basis in which its matrix takes a particularly simple form, such as a diagonal or block-diagonal matrix. It is used throughout mathematics to study the structure of linear maps and of the objects they act on.

Why is reduction of endomorphisms important in mathematics?

Reduction of endomorphisms is important because it reveals the structure of a linear map: in a well-chosen basis its action becomes transparent. It also helps to solve complex problems, such as computing powers of an operator or solving systems of linear differential equations, by breaking them down into simpler pieces.

What is the difference between reduction of endomorphisms and reduction of matrices?

While both involve simplifying a mathematical object, reduction of endomorphisms is phrased in terms of linear maps, independently of any choice of basis, while reduction of matrices works on a matrix representing the map in a particular basis. Once a basis is chosen the two points of view are equivalent: reducing the endomorphism amounts to finding a change of basis that puts its matrix into the desired form.

How is reduction of endomorphisms used in practical applications?

Reduction of endomorphisms has many practical applications, particularly in physics, engineering, and computer science. Diagonalizing an operator makes it easy to compute its powers and exponentials, which is used, for example, to solve systems of linear differential equations, to analyze Markov chains, and to study the stability of dynamical systems.

Are there any limitations to reduction of endomorphisms?

While reduction of endomorphisms is a powerful tool, it does have limitations. Not every endomorphism is diagonalizable: over a field that is not algebraically closed some characteristic polynomials do not split, and even when they do, repeated roots may force one to settle for a weaker normal form such as the Jordan form. In numerical work, the change of basis achieving the reduction can also be ill-conditioned, so computing it for large or nearly defective matrices requires care.
