Eigenvalues and Eigenvectors over a Polynomial Ring


Discussion Overview

The discussion revolves around the behavior of eigenvectors and eigenvalues when a polynomial is applied to a linear transformation. Participants explore whether an eigenvector of a linear transformation \(f\) remains an eigenvector of \(P(f)\) for a polynomial \(P(t)\in F[t]\), and what the corresponding eigenvalue becomes. The scope includes theoretical reasoning and conceptual clarifications in linear algebra.

Discussion Character

  • Exploratory, Technical explanation, Conceptual clarification, Debate/contested

Main Points Raised

  • One participant presents a solution showing that if \(v\) is an eigenvector of \(f\) with eigenvalue \(\lambda\), then \(v\) is also an eigenvector of \(P(f)\) with eigenvalue \(P(\lambda)\), where \(P(t)\) is a polynomial.
  • Another participant points out a conceptual error regarding the representation of the eigenvector \(v\) and suggests that the column vector representation should be explicitly stated.
  • Some participants express a preference for using matrix representations over functional forms of linear transformations, citing personal comfort with matrices.
  • One reply invokes the well-known isomorphism of algebras \(\operatorname{End}(V)\cong \mathbb{K}^{n\times n}\) to justify working with matrices when the vector space is finite-dimensional, while a later reply observes that the basis-free argument makes no assumption about dimension at all.
  • One participant emphasizes the importance of defining \(P(f)\) in terms of linear transformations and clarifies the operations involved in applying polynomials to these transformations.

Areas of Agreement / Disagreement

Participants express differing views on whether to work with matrix representations or with the functional form of linear transformations. There is agreement that the initial solution is essentially correct, but one participant raises a concern about how the eigenvector is represented. The discussion leaves open which approach is preferable.

Contextual Notes

The matrix-based argument implicitly assumes that the vector space is finite-dimensional and that the eigenvector is written as a coordinate column; the basis-free argument avoids both assumptions. The discussion does not settle whether the matrix representation or the functional form is preferable, leaving the choice to taste.

Sudharaka
Hi everyone, :)

Here's another question that I solved. Let me know if you see any mistakes or if you have any other comments. Thanks very much. :)

Problem:

Prove that an eigenvector \(v\) of \(f:V\rightarrow V\) over a field \(F\), with eigenvalue \(\lambda\), is an eigenvector of \(P(f)\), where \(P(t)\in F[t]\). What is the eigenvalue of \(v\) with respect to \(P(f)\)?

My Solution:

Let \(A\) be the matrix representation of the linear transformation \(f\). Then we can write \(Av=\lambda v\). Now let \(P(t)=a_0+a_1 t+a_2 t^2+\cdots+a_n t^n\), where \(a_i\in F\) for all \(i\). Then,

\[P(A)=a_0 I+a_1 A+a_2 A^2+\cdots+a_n A^n\]

\[\Rightarrow P(A)v=a_0 v+a_1 (Av)+a_2 (A^2 v)+\cdots+a_n (A^n v)\]

\[\Rightarrow P(A)v=a_0 v+a_1 \lambda v+a_2 \lambda^2 v+\cdots+a_n \lambda^n v\]

\[\therefore P(A)v=(a_0 +a_1 \lambda +a_2 \lambda^2 +\cdots+a_n \lambda^n )v\]

Hence \(v\) is an eigenvector for \(P(f)\) and the corresponding eigenvalue is \(a_0 +a_1 \lambda +a_2 \lambda^2 +\cdots+a_n \lambda^n = P(\lambda)\).
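
As a quick numerical sanity check of the conclusion, here is a minimal NumPy sketch; the matrix, eigenpair, and polynomial are illustrative choices, not part of the problem:

```python
import numpy as np

# Check P(A) v == P(lambda) v for a concrete (illustrative) example.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])   # diagonal, so the eigenpairs are obvious
v = np.array([1.0, 0.0])     # eigenvector of A with eigenvalue 2
lam = 2.0

coeffs = [1.0, 3.0, 1.0]     # a_0, a_1, a_2 for P(t) = 1 + 3t + t^2

# P(A) = a_0 I + a_1 A + a_2 A^2, mirroring the expansion above
P_A = sum(a * np.linalg.matrix_power(A, k) for k, a in enumerate(coeffs))
P_lam = sum(a * lam**k for k, a in enumerate(coeffs))

assert np.allclose(P_A @ v, P_lam * v)   # both sides equal [11, 0]
```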
 
Sudharaka said:
Hi everyone, :)

Here's another question that I solved. Let me know if you see any mistakes or if you have any other comments. Thanks very much. :)

Problem:

Prove that an eigenvector \(v\) of \(f:V\rightarrow V\) over a field \(F\), with eigenvalue \(\lambda\), is an eigenvector of \(P(f)\), where \(P(t)\in F[t]\). What is the eigenvalue of \(v\) with respect to \(P(f)\)?

My Solution:

Let \(A\) be the matrix representation of the linear transformation \(f\). Then we can write \(Av=\lambda v\). Now let \(P(t)=a_0+a_1 t+a_2 t^2+\cdots+a_n t^n\), where \(a_i\in F\) for all \(i\). Then,

Here's a conceptual error. You have written $Av=\lambda v$, where $A$ is a matrix and $v$ is a vector. You should have used the column vector representation of $v$ with respect to the same basis you used to represent $f$ as the matrix $A$.
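
In symbols, writing $[v]_B$ for the coordinate column of $v$ in the basis $B$ used to form $A=[f]_B$, the corrected statement reads

\[A\,[v]_B=[f]_B\,[v]_B=[f(v)]_B=[\lambda v]_B=\lambda\,[v]_B.\]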

Sudharaka said:
\[P(A)=a_0 I+a_1 A+a_2 A^2+\cdots+a_n A^n\]

\[\Rightarrow P(A)v=a_0 v+a_1 (Av)+a_2 (A^2 v)+\cdots+a_n (A^n v)\]

\[\Rightarrow P(A)v=a_0 v+a_1 \lambda v+a_2 \lambda^2 v+\cdots+a_n \lambda^n v\]

\[\therefore P(A)v=(a_0 +a_1 \lambda +a_2 \lambda^2 +\cdots+a_n \lambda^n )v\]

Hence \(v\) is an eigenvector for \(P(f)\) and the corresponding eigenvalue is \(a_0 +a_1 \lambda +a_2 \lambda^2 +\cdots+a_n \lambda^n = P(\lambda)\).
This is fine (keeping in mind the above comment), although you didn't need the matrix representation of $f$: you could have done the same computation with $f$ itself, and the solution would be shorter too (see the sketch below).

:)
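
For instance, a sketch of the shorter, basis-free computation being suggested here, using $f^i(v)=\lambda^i v$ (proved by induction later in the thread):

\[P(f)(v)=\sum_{i=0}^{n}a_i f^i(v)=\sum_{i=0}^{n}a_i\lambda^i v=P(\lambda)\,v.\]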
 
caffeinemachine said:
Here's a conceptual error. You have written $Av=\lambda v$, where $A$ is a matrix and $v$ is a vector. You should have used the column vector representation of $v$ with respect to the same basis you used to represent $f$ as the matrix $A$.

This is fine (keeping in mind the above comment), although you didn't need the matrix representation of $f$: you could have done the same computation with $f$ itself, and the solution would be shorter too.

:)

Thank you very much for the ideas. I was too lazy to write down "let \(v\) be the column vector representation of ..." and thought it was rather implied when I used the matrix representation of \(f\).

Your other idea sounds good, but for some weird reason, I always like to deal with matrices rather than the functional form of linear transformations. :)

Thanks again for your generous help. :)
 
Sudharaka said:
... for some weird reason, I always like to deal with matrices rather than the functional form of linear transformations. :)
There is no problem if $V$ has dimension $n$ (finite), because we can use the well-known isomorphism of algebras $\phi:\operatorname{End}(V)\to \mathbb{K}^{n\times n},$ $\phi (f)=A=[f]_B,$ where $B$ is a fixed basis of $V.$
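
Concretely, since $\phi$ preserves sums, products, and scalar multiples, it commutes with polynomials,

\[\phi\bigl(P(f)\bigr)=P\bigl(\phi(f)\bigr)=P(A),\]

so $v$ is an eigenvector of $P(f)$ exactly when its coordinate column is an eigenvector of $P(A)$.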
 
I could be wrong, but I think what caffeinemachine was getting at is this:

Suppose $P(t) = a_0 + a_1t +\cdots + a_nt^n$.

By definition:

$P(f)$ is defined as the linear transformation:

$P(f)(v) = (a_0I + a_1f + \cdots + a_nf^n)(v) = a_0v + a_1f(v) + \cdots + a_nf^n(v)$

(in other words, we use the "point-wise" sum for linear transformations, and composition for multiplication, as is usual for a ring of endomorphisms of an abelian group. The scalar multiplication is also "pointwise": $(af)(v) = a(f(v))$).
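
Explicitly, the sum and product referred to here are

\[(f+g)(v)=f(v)+g(v),\qquad (fg)(v)=(f\circ g)(v)=f\bigl(g(v)\bigr).\]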

Presumably, you have already proved that if $\lambda$ is an eigenvalue for $f$ with eigenvector $v$, then (for natural numbers $k$):

$f^k(v) = \lambda^kv$

(note the exponent on the left refers to k-fold composition, and the exponent on the right refers to exponentiation in the field). If you have not done so, it's easy to prove using induction (you may wish to use the common convention that $f^0 = I = \text{id}_V$, the identity function on $V$).
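
For instance, the inductive step uses only the eigenvalue equation and the linearity of $f$:

\[f^{k+1}(v)=f\bigl(f^{k}(v)\bigr)=f\bigl(\lambda^{k}v\bigr)=\lambda^{k}f(v)=\lambda^{k}\lambda v=\lambda^{k+1}v.\]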

Thus, for an eigenvector $v$ with eigenvalue $\lambda$, we have

$P(f)(v) = a_0v + a_1\lambda v + \cdots + a_n\lambda^n v = (a_0 + a_1\lambda + \cdots + a_n\lambda^n)v = P(\lambda)v,$

so $v$ is likewise an eigenvector for $P(f)$ with eigenvalue $P(\lambda)$.

The advantage to this is that no mention is made of the dimensionality of the vector space $V$, and no assumptions are made about any basis.
 
