Help with Understanding Eigenvalues, Eigenvectors, and Mapping with Lambda and P

  • Thread starter: terryfields
SUMMARY

This discussion focuses on the relationship between eigenvalues, eigenvectors, and polynomials of matrices in linear algebra. It establishes that if λ is an eigenvalue of a matrix A, then for any polynomial P, the value P(λ) is an eigenvalue of P(A). The discussion gives a detailed derivation using matrix powers and polynomial expressions, showing that P(A) has eigenvalue P(λ) with the same eigenvector that A has for λ.

PREREQUISITES
  • Understanding of linear algebra concepts, specifically eigenvalues and eigenvectors.
  • Familiarity with polynomial functions and their properties.
  • Knowledge of matrix operations and representations.
  • Basic understanding of linear mappings and transformations.
NEXT STEPS
  • Study the properties of eigenvalues and eigenvectors in depth.
  • Learn about polynomial functions and their applications in linear transformations.
  • Explore matrix exponentiation and its implications for eigenvalues.
  • Investigate the Cayley-Hamilton theorem and its relation to polynomial matrices.
USEFUL FOR

Students and professionals in mathematics, particularly those studying linear algebra, as well as educators looking for clear explanations of eigenvalue properties in polynomial contexts.

terryfields
This is exam revision, not homework, so feel free to help.

Let λ be an eigenvalue of T, and let p be a polynomial with coefficients in F. Define the linear mapping S = p(T) and show that p(λ) is an eigenvalue of S.

I know that an eigenvector of T is an element v ≠ 0 such that T(v) = λv for some λ in the field, and that the scalar λ here is the eigenvalue; however, I don't understand this question at all.

So λ is our eigenvalue, meaning there must be a corresponding eigenvector v ≠ 0, but what is p, and what does this mapping show? Please help.
 
First, if [tex]A[/tex] is the matrix and [tex]\lambda[/tex] the eigenvalue, then [tex]A \vec x = \lambda \vec x[/tex] for some nonzero vector [tex]\vec x[/tex]. Note these facts.

[tex] \begin{align*}
A^n \vec x & = A^{n-1} (A \vec x) \\
& = A^{n-1} (\lambda \vec x) \\
& = \lambda \, A^{n-1} \vec x \\
& = \cdots \\
& = \lambda^n \vec x
\end{align*} [/tex]

so [tex]\lambda^n[/tex] is an eigenvalue of [tex]A^n[/tex], with the same eigenvector.
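As a quick numerical sanity check (not from the original reply; the matrix and eigenpair below are illustrative choices), here is a pure-Python sketch of the power step: A = [[2, 1], [0, 3]] has eigenvalue λ = 3 with eigenvector x = (1, 1), and applying A repeatedly multiplies x by 3 each time.

```python
# Verify A^n x = lam^n x for a concrete 2x2 example.
# A = [[2, 1], [0, 3]] has eigenvalue lam = 3 with eigenvector x = (1, 1).

def mat_vec(A, v):
    """Multiply a 2x2 matrix by a 2-vector."""
    return [A[0][0]*v[0] + A[0][1]*v[1],
            A[1][0]*v[0] + A[1][1]*v[1]]

A = [[2, 1], [0, 3]]
x = [1, 1]
lam = 3

# Apply A repeatedly: after n applications we should have lam**n * x.
v = x
for n in range(1, 5):
    v = mat_vec(A, v)
    assert v == [lam**n * x[0], lam**n * x[1]]

print(v)  # after 4 applications: [81, 81] = 3^4 * x
```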

Next, if [tex]c[/tex] is any constant, then

[tex] (cA^n) \vec x = c\left(A^n \vec x\right) = c \lambda^n \vec x[/tex]

so [tex]c\lambda^n[/tex] is an eigenvalue of [tex]cA^n[/tex].

Finally, if [tex]a, b[/tex] are constants, and [tex]m, n[/tex] are integers, consider the
two-term polynomial [tex]p(s) = as^m + bs^n[/tex]. The polynomial [tex]p(A)[/tex] is

[tex] p(A) = a A^m + bA^n[/tex]

which is a matrix the same size as [tex]A[/tex]. The product [tex]p(A) \vec x[/tex] is

[tex] \begin{align*}
(a A^m + b A^n) \vec x & = (a A^m) \vec x + (b A^n) \vec x \\
& = \left(a \lambda^m\right) \vec x + \left(b \lambda^n\right) \vec x \\
& = \left(a \lambda^m + b \lambda^n \right) \, \vec x \\
& = p(\lambda) \, \vec x
\end{align*}[/tex]

That case does not have a constant term in the polynomial. If you have

[tex] p(s) = as^m + bs^n + d[/tex]

where [tex]d[/tex] is a constant, the appropriate modification is

[tex] p(A) = aA^m + bA^n + d I[/tex]

where [tex]I[/tex] is the identity matrix the same size as [tex]A[/tex]. Again, it is easy to show that

[tex] \begin{align*}
p(A) \, \vec x & = \left(a A^m + b A^n + dI\right) \, \vec x \\
& = \left(a \lambda^m + b \lambda^n + d\right) \, \vec x \\
& = p(\lambda) \, \vec x
\end{align*}[/tex]

so again [tex]p(\lambda)[/tex] is an eigenvalue of [tex]p(A)[/tex].

The case for a general polynomial requires a little more notation, but the steps are the same.

The idea: if [tex]A[/tex] is the matrix for a linear operator, so is [tex]p(A)[/tex] for any polynomial [tex]p[/tex], and the eigenvalues behave "as we expect them to".
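The full claim can also be checked numerically. A minimal pure-Python sketch (the matrix, eigenpair, and polynomial are illustrative choices, not from the thread): take A = [[2, 1], [0, 3]] with eigenpair λ = 3, x = (1, 1), and p(s) = 2s² − s + 5, so p(λ) = 2·9 − 3 + 5 = 20.

```python
# Verify p(A) x = p(lam) x for a concrete 2x2 example.
# A = [[2, 1], [0, 3]] has eigenvalue lam = 3 with eigenvector x = (1, 1).
# p(s) = 2*s**2 - s + 5, so p(lam) = 2*9 - 3 + 5 = 20.

def mat_mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_vec(A, v):
    """2x2 matrix times 2-vector."""
    return [A[0][0]*v[0] + A[0][1]*v[1],
            A[1][0]*v[0] + A[1][1]*v[1]]

def poly_of_matrix(coeffs, A):
    """Evaluate p(A) by Horner's rule; the constant term becomes d*I."""
    I = [[1, 0], [0, 1]]
    P = [[0, 0], [0, 0]]
    for c in coeffs:  # coefficients from highest degree down
        P = mat_mul(P, A)
        P = [[P[i][j] + c * I[i][j] for j in range(2)] for i in range(2)]
    return P

A = [[2, 1], [0, 3]]
x = [1, 1]
lam = 3
coeffs = [2, -1, 5]               # p(s) = 2s^2 - s + 5

P = poly_of_matrix(coeffs, A)     # p(A) = 2A^2 - A + 5I
p_lam = 2 * lam**2 - lam + 5      # p(lam) = 20

assert mat_vec(P, x) == [p_lam * x[0], p_lam * x[1]]
print(mat_vec(P, x))              # [20, 20] = p(lam) * x
```

Note that the constant term d multiplies the identity matrix in p(A), which is exactly why it contributes d (and not dλ) to the eigenvalue p(λ).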
 
