Characteristic Polynomials/Eigenvalues

  • Context: Graduate 
  • Thread starter: jeff1evesque
  • Tags: Characteristic

Discussion Overview

The discussion centers around the characteristic polynomials and eigenvalues of matrices, specifically focusing on Theorem 5.3 regarding the properties of characteristic polynomials, proofs related to eigenvalues, and the application of linear transformations. Participants seek clarification on proofs, induction methods, and the evaluation of specific linear operators.

Discussion Character

  • Technical explanation
  • Mathematical reasoning
  • Homework-related
  • Debate/contested

Main Points Raised

  • Some participants assert that the characteristic polynomial of a matrix A is a polynomial of degree n with leading coefficient (-1)^n, while others question the necessity of induction for proving this.
  • One participant suggests that part (b) of the theorem follows from part (a) together with the fact that a polynomial of degree n has at most n roots.
  • There is a discussion about expressing the linear map A - λI as an upper-triangular matrix and the implications for the determinant, with some participants providing references to relevant theorems.
  • Several participants express confusion about the details of the proof by induction and the steps involved in cofactor expansion, particularly regarding leading coefficients and the structure of determinants.
  • One participant clarifies that the transformation T(f(x)) = f(x) + (x + 1)f'(x) should be evaluated by substituting basis elements into f, rather than treating them as variables.

Areas of Agreement / Disagreement

Participants do not reach a consensus on the necessity of induction for proving the theorem, and there are multiple competing views regarding the evaluation of determinants and the structure of the characteristic polynomial.

Contextual Notes

Participants express uncertainty about specific steps in the proof, including the application of cofactor expansion and the implications of different choices of rows and columns on leading coefficients. There are also unresolved questions about the evaluation of the linear operator and the substitution of basis elements.

Who May Find This Useful

This discussion may be useful for students and educators in linear algebra, particularly those interested in characteristic polynomials, eigenvalues, and the application of linear transformations.

jeff1evesque
Theorem 5.3: Let A be in [tex]M_{n \times n}(F)[/tex].
(a) The characteristic polynomial of A is a polynomial of degree n with leading coefficient [tex](-1)^n[/tex].
(b) A has at most n distinct eigenvalues.
Note: The theorem can be proved by a straightforward induction argument.

Question: Can someone help with the proofs? For part (b), I understand there can be at most n distinct eigenvalues, since the dimension of the matrix equals the number of entries along the diagonal. But does the proof of (b) require induction as well, or is the text simply encouraging induction for part (a)?

One last easy question: Let T be the linear operator on [tex]P_2(R)[/tex] defined by [tex]T(f(x)) = f(x) + (x + 1)f'(x)[/tex], let B be the standard ordered basis for [tex]P_2(R)[/tex], and let A = [tex][T]_B[/tex]. Then,

A = { (1, 0, 0), (1, 2, 0), (0, 2, 3) }, where each parenthesized triple is a column vector.

In this example, I know B = { [tex]1, x, x^2[/tex] } is an ordered basis for [tex]P_2(R)[/tex]. So we plug the first element into the equation [tex]T(f(x)) = f(x) + (x + 1)f'(x)[/tex], then x, and finally x^2. But for some reason I don't know how to evaluate each equation to get the respective column vectors above. In particular, what is f(1), or what is [tex]f(x^2)[/tex]?


Thanks so much,


JL
 
(b) follows directly from (a) and the fundamental theorem of algebra (a polynomial of degree n has at most n complex roots). This assumes that you already have the fact that the eigenvalues are precisely the roots of the characteristic polynomial.

(a) should be immediate if you express the linear map [tex]A - \lambda I[/tex] as an upper-triangular matrix with respect to an appropriate basis. It's always possible to find such a basis. Then use what you know about the determinant of an upper triangular matrix.

Of course, you can equally well define the characteristic polynomial as [tex]\det(\lambda I - A)[/tex], in which case the leading coefficient will be 1, not [tex](-1)^n[/tex].
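Both conventions are easy to check with a computer algebra system. A minimal sketch using sympy (the 3x3 matrix is an arbitrary example, not one from this thread):

```python
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[1, 2, 0], [0, 3, 4], [5, 0, 6]])  # arbitrary 3x3 example
n = A.rows

# The two sign conventions for the characteristic polynomial
p1 = sp.expand((A - lam * sp.eye(n)).det())  # det(A - lambda I)
p2 = sp.expand((lam * sp.eye(n) - A).det())  # det(lambda I - A)

print(sp.Poly(p1, lam).degree())  # 3, the size of the matrix
print(sp.Poly(p1, lam).LC())      # (-1)^3 = -1
print(sp.Poly(p2, lam).LC())      # 1
```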
 
jbunniii said:
(a) should be immediate if you express the linear map [tex]A - \lambda I[/tex] as an upper-triangular matrix with respect to an appropriate basis. It's always possible to find such a basis. Then use what you know about the determinant of an upper triangular matrix.


I know that the determinant of an upper triangular matrix is the product of the diagonal entries. But how do you express the linear map [tex]A - \lambda I[/tex] as an upper-triangular matrix with respect to an appropriate basis (this would be a linear transformation such that the domain and codomain differ, since the bases differ)?

Thanks,

J
 
jeff1evesque said:
I know that the determinant of an upper triangular matrix is the product of the diagonal entries. But how do you express the linear map [tex]A - \lambda I[/tex] as an upper-triangular matrix with respect to an appropriate basis (this would be a linear transformation such that the domain and codomain differ, since the bases differ)?

Thanks,

J

THEOREM: If V is a complex finite-dimensional vector space and A is a linear map from V to V, then there exists a basis for V with respect to which the matrix of A is upper-triangular. (See, e.g., Axler's "Linear Algebra Done Right," Theorem 5.13.)

Equivalently, if M is the matrix of A with respect to the standard basis, then there exists an invertible matrix S such that

[tex]M = S U S^{-1}[/tex]

where U is upper-triangular. ("M is similar to U.")

M and U have the same eigenvalues, since they represent the same linear map A.

You can use this theorem in your case as follows:

Let B be a basis for V with respect to which the linear map A has an upper triangular matrix U. Then the matrix of

[tex]A - \lambda I[/tex]

with respect to the basis B is simply

[tex]U - \lambda I[/tex]

Then

[tex]det(A - \lambda I) = det(U - \lambda I)[/tex]

which equals the product of the diagonal elements. From here, your result follows easily.
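This similarity argument can be illustrated numerically. A sketch assuming numpy and scipy are available; the complex Schur decomposition plays the role of the triangularization U, and the eigenvalues of A appear on the diagonal of the triangular factor:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# Complex Schur form: A = Q T Q^H with T upper triangular and Q unitary
T, Q = schur(A, output='complex')

# det(A - lambda I) = det(T - lambda I) = product of (t_ii - lambda),
# so the eigenvalues of A are exactly the diagonal entries of T
eig_A = np.sort_complex(np.linalg.eigvals(A))
diag_T = np.sort_complex(np.diag(T))
print(np.allclose(eig_A, diag_T))
```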
 
How about a proof by induction?

Proof: Consider the base case where A is a 2x2 matrix.
Thus det(A - tI) = [tex](a_{11} - t)(a_{22} - t) - a_{21}a_{12}[/tex] = [tex]t^2 - (a_{11} + a_{22})t + a_{11}a_{22} - a_{21}a_{12}[/tex] = [tex](-1)^2(t^2 - (a_{11} + a_{22})t + a_{11}a_{22} - a_{21}a_{12})[/tex].

So we see the base case has degree 2 with leading coefficient [tex](-1)^2[/tex], just as expected. Now we proceed with the induction hypothesis and consider an nxn matrix A. By cofactor expansion along the first column (writing [tex]\tilde{A}_{i1}[/tex] for the minor of A - tI obtained by deleting row i and column 1),
[tex]det(A - tI) = (-1)^{1+1}(a_{11} - t)det(\tilde{A}_{11}) + \sum_{i=2}^{n} (-1)^{i+1} a_{i1} det(\tilde{A}_{i1})[/tex] (#1)
= [tex](-1)^{1+1}(a_{11} - t)((-1)^{n-1}t^{n-1} + \dots + c_0)[/tex] (where c_0 is a constant) (#2)
= ... and conclude the polynomial has degree n with leading coefficient [tex](-1)^n[/tex]?

Questions:
1. In general, how does one proceed from (#1) to conclude a polynomial of degree n with leading coefficient [tex](-1)^n[/tex]?
2. Cofactor expansion of [tex]\tilde{A}_{11}[/tex] (and even of its minors) may have leading terms of [tex](-1)^{(n-1)+(n-1)}[/tex]. This doesn't necessarily mean all leading coefficients will be of the form [tex](-1)^n[/tex], since different choices of rows and columns may produce [tex](-1)^5[/tex], which does not equal [tex](-1)^n[/tex] if, say, n = 10.
3. The person who helped me outline the proof for this theorem in the text never considered a leading term of [tex](-1)^{(n-1)+(n-1)}[/tex] on line (#2). I was wondering, shouldn't there be such a term, since we crossed out the first row and first column? Also, in line (#2), why isn't there some term of the form [tex](a_{22} - t)[/tex] in the parentheses, but instead simply the term [tex]t^{n-1}[/tex]?
4. If we keep taking determinants of the minors repeatedly, we will eventually be left with a 2x2 matrix. Is this why line (#2) incorporates a constant [tex]c_0[/tex]?
5. Lastly, is line (#2) only a cofactor expansion of [tex]\tilde{A}_{11}[/tex], meaning that this line should also include the cofactor expansions of all the [tex]\tilde{A}_{i1}[/tex] terms along the first column? And was the induction hypothesis applied in line (#2)?
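As a sanity check on the claimed degree and leading coefficient (this only verifies small cases, it is not a proof), one can expand det(A - tI) symbolically for a few sizes with sympy:

```python
import sympy as sp

t = sp.symbols('t')

# Verify: for a generic n x n matrix, det(A - t I) has degree n in t
# and leading coefficient (-1)^n
for n in range(2, 5):
    A = sp.Matrix(n, n, lambda i, j: sp.symbols(f'a{i}{j}'))
    p = sp.Poly(sp.expand((A - t * sp.eye(n)).det()), t)
    assert p.degree() == n
    assert p.LC() == (-1) ** n
print('checked n = 2, 3, 4')
```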
 
You're getting there. Expand along the first row as normal

(a_11 - t) det ( ) + ... sum of stuff involving dets

What are the things you're taking determinants of...? (That's the hint for why this is induction.)
As for the other question: f(x) represents a generic element of P_2. The map sends

f(x) to f(x) + (x+1)f'(x)

So you need to substitute the basis elements for f. I don't know why you're trying to put them into f as variables.
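The substitution described above can be carried out mechanically. A sketch with sympy (the helper name T is an illustration, not from a library), reproducing the matrix quoted earlier in the thread:

```python
import sympy as sp

x = sp.symbols('x')
basis = [sp.Integer(1), x, x**2]  # standard ordered basis of P_2(R)

def T(f):
    # The operator T(f) = f + (x + 1) f'
    return sp.expand(f + (x + 1) * sp.diff(f, x))

# Columns of [T]_B: coordinates of T(b) in the basis {1, x, x^2}
cols = []
for b in basis:
    g = sp.Poly(T(b), x)
    cols.append([g.coeff_monomial(m) for m in (1, x, x**2)])

M = sp.Matrix(cols).T  # each computed coordinate list becomes a column
print(M)  # Matrix([[1, 1, 0], [0, 2, 2], [0, 0, 3]])
```

For instance T(x) = x + (x + 1)·1 = 2x + 1, whose coordinates (1, 2, 0) are the second column, matching the matrix A given earlier.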
 
I'm not sure about the proof; maybe I will seek help tomorrow. As for the second portion, involving f(x) to f(x) + (x+1)f'(x), you are correct, and it works out if I substitute the basis elements for f.

Thanks,

Jeffrey
 
