Characteristic Polynomials and Minimal Polynomials

In summary, the thread covers two questions and a proof. The first question, in three parts, concerns characteristic and minimal polynomials and is posted as image attachments; the second asks for a proof that eigenvectors corresponding to distinct eigenvalues are linearly independent. The discussion includes the original poster's attempts, hints pointing to the rational canonical form and the Cayley-Hamilton theorem for the first question, and an induction argument for the second: apply the transformation to a supposed dependence relation, subtract a scaled copy of the original relation, and use the inductive hypothesis together with the distinctness of the eigenvalues to conclude that every coefficient must be zero. It is also noted that the proof from the lecture notes is straightforward once one is comfortable with linear independence, eigenvectors, and linear transformations.
  • #1
xfunctionx
Hi, there are a few questions and concepts I am struggling with. The first question comes in 3 parts. The second question is a proof.

Question 1: Please click on the link below :smile:

1.jpg


Question 2: Please click on the link below :smile:

2.jpg


For Q2, could you please show me how to prove this? If possible, could you also link me to a web page where the full proof has already been provided?

I would appreciate the help.
 
  • #2
Hi xfunctionx,

Show us what you've done so far.
 
  • #3
sure ... let me just write it up
 
  • #4
Question 1 attempt: I got stuck early.

3.jpg


Question 2 attempt/proof from lecture notes:

I don't understand what my lecturer did right at the end, or how his conclusion proves anything. Could you please help me understand, or show me a better proof?

4.jpg


5.jpg


6.jpg
 
  • #5
For question two:

You can prove this by induction. Clearly, it's true for k = 0 since eigenvectors are always non-zero. Now, to prove it for k = n, we assume it's true for k = n - 1.

Suppose that,

a_1 v_1 + a_2 v_2 + ... + a_n v_n = 0

Apply T to the LHS, and you get

λ_1 a_1 v_1 + λ_2 a_2 v_2 + ... + λ_n a_n v_n = 0

Now, multiply the first equation by λ_n, and subtract it from the second. The last term will cancel out. You will be left with

(λ_1 - λ_n) a_1 v_1 + (λ_2 - λ_n) a_2 v_2 + ... + (λ_(n-1) - λ_n) a_(n-1) v_(n-1) = 0.

By the inductive hypothesis, each of the coefficients here must be zero. Can you show that this implies that the a_i for i = 1 to (n-1) must be 0? (Hint: use the fact that the eigenvalues are distinct.) Then the original first equation reduces to a_n v_n = 0, so a_n too must be zero.
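To see the cancellation step concretely, here is a small NumPy sketch. The matrix, eigenvalues, and coefficients are made up purely for illustration; the point is only that applying T and then subtracting λ_n times the original combination kills the last term.

```python
import numpy as np

# Assumed example: T = diag(1, 2, 3), eigenvectors are the standard basis vectors.
T = np.diag([1.0, 2.0, 3.0])
lam = np.array([1.0, 2.0, 3.0])
v = [np.eye(3)[:, i] for i in range(3)]

a = np.array([0.5, -1.0, 2.0])                 # arbitrary coefficients a_1, ..., a_n
combo = sum(a[i] * v[i] for i in range(3))     # a_1 v_1 + ... + a_n v_n

# Applying T multiplies each term by its eigenvalue: lambda_i a_i v_i.
T_combo = T @ combo

# Subtract lambda_n times the original combination: the v_n term cancels,
# leaving the sum over i < n of (lambda_i - lambda_n) a_i v_i.
diff = T_combo - lam[-1] * combo
print(diff)   # last component is 0.0, matching the cancellation in the proof
```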
 
  • #6
dx said:
Clearly, it's true for k = 0 ...

Sorry, I meant k = 1.
 
  • #7
dx said:
Sorry, I meant k = 1.

Thank you for your help dx, I will attempt the proof and try to understand it using induction.
 
  • #8
Can anyone help me with question 1?
 
  • #9
Have you gone over the rational canonical form of a matrix? Or the Cayley-Hamilton theorem? If you have, question 1 should be straightforward. The answer to part (a) is yes (use the rational canonical form). For (b), use the Cayley-Hamilton theorem. For (c), write D = S⁻¹TS, where S is an invertible matrix and D is diagonal. What's D²?
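In case it helps, here is a small SymPy sketch of the two facts being used. The 2×2 matrix is just an example made up for illustration (the actual matrices for question 1 are in the attached images): it checks Cayley-Hamilton by plugging the matrix into its own characteristic polynomial, and shows that squaring D just squares the diagonal entries.

```python
import sympy as sp

lam = sp.symbols('lambda')

# Example matrix (made up for illustration; the thread's matrices are in the images).
T = sp.Matrix([[2, 1],
               [0, 3]])

# Characteristic polynomial: det(lambda*I - T) = lambda**2 - 5*lambda + 6.
p = T.charpoly(lam).as_expr()
print(sp.factor(p))                             # (lambda - 2)*(lambda - 3)

# Cayley-Hamilton: substituting T for lambda gives the zero matrix.
print(T**2 - 5*T + 6*sp.eye(2))                 # Matrix([[0, 0], [0, 0]])

# Diagonalization: T = S*D*S**(-1), so T**2 = S*D**2*S**(-1),
# and D**2 is just the diagonal matrix of squared eigenvalues.
S, D = T.diagonalize()
print(D**2)
print(sp.simplify(S*D**2*S.inv() - T**2))       # zero matrix
```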
 
  • #10
Hello xf...nx

Your proof of the second question, which you say is from your lecture notes, is quite straightforward: it uses only the definitions of linear independence, eigenvectors, and the given linear transformation, together with the hypotheses of the theorem. It is the best and simplest proof you will find. Read a bit about linear independence, eigenvectors, and linear transformations, and you will see that the proof is quite direct.
 

1. What is a characteristic polynomial?

A characteristic polynomial is a polynomial associated with a square matrix A. It is obtained by taking the determinant of λI − A (equivalently, of A − λI), where I is the identity matrix and λ is a variable.
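For a concrete illustration (the matrix below is just an arbitrary example), the characteristic polynomial is det(λI − A), and its roots are the eigenvalues:

```python
import sympy as sp

lam = sp.symbols('lambda')

# Arbitrary example matrix.
A = sp.Matrix([[4, 1],
               [2, 3]])

# Characteristic polynomial det(lambda*I - A), computed directly and via charpoly.
p = sp.expand((lam*sp.eye(2) - A).det())
print(p)                                # lambda**2 - 7*lambda + 10
print(A.charpoly(lam).as_expr())        # same polynomial

# Its roots are the eigenvalues of A.
print(sp.solve(p, lam))                 # [2, 5]
print(A.eigenvals())                    # {2: 1, 5: 1}
```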

2. What is the significance of characteristic polynomials?

Characteristic polynomials are significant because they can help determine important properties of a matrix, such as its eigenvalues and eigenvectors. They also play a crucial role in solving systems of linear equations and diagonalizing matrices.

3. How is a minimal polynomial different from a characteristic polynomial?

A minimal polynomial is the monic polynomial of smallest degree that the matrix satisfies, that is, that evaluates to the zero matrix when the matrix is substituted for the variable. It always divides the characteristic polynomial, but it may be a proper factor of it.
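A small SymPy example (with a made-up matrix) where the minimal polynomial is a proper factor of the characteristic polynomial:

```python
import sympy as sp

lam = sp.symbols('lambda')

# Diagonal matrix with repeated eigenvalue 2 and simple eigenvalue 5.
A = sp.diag(2, 2, 5)

# The characteristic polynomial contains (lambda - 2) squared ...
print(sp.factor(A.charpoly(lam).as_expr()))     # (lambda - 2)**2 * (lambda - 5)

# ... but A already satisfies (lambda - 2)*(lambda - 5), so that is the
# minimal polynomial: monic, of lower degree, and it annihilates A.
print((A - 2*sp.eye(3)) * (A - 5*sp.eye(3)))    # zero matrix
```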

4. How can minimal polynomials be used in linear algebra?

Minimal polynomials are useful for deciding whether a matrix is diagonalizable (it is exactly when the minimal polynomial has no repeated roots) and for constraining the Jordan canonical form: for each eigenvalue, the exponent in the minimal polynomial equals the size of the largest Jordan block for that eigenvalue. They also carry information about how a matrix behaves over fields other than the complex numbers.
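To illustrate the connection to the Jordan form with a made-up example: for a single Jordan block, the exponent in the minimal polynomial equals the size of the block.

```python
import sympy as sp

lam = sp.symbols('lambda')

# A 3x3 Jordan block with eigenvalue 2.
J = sp.Matrix([[2, 1, 0],
               [0, 2, 1],
               [0, 0, 2]])

# Characteristic polynomial: (lambda - 2)**3.
print(sp.factor(J.charpoly(lam).as_expr()))

# (J - 2I)**2 is not zero, so (lambda - 2)**2 does not annihilate J ...
print((J - 2*sp.eye(3))**2)
# ... but (J - 2I)**3 is zero, so the minimal polynomial is (lambda - 2)**3,
# and its exponent (3) matches the size of the largest Jordan block.
print((J - 2*sp.eye(3))**3)
```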

5. Are characteristic polynomials and minimal polynomials always unique?

For a given matrix over a given field, both are unique: the characteristic polynomial is determined directly by the matrix, and the minimal polynomial is unique because it is required to be monic and of least degree. However, neither polynomial determines the matrix up to similarity on its own: two matrices can share the same characteristic polynomial while having different minimal polynomials (for example, the 2×2 identity matrix and a 2×2 Jordan block with eigenvalue 1).
