Question from a proof in Axler 2nd Ed, 'Linear Algebra Done Right'


Discussion Overview

The discussion centers on a question about the proof of Theorem 5.13 in the second edition of "Linear Algebra Done Right" by Sheldon Axler. Participants explore the implications of a specific linear operator's matrix representation in relation to eigenvalues and eigenvectors, particularly in the context of finite-dimensional complex vector spaces.

Discussion Character

  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant questions whether the matrix of the operator ##T## can be represented in a specific upper triangular form given the structure of the basis and the properties of the operator ##T - \lambda I##.
  • Another participant proposes an alternative approach to constructing eigenvectors by considering the induced map on the quotient space, suggesting that the theorem might have a straightforward proof.
  • A third participant expresses discomfort regarding the presence of zeroes in the matrix representation, questioning the necessity of generalized eigenvectors in such cases.
  • A later reply acknowledges a misunderstanding about eigenvectors, noting that the vector ##v_1## does not necessarily satisfy the eigenvector condition ##Tv_1 = \lambda v_1##.

Areas of Agreement / Disagreement

Participants initially express differing views on the implications of the matrix structure and the necessity of generalized eigenvectors. By the end of the thread, the proposed matrix form is tentatively affirmed, and the original poster resolves his own doubt about the eigenvector condition.

Contextual Notes

Participants highlight potential limitations in understanding the role of generalized eigenvectors and the conditions under which certain matrix forms hold. The discussion reflects uncertainty regarding the implications of the proof and the properties of the operator involved.

Stephen Tashi
TL;DR
A question about the final step in a proof (by induction) that every linear operator on a finite-dimensional complex vector space has a basis in which its matrix is upper triangular.
My question is motivated by the proof of TH 5.13 on p. 84 in the 2nd edition of Linear Algebra Done Right. (This proof differs from the one in the 4th edition, available online at https://linear.axler.net/index.html, Chapter 5.)

In the proof we arrive at the following situation:
##T## is a linear operator on a finite-dimensional complex vector space ##V## and ##\lambda## is an eigenvalue of ##T##. The subspace ##U## is the range of the linear operator ##T - \lambda I##. The set of vectors ##\{ u_1,u_2,\dots,u_m, v_1,v_2,\dots,v_k\}## is a basis for ##V## such that ##\{ u_1,u_2,\dots,u_m\}## is a basis for ##U## and such that the matrix of the operator ##T - \lambda I## restricted to ##U## is upper triangular in that basis.

We have the identity ## Tv_j = (T - \lambda I) v_j + \lambda v_j##. Since ##(T-\lambda I) v_j \in U##, this exhibits ##Tv_j## as the sum of two vectors where the first can be expressed as a linear combination of the vectors ##u_i## and the second is ##\lambda v_j##.
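
Spelling this out (the coefficients ##b_{i,j}## are my notation, not Axler's): writing ##(T-\lambda I)v_j = \sum_{i=1}^{m} b_{i,j}\, u_i##, the identity becomes
$$Tv_j = \sum_{i=1}^{m} b_{i,j}\, u_i + \lambda v_j,$$
so the coordinate column of ##Tv_j## with respect to the basis ##\{u_1,\dots,u_m,v_1,\dots,v_k\}## is ##(b_{1,j},\dots,b_{m,j},0,\dots,0,\lambda,0,\dots,0)^T##, with ##\lambda## in position ##m+j##.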

My question (for example, in the case ##\dim U = m = 2##, ##\dim V = 5##): Does this show the matrix of ##T## has the form
##\begin{pmatrix} a_{1,1}&a_{1,2}& a_{1,3} & a_{1,4} & a_{1,5} \\ 0 & a_{2,2} & a_{2,3} & a_{2,4} & a_{2,5} \\ 0 & 0 & \lambda & 0 & 0 \\ 0 & 0 & 0 & \lambda & 0 \\ 0 & 0 & 0 & 0 & \lambda \end{pmatrix}## ?

This is not how Axler ends the proof. He makes the less detailed observation that ##Tv_j \in \operatorname{Span} \{u_1,u_2,\dots,u_m,v_1,\dots,v_j\}## for ##j = 1,\dots,k##. That property characterizes an upper triangular matrix by a previously proved theorem, TH 5.12.
 
pardon me, I did not try to read Axler, but this theorem seems offhand to have a trivial proof: just take ##v_1## as an eigenvector for ##T##, then mod out ##V## by the subspace spanned by ##v_1##, and take ##v_2## to be a vector representing an eigenvector of the induced map on ##V/\langle v_1 \rangle##. Then take ##v_3## to be a vector representing an eigenvector of the induced map on ##V/\langle v_1, v_2 \rangle##, and so on. I.e., at each stage ##v_k## is an eigenvector mod the previous vectors, so ##T(v_k) - c_k v_k## is a linear combination of ##v_1, \dots, v_{k-1}##. Is this nonsense?

but I see your question is otherwise. I think the answer to it is yes.
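
to spell out the last step of that sketch (notation mine): if ##\bar T \bar v_k = c_k \bar v_k## holds for the induced map ##\bar T## on ##V/\langle v_1, \dots, v_{k-1} \rangle##, then
$$Tv_k - c_k v_k \in \operatorname{span}(v_1,\dots,v_{k-1}), \quad \text{i.e.} \quad Tv_k \in \operatorname{span}(v_1,\dots,v_k),$$
which is exactly the span condition that TH 5.12 says characterizes an upper triangular matrix.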
 
Here's my discomfort with the zeroes: if all those zeroes appear, why do we ever need generalized eigenvectors? This is just an intuitive discomfort. I haven't figured out whether a defective matrix could still have all those zeroes.
 
I see why my intuition is wrong. For example, ##Tv_1 = \begin{pmatrix} a_{1,3} \\ a_{2,3} \\ \lambda \\ 0 \\ 0 \end{pmatrix}##, which isn't equal to ##\lambda v_1## unless ##a_{1,3} = a_{2,3} = 0##. So ##v_1## isn't necessarily an eigenvector.
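
A quick numerical check of this (a minimal sketch in numpy; all the entries, including placing ##\lambda## on one diagonal slot of the upper block, are made up for illustration):

```python
import numpy as np

lam = 2.0

# A 5x5 matrix in the block form from post #1 (m = 2, k = 3).
# The a_{i,j} entries are invented; note a_{2,2} = lam here, since
# lambda may also appear on the diagonal of the upper block for T|_U.
M = np.array([
    [1.0, 1.0, 1.0, 0.0, 0.0],
    [0.0, lam, 1.0, 0.0, 0.0],
    [0.0, 0.0, lam, 0.0, 0.0],
    [0.0, 0.0, 0.0, lam, 0.0],
    [0.0, 0.0, 0.0, 0.0, lam],
])

v1 = np.eye(5)[:, 2]                  # coordinate vector of v_1 (3rd basis vector)
print(M @ v1)                         # [1. 1. 2. 0. 0.] -- the 3rd column of M
print(np.allclose(M @ v1, lam * v1))  # False: v_1 is not an eigenvector

# lam appears 4 times on the diagonal (algebraic multiplicity 4), but
# rank(M - lam*I) = 2, so the eigenspace of lam has dimension 5 - 2 = 3:
# M is defective, and generalized eigenvectors are still needed.
print(np.linalg.matrix_rank(M - lam * np.eye(5)))  # 2
```

So the zeroes are compatible with a defective matrix: the coupling entries ##a_{1,3}##, ##a_{2,3}## into ##U## are exactly what keep the ##v_j## from being eigenvectors.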
 
