Question from a proof in Axler 2nd Ed, 'Linear Algebra Done Right'

Stephen Tashi
TL;DR: A question about the final step in a proof (by induction) that every linear operator on a finite-dimensional complex vector space has an upper-triangular matrix with respect to some basis.
My question is motivated by the proof of TH 5.13 on p. 84 in the 2nd edition of Linear Algebra Done Right. (This proof differs from the one in the 4th ed., online at https://linear.axler.net/index.html, chapter 5.)

In the proof we arrive at the following situation:
##T## is a linear operator on a finite dimensional complex vector space ##V## and ##\lambda## is an eigenvalue of ##T##. The subspace ##U## is the range of the linear operator ##T - \lambda I##. The set of vectors ##\{ u_1,u_2,...,u_m, v_1,v_2,...,v_k\}## is a basis for ##V## such that ##\{ u_1,u_2,...,u_m\}## is a basis for ##U## and such that the matrix of the operator ##T - \lambda I## restricted to ##U## is upper triangular in that basis.

We have the identity ## Tv_j = (T - \lambda I) v_j + \lambda v_j##. Since ##(T-\lambda I) v_j \in U##, this exhibits ##Tv_j## as the sum of two vectors where the first can be expressed as a linear combination of the vectors ##u_i## and the second is ##\lambda v_j##.

My question: (for example, in the case ##\dim U = m = 2##, ##\dim V = 5##) Does this show that the matrix of ##T## in this basis has the form
##\begin{pmatrix} a_{1,1}&a_{1,2}& a_{1,3} & a_{1,4} & a_{1,5} \\ 0 & a_{2,2} & a_{2,3} & a_{2,4} & a_{2,5} \\ 0 & 0 & \lambda & 0 & 0 \\ 0 & 0 &0 & \lambda &0\\ 0 & 0 & 0 & 0 & \lambda \end{pmatrix} ## ?
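(A quick numeric sanity check of this column structure — a sketch in numpy with made-up entries ##a_{i,j}## and ##\lambda = 3##, not anything from Axler:)

```python
import numpy as np

lam = 3.0
# Hypothetical entries a_{i,j} filling the proposed pattern (m = 2, k = 3)
T = np.array([[1.0, 2.0, 5.0, 6.0, 7.0],
              [0.0, 4.0, 8.0, 9.0, 1.0],
              [0.0, 0.0, lam, 0.0, 0.0],
              [0.0, 0.0, 0.0, lam, 0.0],
              [0.0, 0.0, 0.0, 0.0, lam]])
I5 = np.eye(5)

# For each v_j (= e3, e4, e5): the identity T v_j = (T - lam I) v_j + lam v_j
# holds, and (T - lam I) v_j lies in U = span(e1, e2)
for j in (2, 3, 4):
    w = (T - lam * I5) @ I5[:, j]
    assert np.allclose(w[2:], 0)                    # component outside U vanishes
    assert np.allclose(T @ I5[:, j], w + lam * I5[:, j])
print("column structure checks out")
```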

This is not how Axler ends the proof. He makes the less detailed observation that ##Tv_j \in## Span##\{u_1,u_2,...,u_m,v_1,...,v_j\}## for ##j = 1,...,k##. That property characterizes an upper triangular matrix by a previously proved theorem, TH 5.12.
 
Pardon me, I did not try to read Axler, but this theorem seems offhand to have a trivial proof: just take ##v_1## as an eigenvector for ##T##, then mod out ##V## by the space spanned by ##v_1##, and take ##v_2## to be a vector representing an eigenvector for the induced map on ##V/\langle v_1\rangle##. Then take ##v_3## to be a vector representing an eigenvector for the induced map on ##V/\langle v_1, v_2\rangle##, and so on. I.e., at each stage ##v_k## is an eigenvector mod the previous vectors, so ##Tv_k - c_k v_k## is a linear combination of ##v_1,...,v_{k-1}##. Is this nonsense?
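(That recursion can be sketched numerically. This is my own rough numpy translation — not code from any source — where, purely for convenience, I identify the quotient ##V/\langle v_1,...,v_k\rangle## with the orthogonal complement of the chosen vectors:)

```python
import numpy as np

def triangularize(T):
    """Sketch of the recursion above: at step k, take an eigenvector of
    the map induced by T on the quotient V / span(v1, ..., vk)."""
    n = T.shape[0]
    Tc = T.astype(complex)
    P = np.zeros((n, 0), dtype=complex)      # chosen basis vectors, as columns
    for k in range(n):
        if k == 0:
            W = np.eye(n, dtype=complex)
        else:
            # orthonormal basis of span(P)^perp, standing in for the quotient
            _, _, Vh = np.linalg.svd(P.conj().T)
            W = Vh[k:].conj().T
        Q = W.conj().T @ Tc @ W              # induced map, in quotient coordinates
        _, vecs = np.linalg.eig(Q)
        v = W @ vecs[:, 0]                   # lift an eigenvector of the quotient map
        P = np.hstack([P, v[:, None]])
    A = np.linalg.inv(P) @ Tc @ P            # matrix of T in the basis v1, ..., vn
    return P, A

# Even a matrix with a double eigenvalue and only one independent eigenvector
# comes out upper triangular (up to roundoff)
P, A = triangularize(np.array([[2.0, 1.0], [-1.0, 4.0]]))
assert np.allclose(np.tril(A, -1), 0, atol=1e-5)
```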

But I see your question is otherwise. I think the answer to it is yes.
 
Here's my discomfort with the zeroes: if all those zeroes appear, why do we ever need generalized eigenvectors? This is just an intuitive discomfort; I haven't figured out whether a defective matrix could still have this form.
 
I see why my intuition is wrong. For example, ##Tv_1 = \begin{pmatrix} a_{1,3} \\ a_{2,3} \\ \lambda \\ 0 \\ 0 \end{pmatrix}##, which isn't equal to ##\lambda v_1##. So ##v_1## isn't necessarily an eigenvector.
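(To make that concrete, here's a tiny numpy check with a matrix I made up: ##T = \begin{pmatrix} 2 & 1 \\ -1 & 4 \end{pmatrix}## has the double eigenvalue ##3## but only a one-dimensional eigenspace, so an entry above the diagonal necessarily survives in the triangular form:)

```python
import numpy as np

# A defective matrix: double eigenvalue 3, but ker(T - 3I) is one-dimensional
T = np.array([[2.0, 1.0],
              [-1.0, 4.0]])

# v1 = (1, 1) spans ker(T - 3I); extend to a basis with e2 = (0, 1)
P = np.array([[1.0, 0.0],
              [1.0, 1.0]])                    # columns: v1, e2

A = np.linalg.inv(P) @ T @ P                  # matrix of T in the basis {v1, e2}
print(A)  # [[3. 1.]
          #  [0. 3.]] -- upper triangular, but the 1 above the diagonal remains
```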
 