How Does the Jordan Normal Form Arise from Cyclic Subspaces and Direct Sums?

Summary
The discussion centers on the derivation of the Jordan normal form from cyclic subspaces and direct sums. It explains that a vector is considered (A - λI)-cyclic with a specific period if it satisfies certain linear independence conditions, leading to the formation of a Jordan basis. The conversation highlights the construction of a matrix representation from generalized eigenvectors, particularly in the context of a 3x3 matrix with a single eigenvalue. The example illustrates how to derive the Jordan form, showing that the matrix consists of λ on the diagonal and 1's on the superdiagonal. The participants conclude by recognizing the significance of direct sums and invariant subspaces in this context.
sponsoredwalk
I just can't figure out how you arrive at a block-diagonal matrix consisting of Jordan blocks.

Going by Lang, a vector v is (A - λI)-cyclic with period n if (A - λI)ⁿv = 0 but (A - λI)ⁿ⁻¹v ≠ 0, where n ∈ ℕ.
It can be proven that v, (A - λI)v, ..., (A - λI)ⁿ⁻¹v are linearly independent, & so
{v, (A - λI)v, ..., (A - λI)ⁿ⁻¹v} forms a basis, called a Jordan basis, for what is now known
as a cyclic subspace.

Furthermore, for each k we have that A(A - λI)ᵏv = (A - λI)ᵏ⁺¹v + λ(A - λI)ᵏv.
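As a quick sanity check, that identity does hold numerically. Here is a minimal numpy sketch with a made-up example (a single 3×3 Jordan block for λ = 2, hidden by a random change of basis); the matrices and seed are my own toy setup, not anything from Lang:

```python
import numpy as np

lam = 2.0
# Made-up example: one 3x3 Jordan block with eigenvalue 2, conjugated by
# a random invertible matrix so A is not already in Jordan form.
J = np.array([[lam, 1, 0],
              [0, lam, 1],
              [0, 0, lam]])
rng = np.random.default_rng(0)
P = rng.normal(size=(3, 3))
A = P @ J @ np.linalg.inv(P)

N = A - lam * np.eye(3)      # the nilpotent part (A - λI)
v = rng.normal(size=3)       # generic vector; (A - λI)-cyclic with period 3 here

for k in range(3):
    Nk = np.linalg.matrix_power(N, k)
    lhs = A @ Nk @ v                      # A(A - λI)^k v
    rhs = N @ Nk @ v + lam * Nk @ v       # (A - λI)^(k+1) v + λ(A - λI)^k v
    assert np.allclose(lhs, rhs)
print("identity holds for k = 0, 1, 2")
```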

Now for the life of me I just don't see how the matrix associated to this basis is a matrix
consisting of λ on the diagonal & 1's on the superdiagonal.

But assuming that works, I don't see how taking the direct sum of cyclic subspaces can
be represented by a block-diagonal matrix, with one block on the diagonal for each subspace.

Basically I'm just asking to see explicitly how you form the matrix w.r.t. the Jordan basis
& to see how you form a matrix representation of a direct sum of subspaces, appreciate
any & all help. :cool:
 
Let's take the simple example where the matrix, A, is 3 by 3 and has the single eigenvalue 3. Then the characteristic equation is (x - 3)^3 = 0. Since every matrix satisfies its own characteristic equation, it must be true that for every vector, v, (A - 3I)^3 v = 0. It might be the case that (A - 3I)v = 0 for every vector v. In that case there are three independent vectors such that Av = 3v and we can use those three vectors as a basis for the vector space. Written in that basis, A would be diagonal:
\begin{bmatrix}3 & 0 & 0 \\ 0 & 3 & 0\\ 0 & 0 & 3\end{bmatrix}.

Or, it might be that (A - 3I)v = 0 only for multiples of a single vector or for linear combinations of two vectors. Consider the first case: it must still be true that (A - 3I)^3 u = 0 for all vectors u, so there is some vector w with (A - 3I)^2 w \ne 0 but (A - 3I)^3 w = 0. That says precisely that (A - 3I)^2 w is an eigenvector; call it v, and let x = (A - 3I)w, so that (A - 3I)x = v. Those two vectors, x such that (A - 3I)x = v, and w such that (A - 3I)w = x, are called "generalized eigenvectors".
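To see that chain concretely, here is a short numpy sketch; the particular matrix is my own made-up example (one 3×3 Jordan block for eigenvalue 3, disguised by a random change of basis), not anything canonical:

```python
import numpy as np

lam = 3.0
# Made-up defective example: one 3x3 Jordan block for eigenvalue 3,
# hidden by a random change of basis.
J = np.array([[lam, 1, 0],
              [0, lam, 1],
              [0, 0, lam]])
rng = np.random.default_rng(1)
S = rng.normal(size=(3, 3))
A = S @ J @ np.linalg.inv(S)

N = A - lam * np.eye(3)
w = rng.normal(size=3)    # generic vector: (A - 3I)^2 w != 0 almost surely
x = N @ w                 # generalized eigenvector: (A - 3I)w = x
v = N @ x                 # eigenvector: (A - 3I)x = v
assert np.allclose(N @ v, np.zeros(3))   # (A - 3I)v = 0, i.e. Av = 3v
```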

Of course, to find the matrix representation of a linear transformation in a given basis, we apply the transformation to each basis vector in turn, writing the result as a linear combination of the basis vectors; those coefficients form the columns of the matrix.

(If you are not aware of that [very important!] fact, suppose A is a linear transformation from a three dimensional vector space to itself. Further, suppose \{v_1, v_2, v_3\} is a basis for that vector space. Then Av_1 is a vector in the space and so can be written as a linear combination of the basis vectors, say, Av_1= av_1+ bv_2+ cv_3. In that basis, v_1 itself is written as the column
\begin{bmatrix}1 \\ 0 \\ 0\end{bmatrix}
since v_1 = (1)v_1 + (0)v_2 + (0)v_3.

Since Av_1 = av_1 + bv_2 + cv_3, we must have
Av_1 = A\begin{bmatrix}1 \\ 0 \\ 0\end{bmatrix} = \begin{bmatrix}a \\ b \\ c\end{bmatrix}
so obviously, the first column of A must be
\begin{bmatrix}a \\ b \\ c\end{bmatrix}.)

Here, our basis vectors are v, x, and w, with (A - 3I)v = 0, (A - 3I)x = v, and (A - 3I)w = x. From (A - 3I)v = 0, which is the same as Av = 3v + 0x + 0w, we see that the first column of the matrix is
\begin{bmatrix}3 \\ 0 \\ 0\end{bmatrix}

From (A - 3I)x = v, which is the same as Ax = v + 3x + 0w, we see that the second column of the matrix is
\begin{bmatrix}1 \\ 3 \\ 0\end{bmatrix}

Finally, from (A - 3I)w = x, which is the same as Aw = 0v + x + 3w, we see that the third column of the matrix is
\begin{bmatrix}0 \\ 1 \\ 3\end{bmatrix}

so that, in this ordered basis, the linear transformation is represented by the matrix
\begin{bmatrix}3 & 1 & 0 \\ 0 & 3 & 1 \\ 0 & 0 & 3\end{bmatrix}
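If you want to check that column-by-column construction numerically, here is a sketch continuing the made-up example from before (same random conjugation; everything here is my own toy setup): stack v, x, w as the columns of a change-of-basis matrix P and conjugate.

```python
import numpy as np

lam = 3.0
J = np.array([[lam, 1, 0],
              [0, lam, 1],
              [0, 0, lam]])
rng = np.random.default_rng(1)
S = rng.normal(size=(3, 3))
A = S @ J @ np.linalg.inv(S)    # same made-up A as in the earlier snippet

N = A - lam * np.eye(3)
w = rng.normal(size=3)
x = N @ w                       # (A - 3I)w = x
v = N @ x                       # (A - 3I)x = v, and (A - 3I)v = 0

P = np.column_stack([v, x, w])  # ordered Jordan basis {v, x, w} as columns
print(np.round(np.linalg.inv(P) @ A @ P, 6))
# prints (up to rounding) the Jordan block:
# 3's on the diagonal, 1's on the superdiagonal
```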
 
Thanks HallsofIvy, that was an interesting read. I enjoyed the construction of the matrix from the generalized eigenvectors - I don't remember seeing it that way before.
 
Great stuff, thanks Halls. Figured out the importance of direct sums and invariant subspaces
today & am alright now.
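In case it helps anyone else: here is the toy numpy check (my own made-up example, using scipy's block_diag) that made the direct-sum picture click for me. If the columns of P are a Jordan basis for one invariant subspace followed by a Jordan basis for another, then changing to that basis makes A block diagonal, one Jordan block per cyclic subspace:

```python
import numpy as np
from scipy.linalg import block_diag

# Made-up example: a 2x2 Jordan block for eigenvalue 3 and a 1x1 block
# for eigenvalue 5, glued together on the diagonal (a direct sum).
J = block_diag(np.array([[3.0, 1.0], [0.0, 3.0]]), np.array([[5.0]]))

rng = np.random.default_rng(2)
P = rng.normal(size=(3, 3))     # columns: Jordan bases of the two subspaces
A = P @ J @ np.linalg.inv(P)

# span(P[:,0], P[:,1]) and span(P[:,2]) are each A-invariant, and A restricted
# to each one is the corresponding block; changing basis recovers them both.
assert np.allclose(np.linalg.inv(P) @ A @ P, J)
```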
 