Minimal Invariant Subspaces: The Role of Orthogonal Linear Transformations

  • Context: Graduate
  • Thread starter: samkolb
  • Tags: Invariant Subspaces

Discussion Overview

The discussion revolves around the theorem concerning minimal invariant subspaces in the context of orthogonal linear transformations within n-dimensional inner product spaces. Participants explore the necessity of the orthogonality condition and its implications for the dimensions of invariant subspaces.

Discussion Character

  • Debate/contested
  • Technical explanation
  • Conceptual clarification

Main Points Raised

  • Some participants question the necessity of the orthogonality condition for the theorem regarding minimal invariant subspaces, suggesting that the proof may hold for arbitrary linear transformations.
  • One participant asserts that every linear transformation on a finite dimensional real inner product space has a minimal invariant subspace of dimension 1 or 2, although this claim is later challenged.
  • Another participant provides a counterpoint, indicating that the theorem's context and specific conditions related to orthogonal matrices are crucial for understanding its validity.
  • A later reply elaborates on the structure of orthogonal matrices, explaining that they can be represented in a block diagonal form that reveals the nature of invariant subspaces, including their orthogonality and dimensionality.
  • Participants discuss the relationship between the geometry of linear transformations and the algebra of polynomial factorization, highlighting the differences between orthogonal and general linear transformations.

Areas of Agreement / Disagreement

Participants express differing views on the necessity of the orthogonality condition and the implications for the theorem. There is no consensus on whether the conclusion regarding invariant subspaces holds without the orthogonality requirement.

Contextual Notes

Some participants note that the definitions of terms and the specific context of the theorem are not clearly stated, which may contribute to misunderstandings. The discussion also highlights the complexity of the relationship between linear transformations and their invariant subspaces.

samkolb
I have a question about this theorem.

Let V be an n-dimensional inner product space, and let T:V-->V be an orthogonal linear transformation. Let S be a minimal invariant subspace under T. Then S is one dimensional or two dimensional.


I understand what this theorem says and I follow the proof given in my book, but I can't see any reason why the hypothesis that T be orthogonal is necessary.
 
Since no one has responded to my post, I'll rephrase my question.

The proof doesn't seem to make use of the fact that T is orthogonal. It seems like it would work just as well for an arbitrary linear transformation. Does anyone know if this is a typo?
 
I answered my own question. It turns out that every linear transformation on a finite dimensional real inner product space has a minimal invariant subspace of dimension 1 or 2.
 
To be honest I don't even know the definition of minimal invariant subspace, but someone who does has read your post and told me that the theorem you stated in #1 is correct but your conclusion in #3 is wrong. He also suggested that I let you know that you should have given a reference for the theorem (since it can be hard to find in textbooks), and that I tell you this:

Consider an n-cycle in SO(n), e.g. a 5-cycle in SO(5):

[tex] \left[ \begin{array}{ccccc}
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0
\end{array} \right][/tex]

This has minimal invariant subspaces of dimensions 1, 2, and 2, and as part of the theorem they are all orthogonal to each other. The one-dimensional one is the fairly obvious fixed line; the other two are planes in which our matrix effects a rotation by 1/5 and 2/5 of a full turn, respectively. Very similar statements hold in other odd dimensions, and slightly different ones in even dimensions.

Looking at the eigenvalues (they lie on the unit circle in C) helps to understand the algebraic-geometric pattern.
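That eigenvalue picture is easy to check numerically. The following is a small NumPy sketch (not part of the original thread): it builds the 5-cycle matrix above and confirms that its eigenvalues are exactly the fifth roots of unity on the unit circle, with the all-ones vector spanning the fixed line.

```python
import numpy as np

# The 5-cycle permutation matrix from the post above.
P = np.array([[0., 1, 0, 0, 0],
              [0, 0, 1, 0, 0],
              [0, 0, 0, 1, 0],
              [0, 0, 0, 0, 1],
              [1, 0, 0, 0, 0]])

lam = np.linalg.eigvals(P)
fifth_roots = np.exp(2j * np.pi * np.arange(5) / 5)

# Each fifth root of unity occurs among the eigenvalues (all on the
# unit circle); the eigenvalue 1 corresponds to the fixed line
# spanned by the all-ones vector.
matches = all(np.min(np.abs(lam - r)) < 1e-8 for r in fifth_roots)
```

The two complex-conjugate pairs among these eigenvalues correspond to the two invariant planes, with rotation angles 2π/5 and 4π/5.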

The statement for O(n) is a slight elaboration of the simpler statement for SO(n), which as I said falls into odd and even dimensional cases.

I find it useful to think of a rotation in E^n as an element of SO(n) which has a two dimensional invariant subspace in which it effects an ordinary rotation by some angle, and an orthogonal n-2 dimensional fixed subspace, the analog of rotation axis for an element of SO(3). Then every element of SO(n) can be factored into rotations, and then further factored into orthoreflections.
 
I believe that the conclusion in #3 is indeed correct, and follows from the fact that operators on finite dimensional complex spaces have eigenvectors (hence 1-dimensional invariant subspaces): complexify the real space, take an eigenvector, and the real and imaginary parts of that eigenvector span a real invariant subspace of dimension 1 or 2.
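This complexification argument can be illustrated numerically. Below is a small NumPy sketch (my own example matrix, not from the thread): a real matrix with a complex-conjugate eigenvalue pair, where the real and imaginary parts of a complex eigenvector span a real invariant plane.

```python
import numpy as np

# An example real matrix with complex eigenvalues 1 +- i*sqrt(6)
# (from the upper-left 2x2 block) and a real eigenvalue 2.
A = np.array([[1., -2., 0.],
              [3.,  1., 0.],
              [0.,  0., 2.]])

lam_all, vecs = np.linalg.eig(A)
i = int(np.argmax(np.abs(lam_all.imag)))   # pick a genuinely complex eigenvalue
a, b = lam_all[i].real, lam_all[i].imag
x, y = vecs[:, i].real, vecs[:, i].imag

# Expanding A(x + iy) = (a + ib)(x + iy) and separating real and
# imaginary parts gives
#   A x = a x - b y   and   A y = b x + a y,
# so span{x, y} is a real invariant subspace of dimension 2.
ok = np.allclose(A @ x, a * x - b * y) and np.allclose(A @ y, b * x + a * y)
```

Any invariant subspace of dimension 2 then contains a minimal one of dimension 1 or 2, which is the claim in #3.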
 
I got another message from the same guy, and he says he's not sure what you meant by the terms you used. He suggested that maybe you should write out the definition of your terms and sketch your proof. He also said that he should have been more explicit too, and asked me to post this:

I interpreted the question underlying Sam Kolb's Posts #1-3 to be: what's special about orthogonal matrices? Sam Kolb didn't state the theorem as given in his book or the context, and I think that's crucial. I guessed the conclusion of the theorem in his book actually states more than he acknowledged in his post, and that the "more" explains the restriction to orthogonal matrices.

I assumed he was reading a theorem given in books such as Herstein, Topics in Algebra (see p. 348): every (real) orthogonal matrix is conjugate in O(n) to a block diagonal matrix with only 1x1 and 2x2 blocks, in which the 1x1 blocks have entries [itex]\pm 1[/itex] and the 2x2 blocks have the form
[tex] \left[ \begin{array}{cc}
\cos(\theta) & \sin(\theta) \\
-\sin(\theta) & \cos(\theta)
\end{array} \right][/tex]
In other words given any Q in O(n) there is T in O(n) such that [itex]T \, Q \, T^{-1} = T \, Q \, T^{\ast}[/itex] has the given form. Then each block corresponds to what we might call an "irreducible" or "minimal" invariant subspace, i.e. one containing no invariant nonzero proper subspaces, and these irreducible invariant subspaces are mutually orthogonal. In other words we have an orthogonal direct sum decomposition of V = R^n into irreducible invariant subspaces which are all one or two dimensional and which, in the two dimensional case, restrict to ordinary rotations:
[tex]V = V_1 \oplus V_2 \oplus \dots \oplus V_r[/tex]
Furthermore, the 2x2 blocks correspond to rotations in the sense I mentioned (which fix pointwise the orthogonal complement of the two dimensional invariant subspace), and the 1x1 blocks with entry -1 correspond to orthoreflections, which fix the orthogonal complement of the "flipped" one-dimensional invariant subspace.

This theorem gives the nicest generalization of the notion of "the axis of rotation" to more than three dimensions.
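The block diagonal form can be computed numerically via the real Schur decomposition. The following is a sketch in Python/SciPy (my own addition, not from the thread): for a normal matrix such as an orthogonal one, the quasi-triangular Schur factor is in fact block diagonal with 1x1 and 2x2 blocks, which is exactly the canonical form from Herstein cited above.

```python
import numpy as np
from scipy.linalg import schur

# The 5-cycle discussed earlier in the thread.
P = np.array([[0., 1, 0, 0, 0],
              [0, 0, 1, 0, 0],
              [0, 0, 0, 1, 0],
              [0, 0, 0, 0, 1],
              [1, 0, 0, 0, 0]])

# Real Schur form: P = Z T Z^T with Z orthogonal and T quasi-triangular.
# Because P is orthogonal (hence normal), T is block diagonal, with 1x1
# blocks (entries +-1) and 2x2 rotation blocks.
T, Z = schur(P, output='real')
```

Here the diagonal of T shows one 1x1 block with entry 1 (the fixed line) and two 2x2 rotation blocks with diagonal entries cos(2π/5) and cos(4π/5), matching the 1, 2, 2 decomposition discussed above.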

Contrast the rational canonical form. The theorem states that any nxn matrix Q over a field F is conjugate in GL(n,F) to a block diagonal matrix in which the blocks are companion matrices of powers of the irreducible factors of the minimal polynomial of Q, and the degree of each such factor is the dimension of the corresponding invariant subspace. Here we have a direct sum decomposition into invariant subspaces, but in general not an orthogonal direct sum decomposition. Over the real field the irreducible factors have degree one or two, which explains what dvs said.
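That last point can be made concrete with the 5-cycle from earlier. Below is a small NumPy sketch (not from the thread): its minimal polynomial is x^5 - 1, and pairing each complex-conjugate root into a real quadratic shows how the polynomial splits over R into irreducible factors of degree one and two.

```python
import numpy as np

# The roots of x^5 - 1 are the fifth roots of unity.  Pairing each
# conjugate pair gives a real quadratic factor
#   x^2 - 2*cos(2*pi*k/5)*x + 1,  k = 1, 2,
# so over R the irreducible factors have degree 1 or 2.
roots = np.exp(2j * np.pi * np.arange(5) / 5)

linear = np.real(np.poly([roots[0]]))             # the factor x - 1
quads = [np.real(np.poly([roots[k], roots[-k]]))  # real quadratic factors
         for k in (1, 2)]

# Multiplying the factors back together recovers x^5 - 1.
product = linear
for q in quads:
    product = np.polymul(product, q)
```

The degrees 1, 2, 2 of these factors are exactly the dimensions of the minimal invariant subspaces found for the 5-cycle above.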

The common theme I tried to bring out here is the relation between the geometry of linear transformations on a vector space over F and the algebra of factoring polynomials over F.
 
