Minimal invariant subspaces

  1. Jan 25, 2009 #1
    I have a question about this theorem.

    Let V be an n-dimensional inner product space, and let T:V-->V be an orthogonal linear transformation. Let S be a minimal invariant subspace under T. Then S is one dimensional or two dimensional.


    I understand what this theorem says and I follow the proof given in my book, but I can't see any reason why the hypothesis that T be orthogonal is necessary.
     
  3. Jan 25, 2009 #2
    Since no one has responded to my post, I'll rephrase my question.

    The proof doesn't seem to make use of the fact that T is orthogonal. It seems like it would work just as well for an arbitrary linear transformation. Does anyone know if this is a typo?
     
  4. Jan 26, 2009 #3
    I answered my own question. It turns out that every linear transformation on a finite dimensional real inner product space has a minimal invariant subspace of dimension 1 or 2.
     
  5. Jan 27, 2009 #4

    Fredrik


    To be honest I don't even know the definition of a minimal invariant subspace, but someone who does has read your post and told me that the theorem you stated in #1 is correct but your conclusion in #3 is wrong. He also suggested that I let you know that you should have given a reference for the theorem (since it can be hard to find in textbooks), and that I tell you this:

    Consider an n-cycle in SO(n), e.g. a 5-cycle in SO(5):

    [tex]
    \left[ \begin{array}{ccccc}
    0 & 1 & 0 & 0 & 0 \\
    0 & 0 & 1 & 0 & 0 \\
    0 & 0 & 0 & 1 & 0 \\
    0 & 0 & 0 & 0 & 1 \\
    1 & 0 & 0 & 0 & 0
    \end{array} \right]
    [/tex]

    This has invariant subspaces of dimensions 1, 2, and 2, and as part of the theorem they are all mutually orthogonal. The one-dimensional one is the fairly obvious fixed line; the other two are planes in which our matrix effects a 1/5 turn and a 2/5 turn, respectively. Very similar statements hold in other odd dimensions, and slightly different ones in even dimensions.

    Looking at the eigenvalues (they lie on the unit circle in C) helps to understand the algebraic-geometric pattern.
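
    As a quick numerical check (a sketch in Python with numpy; my own addition, not from any reference), the eigenvalues of the 5-cycle matrix above come out as the five 5th roots of unity, with the conjugate pairs matching the 1/5 and 2/5 turns in the two invariant planes:

    [code]
    import numpy as np

    # The 5-cycle matrix from above: 1s on the superdiagonal, 1 in the bottom-left corner
    P = np.zeros((5, 5))
    P[:4, 1:] = np.eye(4)
    P[4, 0] = 1

    w = np.linalg.eigvals(P)
    print(np.allclose(np.abs(w), 1))                 # True: all eigenvalues lie on the unit circle
    print(np.sort(np.round(np.angle(w) / (2 * np.pi), 3)))
    # signed fractions of a turn: -0.4, -0.2, 0, 0.2, 0.4, i.e. 0, +-1/5, +-2/5
    [/code]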

    The statement for O(n) is a slight elaboration of the simpler statement for SO(n), which as I said falls into odd and even dimensional cases.

    I find it useful to think of a rotation in E^n as an element of SO(n) which has a two-dimensional invariant subspace in which it effects an ordinary rotation by some angle, and an orthogonal (n-2)-dimensional fixed subspace, the analog of the rotation axis for an element of SO(3). Then every element of SO(n) can be factored into rotations, and then further factored into orthoreflections.
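
    In block form (my notation, in a suitable orthonormal basis), such a rotation looks like

    [tex]
    \left[ \begin{array}{ccc}
    \cos(\theta) & \sin(\theta) & 0 \\
    -\sin(\theta) & \cos(\theta) & 0 \\
    0 & 0 & I_{n-2}
    \end{array} \right]
    [/tex]

    where [itex]I_{n-2}[/itex] is the identity on the fixed subspace.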
     
  6. Jan 27, 2009 #5

    dvs

    I believe that the conclusion in #3 is indeed correct. It follows from the fact that operators on finite-dimensional complex spaces have eigenvectors (hence 1-dimensional invariant subspaces); for a real space one complexifies first.
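
    To spell the argument out (a sketch of the standard argument): complexify T, take an eigenvector z = u + iv with eigenvalue a + ib (u, v, a, b real), and compare real and imaginary parts of Tz = (a + ib)z:

    [tex]
    Tu = au - bv, \qquad Tv = bu + av.
    [/tex]

    So the real span of u and v is a T-invariant subspace of dimension 1 or 2, and any minimal invariant subspace contained in it has dimension 1 or 2 as well.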
     
  7. Jan 27, 2009 #6

    Fredrik


    I got another message from the same guy, and he says he's not sure what you meant by the terms you used. He suggested that maybe you should write out the definition of your terms and sketch your proof. He also said that he should have been more explicit too, and asked me to post this:

    I interpreted the question underlying Sam Kolb's posts #1-3 to be: what's special about orthogonal matrices? Sam Kolb didn't quote the theorem as given in his book, or give its context, and I think that's crucial. I guessed that the conclusion of the theorem in his book actually states more than he acknowledged in his post, and that the "more" explains the restriction to orthogonal matrices.

    I assumed he was reading a theorem given in books such as Herstein, Topics in Algebra (see p. 348): every (real) orthogonal matrix is conjugate in O(n) to a block diagonal matrix with only 1x1 and 2x2 blocks, in which the 1x1 blocks have entries [itex]\pm 1[/itex] and the 2x2 blocks have the form
    [tex]
    \left[ \begin{array}{cc}
    \cos(\theta) & \sin(\theta) \\
    -\sin(\theta) & \cos(\theta)
    \end{array} \right]
    [/tex]
    In other words, given any Q in O(n) there is T in O(n) such that [itex]T \, Q \, T^{-1} = T \, Q \, T^{\ast}[/itex] has the given form. Then each block corresponds to what we might call an "irreducible" or "minimal" invariant subspace, i.e. one containing no nonzero proper invariant subspace, and these irreducible invariant subspaces are mutually orthogonal. In other words, we have an orthogonal direct sum decomposition of V = R^n into irreducible invariant subspaces which are all one or two dimensional, with Q restricting to an ordinary rotation on each two-dimensional one:
    [tex] V = V_1 \oplus V_2 \dots \oplus V_r [/tex]
    Furthermore, the 2x2 blocks correspond to rotations in the sense I mentioned (which fix pointwise the orthogonal complement of the two-dimensional invariant subspace), and the 1x1 blocks with entry -1 correspond to orthoreflections, which fix the orthogonal complement of the "flipped" one-dimensional invariant subspace.
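
    You can see this block diagonal form numerically; here is a sketch in Python with scipy (my own check, not from Herstein). The real Schur form of a normal matrix is block diagonal, so for the orthogonal 5-cycle matrix from post #4 it exhibits exactly the 1x1 and 2x2 blocks:

    [code]
    import numpy as np
    from scipy.linalg import schur

    # The 5-cycle matrix from post #4
    P = np.zeros((5, 5))
    P[:4, 1:] = np.eye(4)
    P[4, 0] = 1

    # Real Schur form: P = Z S Z^T with Z orthogonal.  Since P is orthogonal
    # (hence normal), S is block diagonal: 1x1 blocks +-1 and 2x2 rotation blocks.
    # The corresponding columns of Z span the mutually orthogonal invariant subspaces.
    S, Z = schur(P, output='real')
    print(np.round(S, 3))
    [/code]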

    This theorem gives the nicest generalization of the notion of "the axis of rotation" to more than three dimensions.

    Contrast the rational canonical form: any nxn matrix Q over a field F is conjugate in GL(n,F) to a block diagonal matrix in which the blocks are companion matrices of powers of the irreducible factors of the minimal polynomial of Q, and the degree of each irreducible factor is the dimension of the corresponding minimal invariant subspace. Here we have a direct sum decomposition into invariant subspaces, but in general not an orthogonal one. Over the real field the irreducible factors have degree one or two, which explains what dvs said.
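
    For instance (my example), the companion matrix of the irreducible real polynomial [itex]x^2 + 1[/itex] is

    [tex]
    \left[ \begin{array}{cc}
    0 & -1 \\
    1 & 0
    \end{array} \right],
    [/tex]

    which is precisely rotation by [itex]\pi/2[/itex]: a degree-two irreducible factor corresponding to a two-dimensional minimal invariant subspace.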

    The common theme I tried to bring out here is the relation between the geometry of linear transformations on a vector space over F and the algebra of factoring polynomials over F.
     