
Prove V = S ⊕ S^⊥

  1. Jan 30, 2013 #1
    1. The problem statement, all variables and given/known data

    Let ##S## be a subspace of an inner product space ##V##. Prove that ##V=S\oplus S^{\bot}##.

    2. Relevant equations

    The circled plus denotes the orthogonal (direct) sum of two subspaces.
    From an earlier exercise, I've shown that ##S^{\bot}## is a subspace of ##V##, and that ##S\cap T = \{ 0\}## whenever ##S\bot T##. I don't know whether these results will be helpful to this proof, but I'll leave them up in case they are.
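    In other words, the claim is that every ##v\in V## can be written in exactly one way as
    $$v = s + t,\qquad s\in S,\; t\in S^{\bot},$$
    where ##S^{\bot} = \{x\in V : \langle x, s\rangle = 0 \text{ for all } s\in S\}##.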

    3. The attempt at a solution

    I don't know where to begin. I've resorted to looking online for established proofs, but even those don't make sense to me. Could someone help me along with this proof? Thanks.
     
  2. Jan 30, 2013 #2
  3. Jan 30, 2013 #3

    haruspex

    Science Advisor
    Homework Helper
    Gold Member
    2016 Award

    That proof looks fairly straightforward. Which step gives you doubts?
     
  4. Jan 30, 2013 #4
    Alright, I understand the first paragraph (since I proved that earlier myself), and I get the substitution of ##W## for ##S\oplus S^{\bot}##.

    • My first doubt arises from the next sentence. What's being done when you "choose" an orthonormal basis? Is that the same as saying, for instance, "Let ##\{ x_1, \ldots, x_n \}## be an orthonormal basis of S"?
    • Alongside this concern, what does it mean to "extend the basis to V"? Does it have anything to do with assuming the basis of S is not a basis of V (unless that's true in general)?
    • How do we know that ##e## is orthogonal to ##W##? Would it have something to do with linear independence?

    I get the last two sentences, so it looks like I just don't understand the introduction of ##e## into the proof, and how we know what we know about ##e##. Thanks for the reply.
     
  5. Jan 30, 2013 #5

    Dick

    Science Advisor
    Homework Helper

    I don't know that the proof you are looking at takes the most economical route. If you know that an orthonormal basis of a subspace can be extended (as in Gram-Schmidt) to an orthonormal basis of the whole space, why not pick an orthonormal basis for ##S## and extend it to an orthonormal basis for ##V##? That is, you can pick an orthonormal basis for ##V## such that ##\{x_1,\ldots,x_k\}## is a basis for ##S## and ##\{x_1,\ldots,x_k,x_{k+1},\ldots,x_n\}## is a basis for ##V##. What could you do with that?
     
    Last edited: Jan 30, 2013
  6. Jan 30, 2013 #6
    I think I see what you're getting at, but I still have a problem with "extending" a basis to a larger (or at the least, more inclusive) space. Sorry if it sounds dumb, but I just don't know what that means.
     
  7. Jan 30, 2013 #7

    Dick

    Science Advisor
    Homework Helper

    Extending a basis just means that if you have a basis ##\{x_1,\ldots,x_k\}## for a subspace, you can add more vectors to get a basis ##\{v_1,\ldots,v_n\}## for the whole space, where ##v_i=x_i## for ##1\le i\le k##.
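    For instance, in ##\mathbb{R}^3## with the usual dot product, take ##S## to be the ##x##-axis. Then ##\{(1,0,0)\}## is an orthonormal basis for ##S##, and adding ##(0,1,0)## and ##(0,0,1)## extends it to the orthonormal basis
    $$\{(1,0,0),\,(0,1,0),\,(0,0,1)\}$$
    of all of ##\mathbb{R}^3##; the two added vectors span ##S^{\bot}## (the ##yz##-plane), which is exactly the pattern this proof exploits.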
     
  8. Jan 30, 2013 #8

    haruspex

    Science Advisor
    Homework Helper
    Gold Member
    2016 Award

    Yes.
    If the basis of ##S## were already a basis for ##V##, then ##S=V## and ##S^{\bot}=\{0\}##, so there would be nothing to extend. The argument does appeal to the theorem that any orthonormal set can be extended to an orthonormal basis.
    e has been chosen to be orthogonal to each element of an orthonormal basis for W. It is therefore orthogonal to every element of W.
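    In symbols: if ##\{w_1,\ldots,w_k\}## is that orthonormal basis and ##w=\sum_i a_iw_i## is any element of ##W##, then by linearity of the inner product in its first slot,
    $$\langle w, e\rangle = \sum_i a_i\langle w_i, e\rangle = \sum_i a_i\cdot 0 = 0.$$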
     
  9. Jan 30, 2013 #9
    Ooh, okay, it's starting to make a lot more sense now. Thanks to both of you for the help!

    I've started writing up the proof for myself using the point Dick made earlier. Here's what I have so far:
    Let ##\{ x_1, \ldots, x_n\}## be an orthonormal basis of ##S##. Assume ##V \not= S \oplus S^\bot##. Then ##\exists a\in V## such that ##a \not\in \mbox{span}\{ x_1, \ldots, x_n\}## and ##a\bot x_i \; \forall \; i=1,2,\cdots,n##. Since ##a\bot x_i##, we have that ##a\in S^\bot##.
    Where do I go from here? How do I use this last fact to show the contradiction? I would think it would involve showing that a is also an element of S, but do I necessarily know that a is orthogonal to the basis of ##S^\bot##?
     
  10. Jan 30, 2013 #10

    Dick

    Science Advisor
    Homework Helper

    I was actually trying to lead you in what I think is a simpler direction, and now you're mishmashing that with the proof in the pdf file, so I can't answer that. Put the pdf proof aside and just think about it. ##\{x_1,\ldots,x_n\}## is an orthonormal basis for ##S##. If the dimension of ##V## is ##m\ge n##, then you can extend that basis to ##\{x_1,\ldots,x_n,x_{n+1},x_{n+2},\ldots,x_m\}##, an orthonormal basis for ##V##. Is ##x_{n+1}##, for example, in ##S^\bot##? Remember the basis is orthonormal. Also remember that since you have a basis, every element of ##V## can be written uniquely as ##v=a_1x_1+\cdots+a_mx_m## for some scalars ##a_i##.
     
    Last edited: Jan 30, 2013
  11. Jan 30, 2013 #11
    Talk about a eureka moment! I've learned I should really keep track of what a basis entails...

    So I've got ##x_{n+1}\in S^\bot##, since ##x_{n+1}\bot x_i, \forall i = 1,2,\ldots,n##.

    And, like you said, every ##v\in V## can be written as a linear combination
    ##v = a_1x_1 + \cdots + a_nx_n + \cdots + a_mx_m##,
    where the first ##n## terms sum to some ##x\in S## and the remaining terms sum to some ##y\in S^\bot##. In other words, ##v = x + y##. Thank you very much! Much simpler than I thought it would be. I was under the impression that I had to use the inner product at some point.
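    Spelling out the middle step, using the earlier result that ##S^\bot## is a subspace:
    $$x=\sum_{i=1}^{n}a_ix_i\in S,\qquad y=\sum_{i=n+1}^{m}a_ix_i\in S^{\bot},$$
    since each ##x_i## with ##i>n## lies in ##S^\bot## and ##S^\bot## is closed under linear combinations.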

    Now, the definition I have of a direct sum says that ##v## can be written uniquely as a sum ##x+y##. I still have to show that too, right? If so, I can handle it.

    Thanks again!
     
  12. Jan 30, 2013 #12

    Dick

    Science Advisor
    Homework Helper

    My last post has the word 'unique' in it. Can you find it? The heavy lifting in this proof is showing that you can find an orthonormal basis for a subspace and extend it to an orthonormal basis for the whole space. The proof in the pdf used it, so I'm assuming you can use it. Can you sketch out why it's true?
     
  13. Jan 30, 2013 #13
    Oh, I didn't notice it. Thanks for pointing it out. Yes, I'm confident that I can. I'd start by supposing ##v## can be represented in two ways:
    ##v = v_1 + \cdots + v_n + \cdots + v_m = t_1 + \cdots + t_n + \cdots + t_m##, where ##v_i## abbreviates ##a_ix_i## and ##t_i## abbreviates ##b_ix_i## for scalars ##a_i, b_i##.

    Then I get
    ##(v_1-t_1) + \cdots + (v_n-t_n) + \cdots + (v_m-t_m) = 0##
    Here, the first n terms are in ##S## and the remaining terms are in ##S^\bot##.

    Then, since ##(v_i-t_i) = (a_i-b_i)x_i## and the ##x_i## are linearly independent, every coefficient ##a_i-b_i## must be zero, so ##v_i=t_i## for each ##i##, and the two representations agree.
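    Equivalently, since the basis is orthonormal, the inner product reads the coefficients off directly:
    $$a_j = \langle v, x_j\rangle = b_j \quad\text{for each } j,$$
    because ##\langle x_i, x_j\rangle## is ##1## when ##i=j## and ##0## otherwise. So the inner product does show up after all.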
     
  14. Jan 30, 2013 #14

    Dick

    Science Advisor
    Homework Helper

    Sure, that's the idea for proving that the representation of a vector in terms of a basis is unique. What I meant is the whole idea that an orthonormal basis of a subspace can be extended to an orthonormal basis of the whole space. If you already know that, don't worry about it. But the proof of it is Gram-Schmidt orthogonalization. Using that is what's making this proof easy.
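    A minimal sketch of that extension step, assuming ##V## is finite-dimensional: first enlarge ##\{x_1,\ldots,x_n\}## to an ordinary basis ##\{x_1,\ldots,x_n,y_{n+1},\ldots,y_m\}## of ##V##, then orthonormalize the new vectors one at a time,
    $$u_j = y_j - \sum_{i=1}^{j-1}\langle y_j, x_i\rangle\,x_i,\qquad x_j = \frac{u_j}{\|u_j\|}\qquad (j=n+1,\ldots,m).$$
    Each ##u_j\neq 0## because ##y_j## is not in the span of the earlier vectors, and the first ##n## vectors are untouched since they are already orthonormal.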
     
  15. Jan 30, 2013 #15
    My instructor hasn't formally introduced it in a lecture, but I've heard him mention it at one time or another. I'll be sure to keep my eyes peeled for it in our notes. Thanks again!
     