
Direct Sum: Vector Spaces

  1. Feb 29, 2016 #1
    During lecture, the professor gave us a theorem he wants us to prove on our own before he goes over the theorem in lecture.

    Theorem: Let ##V_1, V_2, ... V_n## be subspaces of a vector space ##V##. Then the following statements are equivalent.
    1. ##W=\sum V_i## is a direct sum.
    2. Decomposition of the zero vector is unique.
    3. ##V_i\cap\sum_{j\neq i}V_j =\{0\}## for ##i = 1, 2, ..., n##
    4. dim##W## = ##\sum##dim##V_i##
    What I understand:
    • Definition of Basis
    • Dimensional Formula
    • Definition of Direct Sum

    My Attempt: ## 1 \rightarrow 2 \rightarrow 3 \rightarrow 4 \rightarrow 1##

    ##1 \rightarrow 2##
    Statement 1 says ##W=\sum V_i## is a direct sum. Then by definition every ##\alpha \in W## has a unique decomposition ##\alpha = \alpha_1 + \alpha_2 + ... + \alpha_n## where ##\alpha_i \in V_i## for ##i = 1, 2, ..., n.## Taking ##\alpha = 0##, the decomposition ##0 = 0 + 0 + ... + 0## is then the only one, so ##\alpha_i = 0## for all ##i##.


    ##2 \rightarrow 3##
    Statement 2 says the zero vector has a unique decomposition ##0 = \alpha_1 + ... + \alpha_n## where ##\alpha_i \in V_i## for ##i = 1, 2, ..., n##, namely ##\alpha_i = 0## for all ##i##. Suppose there exists ##x_i \neq 0## with ##x_i \in V_i \cap \sum_{j\neq i}V_j##. Then ##x_i = \sum_{j\neq i} x_j## for some ##x_j \in V_j##, hence ##x_i - \sum_{j\neq i} x_j = 0##. Since ##x_i \neq 0##, this exhibits a decomposition of the zero vector whose terms are not all zero, contradicting uniqueness. Therefore ##V_i \cap \sum_{j\neq i}V_j = \{0\}##.
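    Here is a quick numerical sanity check of the ##2 \leftrightarrow 3## link (the subspaces below are my own illustrative choice, not from the problem): when two subspaces intersect nontrivially, the zero vector acquires a second decomposition.

```python
import numpy as np

# Basis vectors stored as rows. These particular subspaces are an
# illustrative choice: V1 is the xy-plane, V2 contains (1,1,0) too.
V1 = np.array([[1, 0, 0], [0, 1, 0]])
V2 = np.array([[1, 1, 0], [0, 0, 1]])

# x is in span(V) exactly when appending x does not increase the rank
x = np.array([1, 1, 0])
in_V1 = np.linalg.matrix_rank(np.vstack([V1, x])) == np.linalg.matrix_rank(V1)
in_V2 = np.linalg.matrix_rank(np.vstack([V2, x])) == np.linalg.matrix_rank(V2)
print(in_V1, in_V2)  # True True -- x lies in V1 ∩ V2

# Hence 0 = x + (-x) with x in V1 and -x in V2: a decomposition of zero
# with nonzero terms, so the decomposition of 0 is not unique and the
# sum V1 + V2 is not direct.
```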


    ##3 \rightarrow 4##
    Statement 3 says ##V_i\cap\sum_{j\neq i}V_j =\{0\}## for ##i = 1, 2, ..., n##, which implies dim(##V_i\cap\sum_{j\neq i}V_j##) = ##0##. Now apply the dimensional formula, which states dim(##X+Y##) = dim(##X##) + dim(##Y##) - dim(##X\cap Y##):

    \begin{eqnarray*}
    \text{dim}(V_1+(V_2 + ... + V_n)) & = & \text{dim}(V_1) + \text{dim}(V_2 + (V_3 + ... + V_n)) - \text{dim}(V_1 \cap \sum_{2}^nV_j)\\
    & = & \text{dim}(V_1) + \text{dim}(V_2) + \text{dim}(V_3 + (V_4 +... + V_n)) - \text{dim}(V_2 \cap \sum_{3}^nV_j)\\
    \end{eqnarray*}
    Each subtracted intersection term vanishes by statement 3, since ##V_i \cap \sum_{j>i} V_j \subseteq V_i \cap \sum_{j\neq i} V_j = \{0\}##. Repeatedly applying the dimensional formula to dim(##V_i + V_{i + 1} + ... + V_{n}##) thus yields
    \begin{eqnarray*}
    \text{dim}(V_1+(V_2 + ... + V_n)) & = & \text{dim}(V_1) + \text{dim}(V_2) + ... + \text{dim}(V_n)\\
    & = & \sum_{i = 1}^n\text{dim}(V_i)\\
    \end{eqnarray*}
    where ##W = \sum_{i = 1}^nV_i##; hence dim##W## = ##\sum_{i=1}^n##dim(##V_i##), which is statement 4.
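    As a concrete check of ##3 \rightarrow 4## (with subspaces of my own choosing, not from the problem), here is a genuine direct sum in ##\mathbb{R}^4## where the ranks add up and the intersection terms in the telescoping expansion vanish:

```python
import numpy as np

# Three subspaces of R^4 whose basis vectors, taken together, remain
# linearly independent -- so the sum is direct (illustrative example).
V1 = np.array([[1, 0, 0, 0]])                  # dim 1
V2 = np.array([[0, 1, 0, 0], [0, 0, 1, 0]])    # dim 2
V3 = np.array([[0, 0, 0, 1]])                  # dim 1

W = np.vstack([V1, V2, V3])
dim_W = np.linalg.matrix_rank(W)
dims = [np.linalg.matrix_rank(V) for V in (V1, V2, V3)]
print(dim_W, sum(dims))  # 4 4 -- statement 4 holds: dim W = sum of dim V_i

# First intersection term of the telescoping expansion, computed via the
# dimensional formula dim(X ∩ Y) = dim X + dim Y - dim(X + Y):
dim_V1_cap_rest = dims[0] + np.linalg.matrix_rank(np.vstack([V2, V3])) - dim_W
print(dim_V1_cap_rest)  # 0 -- as statement 3 requires
```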

    ##4 \rightarrow 1##
    Statement 4 says dim##W## = ##\sum##dim##V_i##. By definition, ##W = \sum_{i=1}^nV_i = \{\alpha = \alpha_1 + \alpha_2 + ... + \alpha_n \in V: \alpha_i \in V_i \text{ for } i = 1,..., n\}##. We seek to show that every ##\alpha \in W## has a unique decomposition. By hypothesis, dim(##W) = m## and dim(##V_i) = m_i## where ##m = \sum_{i = 1}^nm_i##. Now, each ##V_i## has a basis ##\Lambda_i## consisting of ##m_i## linearly independent vectors. Since ##\alpha_i \in V_i##, there is a unique linear combination ##\alpha_i = \sum_{k=1}^{m_i}c_{i,k}\beta_{i,k}##, where each ##c_{i,k}## is a scalar in the field and ##\beta_{i,k} \in \Lambda_i##. Thus ##\alpha \in W## can be written as
    \begin{eqnarray*}
    \alpha & = & \alpha_1 + \alpha_2 + ... + \alpha_n\\
    & = & (\sum_{k=1}^{m_1}c_{1,k}\beta_{1,k}) + (\sum_{k=1}^{m_2}c_{2,k}\beta_{2,k}) + ... + (\sum_{k=1}^{m_n}c_{n,k}\beta_{n,k})
    \end{eqnarray*}
    It follows by hypothesis that ##\alpha## is composed of ##m = m_1 + ... + m_n## linearly independent vectors. Thus the decomposition ##\alpha = \alpha_1 + \alpha_2 + ... + \alpha_n## with ##\alpha_i \in V_i## for ##i = 1, 2, ..., n## is indeed unique; therefore, ##W = \sum_{i = 1}^nV_i## is a direct sum.

    Since ##1 \rightarrow 2 \rightarrow 3 \rightarrow 4 \rightarrow 1##, then all statements are equivalent.





    _________________________


    Now I feel like my proof overall, especially ##4 \rightarrow 1##, could be improved. Do you have any suggestions on how to make the proof better? Are there any logical errors? Is there an alternative way to prove this? I appreciate any feedback or criticism. Thank you for your time and have a wonderful day.
     
  3. Feb 29, 2016 #2

    andrewkirk

    Science Advisor
    Homework Helper
    Gold Member

    It looks pretty good.
    However ##4\to 1## is missing a piece. When you write

        It follows by hypothesis that ##\alpha## is composed of ##m = m_1 + ... + m_n## linearly independent vectors.

    that statement is not supported by the assumption of (4), which is simply a statement about dimensions and says nothing (directly) about the relationships between the subspaces ##V_i##. We know by supposition that the vectors in each set ##\Lambda_i\equiv\{\beta_{i,1},...,\beta_{i,m_i}\}## are mutually independent, but not that the vectors in ##\Lambda_i## are independent of those in ##\Lambda_j## for ##i\neq j##.

    I wonder whether the contrapositive might be an easier way to prove this. That is, prove that ##\neg 1\to\neg 4##. If you assume the sum is not direct it should be easy enough to identify a nonzero vector in the intersection of two subspaces which, by the dimensional formula, will entail that the dimension of the sum of subspaces is less than the sum of dimensions.
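    A small numerical sketch of the suggested contrapositive ##\neg 1\to\neg 4## (the subspaces are again my own illustrative pick): a shared nonzero vector makes dim##W## fall short of ##\sum##dim##V_i## by exactly the dimension of the intersection.

```python
import numpy as np

# Two subspaces of R^3 sharing the nonzero vector (1,1,0), so their sum
# is not direct (illustrative example, not from the thread).
V1 = np.array([[1, 0, 0], [0, 1, 0]])   # dim 2
V2 = np.array([[1, 1, 0], [0, 0, 1]])   # dim 2

dim_W = np.linalg.matrix_rank(np.vstack([V1, V2]))
total = np.linalg.matrix_rank(V1) + np.linalg.matrix_rank(V2)
print(dim_W, total)  # 3 4 -- dim W < total, so statement 4 fails

# The shortfall equals dim(V1 ∩ V2) = 1 by the dimensional formula,
# exactly as the contrapositive argument predicts.
```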
     