
Linear Algebra Proof involving Linear Independence

  1. Sep 27, 2015 #1

    RJLiberator

    Gold Member

    1. The problem statement, all variables and given/known data
    Prove that if [itex]\{A_1, A_2, \dots, A_k\}[/itex] is a linearly independent subset of [itex]M_{n\times n}(F)[/itex], then [itex]\{A_1^T, A_2^T, \dots, A_k^T\}[/itex] is also linearly independent.

    2. Relevant equations


    3. The attempt at a solution

    Have: [itex]a_1A_1^T+a_2A_2^T+...+a_kA_k^T=0[/itex] implies [itex]a_1A_1+a_2A_2+...+a_kA_k=0[/itex]

    So [itex]a_1=a_2=\dots=a_k=0[/itex]

    ^^ This was the answer in the back of the book, but I'm not sure what it means.

    I guess I have to assume that the T means transpose here. Is it safe to assume that since the original set is linearly independent, the set of transposes is also linearly independent?
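    One way to unpack the book's one-line answer (assuming T does denote the transpose, which the replies below confirm): transposition is linear and undoes itself, so transposing the first equation gives back the second,

    $$\left(a_1A_1^T + a_2A_2^T + \dots + a_kA_k^T\right)^T = a_1A_1 + a_2A_2 + \dots + a_kA_k,$$

    and the transpose of the zero matrix is again the zero matrix.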
     
  3. Sep 27, 2015 #2
    ##A_i## denotes an ##n \times n## matrix, as I understand it, and the given system is a subset of ##M_{n\times n}(F)##.

    Only the trivial linear combination of the matrices ##A_i## produces the zero matrix.

    EDIT: I think that was too much information. In general, what can you say about the individual entrywise sums, given that the initial system is linearly independent?
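    One way to spell out that hint: a linear combination of matrices is computed entry by entry, so for every pair of (illustrative) indices ##(p,q)##

    $$\left(\sum_{i=1}^{k} a_i A_i\right)_{pq} = \sum_{i=1}^{k} a_i\,(A_i)_{pq},$$

    and the combination is the zero matrix exactly when each of these scalar sums is zero.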
     
  4. Sep 27, 2015 #3

    Mark44

    Staff: Mentor

    There is a subtlety in the definition of linear independence that escapes many students in linear algebra. Given any set of vectors ##\{v_1, v_2, \dots, v_n\}##, the equation ##c_1v_1 + c_2v_2 + \dots + c_nv_n = 0## always has ##c_1 = c_2 = \dots = c_n = 0## as a solution. The difference between the vectors being linearly independent versus linearly dependent is whether the solution for the constants ##c_i## is unique. For a set of linearly independent vectors, ##c_1 = c_2 = \dots = c_n = 0## is the only solution (often called the trivial solution). For a set of linearly dependent vectors, there will also be an infinite number of other solutions.

    Here's an example. Consider the vectors ##v_1 = \langle 1, 0\rangle, v_2 = \langle 0, 1\rangle, v_3 = \langle 1, 1\rangle##. The equation ##c_1v_1 + c_2v_2 + c_3v_3 = 0## is obviously true when ##c_1 = c_2 = c_3 = 0##. That alone isn't enough for us to conclude that the three vectors are linearly independent. With a bit of work we can see that ##c_1 = 1, c_2 = 1, c_3 = -1## is another solution. In fact, this is only one of an infinite number of alternative solutions, so we conclude that the three vectors here are linearly dependent.
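    For the record, plugging the claimed nontrivial solution into the equation checks out in one line:

    $$1\cdot\langle 1, 0\rangle + 1\cdot\langle 0, 1\rangle + (-1)\cdot\langle 1, 1\rangle = \langle 1 + 0 - 1,\; 0 + 1 - 1\rangle = \langle 0, 0\rangle.$$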

    What I've written about vectors here applies to any member of a vector space, including the matrices of the problem posted in this thread.
    Yes, T means transpose. No, you can't assume that since the set of vectors (matrices in this case) is linearly independent, then the set of transposes is also linearly independent. You have to show that this is the case.
     
  5. Sep 27, 2015 #4
    In your example you gave 3 vectors in a two-dimensional space. One of them is then always a linear combination of the two others, provided the set of vectors spans the space.

    The objective in the problem is to use the fact that the system of matrices is linearly independent. It means that the only linear combination producing the zero matrix is the trivial one.

    Multiplying a matrix by a scalar, however, means each individual element of the matrix is multiplied by that same scalar.
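    Written entrywise, that observation also shows that scalar multiplication commutes with transposition, which is the link between the two linear combinations in this problem (##p,q## are just illustrative indices):

    $$\bigl(aA^T\bigr)_{pq} = a\,(A^T)_{pq} = a\,A_{qp} = (aA)_{qp} = \bigl((aA)^T\bigr)_{pq},\qquad\text{so}\qquad aA^T = (aA)^T.$$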
     
    Last edited: Sep 27, 2015
  6. Sep 27, 2015 #5

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    When you're asked to prove that a set ##\{v_1,\dots,v_n\}## is linearly independent, you should almost always start the proof with "Let ##a_1,\dots,a_n## be numbers such that ##\sum_{i=1}^n a_i v_i=0##."

    This is the straightforward way to begin because the definition of "linearly independent" tells you that now it's sufficient to prove that ##a_i=0## for all ##i\in\{1,\dots,n\}##. Use the equality ##\sum_{i=1}^n a_iv_i=0## and the assumptions that were included in the problem statement.

    So in your case, you start by saying this: Let ##a_1,\dots,a_k\in\mathbb F## be such that ##\sum_{i=1}^k a_i (A_i)^T=0##.

    Then you use the assumptions to prove that this equality implies that ##a_i=0## for all ##i\in\{1,\dots,k\}##.
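    A sketch of how that step can go, using only that transposition is linear and that ##(A^T)^T = A##: transpose both sides of the equality above to get

    $$0 = 0^T = \left(\sum_{i=1}^k a_i (A_i)^T\right)^T = \sum_{i=1}^k a_i \bigl((A_i)^T\bigr)^T = \sum_{i=1}^k a_i A_i,$$

    and the assumed linear independence of ##\{A_1,\dots,A_k\}## then forces ##a_i = 0## for every ##i\in\{1,\dots,k\}##.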

    I don't know what you mean exactly, but you can't assume anything that wasn't included as an assumption in the problem statement. If you mean that it's safe to assume that since ##\{A_1,\dots,A_k\}## is linearly independent, ##\{(A_1)^T,\dots,(A_k)^T\}## is too, then the answer is an extra strong "no", because you would be assuming the very statement you are supposed to prove.
     
  7. Sep 27, 2015 #6

    Mark44

    Staff: Mentor

    I did this on purpose, to provide a simple example of a set of linearly dependent vectors. To show that this set was linearly dependent, I used only the definition of linear dependence. Of course you could use other concepts to show that there are too many vectors in my set to form a basis, which makes the set linearly dependent, but my point was that many beginning students of Linear Algebra don't get the fine point that distinguishes linear independence from linear dependence; namely, the business about the equation having only the trivial solution.
     
  8. Sep 27, 2015 #7

    Ray Vickson

    Science Advisor
    Homework Helper

    Are you sure you have copied the question correctly? As stated, it is essentially trivial. A more important---and not nearly as easy---version would be: if the columns of an ##n \times n## matrix are linearly independent, then the rows are linearly independent as well. (Your version of the problem is that if a bunch of ##n \times n## matrices are linearly independent, then so are their transposes. That seems a pointless exercise to me!)
     
  9. Sep 27, 2015 #8

    RJLiberator

    Gold Member

    Yeah, the question is pretty trivial after reading the responses here. I suppose that's why I was a bit mixed up on it. I felt I didn't have enough for the answer.

    But after reading the discussion here and adding a few things, I feel confident with this.

    Thanks, and a shout out to Fredrik for the extreme clarity.
     