
Matrix, Vector Proof Help for Multivariable Mathematics (Linear Algebra) Course

  1. Aug 29, 2012 #1
    1. The problem statement, all variables and given/known data

    Problem 1:

    If A is an m x n matrix and Ax = 0 for all x ∈ ℝ^n, prove that A = O.
    If A and B are m x n matrices and Ax = Bx for all x ∈ ℝ^n, prove that A = B.

    (Here O is the m x n zero matrix, x denotes a vector, and 0 the zero vector.)


    2. The attempt at a solution

    First off, I understand the problem intuitively and can make sense of the answer being true. My issue comes in trying to phrase a proof that shows it. I know some sort of explanation of the dot product method of multiplying A by x is necessary, but I can't seem to figure out how to phrase/write/show it. Any help or advice would be much appreciated!

    1. The problem statement, all variables and given/known data

    Problem 2:

    Suppose A is a symmetric matrix satisfying A^2 = O. Prove that A = O. Give an example to show that the hypothesis of symmetry is required.


    2. The attempt at a solution

    Here I know that a symmetric matrix means A = A^T (its transpose), and also that AA = O = (A^T)A, but again I run into the issue of how that is relevant and how it can be turned into a proof. The problems seemed very straightforward at first glance, so I wasn't motivated to ask my professor about them as I did with others, but when I was confronted with actually writing them out, I didn't know where to begin.

    Any help would be wonderful, thank you!
     
  3. Aug 29, 2012 #2
    You know the condition above holds for any vector x, maybe you should choose some specific x and see what the equation says in terms of components of A.


    Can you write the components of A^T A as a sum over the components of A? What can you say about that sum? When is it zero?

    For the second part, you can just consider an arbitrary 2x2 matrix and choose its elements so that A^2 = O.
     
  4. Aug 29, 2012 #3
    My only thought for considering a specific x is either x does not equal 0 or x = A. Either way, I don't see how to continue the proof as I don't think choosing it to not equal 0 is an adequate statement (is it?) and I'm relatively certain I can't make it equal A or if I do it won't help.

    And I don't really understand what a sum of components for A^T A would look like as a general statement? How does one even write a sum of components for an arbitrary symmetric matrix or a matrix undergoing that operation? Could you give an example?
     
  5. Aug 29, 2012 #4
    No no no, x is a vector and A is a matrix, so certainly you can't have x=A. What if you let x=(1,0,0,0,...,0)?

    Generally the matrix A has components [itex]A_{ij}[/itex], and then the matrix [itex]A^T[/itex] has components [itex]A_{ji}[/itex]. Do you know how you can write the components of the matrix multiplication [itex](A^T A)_{ij}[/itex]?

    If the arbitrarily large matrices are distracting, you can try it out first with a 2x2 matrix, just write it as
    [tex] A = \left( \begin{array}{cc} a & b \\ c & d \end{array} \right) [/tex] and then figure out what you get out of the multiplication [itex]A^T A[/itex].
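    A quick numerical sketch of the earlier hint about choosing a specific x (this is my own illustration with an arbitrary example matrix, not something from the thread): multiplying A by the standard basis vector [itex]e_j[/itex] picks out the j-th column of A, which is why Ax = 0 for every x forces every column, and hence all of A, to be zero.

    ```python
    import numpy as np

    # Hypothetical example matrix; any values would do.
    A = np.array([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])

    # Standard basis vector e_1 = (1, 0, 0).
    e1 = np.zeros(3)
    e1[0] = 1.0

    # A @ e_j is exactly the j-th column of A, so if A x = 0 for
    # every x, each column of A (hence all of A) must be zero.
    print(np.allclose(A @ e1, A[:, 0]))  # True, and likewise for every column
    ```

    The same observation settles the second half of Problem 1: apply it to the matrix A - B.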
     
  6. Aug 29, 2012 #5

    I get:

    [tex] \left( \begin{array}{cc} a^2 + b^2 & ac + bd \\ ac + bd & c^2 + d^2 \end{array} \right) [/tex]

    And so, I assume this would be a 2x2 in ij form:

    [tex] \left( \begin{array}{cc} a_{11} & a_{21} \\ a_{12} & a_{22} \end{array} \right) [/tex]

    Now forgive me for dragging this out, I promise I usually get these pretty quickly. And I don't mean to pull an answer out of you either, I want to understand this, but I don't exactly know where to go with this? The general case I assume would look like:

    [tex] \left( \begin{array}{cccc} a_{11} & a_{21} & \cdots & a_{i1} \\ a_{12} & a_{22} & & \vdots \\ \vdots & & \ddots & \\ a_{1j} & \cdots & & a_{ij} \end{array} \right) [/tex]

    And consequently its transpose would look like:

    [tex] \left( \begin{array}{cccc} a_{11} & a_{12} & \cdots & a_{1j} \\ a_{21} & a_{22} & & \vdots \\ \vdots & & \ddots & \\ a_{i1} & \cdots & & a_{ij} \end{array} \right) [/tex]

    And the multiplication of a general form would be pretty tedious to write out, no? Is that the way this problem goes? What exactly do we need to show here?

    Thanks again!
     
  7. Aug 29, 2012 #6
    And now tell me: how can you make this matrix equal to zero? In particular, how can you make the diagonal terms vanish?
     
  8. Aug 29, 2012 #7
    Set them equal to each other:

    [itex] a^2 + b^2 = c^2 + d^2 [/itex]

    [itex] ac + bd = ac + bd [/itex]

    So the diagonals must be equal, which means they must equal 0?

    (I bet you feel like you're pulling teeth here...)
     
  9. Aug 29, 2012 #8

    HallsofIvy


    You may not have this theorem yet, but any symmetric matrix is diagonalizable. That is, if A is symmetric, there exists an invertible matrix P such that [itex]D=PAP^{-1}[/itex] with D diagonal. Then [itex]D^2= (PAP^{-1})(PAP^{-1})= PA^2P^{-1}[/itex]. If [itex]A^2= 0[/itex] then [itex]D^2= 0[/itex], and since D is diagonal, its square is the diagonal matrix whose diagonal entries are the squares of the diagonal entries of D. Since [itex]D^2= 0[/itex], [itex]D= 0[/itex]. And then [itex]A= P^{-1}DP= 0[/itex].
     
  10. Aug 29, 2012 #9
    Correct, I don't know that yet, so I probably shouldn't use it.

    I'm pretty stuck on this one, though; I don't really understand the direction clamtrox is going with this.
     
  11. Aug 29, 2012 #10
    OK, let me show you. Let's take A to be symmetric, i.e.
    [tex] A = \left( \begin{array}{cc} a & b \\ b & d \end{array} \right) [/tex]
    Then the standard matrix multiplication rules give me
    [tex]A^2 = \left( \begin{array}{cc} a^2+b^2 & b(a+d) \\ b(a+d) & b^2+d^2 \end{array} \right) [/tex]
    right?

    Let us then assume that [itex] A^2 = O [/itex]:

    [tex]A^2 = \left( \begin{array}{cc} a^2+b^2 & b(a+d) \\ b(a+d) & b^2+d^2 \end{array} \right) = \left( \begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array} \right) [/tex]
    This is only true if each and every element is independently zero. Let us take the 11-component: [itex] a^2 + b^2 = 0 [/itex].
    As a and b are (hopefully) real numbers, the only possible solution of this equation is [itex] a=b=0 [/itex]. Therefore our matrix must be
    [tex] A = \left( \begin{array}{cc} 0 & 0 \\ 0 & d \end{array} \right) [/tex]
    and I leave the last element to you. The general case works exactly like this as well; you just need to show it in a more compact manner.
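    A small NumPy sketch of both halves of Problem 2 (my own illustration, not part of the thread): the classic non-symmetric counterexample showing why symmetry is required, plus a numerical check of the identity behind the argument above, namely that for a symmetric A the (i,i) entry of A^2 is the sum of the squares of row i.

    ```python
    import numpy as np

    # Non-symmetric counterexample: N^2 = O even though N != O.
    N = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    print(np.allclose(N @ N, 0))  # True: N^2 is the zero matrix
    print(np.allclose(N, 0))      # False: but N itself is not zero

    # For a symmetric A, (A^2)_{ii} = sum_k A_{ik} A_{ki} = sum_k A_{ik}^2,
    # so A^2 = O forces every row (hence every entry) of A to vanish.
    A = np.array([[1.0, 2.0],
                  [2.0, 3.0]])
    print(np.allclose(np.diag(A @ A), (A ** 2).sum(axis=1)))  # True
    ```

    The last line is exactly the 11-component computation above ([itex]a^2 + b^2[/itex]) done for every diagonal entry at once.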
     
  12. Aug 29, 2012 #11

    Ray Vickson


    It is much easier to recognize that if [itex]B = A^T A,[/itex] then, for an arbitrary column vector [itex] x \in \mathbb{R}^n[/itex], we have that [itex] Q(x) \equiv x^T B x[/itex] is of the form
    [tex] Q(x) = (Ax)^T (Ax) = y^T y = \sum_{i=1}^n y_i^2, [/tex] where the column vector y is given as [itex] y = Ax.[/itex]

    RGV
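    A quick numerical check of the identity in the previous post (my own sketch with a random example matrix): [itex]x^T (A^T A) x[/itex] equals the sum of squares of the entries of [itex]y = Ax[/itex], so it vanishes for every x only when Ax = 0 for every x.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))
    x = rng.standard_normal(3)

    # Q(x) = x^T (A^T A) x = (Ax)^T (Ax) = sum of squares of y = Ax.
    lhs = x @ (A.T @ A) @ x
    rhs = np.sum((A @ x) ** 2)
    print(np.isclose(lhs, rhs))  # True
    ```

    Since each term [itex]y_i^2[/itex] is nonnegative, Q(x) = 0 forces y = Ax = 0, which ties Problem 2 back to Problem 1.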
     
  13. Aug 29, 2012 #12
    Ok, now I follow. I don't know why I was having trouble reconciling that each component had to independently equal 0, and that I just needed to show that for the general case. Thank you!
     