Matrix, Vector Proof Help for Multivariable Mathematics (Linear Algebra) Course

In summary: the general case is the same, you just have more rows and columns, and yes, the general case is what you need to prove that A = 0. You need to show that for any i and j, the (i,j) entry of A is 0. Writing out the 2x2 product gives the matrix with first row (a^2 + b^2, ac + bd) and second row (ac + bd, c^2 + d^2), so you want to show that a^2 + b^2 = 0, ac + bd = 0, ac + bd = 0, and c^2 + d^2 = 0. The last equation tells you that c = 0 and d = 0, and the first that a = 0 and b = 0.
  • #1
dr721

Homework Statement



Problem 1:

If A is an m x n matrix and Ax = 0 for all x ∈ ℝ^n, prove that A = O.
If A and B are m x n matrices and Ax = Bx for all x ∈ ℝ^n, prove that A = B.

(O is the 0 matrix, x is the vector x, and 0 is the 0 vector.)

The Attempt at a Solution

First off, I understand the problem intuitively and can make sense of the answer being true. My issue comes in trying to phrase a proof that shows it. I know some sort of explanation of the dot product method of multiplying A by x is necessary, but I can't seem to figure out how to phrase/write/show it. Any help or advice would be much appreciated!
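For reference, by the dot-product method I mean the standard component form of the matrix-vector product (nothing beyond the usual definition):

[tex] (Ax)_i = \sum_{k=1}^{n} A_{ik} x_k , \qquad i = 1, \dots, m, [/tex]

i.e. the i-th entry of Ax is the dot product of the i-th row of A with x.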

Homework Statement



Problem 2:

Suppose A is a symmetric matrix satisfying A^2 = O. Prove that A = O. Give an example to show that the hypothesis of symmetry is required.

The Attempt at a Solution

Here I know that a symmetric matrix means that A = A^T (its transpose), and also that AA = O = (A^T)A, but again I run into the issue of how this is relevant and how it can be turned into a proof. The problems seemed very straightforward at first glance, so I wasn't motivated to ask my professor about them as I did for others, but when I was confronted with actually writing them out, I didn't know where to begin.

Any help would be wonderful, thank you!
 
  • #2
dr721 said:
Problem 1:

If A is an m x n matrix and Ax = 0 for all x ∈ ℝ^n, prove that A = O.
If A and B are m x n matrices and Ax = Bx for all x ∈ ℝ^n, prove that A = B.

(O is the 0 matrix, x is the vector x, and 0 is the 0 vector.)

You know the condition above holds for any vector x, so maybe you should choose some specific x and see what the equation says in terms of the components of A.


dr721 said:
Problem 2:

Suppose A is a symmetric matrix satisfying A^2 = O. Prove that A = O. Give an example to show that the hypothesis of symmetry is required.

Can you write the entries of A^T A as a sum over components of A? What can you tell about this sum? When is it zero?

For the second part, you can just consider an arbitrary 2x2 matrix and choose the elements so that A^2 = 0.
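(For reference, one standard example of a nonzero matrix whose square is the zero matrix is sketched below; note that it is not symmetric, consistent with the result to be proved.)

[tex] N = \left( \begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array} \right), \qquad N^2 = \left( \begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array} \right) = O. [/tex]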
 
  • #3
clamtrox said:
You know the condition above holds for any vector x, so maybe you should choose some specific x and see what the equation says in terms of the components of A.



Can you write the entries of A^T A as a sum over components of A? What can you tell about this sum? When is it zero?

For the second part, you can just consider an arbitrary 2x2 matrix and choose the elements so that A^2 = 0.

My only thought for considering a specific x is either that x does not equal 0 or that x = A. Either way, I don't see how to continue the proof, as I don't think choosing it to be nonzero is an adequate statement (is it?), and I'm relatively certain I can't make it equal A, or if I do, it won't help.

And I don't really understand what a sum of components for A^T A would look like as a general statement. How does one even write a sum of components for an arbitrary symmetric matrix, or for a matrix undergoing that operation? Could you give an example?
 
  • #4
dr721 said:
My only thought for considering a specific x is either that x does not equal 0 or that x = A. Either way, I don't see how to continue the proof, as I don't think choosing it to be nonzero is an adequate statement (is it?), and I'm relatively certain I can't make it equal A, or if I do, it won't help.

No no no, x is a vector and A is a matrix, so certainly you can't have x=A. What if you let x=(1,0,0,0,...,0)?
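(Spelling out what that choice gives, as a sketch, with [itex]e_1 = (1,0,\dots,0)^T[/itex] denoting the first standard basis vector:)

[tex] (A e_1)_i = \sum_{k=1}^{n} A_{ik} (e_1)_k = A_{i1} , [/tex]

so [itex]Ae_1 = 0[/itex] says precisely that the first column of A is zero, and the other standard basis vectors take care of the remaining columns.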

dr721 said:
And I don't really understand what a sum of components for A^T A would look like as a general statement. How does one even write a sum of components for an arbitrary symmetric matrix, or for a matrix undergoing that operation? Could you give an example?

Generally the matrix A has components A_ij, and then the matrix A^T has components A_ji. Do you know how you can write the components of the matrix multiplication, (A^T A)_ij = ?

If the arbitrarily large matrices are distracting, you can try it out first with a 2x2 matrix; just write it as
[tex] A = \left( \begin{array}{cc} a & b \\ c & d \end{array} \right) [/tex] and then figure out what you get out of the multiplication A^T A.
 
  • #5
clamtrox said:
No no no, x is a vector and A is a matrix, so certainly you can't have x=A. What if you let x=(1,0,0,0,...,0)?



Generally the matrix A has components A_ij, and then the matrix A^T has components A_ji. Do you know how you can write the components of the matrix multiplication, (A^T A)_ij = ?

If the arbitrarily large matrices are distracting, you can try it out first with a 2x2 matrix; just write it as
[tex] A = \left( \begin{array}{cc} a & b \\ c & d \end{array} \right) [/tex] and then figure out what you get out of the multiplication A^T A.


I get:

a^2 + b^2    ac + bd
ac + bd      c^2 + d^2

(Sorry that's not in matrix form, I don't know how to format a matrix on here.)

And so, I assume this would be a 2x2 in ij form?

a_11  a_21
a_12  a_22

Now forgive me for dragging this out; I promise I usually get these pretty quickly. And I don't mean to pull an answer out of you either; I want to understand this, but I don't exactly know where to go with it. The general case, I assume, would look like:

a_11  a_21  . . .  a_i1
a_12  a_22  . . .   .
 .      .           .
 .      .           .
a_1j   . . .       a_ij

And consequently its transpose would look like:

a_11  a_12  . . .  a_1j
a_21  a_22  . . .   .
 .      .           .
 .      .           .
a_i1   . . .       a_ji

And the multiplication of a general form would be pretty tedious to write out, no? Is that the way this problem goes? What exactly do we need to show here?

Thanks again!
 
  • #6
dr721 said:
a^2 + b^2    ac + bd
ac + bd      c^2 + d^2

And now tell me: how can you make this matrix equal to zero? In particular, how can you make the diagonal terms vanish?
 
  • #7
clamtrox said:
And now tell me: how can you make this matrix equal to zero? In particular, how can you make the diagonal terms vanish?

Set them equal to each other:

a^2 + b^2 = c^2 + d^2

ac + bd = ac + bd

So the diagonals must be equal, which means they must equal 0?

(I bet you feel like you're pulling teeth here...)
 
  • #8
You may not have this theorem yet, but any symmetric matrix is diagonalizable. That is, if A is symmetric, there exists an invertible matrix P such that [itex]D=PAP^{-1}[/itex] with D diagonal. Then [itex]D^2= (PAP^{-1})(PAP^{-1})= PA^2P^{-1}[/itex]. If [itex]A^2= 0[/itex] then [itex]D^2= 0[/itex], and since D is diagonal, its square is the diagonal matrix with the squares of the diagonal elements of D on its diagonal; since [itex]D^2= 0[/itex], [itex]D= 0[/itex]. And then [itex]A= P^{-1}DP= 0[/itex].
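(Written out, the diagonal step is just the following:)

[tex] D = \left( \begin{array}{ccc} d_1 & & \\ & \ddots & \\ & & d_n \end{array} \right), \qquad D^2 = \left( \begin{array}{ccc} d_1^2 & & \\ & \ddots & \\ & & d_n^2 \end{array} \right) = O \quad\Rightarrow\quad d_1 = \dots = d_n = 0. [/tex]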
 
  • #9
HallsofIvy said:
You may not have this theorem yet, but any symmetric matrix is diagonalizable. That is, if A is symmetric, there exists an invertible matrix P such that [itex]D=PAP^{-1}[/itex] with D diagonal. Then [itex]D^2= (PAP^{-1})(PAP^{-1})= PA^2P^{-1}[/itex]. If [itex]A^2= 0[/itex] then [itex]D^2= 0[/itex], and since D is diagonal, its square is the diagonal matrix with the squares of the diagonal elements of D on its diagonal; since [itex]D^2= 0[/itex], [itex]D= 0[/itex]. And then [itex]A= P^{-1}DP= 0[/itex].

Correct, I don't know that yet, so I probably shouldn't use it.

I'm pretty stuck on this one though; I don't really understand the direction clamtrox is going with this.
 
  • #10
dr721 said:
Correct, I don't know that yet, so I probably shouldn't use it.

I'm pretty stuck on this one though; I don't really understand the direction clamtrox is going with this.

OK, let me show you. Let's take A to be symmetric, i.e.
[tex] A = \left( \begin{array}{cc} a & b \\ b & d \end{array} \right) [/tex]
Then, the standard matrix multiplication rules give me
[tex]A^2 = \left( \begin{array}{cc} a^2+b^2 & b(a+d) \\ b(a+d) & b^2+d^2 \end{array} \right) [/tex]
right?

Let us then assume that [itex] A^2 = 0 [/itex]

[tex]A^2 = \left( \begin{array}{cc} a^2+b^2 & b(a+d) \\ b(a+d) & b^2+d^2 \end{array} \right) = \left( \begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array} \right) [/tex]
This is only true if each and every element is independently zero. Let us take the 11-component: [itex] a^2 + b^2 = 0 [/itex]
As a and b are (hopefully) real numbers, the only possible solution for this equation is [itex] a=b=0. [/itex] Therefore our matrix must be
[tex] A = \left( \begin{array}{cc} 0 & 0 \\ 0 & d \end{array} \right) [/tex]
and I leave the last element for you. The general case works exactly like this as well; you just need to show it in a more compact manner.
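(One compact way to write the general case, sketched here using the symmetry [itex]A_{ki} = A_{ik}[/itex] and the fact that the entries are real:)

[tex] 0 = (A^2)_{ii} = \sum_{k=1}^{n} A_{ik} A_{ki} = \sum_{k=1}^{n} A_{ik}^2 \quad\Rightarrow\quad A_{ik} = 0 \text{ for every } k, [/tex]

and since this holds for every i, every entry of A vanishes, i.e. A = O.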
 
  • #11
clamtrox said:
OK, let me show you. Let's take A to be symmetric, i.e.
[tex] A = \left( \begin{array}{cc} a & b \\ b & d \end{array} \right) [/tex]
Then, the standard matrix multiplication rules give me
[tex]A^2 = \left( \begin{array}{cc} a^2+b^2 & b(a+d) \\ b(a+d) & b^2+d^2 \end{array} \right) [/tex]
right?

Let us then assume that [itex] A^2 = 0 [/itex]

[tex]A^2 = \left( \begin{array}{cc} a^2+b^2 & b(a+d) \\ b(a+d) & b^2+d^2 \end{array} \right) = \left( \begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array} \right) [/tex]
This is only true if each and every element is independently zero. Let us take the 11-component: [itex] a^2 + b^2 = 0 [/itex]
As a and b are (hopefully) real numbers, the only possible solution for this equation is [itex] a=b=0. [/itex] Therefore our matrix must be
[tex] A = \left( \begin{array}{cc} 0 & 0 \\ 0 & d \end{array} \right) [/tex]
and I leave the last element for you. The general case works exactly like this as well; you just need to show it in a more compact manner.

It is much easier to recognize that if [itex]B = A^T A,[/itex] then, for an arbitrary column vector [itex] x \in R^n[/itex], we have that [itex] Q(x) \equiv x^T B x[/itex] is of the form
[tex] Q(x) = (Ax)^T (Ax) = y^T y = \sum_{i=1}^n y_i^2, [/tex] where the column vector y is given as [itex] y = Ax.[/itex]
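(To connect this with Problem 2, a sketch: when A is symmetric, [itex]B = A^T A = A^2[/itex], so [itex]A^2 = O[/itex] forces, for every x,)

[tex] 0 = x^T A^2 x = (Ax)^T (Ax) = \sum_{i=1}^{n} y_i^2 \quad\Rightarrow\quad Ax = 0, [/tex]

and since this holds for all x, Problem 1 gives A = O.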

RGV
 
  • #12
clamtrox said:
OK, let me show you. Let's take A to be symmetric, i.e.
[tex] A = \left( \begin{array}{cc} a & b \\ b & d \end{array} \right) [/tex]
Then, the standard matrix multiplication rules give me
[tex]A^2 = \left( \begin{array}{cc} a^2+b^2 & b(a+d) \\ b(a+d) & b^2+d^2 \end{array} \right) [/tex]
right?

Let us then assume that [itex] A^2 = 0 [/itex]

[tex]A^2 = \left( \begin{array}{cc} a^2+b^2 & b(a+d) \\ b(a+d) & b^2+d^2 \end{array} \right) = \left( \begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array} \right) [/tex]
This is only true if each and every element is independently zero. Let us take the 11-component: [itex] a^2 + b^2 = 0 [/itex]
As a and b are (hopefully) real numbers, the only possible solution for this equation is [itex] a=b=0. [/itex] Therefore our matrix must be
[tex] A = \left( \begin{array}{cc} 0 & 0 \\ 0 & d \end{array} \right) [/tex]
and I leave the last element for you. The general case works exactly like this as well; you just need to show it in a more compact manner.

Ok, now I follow. I don't know why I was having trouble reconciling that each component had to independently equal 0, and that I just needed to show that for the general case. Thank you!
 

1. What is the difference between a matrix and a vector?

A matrix is a rectangular array of numbers, whereas a vector is a one-dimensional array of numbers. Matrices can have multiple rows and columns, while vectors only have one row or column. Additionally, matrices are used to represent systems of linear equations, while vectors are used to represent quantities with both magnitude and direction.

2. How are matrices and vectors used in multivariable mathematics?

Matrices and vectors are essential tools in multivariable mathematics, specifically in the field of linear algebra. They are used to represent and solve systems of linear equations, perform transformations and rotations, and calculate eigenvalues and eigenvectors. They are also used in other areas of mathematics, such as calculus and statistics.

3. Can you provide an example of a matrix proof?

Sure, a common matrix proof involves showing that a matrix is invertible. For example, to prove that a 2x2 matrix A with entries a, b, c, d (read row by row) is invertible, we need to show that det(A) ≠ 0. The determinant of A is ad - bc, so if we can show that ad - bc ≠ 0, we can conclude that A is invertible. Equivalently, this can be done by showing that the columns of A are linearly independent.
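As a concrete sketch, for a general 2x2 matrix with entries a, b, c, d, the determinant condition also hands you an explicit inverse:

[tex] A = \left( \begin{array}{cc} a & b \\ c & d \end{array} \right), \qquad ad - bc \neq 0 \;\Rightarrow\; A^{-1} = \frac{1}{ad - bc} \left( \begin{array}{cc} d & -b \\ -c & a \end{array} \right). [/tex]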

4. How do you perform vector operations such as addition and scalar multiplication?

To add two vectors, you simply add the corresponding entries of each vector. For example, to add two 3-dimensional vectors (a1, a2, a3) and (b1, b2, b3), the result would be (a1+b1, a2+b2, a3+b3). To multiply a vector by a scalar, you multiply each entry of the vector by the scalar. For example, if we want to multiply the vector (x1, x2, x3) by the scalar k, the result would be (kx1, kx2, kx3).
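For instance, a small worked example:

[tex] (1, 2, 3) + (4, 5, 6) = (5, 7, 9), \qquad 2 \cdot (1, 2, 3) = (2, 4, 6). [/tex]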

5. How are matrices and vectors related to real-world applications?

Matrices and vectors have numerous real-world applications, especially in fields such as physics, engineering, and computer science. They are used to model and solve real-world problems involving multiple variables and equations, such as predicting the movement of objects or analyzing data sets. They are also used in computer graphics and image processing to represent and manipulate images, and in machine learning algorithms to process and analyze large datasets.
