Number of Matrices with a+b+c+d=0: Prove 3 Exist

  • #1
Buffu

Homework Statement


Let ##A = \begin{bmatrix} a&b\\c&d \end{bmatrix}## such that ##a+b+c+d = 0##. Suppose ##A## is row reduced. Prove that there are exactly three such matrices.

Homework Equations

The Attempt at a Solution



1) ##\begin{bmatrix} 0&0\\0&0 \end{bmatrix}##

2) ##\begin{bmatrix} c&0\\0&-c \end{bmatrix}##

3) ##\begin{bmatrix} c&-c\\0&0 \end{bmatrix}##

4) ##\begin{bmatrix} 0&0\\c&-c \end{bmatrix}##

5) ##\begin{bmatrix} 0&c\\-c&0 \end{bmatrix}##

Where ##c \in \Bbb C##.

Is the question incorrect ?
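As a quick sanity check (my addition, not part of the original post), a few lines of Python confirm that all five candidates satisfy the sum condition, taking ##c = 1## as a sample value:

```python
# Each candidate matrix from the list above, with the arbitrary
# scalar c set to 1, should have entries summing to zero.
candidates = [
    [[0, 0], [0, 0]],    # 1) zero matrix
    [[1, 0], [0, -1]],   # 2) c, -c on the diagonal
    [[1, -1], [0, 0]],   # 3) c, -c in the first row
    [[0, 0], [1, -1]],   # 4) c, -c in the second row
    [[0, 1], [-1, 0]],   # 5) c, -c off the diagonal
]
for i, m in enumerate(candidates, start=1):
    total = sum(m[0]) + sum(m[1])
    print(i, "sum =", total)
    assert total == 0
```

Note that the sum condition alone admits all five; it is the row-reduced condition that must cut the list down to three.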
 
  • #2
Buffu said:
Let ##A = \begin{bmatrix} a&b\\c&d \end{bmatrix}## such that ##a+b+c+d = 0##. Suppose ##A## is row reduced. Prove that there are exactly three such matrices.
[...]
Is the question incorrect?
I don't remember which operations on the rows are allowed and which are not, but my guess would be that (2) and (5) as well as (3) and (4) are the same, which gives you the three stated in the claim.
 
  • #3
fresh_42 said:
I don't remember which operations on the rows are allowed and which are not, but my guess would be that (2) and (5) as well as (3) and (4) are the same, which gives you the three stated in the claim.
I am allowed to 1) interchange two rows, 2) multiply a row by a non zero scalar, 3) add one row to another, replacing one of them.

Why do you think (2) and (5), and (3) and (4), are the same?
 
  • #4
Well, swap the rows (operation 1). Then multiply by ##-1## (operation 2) if you like, but as ##c## is arbitrary, this isn't even necessary. You have only three options: the zero matrix, a matrix with one zero row, and a regular matrix.
 
  • #5
The definition of row reduced in my book is given like this:

An ##m \times n## matrix ##R## is row reduced if a) the first non zero entry in each non zero row of ##R## equals ##1##, b) each column of ##R## which contains the leading non zero entry of some row has all its other entries ##0##.
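The two clauses of this definition translate directly into a small test. Here is a sketch in Python (a helper of my own, not from the book), for a matrix given as a list of rows:

```python
def is_row_reduced(rows):
    """Check the book's definition: (a) the first non zero entry of each
    non zero row is 1; (b) a column containing the leading non zero entry
    of some row has all its other entries 0."""
    lead_cols = []
    for row in rows:
        nonzero = [j for j, x in enumerate(row) if x != 0]
        if not nonzero:
            continue                 # zero rows impose no condition
        j = nonzero[0]
        if row[j] != 1:              # clause (a)
            return False
        lead_cols.append(j)
    # clause (b): each leading column has exactly one non zero entry
    return all(sum(1 for row in rows if row[j] != 0) == 1
               for j in lead_cols)

print(is_row_reduced([[1, -1], [0, 0]]))   # True
print(is_row_reduced([[1, 0], [0, -1]]))   # False: leading entry -1 in row 2
```

This already shows why candidates 2) and 5) of the first post fail the definition: a leading entry of ##-1## violates clause (a).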

fresh_42 said:
Well, swap rows (1). Then multiply (2) by ##-1## if you like, but as ##c## is arbitrary, this isn't even necessary. You have only three options: zero matrix, matrix with one row zero and regular matrix.

Are row equivalent matrices the same? Like, if ##A## and ##R## are row equivalent, is ##A = R## true?
 
  • #6
Minor correction -- the plural of matrix is matrices, not matricies. I've fixed the misspellings.
 
  • #7
Mark44 said:
Minor correction -- the plural of matrix is matrices, not matricies. I've fixed the misspellings.

Thank you Mark, I noticed that too but I can't edit.
 
  • #8
Buffu said:
The definition of row reduced in my book is given like this:

An ##m \times n## matrix ##R## is row reduced if a) the first non zero entry in each non zero row of ##R## equals ##1##, b) each column of ##R## which contains the leading non zero entry of some row has all its other entries ##0##.
Are row equivalent matrices the same? Like, if ##A## and ##R## are row equivalent, is ##A = R## true?
Not really. You can still multiply the matrices by ##-\frac{1}{c}##, as ##c \neq 0##. This leaves you with exactly three different matrices. By multiplying a row by a scalar you can even turn a ##-1## into a ##1##.
 
  • #9
fresh_42 said:
Not really. You can still multiply the matrices by ##-\frac{1}{c}##, as ##c \neq 0##. This leaves you with exactly three different matrices. By multiplying a row by a scalar you can even turn a ##-1## into a ##1##.

Ok, so I got three matrices.
I am asked to prove that there are only 3 such matrices. Do I still have to prove that there are no more such matrices, or is this fine?
 
  • #10
Buffu said:
Ok, so I got three matrices.
I am asked to prove that there are only 3 such matrices. Do I still have to prove that there are no more such matrices, or is this fine?
Yes, you should find an argument for it. You have three solutions, but this doesn't guarantee that there are no others. I would start with an arbitrary matrix and do the row manipulations. This will give you many cases, as you will have to distinguish between matrix entries being zero or not.

Better would probably be to consider a reduced matrix alone. The system of linear equations ##AX=0## has only four kinds of solution sets after the row reduction: all vectors ##X=(x,y)##, which corresponds to ##A=0##; a single vector ##X=(0,0)## for a regular matrix ##A## (why does this solution correspond to the matrix ##\begin{bmatrix}1&0\\0&-1\end{bmatrix}##?); and the two solutions "in between": ##x=0## with ##y## arbitrary, or ##y=0## with ##x## arbitrary. This way you only have to find the argument why those two can be manipulated in such a way that both correspond to the matrix ##\begin{bmatrix}1&-1\\0&0\end{bmatrix}##. And why is this latter matrix equivalent to ##\begin{bmatrix}1&0\\-1&0\end{bmatrix}##?
 
  • #11
fresh_42 said:
Yes, you should find an argument for it. You have three solutions but this doesn't guarantee that there are no others. [...]
Ok here is my attempt,

If ##A = 0^{2\times 2}##, then by the definition of row reduced and since ##0+0+0+0 = 0##, the zero matrix is one such matrix. Otherwise write

##\begin{bmatrix} a & b \\ c & d \end{bmatrix}##

Suppose ##a## is the leading non zero entry of the first row.

Then ##a = 1## and ##c = 0##. Now we have to choose ##b, d##: we cannot have ##d \ne 0##, because then ##d = 1## and ##b = 0##, which makes the sum of the entries ##2##. So we must choose ##d = 0##, which forces ##b = -1##.

If instead ##b## is the leading non zero entry of the first row, then ##b = 1## and ##d = 0##, and we have to choose a non zero ##c## (otherwise the sum is ##1##), which has to be ##c = 1## by the definition of row reduced; that makes the sum of the entries ##2##. Therefore ##b## cannot be the leading non zero entry of row one.

Doing the same for the second row: if ##c## is the leading non zero entry, then ##c = 1## and ##a = 0##, and by the same logic as in the case ##a = 1## above we get ##d = -1## and the rest zero.

Finally, if ##d## is the leading non zero entry of the second row, then ##d = 1## and ##b = 0##, which forces ##a = -1## and ##c = 0##. But ##a = -1## violates the definition of row reduced, which requires ##a = 1## (and then the sum of the entries is ##2##), so ##d## cannot be the leading non zero entry.

Hence we are left with three matrices, 1), 3) and 4) of the original post.

I realized that 2) and 5) do not match the definition of row reduced as given in the book.
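The case analysis above can also be checked by brute force (my addition, not part of the original post). The two conditions together force every entry into ##\{-1, 0, 1\}##: leading entries are ##1##, cleared entries are ##0##, and the sum condition pins the remaining entry to ##-1##. So a finite enumeration suffices:

```python
# Enumerate all 2x2 matrices with entries in {-1, 0, 1} and count
# those that are row reduced (per the book's definition) with
# entries summing to zero.
from itertools import product

def is_row_reduced(rows):
    # (a) first non zero entry of each non zero row is 1;
    # (b) a leading column has all its other entries 0.
    lead_cols = []
    for row in rows:
        nonzero = [j for j, x in enumerate(row) if x != 0]
        if not nonzero:
            continue
        if row[nonzero[0]] != 1:
            return False
        lead_cols.append(nonzero[0])
    return all(sum(1 for row in rows if row[j] != 0) == 1
               for j in lead_cols)

solutions = [
    (a, b, c, d)
    for a, b, c, d in product((-1, 0, 1), repeat=4)
    if a + b + c + d == 0 and is_row_reduced([[a, b], [c, d]])
]
print(solutions)
# finds exactly three: the zero matrix, [[1,-1],[0,0]] and
# [[0,0],[1,-1]] -- matrices 1), 3) and 4) of the original post
```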
 
  • #12
Buffu said:
If ##a = 1## then ##c = 0##. Now we have to choose ##b, d##: we cannot have ##d \ne 0##, because then ##d = 1## and ##b = 0##, which makes the sum of the entries ##2##. So we must choose ##d = 0##, which forces ##b = -1##.
Because you dealt with ##A=0## already, you may assume ##a \neq 0## or ##b \neq 0## without loss of generality, for otherwise you would simply swap the rows. This leaves you with two cases here.
Buffu said:
[...] I realized that 2) and 5) do not match the definition of row reduced as given in the book.
You pointed out the same problem I have with this exercise. If multiplication of rows is allowed, then the condition ##a+b+c+d=0## can be destroyed: you can always turn a ##-1## into a ##1##. But it gives you three possibilities anyway: ##0\, , \,1## and ##\begin{bmatrix}1&0\\0&0\end{bmatrix}##. The only open question is whether you are allowed to swap or add columns, too. If not, I'm not sure what to do with ##\begin{bmatrix}0&1\\0&0\end{bmatrix}##.
 
  • #13
fresh_42 said:
Because you dealt with ##A=0## already, you may assume ##a \neq 0## or ##b \neq 0## without loss of generality, for otherwise you would simply swap the rows. This leaves you with two cases here.

You pointed out the same problem I have with this exercise. If multiplication of rows is allowed, then the condition ##a+b+c+d=0## can be destroyed: you can always turn a ##-1## into a ##1##. But it gives you three possibilities anyway: ##0\, , \,1## and ##\begin{bmatrix}1&0\\0&0\end{bmatrix}##. The only open question is whether you are allowed to swap or add columns, too. If not, I'm not sure what to do with ##\begin{bmatrix}0&1\\0&0\end{bmatrix}##.

I think we should treat row equivalent matrices as different. That is, if ##A## can be made into ##R## by a row operation, we still consider ##A## and ##R## different.

Do you think this solves the problem?
 
  • #14
I think with all operations of row reduction allowed, including swapping rows and multiplication by scalars, we get four possibilities:
$$
\begin{bmatrix}0&0\\0&0\end{bmatrix}\; , \; \begin{bmatrix}1&0\\0&0\end{bmatrix}\; , \;\begin{bmatrix}0&1\\0&0\end{bmatrix}\; , \;\begin{bmatrix}1&0\\0&1\end{bmatrix}
$$
which are indeed different: ##A X = 0\, , \, A(x,y)^t=(x,0)^t \, , \,A(x,y)^t=(y,0)^t\, , \,AX=X##.

If we only allow operations which preserve the condition ##a+b+c+d = 0##, then there are only three:
$$
\begin{bmatrix}0&0\\0&0\end{bmatrix}\; , \; \begin{bmatrix}1&-1\\0&0\end{bmatrix}\; , \;\begin{bmatrix}1&0\\0&-1\end{bmatrix}
$$
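The claim that the four canonical forms are genuinely different maps can be illustrated by applying each to a sample vector (a small check of my own; the vector ##(2, 3)## is an arbitrary choice):

```python
# Apply each canonical form to (x, y) = (2, 3) and observe the four
# different behaviours: AX = 0, A(x,y) = (x,0), A(x,y) = (y,0), AX = X.
def apply(m, v):
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

x, y = 2, 3
forms = {
    "zero":     [[0, 0], [0, 0]],   # AX = 0
    "e11":      [[1, 0], [0, 0]],   # A(x,y) = (x,0)
    "e12":      [[0, 1], [0, 0]],   # A(x,y) = (y,0)
    "identity": [[1, 0], [0, 1]],   # AX = X
}
for name, m in forms.items():
    print(name, apply(m, (x, y)))
```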
 
  • #15
fresh_42 said:
[...] If we only allow operations which preserve the condition ##a+b+c+d = 0##, then there are only three:
$$
\begin{bmatrix}0&0\\0&0\end{bmatrix}\; , \; \begin{bmatrix}1&-1\\0&0\end{bmatrix}\; , \;\begin{bmatrix}1&0\\0&-1\end{bmatrix}
$$

##\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}## is not row reduced, because the leading non zero entry of each non zero row should be ##1## by definition.
So I guess there are only 2 such matrices.
 
  • #16
Buffu said:
##\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}## is not row reduced, because the leading non zero entry of each non zero row should be ##1## by definition.
So I guess there are only 2 such matrices.
The leading non zero entry of ##\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}## isn't ##1## here? Whatever that means. But you have to say where this matrix belongs: it fulfills the condition, is regular, and has two eigenvalues, ##1## and ##-1##, which means there is a vector ##X## with ##AX=X## and one with ##AY=-Y##. If you apply the full row reduction you get ##\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}##, which destroys the condition ##a+b+c+d = 0##; or you keep the condition and get ##\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}## as the reduced matrix.

Mathematically, there is a huge difference between ##\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}## and ##\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}## and both are very different from ##\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}## , ##\begin{bmatrix} 1 & -1 \\ 0 & 0 \end{bmatrix}## or ##\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}##.

To be honest, I would not spend too much effort on this kind of exercise. The way it is posed doesn't get you anywhere, all the more as it is not clear what to do with the condition, which by the way isn't of any interest either.
 
  • #17
fresh_42 said:
The leading non zero entry of ##\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}## isn't ##1## here? Whatever that means. [...] To be honest, I would not spend too much effort on this kind of exercise.

The definition says that the leading non zero entry of each non zero row should be ##1##.

Yes, you are correct that this exercise is worthless. I will mark this thread solved.
 

