# Number of matrices

1. Jun 9, 2017

### Buffu

1. The problem statement, all variables and given/known data
Let $A = \begin{bmatrix} a&b\\c&d \end{bmatrix}$ with $a+b+c+d = 0$. Suppose $A$ is row reduced. Prove that there are exactly three such matrices.

2. Relevant equations

3. The attempt at a solution

1) $\begin{bmatrix} 0&0\\0&0 \end{bmatrix}$

2) $\begin{bmatrix} c&0\\0&-c \end{bmatrix}$

3) $\begin{bmatrix} c&-c\\0&0 \end{bmatrix}$

4) $\begin{bmatrix} 0&0\\c&-c \end{bmatrix}$

5) $\begin{bmatrix} 0&c\\-c&0 \end{bmatrix}$

Where $c \in \Bbb C$.

Is the question incorrect?

2. Jun 9, 2017

### Staff: Mentor

I don't remember which operations on the rows are allowed and which are not, but my guess would be that (2) and (5), as well as (3) and (4), are the same, which gives you the three stated in the claim.

3. Jun 9, 2017

### Buffu

I am allowed to 1) interchange rows, 2) multiply a row by a nonzero scalar, and 3) replace a row by the sum of itself and another row.

Why do you think (2) and (5), and (3) and (4), are the same?

4. Jun 9, 2017

### Staff: Mentor

Well, swap rows (1). Then multiply (2) by $-1$ if you like, but as $c$ is arbitrary, this isn't even necessary. You have only three options: the zero matrix, a matrix with one zero row, and a regular matrix.

5. Jun 9, 2017

### Buffu

The definition of row reduced in my book is:

An m by n matrix R is row reduced if a) the first nonzero entry in each nonzero row of R equals 1, and b) each column of R which contains the leading nonzero entry of some row has all its other entries 0.
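This definition translates directly into a small check. Here is a minimal Python sketch (the function `is_row_reduced` and the representation of a matrix as a list of rows are my own, not from the book):

```python
def is_row_reduced(M):
    """Check the textbook definition for a matrix M given as a list of rows:
    (a) the first nonzero entry of each nonzero row is 1;
    (b) each column containing such a leading entry is zero elsewhere."""
    leads = []
    for i, row in enumerate(M):
        nonzero = [j for j, x in enumerate(row) if x != 0]
        if nonzero:
            j = nonzero[0]
            if row[j] != 1:          # condition (a): leading entry must be 1
                return False
            leads.append((i, j))
    # condition (b): the column of each leading 1 is zero in the other rows
    return all(M[k][j] == 0
               for i, j in leads
               for k in range(len(M)) if k != i)

# Matrices from this thread:
print(is_row_reduced([[0, 0], [0, 0]]))    # True (vacuously: no nonzero rows)
print(is_row_reduced([[1, -1], [0, 0]]))   # True
print(is_row_reduced([[1, 0], [0, -1]]))   # False: second row leads with -1
```

Note that the zero matrix passes vacuously, since both conditions only restrict nonzero rows.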

Are row-equivalent matrices the same? That is, if $A$ and $R$ are row equivalent, is $A = R$ true?

6. Jun 9, 2017

### Staff: Mentor

Minor correction -- the plural of matrix is matrices, not matricies. I've fixed the misspellings.

7. Jun 9, 2017

### Buffu

Thank you Mark, I noticed that too but I can't edit.

8. Jun 9, 2017

### Staff: Mentor

Not really. You can still multiply the rows by $-\frac{1}{c}$, as $c \neq 0$. This leaves you with exactly three different matrices. By multiplying a row with a scalar you can even turn $-1$ into $1$.

9. Jun 10, 2017

### Buffu

Ok, so I got three matrices.
I am asked to prove that there are only three such matrices. Do I still have to prove that there are no more such matrices, or is this fine?

10. Jun 10, 2017

### Staff: Mentor

Yes, you should find an argument for it. You have three solutions, but this doesn't guarantee that there are no others. I would start with an arbitrary matrix and do the row manipulations. This will give you many cases, as you will have to distinguish between matrix entries being zero or not.

It would probably be better to consider a reduced matrix on its own. After row reduction, the system of linear equations $AX=0$ has only four kinds of solution sets: all vectors $X=(x,y)$, which corresponds to $A=0$; the single vector $X=(0,0)$ for a regular matrix $A$ (why does this solution correspond to the matrix $\begin{bmatrix}1&0\\0&-1\end{bmatrix}$?); and the two solutions "in between": $x=0, y$ arbitrary, or $y=0, x$ arbitrary. This way you only have to argue why those two can be manipulated so that both correspond to the matrix $\begin{bmatrix}1&-1\\0&0\end{bmatrix}$. And why is this latter matrix equivalent to $\begin{bmatrix}1&0\\-1&0\end{bmatrix}$?
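The solution types of $AX=0$ for a $2\times 2$ matrix can be distinguished by a one-line determinant test. A minimal Python sketch (the helper `solution_type` and its return strings are illustrative, not from the thread):

```python
def solution_type(A):
    """Classify the solutions of A X = 0 for a 2x2 matrix A = [[a, b], [c, d]]."""
    (a, b), (c, d) = A
    if a == b == c == d == 0:
        return "all of the plane"      # A = 0: every X solves AX = 0
    if a * d - b * c != 0:
        return "only X = (0, 0)"       # regular matrix: trivial kernel
    return "a line of solutions"       # rank 1: the two cases 'in between'

print(solution_type([[0, 0], [0, 0]]))    # all of the plane
print(solution_type([[1, 0], [0, -1]]))   # only X = (0, 0); det = -1
print(solution_type([[1, -1], [0, 0]]))   # a line of solutions: x = y
print(solution_type([[1, 0], [-1, 0]]))   # a line of solutions: x = 0
```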

11. Jun 11, 2017

### Buffu

Ok here is my attempt,

If $A = 0_{2\times 2}$, then $A$ is row reduced by definition (it has no nonzero rows) and its four zero entries sum to $0$, so the zero matrix is one of the matrices.

Otherwise, write

$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$

First suppose $a$ is the leading nonzero entry of the first row.

Then $a = 1$ and $c = 0$. Now we have to choose $b$ and $d$. We cannot choose $d \ne 0$, because then $d$ would be the leading nonzero entry of the second row, so $d = 1$ and $b = 0$, which makes the sum of the entries $2$. So we must choose $d = 0$, which forces $b = -1$.

We could also take $b$ as the leading nonzero entry of the first row. Then $a = 0$, $b = 1$, and $d = 0$ (the column of a leading entry must be zero elsewhere), so the sum condition requires a nonzero $c$; but a nonzero $c$ is the leading entry of the second row and must equal $1$ by the definition of row reduced, which makes the sum of the entries $2$.
Therefore $b$ cannot be the leading nonzero entry of row one.

Doing the same for the second row: if we assume $c$ is the leading nonzero entry, then $c = 1$ and $a = 0$, and by the same logic as in the case $a = 1$ above we get $d = -1$ and the rest zero.

Now if $d$ is the leading nonzero entry of the second row, then $d = 1$ and $b = 0$, and the sum condition forces $a = -1$ and $c = 0$.
But $a = -1$ is not allowed by the definition of row reduced: a nonzero $a$ would have to be the leading entry $a = 1$, which makes the sum of the entries $2$. Therefore $d$ cannot be the leading nonzero entry.

Hence we are left with three matrices: 1), 3), and 4) of the original post (with $c = 1$).

I realised that 2) and 5) do not match the definition of row reduced as given in the book.
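This case analysis can be cross-checked by brute force: every leading entry must be $1$, and the sum condition then forces the remaining free entry to $-1$, so every candidate has entries in $\{-1, 0, 1\}$. A minimal Python sketch under that assumption (helper names are my own):

```python
from itertools import product

def is_row_reduced(M):
    """Textbook definition: leading entries are 1, their columns zero elsewhere."""
    leads = []
    for i, row in enumerate(M):
        nonzero = [j for j, x in enumerate(row) if x != 0]
        if nonzero:
            j = nonzero[0]
            if row[j] != 1:
                return False
            leads.append((i, j))
    return all(M[k][j] == 0 for i, j in leads
               for k in range(len(M)) if k != i)

# Search all 2x2 matrices with entries in {-1, 0, 1}, entry sum 0, row reduced.
hits = [((a, b), (c, d))
        for a, b, c, d in product((-1, 0, 1), repeat=4)
        if a + b + c + d == 0 and is_row_reduced([[a, b], [c, d]])]
print(len(hits))   # exactly three matrices survive
```

The search returns the zero matrix, $\begin{bmatrix}1&-1\\0&0\end{bmatrix}$, and $\begin{bmatrix}0&0\\1&-1\end{bmatrix}$, matching the case analysis.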

12. Jun 11, 2017

### Staff: Mentor

Because you dealt with $A=0$ already, you may assume $a \neq 0$ or $b \neq 0$ without loss of generality, for otherwise you would simply swap rows. This leaves you with two cases here.
You pointed out the same problem I have with this exercise. If multiplication of rows is allowed, then the condition $a+b+c+d=0$ can be destroyed: you can always turn a $-1$ into a $1$. But it gives you three possibilities anyway: $0$, the identity $1$, and $\begin{bmatrix}1&0\\0&0\end{bmatrix}$. The only open question is whether you are allowed to swap or add columns, too. If not, I'm not sure what to do with $\begin{bmatrix}0&1\\0&0\end{bmatrix}$.

13. Jun 11, 2017

### Buffu

I think we should treat row-equivalent matrices as different. That is, if $A$ can be made into $R$ by a row operation, then we should still consider $A$ and $R$ different.

Do you think this solves the problem?

14. Jun 11, 2017

### Staff: Mentor

I think that with all operations of row reduction allowed, including swapping rows and multiplication by scalars, we get four possibilities:
$$\begin{bmatrix}0&0\\0&0\end{bmatrix}\; , \; \begin{bmatrix}1&0\\0&0\end{bmatrix}\; , \;\begin{bmatrix}0&1\\0&0\end{bmatrix}\; , \;\begin{bmatrix}1&0\\0&1\end{bmatrix}$$
which are indeed different: $AX = 0$, $A(x,y)^t=(x,0)^t$, $A(x,y)^t=(y,0)^t$, $AX=X$.

If we only allow operations which preserve the condition $a+b+c+d = 0$, then there are only three solutions:
$$\begin{bmatrix}0&0\\0&0\end{bmatrix}\; , \; \begin{bmatrix}1&-1\\0&0\end{bmatrix}\; , \;\begin{bmatrix}1&0\\0&-1\end{bmatrix}$$

15. Jun 11, 2017

### Buffu

$\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$ is not row reduced, because each leading nonzero entry should be $1$ by definition.
So I guess there are only two such matrices.

16. Jun 11, 2017

### Staff: Mentor

The leading nonzero entry of $\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$ isn't $1$ here? Whatever this means. But you have to say where this matrix belongs: it fulfills the condition, is regular, and has two eigenvalues, $1$ and $-1$, which means there is a vector $X$ with $AX=X$ and one with $AY=-Y$. If you apply the full row reduction, then it becomes $\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$, which destroys the condition $a+b+c+d = 0$; or you keep the condition and get $\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$ as the reduced matrix.

Mathematically, there is a huge difference between $\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$ and $\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$ and both are very different from $\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$ , $\begin{bmatrix} 1 & -1 \\ 0 & 0 \end{bmatrix}$ or $\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$.
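The eigenvalue claim for $\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$ is easy to verify with a two-line multiplication. A minimal Python sketch (the helper `matvec` is illustrative):

```python
def matvec(A, v):
    """Multiply a 2x2 matrix A (list of rows) by a vector v."""
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

A = [[1, 0], [0, -1]]
X, Y = [1, 0], [0, 1]
print(matvec(A, X))   # [1, 0]:  AX = X,  eigenvalue  1
print(matvec(A, Y))   # [0, -1]: AY = -Y, eigenvalue -1
```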

To be honest, I would not spend too much effort on these kinds of exercises. The way it is posed doesn't get you anywhere, the more so as it is not clear what to do with this condition, which by the way isn't of any interest either.

17. Jun 11, 2017

### Buffu

The definition says that the leading nonzero entry of each nonzero row should be $1$.

Yes, you are correct that this exercise is worthless. I will mark this thread solved.