Proving facts about matrices without determinants

In summary, we discussed three problems about invertible matrices. In the first, we showed that if the square of a matrix ##A## is zero, then ##A## is not invertible. In the second, we found that ##AB = 0## does not rule out ##A## being invertible: take ##B = 0## and any invertible ##A##. Lastly, we explored the relationship between the invertibility of ##AB## and that of ##A## and ##B##: if ##AB## is invertible, then both ##A## and ##B## must be invertible, and the converse holds as well, since ##(AB)^{-1} = B^{-1}A^{-1}##.
  • #1
Mr Davis 97

Homework Statement


Let ##A## and ##B## be ##n \times n## matrices

1) Suppose ##A^2 = 0##. Prove that ##A## is not invertible.
2) Suppose ##AB=0##. Could ##A## be invertible?
3) Prove that if ##AB## is invertible, then ##A## and ##B## are invertible.

Homework Equations

The Attempt at a Solution


1) Suppose that ##A## were invertible. Then ##A^2 = 0 \Rightarrow A = 0##. This is a contradiction, because we know that the zero matrix is not invertible.

So I think this is one way of doing it. What are some others?

2) It seems that we can proceed in a similar way as in 1). Assume that ##A## is invertible; then ##B=0##. Thus, since ##A## can be an arbitrary matrix, it can be invertible.

Is this the correct way of doing this?

3) This is the one that I am having the most trouble with. Assume that ##AB## is invertible. So we know that there exists an ##n \times n## matrix ##C## such that ##C(AB) = (AB)C = I_n##. So ##(CA)B = A(BC) = I_n##. So we can conclude that ##B## has a left-inverse and that ##A## has a right-inverse. But I don't see how we can conclude that ##B## has a right-inverse and ##A## has a left-inverse.
 
  • #2
Mr Davis 97 said:
1) Suppose that ##A## were invertible. Then ##A^2 = 0 \Rightarrow A = 0##. This is a contradiction, because we know that the zero matrix is not invertible.

So I think this is one way of doing it. What are some others?
How do you know ##A^2=0## implies ##A=0##? There are non-zero matrices that square to 0.

2) It seems that we can proceed in a similar way as in 1). Assume that ##A## is invertible; then ##B=0##. Thus, since ##A## can be an arbitrary matrix, it can be invertible.

Is this the correct way of doing this?
Same problem. (Edit: I completely misread the second question.)

You should add one more step to each of the first two proofs.
 
  • #3
Mr Davis 97 said:

Homework Statement


Let ##A## and ##B## be ##n \times n## matrices

1) Suppose ##A^2 = 0##. Prove that ##A## is not invertible.
2) Suppose ##AB=0##. Could ##A## be invertible?
3) Prove that if ##AB## is invertible, then ##A## and ##B## are invertible.

Homework Equations

The Attempt at a Solution


1) Suppose that ##A## were invertible. Then ##A^2 = 0 \Rightarrow A = 0##. This is a contradiction, because we know that the zero matrix is not invertible.
It doesn't follow that if ##A^2 = 0## then ##A## must necessarily be the zero matrix. Consider ##A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}##.
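A quick numerical check of that counterexample (an editor sketch, assuming numpy):

[code]
import numpy as np

# Nonzero matrix that squares to the zero matrix.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])

print(np.allclose(A @ A, np.zeros((2, 2))))  # True: A^2 = 0
print(np.linalg.matrix_rank(A))              # 1 < 2, so A is singular
[/code]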
Mr Davis 97 said:
So I think this is one way of doing it. What are some others?

2) It seems that we can proceed in a similar way as in 1). Assume that ##A## is invertible; then ##B=0##. Thus, since ##A## can be an arbitrary matrix, it can be invertible.

Is this the correct way of doing this?

3) This is the one that I am having the most trouble with. Assume that ##AB## is invertible. So we know that there exists a ##n \times n## matrix ##C## such that ##C(AB) = (AB)C = I_n##. So ##(CA)B = A(BC) = I_n##. So we can conclude that ##B## has a left-inverse and that ##A## has a right-inverse. But I don't see how we can conclude that ##B## has a right-inverse and ##A## has a left-inverse.
All you are given is that ##AB## is invertible, which means it is a square matrix. Does this necessarily mean that both ##A## and ##B## have to be square matrices as well?
 
  • #4
So for the first problem, I am assuming that ##A## is invertible, and then deriving a contradiction. I am not just assuming that the zero matrix is the only matrix that squares to zero.
If I assume that ##A## is invertible, then ##A^2 = 0 \Rightarrow A^{-1}A^2 = A^{-1} 0 \Rightarrow A = 0##, but zero can't be invertible.

And for the third problem, we are given that ##A## and ##B## are square matrices.
 
  • #5
Mr Davis 97 said:
1) Suppose that ##A## were invertible. Then ##A^2 = 0 \Rightarrow A = 0##. This is a contradiction, because we know that the zero matrix is not invertible.
Right.
So I think this is one way of doing it. What are some others?
Of course, the determinant. Another way is to assume ##A \neq 0##, which implies there is a vector ##v## with ##Av \neq 0##; since ##A(Av) = A^2 v = 0##, we have ##Av \in \operatorname{ker} A##, so ##A## has a nontrivial kernel and cannot be invertible.
 
  • #6
Mr Davis 97 said:
2) It seems that we can proceed in a similar way as in 1). Assume that ##A## is invertible; then ##B=0##. Thus, since ##A## can be an arbitrary matrix, it can be invertible.

Is this the correct way of doing this?
No. You don't have an arbitrary matrix. You have certain matrices ##A,B## with ##AB=0##.
Of course ##A## can be invertible, namely if ##B=0## as you have said. So the general answer is "yes" by your own reasoning.
 
  • #7
fresh_42 said:
No. You don't have an arbitrary matrix. You have certain matrices ##A,B## with ##AB=0##.
Of course ##A## can be invertible, namely if ##B=0## as you have said. So the general answer is "yes" by your own reasoning.
So I am actually confused by my own reasoning. To show that ##A## can be invertible, don't I need to also find a sufficient condition? I showed that ##A## being invertible implies that ##B=0##, which means that ##B## being ##0## is necessary. But this is not sufficient to show that there is an instance where ##A## can be invertible, right? Don't I need a sufficient condition?
 
  • #8
You could just give an example. For instance, ##A=I## and ##B=0## satisfy ##AB=0##, but ##A## is invertible.
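A minimal numpy check of this example (an editor sketch; the size ##n = 3## is an arbitrary choice):

[code]
import numpy as np

n = 3
A = np.eye(n)          # A = I is invertible
B = np.zeros((n, n))   # B = 0

print(np.allclose(A @ B, np.zeros((n, n))))      # True: AB = 0
print(np.allclose(np.linalg.inv(A), np.eye(n)))  # A^{-1} exists (and equals I)
[/code]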
 
  • #9
Mr Davis 97 said:
So I am actually confused by my own reasoning. To show that ##A## can be invertible, don't I need to also find a sufficient condition? I showed that ##A## being invertible implies that ##B=0##, which means that ##B## being ##0## is necessary. But this is not sufficient to show that there is an instance where ##A## can be invertible, right? Don't I need a sufficient condition?
The question is: ##(\; \exists A \; \exists B \, : \, A \textrm{ invertible } \;\wedge \; AB=0\;)## true or false?
There is no statement about an equivalence or an implication.
 
  • #10
fresh_42 said:
The question is: ##(\; \exists A \; \exists B \, : \, A \textrm{ invertible } \;\wedge \; AB=0\;)## true or false?
There is no statement about an equivalence or an implication.
So to completely answer the question, do I need to supply an example of an invertible matrix ##A##, such as ##I_n##, as vela said?
 
  • #11
Mr Davis 97 said:
So to completely answer the question, do I need to supply an example of an invertible matrix ##A##, such as ##I_n##, as vela said?
If the statement is false, all that's needed is a single counterexample. If the statement is true, no amount of examples would be sufficient. Also, even though the questions are about ##n \times n## matrices, you can usually think about things using ##2 \times 2## or ##3 \times 3## matrices, especially when you're looking for counterexamples.
 
  • #12
So what about the last problem?
 
  • #13
Mr Davis 97 said:
So what about the last problem?
That's probably a trick I can't remember. Try to show (in general, for any group, not necessarily Abelian) that left- and right-inverses cannot be different, by using associativity.
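Spelled out, that computation (a standard one, valid in any monoid; the hard part, as the next posts note, is getting both inverses to exist in the first place): if ##yx = e## and ##xz = e##, then
[tex]
y = ye = y(xz) = (yx)z = ez = z.
[/tex]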
 
  • #14
Mark44 said:
If the statement is false, all that's needed is a single counterexample. If the statement is true, no amount of examples would be sufficient. Also, even though the questions are about ##n \times n## matrices, you can usually think about things using ##2 \times 2## or ##3 \times 3## matrices, especially when you're looking for counterexamples.
Of course, answering the question posed depends on determining what the correct statement is. In this case, it's probably a bit easier and clearer to step back and look at the question as a whole. The question asks: if ##AB=0##, is it possible for ##A## to be invertible? Supplying an example definitively shows the answer to the question is yes.
 
  • #15
fresh_42 said:
That's probably a trick I can't remember. Try to show (in general, for any group, not necessarily Abelian) that left- and right-inverses cannot be different, by using associativity.
The complication here is showing that if a left (right) inverse exists, then the right (left) inverse also exists.
 
  • #16
vela said:
The complication here is showing that if a left (right) inverse exists, then the right (left) inverse also exists.
Yes, and it's one of those "Why didn't I see this?" questions, once you know the right equation.
 
  • #17
Argh, without determinants. Must learn how to read :( If we go by definition: ##X## is singular if and only if ##X## is not invertible.
Alternatively, one could argue by contradiction: assume ##B## were singular; then there would exist a vector ##x\neq 0## s.t. ##Bx = 0##, therefore ##(AB)x = 0## (why?), making ##AB## singular, a contradiction. Therefore ##B## must be invertible, and therefore ##A## is invertible.
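Filling in the gaps above (a sketch; the dimension step is the one discussed in the posts below):
[tex]
Bx = 0,\; x \neq 0 \;\Longrightarrow\; (AB)x = A(Bx) = A0 = 0,
[/tex]
and once ##B## is known to be invertible, ##A = (AB)B^{-1}## is invertible, with ##A^{-1} = B(AB)^{-1}##.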

I can also tell you that using only semigroup properties is not enough to tackle this problem.
Specifically, the existence of an L-inverse does not, in general, guarantee the existence of an R-inverse.

Since determinants are evil, you should use vectors or perhaps you can think of something ingenious!

What do you think of this proposition?
##AB ## invertible only if ##BA ## invertible
 
  • #18
nuuskur said:
Argh, without determinants. Must learn how to read :( Alternatively, one could argue by contradiction: assume ##B## were singular; then there would exist a vector ##x\neq 0## s.t. ##Bx = 0##, therefore ##(AB)x = 0## (why?), making ##AB## singular, a contradiction. Therefore ##B## must be invertible, and therefore ##A## is invertible.
Doesn't this argument only show that ##B## is injective, not necessarily surjective too?
 
  • #19
Since you haven't specified any exotic properties of the ring of matrices you are dealing with, I'm assuming we are talking about matrices over fields, not division rings or god knows what else.

The equivalence (rather straightforward) is the following:
[tex]
B \text{ invertible} \iff \left( Bx = 0 \Longrightarrow x = 0 \right)
[/tex]
You should prove it, if you are not convinced, but the proof is obvious.
This will solve the problem at hand.
 
  • #20
nuuskur said:
Since you haven't specified any exotic properties of the ring of matrices you are dealing with, I'm assuming we are talking about matrices over fields, not division rings or god knows what else.

The equivalence (rather straightforward) is the following:
[tex]
B \text{ invertible} \iff \left( Bx = 0 \Longrightarrow x = 0 \right)
[/tex]
You should prove it, if you are not convinced, but the proof is obvious.
This will solve the problem at hand.
One also needs a dimension argument to conclude that injectivity is sufficient.
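Concretely, that dimension argument is rank-nullity (a sketch): for an ##n \times n## matrix ##B## over a field,
[tex]
\dim \ker B + \operatorname{rank} B = n,
[/tex]
so ##Bx = 0 \Rightarrow x = 0## gives ##\dim \ker B = 0##, hence ##\operatorname{rank} B = n##; thus ##B## is surjective as well as injective, and therefore has a two-sided inverse.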
 
  • #21
Mr Davis 97 said:

Homework Statement


Let ##A## and ##B## be ##n \times n## matrices

3) Prove that if ##AB## is invertible, then ##A## and ##B## are invertible.
Here's a longish, different approach that leans heavily on orthogonality. Technically this requires a satisfactory inner product definition (with its associated norm and such), so you do lose a touch of generality.

For 3, I'm very tempted to make an argument using quadratic forms, but I'll take another approach. If I can assume you are using complex numbers as a field, and you are comfortable with Gram-Schmidt:

consider ##\mathbf{AB} = \mathbf C##

(Important reminder: all matrices are square, of the same finite dimension. Among other things, this means that the unitary matrices are full rank in the decompositions below. Also, for the avoidance of doubt, ##^H## denotes conjugate transpose.)

For all ##\mathbf z \neq \mathbf 0##, we know that we can solve for an exact, unique ##\mathbf x## from ##\mathbf {Cx} = \mathbf z##, because we are told ##\mathbf C^{-1}## exists. This is equivalent to saying ##\big(\mathbf{QR}\big)\mathbf x = \mathbf z##, where ##\mathbf Q## is unitary and there are no zeros along the diagonal of ##\mathbf R##. This leads to ##\mathbf{Rx} = \mathbf Q^H \mathbf z##, and we can interpret the diagonal entries of ##\mathbf R## as pivots in a row echelon form matrix -- i.e. the end result of Gaussian elimination. Note that ##\mathbf Q^H \mathbf z \neq \mathbf 0## (why?).

Alternatively, do QR factorization on ##\mathbf A## and a modified QR on ##\mathbf B##. You get ##\big(\mathbf{P T}\big)\big(\mathbf{U V}^H\big)\mathbf x = \mathbf z##, where ##\mathbf T## and ##\mathbf U## are upper triangular and ##\mathbf P## and ##\mathbf V## are unitary. Writing ##\mathbf V^H \mathbf x = \mathbf y##, the above equation can be rewritten as ##\mathbf{T}\mathbf{U}\mathbf y = \mathbf{P}^H \mathbf z##. We can solve for a unique ##\mathbf y## (and in turn ##\mathbf x##, which is just one full-rank matrix multiplication away) if and only if there are no zeros along the diagonal (pivots) of the matrix ##\mathbf{T}\mathbf{U}##. From here, observe that an upper triangular matrix times an upper triangular matrix is upper triangular, and that the ##k##th diagonal entry of the product is simply ##\mathbf T_{k,k} \mathbf U_{k,k}##. Thus the resulting upper triangular matrix has no zeros along its diagonal if and only if ##\mathbf T## and ##\mathbf U## have no zeros along their diagonals, which is to say that ##\mathbf A## and ##\mathbf B## must both be full rank. (This setup has direct application to your second question.)

Either way you look at it, you get an upper triangular matrix times an unknown vector equals some known vector that is not the zero vector. If it is solvable in the first approach, then it must be solvable in the second approach too (because matrix multiplication is associative), and the reason is that there are no zeros along the diagonal of the upper triangular matrix.
- - - - -
This is perhaps a bit overkill, but incidentally, it gets you quite close to showing that determinants multiply (or more humbly, that the magnitudes do), and QR factorization gives a very nice geometric feel for the (magnitude of the) determinant. I also tend to think understanding Gram-Schmidt is... quite important for numerous other things.

Note that I purposely left a few open items in here. If you look at ##\mathbf B = \mathbf{UV}^H##, that is not a decomposition I've seen done elsewhere. But if you understand how to use Gram-Schmidt to do typical QR decomposition, you can derive this alternative form.

It's also worth pointing out that if you prove (3), then you have also proved (1) -- i.e. prove (3), then consider the special case where ##\mathbf B := \mathbf A##, which tells you that ##\mathbf{T}\mathbf{U} = \mathbf 0##. At a bare minimum, this means there must be at least ##n## zeros among the ##2n## diagonal entries of ##\mathbf{T}## and ##\mathbf{U}## (it tells you more than this, but it's sufficient to know that there is at least one zero along the diagonal of one of those triangular matrices in order to blow up invertibility)...

And using the above approach (i.e. reducing things to two triangular matrices that multiply) gives you the tools to visualize the answer to number (2).
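A small numerical illustration of two facts used above (an editor sketch with numpy; the tolerance ##10^{-12}## is an arbitrary choice):

[code]
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))   # generic random matrices are full rank
B = rng.standard_normal((n, n))

# QR-factor the product C = AB; C is invertible iff R has no zero pivots.
Q, R = np.linalg.qr(A @ B)
print(np.abs(np.diag(R)).min() > 1e-12)  # True: all pivots nonzero

# Product of two upper triangular matrices: the diagonals just multiply.
T = np.triu(rng.standard_normal((n, n)))
U = np.triu(rng.standard_normal((n, n)))
print(np.allclose(np.diag(T @ U), np.diag(T) * np.diag(U)))  # True
[/code]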
 

1. What are matrices?

Matrices are rectangular arrays of numbers or symbols that are arranged in rows and columns. They are commonly used in mathematics and science to represent and manipulate data, equations, and transformations.

2. How are determinants used in proving facts about matrices?

Determinants can be used to detect certain properties of matrices, such as invertibility, and to compute quantities such as eigenvalues. However, there are alternative methods for proving facts about matrices that do not rely on determinants.

3. What are some alternative methods for proving facts about matrices without determinants?

Some alternative methods include using elementary row or column operations, Gaussian elimination, and the concept of linear independence. These methods can be used to prove facts such as matrix equivalence, rank, and linear transformations.
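As a concrete sketch of the elimination approach just mentioned (editor-added; the function name `is_invertible` and the tolerance are illustrative choices, not from the thread), here is a determinant-free invertibility test in Python with numpy:

[code]
import numpy as np

def is_invertible(M, tol=1e-12):
    """Test invertibility by Gaussian elimination with partial pivoting,
    without computing a determinant."""
    A = np.array(M, dtype=float)
    n = A.shape[0]
    for k in range(n):
        p = k + np.argmax(np.abs(A[k:, k]))   # partial pivoting
        if abs(A[p, k]) < tol:                # no usable pivot: singular
            return False
        A[[k, p]] = A[[p, k]]                 # swap rows k and p
        A[k+1:] -= np.outer(A[k+1:, k] / A[k, k], A[k])  # eliminate below
    return True

print(is_invertible(np.eye(3)))         # True
print(is_invertible([[0, 1], [0, 0]]))  # False: the nilpotent example
[/code]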

4. Are there any disadvantages to proving facts about matrices without determinants?

While there are alternative methods, determinants are still a useful tool in matrix calculations. Without determinants, some proofs may be more complicated or require more steps. Additionally, some results, such as the Cayley-Hamilton theorem, are usually stated and proved in terms of determinants.

5. Can I still use determinants in matrix calculations if I choose to prove facts without them?

Yes, determinants can still be used in matrix calculations even if you choose to prove facts without them. They are a helpful tool for finding eigenvalues and solving systems of equations, among other applications.
