Is AB Invertible If n < m and B has a Non-Trivial Kernel?

  • #1
Buffu
If ##A## is an ##m \times n## matrix, ##B## is an ##n \times m## matrix, and ##n < m##, show that ##AB## is not invertible.

Suppose ##AB## is invertible. Then its reduced echelon form is the identity, so

##I = P(AB)## where ##P## is some invertible matrix.

Since ##B## is ##n \times m## with ##n < m##, the system ##B\mathbf X = 0## has a non-trivial solution; call it ##\mathbf X_0##.

##I\mathbf X_0 = P(AB)\mathbf X_0 \implies \mathbf X_0 = PA (B \mathbf X_0) = PA \times 0 = 0##

Which means ##\mathbf X_0 = 0##, which is a contradiction. Therefore ##AB## is not invertible.

Is this correct?
 
  • #2
This looks OK, though I don't really think the proof by contradiction is necessary?

Notation-wise, I would probably have used ##\mathbf x## to denote a vector in ##B \mathbf x = 0## -- as is, I'm not sure whether the X is a matrix or a vector, and it would be customary to use a vector here. Further, I wouldn't write ##A \times 0##... that looks like a cross product.

I typically avoid proofs by contradiction if feasible. It occurs to me that when you note that ##B \mathbf x_0 = 0## for some ##\mathbf x_0 \neq 0##, this tells us right away that ##AB## is not invertible. By definition an invertible matrix does not map any non-zero vector to the zero vector and ##(AB)\mathbf x_0 = A(B\mathbf x_0) = A \mathbf 0 = \mathbf 0##, but clearly ##(AB)## does so and hence is not invertible.
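This direct argument is easy to check numerically. A minimal sketch (the 3x2 and 2x3 matrices below are made up for illustration, and NumPy is assumed):

```python
import numpy as np

# Illustrative matrices with m = 3, n = 2, so n < m.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
B = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# x0 = (1, 1, -1) is a non-zero vector in the kernel of B.
x0 = np.array([1.0, 1.0, -1.0])
assert np.allclose(B @ x0, 0)

# (AB) x0 = A (B x0) = A 0 = 0: AB sends a non-zero vector to zero,
# so AB cannot be invertible.
assert np.allclose((A @ B) @ x0, 0)

# Consistent with that, the 3x3 product AB has rank at most n = 2.
print(np.linalg.matrix_rank(A @ B))  # 2
```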
 
  • #3
StoneTemplePython said:
This looks OK, though I don't really think the proof by contradiction is necessary?

Notation-wise, I would probably have used ##\mathbf x## to denote a vector in ##B \mathbf x = 0## -- as is, I'm not sure whether the X is a matrix or a vector, and it would be customary to use a vector here. Further, I wouldn't write ##A \times 0##... that looks like a cross product.

I typically avoid proofs by contradiction if feasible. It occurs to me that when you note that ##B \mathbf x_0 = 0## for some ##\mathbf x_0 \neq 0##, this tells us right away that ##AB## is not invertible. By definition an invertible matrix does not map any non-zero vector to the zero vector and ##(AB)\mathbf x_0 = A(B\mathbf x_0) = A \mathbf 0 = \mathbf 0##, but clearly ##(AB)## does so and hence is not invertible.

Thank you. I made it more complicated than it needed to be :(. I used bold to indicate that it is a vector.

Why do you avoid proof by contradiction?
 
  • #4
I suppose it is a matter of taste, but if I can show something directly, I try to show it directly. A direct proof tends to make a lot more sense to me: I can generally attach it to a latticework of other ideas and theorems, it is easier to remember, and so on.

The tradeoff is that some things can only be shown by contradiction, and in other cases the proof by contradiction is a paragraph while a direct proof (say, a derivation) takes three pages. So you have to pick your spots.

Perhaps more idiosyncratically, I would add that I've become fond of proofs that have a "visual" component to them. In this case, we could note

##AB = \mathbf a_1 \widetilde{\mathbf b_1}^T + \mathbf a_2 \widetilde{\mathbf b_2}^T + \dots + \mathbf a_n \widetilde{\mathbf b_n}^T##

which is to say ##(AB)## is an ##m \times m## matrix built up as a sum of ##n## rank-one matrices. A sum of ##n## rank-one matrices has rank at most ##n##. Hence ##AB## has rank at most ##n < m## and is not invertible.
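That decomposition is easy to verify numerically. A small sketch (random matrices for illustration, NumPy assumed), where column ##k## of ##A## is paired with row ##k## of ##B##:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 2                      # n < m
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))

# AB equals the sum of n rank-one matrices: outer(A[:, k], B[k, :]).
rank_one_sum = sum(np.outer(A[:, k], B[k, :]) for k in range(n))
assert np.allclose(A @ B, rank_one_sum)

# A sum of n rank-one matrices has rank at most n, so the m x m
# product AB is singular whenever n < m.
assert np.linalg.matrix_rank(A @ B) <= n
assert np.isclose(np.linalg.det(A @ B), 0.0, atol=1e-10)
```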
 
  • #5
StoneTemplePython said:
I suppose it is a matter of taste, but if I can show something directly, I try to show it directly. A direct proof tends to make a lot more sense to me: I can generally attach it to a latticework of other ideas and theorems, it is easier to remember, and so on.

The tradeoff is that some things can only be shown by contradiction, and in other cases the proof by contradiction is a paragraph while a direct proof (say, a derivation) takes three pages. So you have to pick your spots.

Perhaps more idiosyncratically, I would add that I've become fond of proofs that have a "visual" component to them. In this case, we could note

##AB = \mathbf a_1 \widetilde{\mathbf b_1}^T + \mathbf a_2 \widetilde{\mathbf b_2}^T + \dots + \mathbf a_n \widetilde{\mathbf b_n}^T##

which is to say ##(AB)## is an ##m \times m## matrix built up as a sum of ##n## rank-one matrices. A sum of ##n## rank-one matrices has rank at most ##n##. Hence ##AB## has rank at most ##n < m## and is not invertible.

I also like that type of proof, but the more elegant a proof is, the harder it is to come up with.
 
  • #6
Buffu said:
I also like that type of proof, but the more elegant a proof is, the harder it is to come up with.

Don't worry about it. Your proof worked fine.
 
  • #7
Buffu said:
If ##A## is an ##m \times n## matrix, ##B## is an ##n \times m## matrix, and ##n < m##, show that ##AB## is not invertible.

Suppose ##AB## is invertible. Then its reduced echelon form is the identity, so

##I = P(AB)## where ##P## is some invertible matrix.

Since ##B## is ##n \times m## with ##n < m##, the system ##B\mathbf X = 0## has a non-trivial solution; call it ##\mathbf X_0##.

##I\mathbf X_0 = P(AB)\mathbf X_0 \implies \mathbf X_0 = PA (B \mathbf X_0) = PA \times 0 = 0##

Which means ##\mathbf X_0 = 0##, which is a contradiction. Therefore ##AB## is not invertible.

Is this correct?
EDIT: Otherwise, you seem to be arguing that the kernel of ##B## is contained in the kernel of ##AB##, which is true and proves the point.
Just to nitpick a bit: you may want to put an index on your identity ##I##, so it is clear whether it is ##n \times n##, ##m \times m##, etc. Here it is ##m \times m##, by the matrix product rules. Especially if you are doing proofs, this is a good idea.
 

What is the definition of an invertible matrix?

An invertible matrix is a square matrix that has a unique inverse: another matrix which, when multiplied with it, gives the identity matrix.

How do you know if a matrix is invertible?

A square matrix is invertible if and only if its determinant is non-zero. Equivalently, every system of linear equations with that coefficient matrix has a unique solution.
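For example, a small sketch using NumPy's determinant routine (the matrices are made up for illustration; in floating point, the rank or condition number is usually a more reliable practical test than the determinant):

```python
import numpy as np

M = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(np.linalg.det(M))      # about -2: non-zero, so M is invertible

S = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # second row is twice the first
print(np.linalg.det(S))      # about 0: S is singular and has no inverse
```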

What is the significance of an invertible matrix?

An invertible matrix is important in linear algebra because it allows systems of linear equations to be solved directly, and multiplying by an inverse plays the role that division plays for ordinary numbers.

Can all matrices be inverted?

No, not all matrices can be inverted. Only square matrices with a non-zero determinant can be inverted. A square matrix with determinant zero is called singular and has no inverse.

How do you find the inverse of a matrix?

The inverse of a matrix can be found by Gauss-Jordan elimination (row-reducing the augmented matrix [A | I] until it becomes [I | A⁻¹]), or by using elementary row operations and elementary matrices. It can also be computed with software or calculators that provide an inverse function.
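A minimal sketch with NumPy (example matrix made up for illustration; `np.linalg.inv` raises `LinAlgError` if the input is singular):

```python
import numpy as np

M = np.array([[4.0, 7.0],
              [2.0, 6.0]])

M_inv = np.linalg.inv(M)     # raises np.linalg.LinAlgError for singular M

# Check the defining property: M times its inverse is the identity.
assert np.allclose(M @ M_inv, np.eye(2))
print(M_inv)                 # (1/10) * [[6, -7], [-2, 4]]
```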
