Prove B is invertible if AB = I

  • Thread starter songoku
  • #1
Homework Statement
If AB = I for square matrices A and B, show that B is invertible. (Do not assume A is invertible)
Relevant Equations
##AA^{-1}=I##
Definition of inverse: Matrix ##P## is invertible if there is a matrix ##Q## such that ##PQ = I## and ##QP = I##

So since ##AB = I## is given, first I need to show ##BA = I## to be able to prove that ##B## is invertible? If yes, how to show ##BA = I##?

I thought the answer to this question was straightforward: because ##AB = I##, ##A## is an inverse of ##B##, so ##B## has an inverse and is therefore invertible?

Thanks
 

Answers and Replies

  • #2
Homework Statement:: If AB = I for square matrices A and B, show that B is invertible. (Do not assume A is invertible)
Relevant Equations:: ##AA^{-1}=I##

Definition of inverse: Matrix ##P## is invertible if there is a matrix ##Q## such that ##PQ = I## and ##QP = I##

So since ##AB = I## is given, first I need to show ##BA = I## to be able to prove that ##B## is invertible? If yes, how to show ##BA = I##?

I thought the answer to this question was straightforward: because ##AB = I##, ##A## is an inverse of ##B##, so ##B## has an inverse and is therefore invertible?

Thanks
The answer is straightforward if we have a group, but this isn't on our list of assumptions.

I would determine the kernel of ##B## and reason with the finite dimensions.
 
  • #3
I thought the answer to this question was straightforward: because ##AB = I##, ##A## is an inverse of ##B##, so ##B## has an inverse and is therefore invertible?
If you simply treat matrices as elements of a ring, then it can't be proved - as there are rings with elements that have one-sided inverses.

That means you need to use something specific about matrices (over and above the ring axioms).
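A concrete instance of such a ring is the algebra of linear operators on the space of infinite sequences: the left shift ##L## and right shift ##R## satisfy ##LR = I## but ##RL \neq I##. A minimal Python sketch of this standard counterexample, with sequences modeled as functions ##n \mapsto a_n## (the names `L`, `R`, and the sample sequence are just illustrative):

```python
# Standard counterexample: shift operators on infinite sequences have a
# one-sided inverse that is not two-sided. A sequence is a function n -> a_n.

def L(seq):                # left shift: (a0, a1, a2, ...) -> (a1, a2, ...)
    return lambda n: seq(n + 1)

def R(seq):                # right shift: (a0, a1, ...) -> (0, a0, a1, ...)
    return lambda n: 0 if n == 0 else seq(n - 1)

a = lambda n: n + 1        # the sequence 1, 2, 3, ...

# L after R recovers a exactly, so LR = I ...
print([L(R(a))(n) for n in range(5)])   # [1, 2, 3, 4, 5]

# ... but R after L destroys the first entry, so RL != I.
print([R(L(a))(n) for n in range(5)])   # [0, 2, 3, 4, 5]
```

This is why the problem must use finite-dimensionality: the counterexample lives in an infinite-dimensional space, where "square matrix" intuition fails.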
 
  • #4
The answer is straightforward if we have a group, but this isn't on our list of assumptions.

I would determine the kernel of ##B## and reason with the finite dimensions.

If you simply treat matrices as elements of a ring, then it can't be proved - as there are rings with elements that have one-sided inverses.

That means you need to use something specific about matrices (over and above the ring axioms).
Sorry my lesson has not covered all those things. What I have learned so far are linear transformations, subspace, inverse, dimension and rank.

Is it possible to answer the question only based on what I learned?

Thanks
 
  • #5
Sorry my lesson has not covered all those things. What I have learned so far are linear transformations, subspace, inverse, dimension and rank.

Is it possible to answer the question only based on what I learned?

Thanks
You need the rank, or injectivity, and the dimension formula, or an embedding argument.
 
  • #6
I think the main point here is that you would like to just do matrix operations to take ##BA## and end up with ##I##. But you cannot: it turns out there are other objects with addition and multiplication where being an inverse when multiplying on the left does not imply being an inverse when multiplying on the right. So you have to use something special about matrices being linear transformations. The best you can do with just matrix math is observing things like ##(BA)B=B(AB)=BI=B##.

So ##BA## is a matrix which when you multiply it on the right by ##B##, you still get ##B## (and ##B## isn't arbitrary here, it's your specific matrix). That doesn't feel like a lot, but if you think carefully about what it means for a square matrix to be full rank, you can prove that ##BA## must be the identity map (I suggest a proof by contradiction)
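As a numerical sanity check (not a proof), one can construct a left inverse of a random square matrix by solving ##B^T A^T = I##, and observe that it is automatically a right inverse too. The size and seed below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n))          # random square matrix, full rank almost surely

# Find A with AB = I by solving the linear system B^T A^T = I for A^T.
A = np.linalg.solve(B.T, np.eye(n)).T

print(np.allclose(A @ B, np.eye(n)))     # True: AB = I by construction
print(np.allclose(B @ A, np.eye(n)))     # True: BA = I comes along for free
print(np.allclose((B @ A) @ B, B))       # True: the identity (BA)B = B
```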
 
  • #7
$$AB = I$$
$$ABB^{-1} = IB^{-1}$$
$$A = IB^{-1}$$
$$BA = BIB^{-1}$$
$$BA = BB^{-1}$$
$$BA = I$$

EDIT: Apparently you're not allowed to invert either A or B, so this isn't a solution. Are you sure you read the question right?
 
  • #8
$$AB = I$$
$$ABB^{-1} = IB^{-1}$$
$$A = IB^{-1}$$
$$BA = BIB^{-1}$$
$$BA = BB^{-1}$$
$$BA = I$$

You assumed B is invertible! This is not a solution.
 
  • #9
You assumed B is invertible! This is not a solution.
Ohh haha, I thought A was the only non-invertible one.

Yeah no, then there's no solution I can think of.
 
  • #10
You assumed B is invertible! This is not a solution.
@Addez123 ... which, by the way, holds true for the equation ##AA^{-1}=I## in your first post, too. You must not assume that ##A## is invertible.

Again: Is ##B## injective? What do you know about linear (injective) transformations?
 
  • #11
Addez is not the person who started the thread :)
 
  • #15
You need the rank
I think the main point here is that you would like to just do matrix operations to take ##BA## and end up with ##I##. But you cannot: it turns out there are other objects with addition and multiplication where being an inverse when multiplying on the left does not imply being an inverse when multiplying on the right. So you have to use something special about matrices being linear transformations. The best you can do with just matrix math is observing things like ##(BA)B=B(AB)=BI=B##.

So ##BA## is a matrix which when you multiply it on the right by ##B##, you still get ##B## (and ##B## isn't arbitrary here, it's your specific matrix). That doesn't feel like a lot, but if you think carefully about what it means for a square matrix to be full rank, you can prove that ##BA## must be the identity map (I suggest a proof by contradiction)
What it means for a square matrix to be full rank is that the columns of that matrix form a linearly independent set, but I don't know how to use that fact to proceed.

Proof by contradiction means that I start from assumption that B is not invertible?

or injectivity
Again: Is ##B## injective? What do you know about linear (injective) transformations?
I don't know whether B is injective or not.

What I know is that a linear transformation ##B## is injective (one-to-one) if and only if ##B(x)=0## has only the trivial solution.

and the dimension formula, or an embedding argument.
By dimension formula, do you mean dim (col B) + dim (nul B) = n (assume B has n columns)?

EDIT: Apparently you're not allowed to invert either A or B, so this isn't a solution. Are you sure you read the question right?
The only things I am sure about this question are that I posted it exactly as I got it and that I don't know how to solve it.
 
  • #16
What it means for a square matrix to be full rank is that the columns of that matrix form a linearly independent set, but I don't know how to use that fact to proceed.
The vectors in a matrix probably won't help here very much. Of course, you can use them, but I think it is easier without them.
Proof by contradiction means that I start from assumption that B is not invertible?



I don't know whether B is injective or not.

What I know is that a linear transformation ##B## is injective (one-to-one) if and only if ##B(x)=0## has only the trivial solution.
Yes. An injective function satisfies ##f(x)=f(y) \Longrightarrow x=y##, and if ##f## is linear, as in our case, this means ##f(x-y)=0 \Longrightarrow x-y=0##, or equivalently ##f(v)=0 \Longrightarrow v=0.##

So assume ##B(v)=0.## What does that tell you, given that ##AB=I\;?##
By dimension formula, do you mean dim (col B) + dim (nul B) = n (assume B has n columns)?
Yes, although it is better known as rank-nullity theorem:
##\operatorname{rank} B + \operatorname{null} B = \dim\operatorname{im} B + \dim\operatorname{ker} B = \dim(\operatorname{col} B) + \dim(\operatorname{nul} B) = n = \dim V = \dim\operatorname{domain} B.##

Since you have (hopefully) shown that ##\dim\operatorname{ker}B=0## you will get the rank of ##B## from that formula.
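The rank–nullity bookkeeping can be checked numerically. A small sketch with NumPy (the matrices here are arbitrary examples, and `matrix_rank` works up to floating-point tolerance):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
B = rng.standard_normal((n, n))      # full rank almost surely, so ker B = {0}

rank = np.linalg.matrix_rank(B)
nullity = n - rank                   # rank-nullity: rank B + dim ker B = n

print(rank, nullity)                 # 5 0

# A matrix with a nontrivial kernel loses rank accordingly:
C = B.copy()
C[:, -1] = C[:, 0]                   # duplicate a column, so dim ker C >= 1
print(np.linalg.matrix_rank(C))      # 4
```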
 
  • #17
The only things I am sure about this question are that I posted it exactly as I got it and that I don't know how to solve it.
The question is right.
 
  • #18
Yes. An injective function satisfies ##f(x)=f(y) \Longrightarrow x=y##, and if ##f## is linear, as in our case, this means ##f(x-y)=0 \Longrightarrow x-y=0##, or equivalently ##f(v)=0 \Longrightarrow v=0.##

So assume ##B(v)=0.## What does that tell you, given that ##AB=I\;?##
Assume ##B## is an ##m\times m## matrix and ##B(v)=0##

$$AB=I$$
$$AB(v)=I(v)$$
$$A(0)=v$$
$$v=0$$

Yes, although it is better known as rank-nullity theorem:
##\operatorname{rank} B + \operatorname{null} B = \dim\operatorname{im} B + \dim\operatorname{ker} B = \dim(\operatorname{col} B) + \dim(\operatorname{nul} B) = n = \dim V = \dim\operatorname{domain} B.##

Since you have (hopefully) shown that ##\dim\operatorname{ker}B=0## you will get the rank of ##B## from that formula.
I have not learned about "ker" but I searched a bit about it.

Dim (nul B) = 0, then rank B = ##m \rightarrow## B has full rank, so B has ##m## pivot columns and B is row equivalent to ##I##. This means that B is invertible

Is this correct? Thanks
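The row-equivalence/pivot reasoning in post #18 can be illustrated with SymPy's exact `rref`. This assumes SymPy is installed, and the ##2\times 2## matrix is just a made-up full-rank example:

```python
from sympy import Matrix, eye

B = Matrix([[2, 1], [1, 1]])    # hypothetical full-rank example (det = 1)
R, pivots = B.rref()            # reduced row echelon form and pivot column indices

print(R == eye(2))              # True: B is row equivalent to I
print(pivots)                   # (0, 1): a pivot in every column
print(B.inv() * B == eye(2))    # True: a full-rank square matrix is invertible
```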
 
  • #19
Are we allowed to use properties of determinants, specifically for ##\det(AB)##?
Or rank-nullity and the rank of a product?
 
  • #20
Are we allowed to use properties of determinants, specifically for ##\det(AB)##?
Or rank-nullity and the rank of a product?
My lesson has not covered those things. For rank, the notes from the teacher only state the "rank theorem", not rank-nullity, and I have only learned about the rank of a matrix so far, not yet about the rank of a product of matrices.

Thanks
 
  • #21
My lesson has not covered those things. For rank, the notes from the teacher only state the "rank theorem", not rank-nullity, and I have only learned about the rank of a matrix so far, not yet about the rank of a product of matrices.

Thanks
It may be reasonably feasible to provide a broad, i.e., not the sharpest possible, bound for the rank of the product. If not, maybe we can drop this option.
 
  • #22
Assume ##B## is an ##m\times m## matrix and ##B(v)=0##

$$AB=I$$
$$AB(v)=I(v)$$
$$A(0)=v$$
$$v=0$$
This is a good start. You've shown that ##B## is one-to-one - although critically you haven't actually said that. Is that what you set out to show? That ##B## is one-to-one.

In any case, if you can show that ##B## is also onto, then you have shown that ##B## is invertible.
I have not learned about "ker" but I searched a bit about it.

Dim (nul B) = 0, then rank B = ##m \rightarrow## B has full rank, so B has ##m## pivot columns and B is row equivalent to ##I##. This means that B is invertible

Is this correct? Thanks
There's no need to talk about full rank, pivot columns or row equivalence. You should have been aiming to show that ##B## is onto (or surjective, if that's the terminology you are familiar with).
 
  • #24
Is that what you set out to show? That ##B## is one-to-one.
Yes

There's no need to talk about full rank, pivot columns or row equivalence. You should have been aiming to show that ##B## is onto (or surjective, if that's the terminology you are familiar with).
I think I can show ##B## is onto, but using pivots: since ##B## has a pivot in each row, ##B## is onto.

How to show it without using pivot?

Thanks
 
  • #25
My lesson has not covered those things. For rank, the notes from the teacher only state "rank theorem", not Rank - Nullity and I only learn about rank of a matrix so far, not yet about rank of product of matrix.

Thanks

What is the rank theorem?
 
  • #27
If B has n columns, rank B + dim nul B = n

This is also known as the rank nullity theorem. So what is dim nul B, given what you have figured out so far?
 
  • #28
This is also known as the rank nullity theorem. So what is dim nul B, given what you have figured out so far?
dim nul B = 0
 
  • #29
dim nul B = 0

This is correct, and it will be more educational if you think about what other conclusions you can now make.
 
  • #30
This is correct, and it will be more educational if you think about what other conclusions you can now make.
I have tried to answer the question in post #18
Dim (nul B) = 0, then rank B = ##m \rightarrow## B has full rank, so B has ##m## pivot columns and B is row equivalent to ##I##. This means that B is invertible

Or do you refer to post #24?
I think I can show ##B## is onto, but using pivots: since ##B## has a pivot in each row, ##B## is onto.

How to show it without using pivot?

Thanks
 
  • #31
I think I can show ##B## is onto, but using pivots: since ##B## has a pivot in each row, ##B## is onto.

How to show it without using pivot?

Thanks
I must admit I've never heard of "pivot point" in linear algebra. You can show that ##B## is onto if it is one-to-one by using a basis for the vector space, ##V##, where ##B: V \to V## is a linear mapping.

Let ##\{ e_1, e_2 \dots e_n\}## be a basis for V and consider ##\{ Be_1, Be_2 \dots Be_n\}##. By the linearity of ##B## we have:
$$a_1Be_1 + a_2Be_2 + \dots + a_nBe_n = 0 \ \Rightarrow \ B(a_1e_1 + \dots + a_ne_n) = 0$$(And, as ##B## is one-to-one and the ##e_i## are linearly independent):
$$\Rightarrow \ a_1e_1 + \dots + a_ne_n = 0 \ \Rightarrow \ a_1 = a_2 = \dots = a_n = 0$$This shows that the vectors ##Be_i## are linearly independent and hence a basis for ##V##.

Finally, if ##v \in V##, then for some scalars ##v_i##:
$$v = v_1Be_1 + \dots + v_nBe_n = B(v_1e_1 + \dots + v_ne_n)$$And, as ##v## was arbitrary, we see that ##B## is onto.
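This basis argument can be mirrored numerically: send the standard basis through a full-rank ##B## and check that the images are independent and span. A sketch with NumPy (the random matrix and vector are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
B = rng.standard_normal((n, n))      # one-to-one (full rank) almost surely

BE = B @ np.eye(n)                   # columns are the images B e_1, ..., B e_n

# Independence: a_1 Be_1 + ... + a_n Be_n = 0 only for a = 0,
# i.e. the square matrix BE has full rank.
print(np.linalg.matrix_rank(BE))     # 4

# Spanning: any v is a combination of the Be_i, so BE a = v is solvable.
v = rng.standard_normal(n)
a = np.linalg.solve(BE, v)
print(np.allclose(BE @ a, v))        # True: B is onto
```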
 
  • #32
This shows that the vectors ##Be_i## are linearly independent and hence a basis for ##V##.

Finally, if ##v \in V##, then for some scalars ##v_i##:
$$v = v_1Be_1 + \dots + v_nBe_n = B(v_1e_1 + \dots + v_ne_n)$$And, as ##v## was arbitrary, we see that ##B## is onto.
And how does this differ from the rank-nullity theorem the OP already correctly used? You use surjectivity to prove surjectivity. But I don't want to confuse the OP more than he already is. His solution might not have been perfectly phrased; nevertheless, it was correct (post #18).

The idea with the pivots is ok. In the end, we do not have enough information on what he may use according to his book, and what lies ahead.
 
  • #33
And how does this differ from the rank-nullity theorem the OP already correctly used?
The OP asked how to prove it without using pivots, so I showed him.

You use surjectivity to prove surjectivity.
I used injectivity to prove surjectivity. Which seems like an important concept in linear algebra and a neat solution.
 
  • #34
The OP asked how to prove it without using pivots, so I showed him.


I used injectivity to prove surjectivity. Which seems like an important concept in linear algebra and a neat solution.
Yes, and it is called the rank-nullity theorem.
 
  • #35
I must admit I've never heard of "pivot point" in linear algebra.
Maybe the term that you are familiar with is "pivot" or "pivot element" or "pivot position"

Thank you very much for all the help and explanation fresh_42, PeroK, Office_Shredder, Addez123, WWGD
 
