Showing that B is invertible if A = BAB, where A is invertible

  • Thread starter: Mr Davis 97
SUMMARY

The discussion centers on proving the invertibility of matrix B given that A = BAB and A is invertible. Participants establish that B must be invertible by exhibiting both a left inverse ##C = A^{-1}BA## and a right inverse ##D = ABA^{-1}##, then showing that C = D follows from associativity alone. The conversation also highlights the role of determinants in confirming invertibility and clarifies that the existence of a one-sided inverse does not automatically imply a two-sided inverse without additional conditions.

PREREQUISITES
  • Understanding of matrix algebra and properties of square matrices.
  • Knowledge of determinants and their role in matrix invertibility.
  • Familiarity with left and right inverses of matrices.
  • Basic concepts of linear algebra, including the associative property of matrix multiplication.
NEXT STEPS
  • Study the properties of determinants in relation to matrix invertibility.
  • Learn about left and right inverses and their implications in linear algebra.
  • Explore theorems related to invertible matrices and their applications.
  • Investigate counterexamples in non-commutative rings to understand the limitations of matrix properties.
USEFUL FOR

Mathematicians, students of linear algebra, and anyone interested in understanding matrix properties and their implications in higher mathematics.

Mr Davis 97

Homework Statement


Let A and B be square matrices. Suppose that A is invertible and A = BAB. Show that B is invertible.

Homework Equations

The Attempt at a Solution


First, since BAB = A and A is invertible, we have ##B(ABA^{-1}) = I##. However, to show that B is invertible, I also need to show that ##(ABA^{-1})B = I##, but I don't see how to do this...
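The step above can be checked numerically. Here is a minimal sketch with NumPy, using a hypothetical pair of matrices chosen so that the hypothesis A = BAB actually holds (a diagonal B with ##B^2 = I## commutes with a diagonal A, so ##BAB = AB^2 = A##):

```python
import numpy as np

# Hypothetical example matrices: B squares to I and commutes with A,
# so BAB = A B^2 = A, satisfying the hypothesis of the problem.
A = np.diag([2.0, 3.0])
B = np.diag([1.0, -1.0])

assert np.allclose(B @ A @ B, A)          # hypothesis: A = BAB

# From BAB = A and A invertible: B (A B A^{-1}) = I,
# i.e. D = A B A^{-1} is a right inverse of B.
D = A @ B @ np.linalg.inv(A)
print(np.allclose(B @ D, np.eye(2)))      # True
```

This only demonstrates the right-inverse direction from the attempt; the left inverse is the subject of the replies below.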
 
With the same method you can construct a (possibly different) left inverse C of B.
Then you have ##CB = 1 = BD##. (In your case you already have ##D = ABA^{-1}##.)
Now show that in general C = D has to hold.
 
fresh_42 said:
With the same method you can construct a (possibly different) left inverse C of B.
Then you have ##CB = 1 = BD##. (In your case you already have ##D = ABA^{-1}##.)
Now show that in general C = D has to hold.
Well ##ABA^{-1} = A^{-1}BA## only if ##A = A^{-1}##, right? But we aren't given that in the problem.
 
Mr Davis 97 said:
Well ##ABA^{-1} = A^{-1}BA## only if ##A = A^{-1}##, right? But we aren't given that in the problem.
And it is not necessary. (Probably it isn't true either, but I haven't thought about a counterexample.)
I read your question as you have found ##C = A^{-1}BA##.

So forget the A for now and prove: ##CB = BD## implies C = D, which is useful to know in other cases, too.

Edit: I forgot to repeat that ##CB = BD = 1##, which is important here.
 
fresh_42 said:
And it is not necessary. (Probably it isn't true either, but I haven't thought about a counterexample.)
I read your question as you have found ##C = A^{-1}BA##.

So forget the A for now and prove: ##CB = BD## implies C = D, which is useful to know in other cases, too.

Edit: I forgot to repeat that ##CB = BD = 1##, which is important here.
I'm not seeing how to do it. Since we aren't given that C, B, or D have an inverse, it doesn't seem possible to manipulate the equation CB = BD to get C = D.
 
Mr Davis 97 said:
I'm not seeing how to do it. Since we aren't given that C, B, or D have an inverse, it doesn't seem possible to manipulate the equation CB = BD to get C = D.
You have found a left inverse C of B and a right inverse D of B. (##C = A^{-1}BA## and ##D = ABA^{-1}##)
Thus you know ##CB = 1 = BD##. What is ##CBD##?
 
Mr Davis 97 said:
I'm not seeing how to do it. Since we aren't given that C, B, or D have an inverse, it doesn't seem possible to manipulate the equation CB = BD to get C = D.
To quote from another of your threads:
Ray Vickson said:
There is a standard theorem which states that if a square matrix A has a right (left) inverse B, then it also has a left (right) inverse, and that is equal to B as well. You should expand your understanding by trying to prove that theorem.
 
There's also that useful little concept called the determinant!
 
DrClaude said:
To quote from another of your threads:
So are you saying that to find an inverse for an n×n matrix I only need to show that AC = I, and then automatically CA = I too?
 
  • #10
Mr Davis 97 said:
So are you saying that to find an inverse for an n×n matrix I only need to show that AC = I, and then automatically CA = I too?

You should be able to prove that easily enough. And, again, I would like to mention the d-word: "determinant".
 
  • #11
PeroK said:
You should be able to prove that easily enough. And, again, I would like to mention the d-word: "determinant".
Well, using determinants on ##AC = CA##, we get that ##det(A) det(C) = det(C) det(A)##, which is a true equation, but I don't see how we can then go back to say that AC = CA as a result.
 
  • #12
Mr Davis 97 said:
Well, using determinants on ##AC = CA##, we get that ##det(A) det(C) = det(C) det(A)##, which is a true equation, but I don't see how we can then go back to say that AC = CA as a result.

The determinant is related to the existence of an inverse.
 
  • #13
PeroK said:
The determinant is related to the existence of an inverse.
Well we know that ##det(A) det(C) = det(C) det(A) = 1##, so both AC and CA have an inverse. Does this imply that A has an inverse and that inverse is C?
 
  • #14
Mr Davis 97 said:
Well we know that ##det(A) det(C) = det(C) det(A) = 1##, so both AC and CA have an inverse. Does this imply that A has an inverse and that inverse is C?

If ##det(A) det(C) = 1##, what does that tell you about ##det(A)## and ##det(C)##?
 
  • #15
PeroK said:
If ##det(A) det(C) = 1##, what does that tell you about ##det(A)## and ##det(C)##?
That neither is zero, and thus they're both invertible. Thus, C is the inverse of A, and A is the inverse of C, since CA = I. Is that right?

Also, is this only true for n × n matrices because for general m × n matrices we can't take determinants?
 
  • #16
Mr Davis 97 said:
That neither is zero, and thus they're both invertible. Thus, C is the inverse of A, and A is the inverse of C, since CA = I. Is that right?

Also, is this only true for n × n matrices because for general m × n matrices we can't take determinants?

You might like to tidy it up by saying that if ##AC = I## then ##det(C) \ne 0## hence ##C^{-1}## exists and:

##ACC^{-1} = IC^{-1}##, hence ##A = C^{-1}##

For general m × n matrices, the products ##AC## and ##CA## need not even be defined.
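The tidy-up above can be illustrated with a quick numeric sketch. The matrix C here is a hypothetical choice; the point is only that once ##AC = I## holds with ##det(C) \ne 0##, the equation ##A = C^{-1}## (and hence ##CA = I##) follows:

```python
import numpy as np

# Hypothetical invertible C; define A as the matrix satisfying AC = I.
C = np.array([[1.0, 2.0],
              [3.0, 4.0]])
A = np.linalg.inv(C)

assert np.allclose(A @ C, np.eye(2))      # AC = I by construction

# det(C) != 0, so C^{-1} exists; multiplying AC = I by C^{-1} on the
# right gives A = C^{-1}, and then CA = C C^{-1} = I automatically.
print(np.allclose(C @ A, np.eye(2)))      # True
```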
 
  • #17
PeroK said:
You might like to tidy it up by saying that if ##AC = I## then ##det(C) \ne 0## hence ##C^{-1}## exists and:

##ACC^{-1} = IC^{-1}##, hence ##A = C^{-1}##

For general m × n matrices, the products ##AC## and ##CA## need not even be defined.
Just out of curiosity, is there any simple way to prove it without determinants?
 
  • #18
Mr Davis 97 said:
Just out of curiosity, is there any simple way to prove it without determinants?

It's not generally true in a ring that a left inverse implies a right inverse. So, you need more than the ring axioms.
 
  • #19
Mr Davis 97 said:
Just out of curiosity, is there any simple way to prove it without determinants?
If you already have a left inverse ##CB = I## and a right inverse ##BD = I## as in your case, then all you need is associativity to prove they are equal: ## C = C \cdot I = C\cdot (BD) = (CB)\cdot D = I\cdot D = D##

And to not confuse you with what @PeroK has said: You cannot automatically assume that the existence of a one-sided inverse implies one from the other side. However, you've proven the existence of both inverses to ##B## in posts #1, #3. Strictly speaking, that's all you needed to do. The equality of both hasn't been required. But as you can see, it's easy to do in this case.
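The associativity argument can be checked on the thread's concrete inverses ##C = A^{-1}BA## and ##D = ABA^{-1}##. A minimal sketch, assuming a hypothetical pair of matrices satisfying ##A = BAB## (diagonal B with ##B^2 = I##):

```python
import numpy as np

# Hypothetical matrices with A = BAB (B^2 = I and B commutes with A).
A = np.diag([2.0, 3.0])
B = np.diag([1.0, -1.0])
Ainv = np.linalg.inv(A)

C = Ainv @ B @ A       # left inverse:  CB = A^{-1}(BAB) = A^{-1}A = I
D = A @ B @ Ainv       # right inverse: BD = (BAB)A^{-1} = A A^{-1} = I

assert np.allclose(C @ B, np.eye(2))
assert np.allclose(B @ D, np.eye(2))

# C = C(BD) = (CB)D = D by associativity alone.
print(np.allclose(C, D))               # True
```

In this particular diagonal example C and D both come out equal to B itself, but the chain ##C = C(BD) = (CB)D = D## holds regardless of the example chosen.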
 
  • #20
fresh_42 said:
If you already have a left inverse ##CB = I## and a right inverse ##BD = I## as in your case, then all you need is associativity to prove they are equal: ## C = C \cdot I = C\cdot (BD) = (CB)\cdot D = I\cdot D = D##

And to not confuse you with what @PeroK has said: You cannot automatically assume that the existence of a one-sided inverse implies one from the other side. However, you've proven the existence of both inverses to ##B## in posts #1, #3. Strictly speaking, that's all you needed to do. The equality of both hasn't been required. But as you can see, it's easy to do in this case.
Ah, okay, thanks for clearing that all up. So just to be clear, if I have found that AC = I, then this does not guarantee that A has an inverse. I need to show also that there exists a D such that DA = I, and if this is the case, it must be that D = C.
 
  • #21
Mr Davis 97 said:
Ah, okay, thanks for clearing that all up. So just to be clear, if I have found that AC = I, then this does not guarantee that A has an inverse. I need to show also that there exists a D such that DA = I, and if this is the case, it must be that D = C.
Yes.
But "this does not guarantee that A has an inverse" only in the sense of a two-sided inverse. With "AC=I" it has a right inverse.

Let's consider functions ##f : \mathbb{N} \rightarrow \mathbb{N}##.
Then for ##f_1(x) := x+1## and ##f_2(x) := \begin{cases} x-1 & \text{ for } x \geq 2 \\ 1 & \text{ for } x=1 \end{cases} ##
we get ##f_2 \circ f_1 = id_\mathbb{N}## and ##f_1 \circ f_2 \neq id_\mathbb{N}##. This is an example where ##f_2## has a right inverse but no left inverse.
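fresh_42's counterexample translates directly into code. A short sketch of the two functions on ##\mathbb{N}## (modeled here with Python integers ##\geq 1##):

```python
def f1(x):
    # f1(x) = x + 1
    return x + 1

def f2(x):
    # f2(x) = x - 1 for x >= 2, and f2(1) = 1
    return x - 1 if x >= 2 else 1

# f2 o f1 is the identity on N: f2(f1(x)) = x for every x >= 1.
assert all(f2(f1(x)) == x for x in range(1, 100))

# But f1 o f2 is not the identity: f1(f2(1)) = f1(1) = 2 != 1.
print(f1(f2(1)))    # 2
```

So f1 is a right inverse of f2, yet f2 can have no left inverse: it is not injective, since f2(1) = f2(2) = 1.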
 
  • #22
Perhaps to summarise:

1) In this particular problem, you are dealing with square matrices and given that ##A = BAB## and ##A## is invertible.

The simplest way to solve this is to consider determinants, as you can see immediately that ##B## must be invertible.

As has been shown, however, you only need the ring axioms to show that ##B## is invertible in this case.

2) There was then a related question:

For square matrices if ##AB = I## then ##A## and ##B## are invertible and, of course, ##A = B^{-1}##.

Again, considering determinants shows this immediately.

But, if ##A## and ##B## are elements of an arbitrary ring, then it does not follow that ##BA = I##, as shown by the counterexample above.

3) The moral is that matrices have specific properties, in particular in terms of inverses, beyond those that all rings have. And these properties are often encapsulated by the equivalence of having an inverse with having a non-zero determinant.
 
  • #23
fresh_42 said:
Yes.
But "this does not guarantee that A has an inverse" only in the sense of a two-sided inverse. With "AC=I" it has a right inverse.

Let's consider functions ##f : \mathbb{N} \rightarrow \mathbb{N}##.
Then for ##f_1(x) := x+1## and ##f_2(x) := \begin{cases} x-1 & \text{ for } x \geq 2 \\ 1 & \text{ for } x=1 \end{cases} ##
we get ##f_2 \circ f_1 = id_\mathbb{N}## and ##f_1 \circ f_2 \neq id_\mathbb{N}##. This is an example where ##f_2## has a right inverse but no left inverse.

Wait, so if I have ##AC = I##, then this does not guarantee that A has a left inverse, right? However, if I take the determinant, we see that ##det(A) det(B) = 1##, so A and B both must be invertible. But I thought that ##AC = I## doesn't guarantee a left inverse?
 
  • #24
Mr Davis 97 said:
Wait, so if I have ##AC = I##, then this does not guarantee that A has a left inverse, right? However, if I take the determinant, we see that ##det(A) det(B) = 1##, so A and B both must be invertible. But I thought that ##AC = I## doesn't guarantee a left inverse?
PeroK said:
[Emphasis by me.] "3) The moral is that matrices have specific properties, in particular in terms of inverses, beyond those that all rings have. And these properties are often encapsulated by the equivalence of having an inverse with having a non-zero determinant."

Beside this, how did you find ##B\,##? ##AC=I## gives you ##\det(A) \det(C) = 1##, so both ##A## and ##C## are invertible, and ##C## is a right inverse of ##A##, which makes ##A## a left inverse of ##C##. How does ##B## come into play as a left inverse of ##A## without any additional conditions or derivations? Will ##B = C## always do the job? How can we guarantee that ##\det(C) \det(A) = 1## implies ##CA = I\,##? And how that ##\det(C) \det(A) = 1## even holds? Where is it stated that our matrix elements have a commutative multiplication?

I have no good counterexamples at hand, for in most common cases a right inverse matrix is also a left inverse matrix, and as shown above, together with associativity they are then the same. I know of non-associative examples, but they also lack the classical identity matrix.

Nevertheless, one has to be careful about what is explicitly given and what is somehow automatically assumed for reasons like "How can it not be the case?" or "It has always been so!" Neither is a valid mathematical argument. The fact that counterexamples might be difficult to find does not mean there are none.
 
