An interesting matrix identity problem

In summary, the conversation discussed different approaches to proving that if the product A*B of two 2x2 matrices equals the 2x2 identity matrix, then B*A also equals the identity. One method used idempotent matrices and their correspondence with projections, while another showed that the matrix B must be surjective. The conversation also touched on left and right inverses and what one-sided inverses imply about invertibility.
  • #1
vsage
My linear algebra professor likes to use theorems he expects us to prove later in his proofs of theorems in class. It's not a problem, since they make sense, but they're usually more like side notes. Today he asked us to prove this:

Given A and B are 2 by 2 matrices over the field R, prove that if A*B = I(2x2), where I(2x2) is the 2x2 identity matrix, then B*A = I(2x2).

Now I filled up a good page just to prove this. Writing

[tex]A = \left( \begin{array}{cc} a_1 & a_2\\
a_3 & a_4 \end{array} \right) , B = \left( \begin{array}{cc} b_1 & b_2 \\ b_3 & b_4 \end{array} \right)[/tex]

I ended up solving A*B = I for the entries of B, which turned the pair into

[tex]A = \left( \begin{array}{cc} a_1 & a_2\\ a_3 & a_4 \end{array} \right) , B = \frac{1}{a_1 a_4 - a_3 a_2} \left( \begin{array}{cc}a_4 & -a_2\\ -a_3 & a_1 \end{array} \right)[/tex]

and moving on from there it was pretty easy to prove that [tex]AB = BA = \left( \begin{array}{cc} 1 & 0\\0 & 1 \end{array} \right)[/tex]
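The adjugate formula above can be sanity-checked with a quick computation (my own sketch, not part of the proof; exact arithmetic via `fractions` avoids floating-point noise):

```python
from fractions import Fraction as F

def matmul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# A concrete invertible A, with B built from the adjugate formula above
A = [[F(2), F(1)], [F(5), F(3)]]
det = A[0][0]*A[1][1] - A[1][0]*A[0][1]      # a1*a4 - a3*a2
B = [[ A[1][1]/det, -A[0][1]/det],
     [-A[1][0]/det,  A[0][0]/det]]

I = [[F(1), F(0)], [F(0), F(1)]]
assert matmul(A, B) == I and matmul(B, A) == I   # AB = BA = I
```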

Is there another way? I spent so much time on this, and my professor's proofs are usually much more elegant. I can scan a copy of my work if you're still confused as to what I actually did (unbeautifully).

(Please excuse this post; it will take a while to get right, as I am still learning LaTeX)
 
  • #2
Here's one way of doing it; let me use 1 for the identity.

1=AB, as given implies

B=BAB

implies

BA=BABA

Let C = BA; we want to show C = 1 too.

C satisfies C^2 = C, so its minimal polynomial divides x^2 - x, which has the distinct roots 0 and 1.

This implies that C is diagonalizable, so in a suitable basis C is one of

[tex]\left( \begin{array}{cc} 1 & 0\\0 & 1 \end{array} \right)[/tex]

[tex]\left( \begin{array}{cc} 1 & 0\\0 & 0 \end{array} \right)[/tex]

[tex]\left( \begin{array}{cc} 0 & 0\\0 & 1 \end{array} \right)[/tex]

[tex]\left( \begin{array}{cc} 0 & 0\\0 & 0 \end{array} \right)[/tex]

but C has only 0 in its kernel: AB = I has rank 2, and rank(AB) <= min(rank A, rank B), so A and B are both injective, hence so is C = BA. An idempotent with trivial kernel must therefore be the identity.

In general, for more dimensions, we just say C is 'idempotent' since its square is itself. In a suitable basis, an idempotent matrix is exactly projection onto the span of some subset of the basis vectors. C has trivial kernel, and the only such idempotent is the identity matrix.
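The dichotomy in the last step can be illustrated concretely (my own numeric sketch, not from the thread): an idempotent other than the identity, such as projection onto the first axis, necessarily has a nontrivial kernel.

```python
def matmul(X, Y):
    """Matrix product on nested lists (2x2 times 2x2 or 2x1)."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) for j in range(len(Y[0]))]
            for i in range(len(X))]

# Projection onto the first coordinate: idempotent, but not the identity
P = [[1, 0], [0, 0]]
assert matmul(P, P) == P                     # P^2 = P
assert matmul(P, [[0], [1]]) == [[0], [0]]   # e2 lies in ker P

# C = BA from the argument above is idempotent AND has trivial kernel,
# and the identity is the only idempotent with both properties.
```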
 
  • #3
how about this:
A(BA)=(AB)A
A(BA)=IA
A(BA)=A
A^-1*A(BA)=A^-1*A
BA=I
 
  • #4
that presupposes that A has a left inverse.
 
  • #5
If it has a right inverse, it is invertible, and thus has a left inverse. That is, if it has a right inverse, its column space spans R2, which means its determinant is nonzero, which means its transpose's determinant is nonzero, which means its row space spans R2, which means it has a left inverse. I haven't done linear algebra in a while, so I might not be thorough, but I think there's a simpler proof of this than the one you gave.
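The determinant step in this argument can be checked on a small example (a sketch of my own, not the poster's): determinants are multiplicative, so AB = I forces det(A)det(B) = 1 and in particular det(A) != 0.

```python
def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0]*M[1][1] - M[0][1]*M[1][0]

def matmul(X, Y):
    """Product of two 2x2 matrices as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [5, 3]]
B = [[3, -1], [-5, 2]]
assert matmul(A, B) == [[1, 0], [0, 1]]              # AB = I
assert det2(A) * det2(B) == det2(matmul(A, B)) == 1  # so det(A) != 0
```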
 
  • #6
I get the impression that they haven't yet proven that.
 
  • #7
Mine is a very simple proof, if we omit the details, which the OP may not be familiar with.

BA is clearly idempotent with trivial kernel, hence it is the identity.

It's a one liner. Of course simplicity depends on what we assume familiarity with.

To see that it isn't as trivial as you might think, consider the left (L) and right (R) shift operators on l^2 with the standard basis: L drops the first coordinate and R prepends a zero.

LR=I

so L possesses a right inverse, but L is not invertible.

It is in fact a standard lemma in group theory that if something has both a left and a right inverse then they are the same. However, having only a one-sided inverse does not imply that something is invertible; both proofs need finite dimensionality.
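The shift operators can be modelled on finite prefixes of sequences (a Python sketch of my own; l^2 itself is infinite-dimensional, and lists here only stand in for the first few coordinates):

```python
def L(x):
    """Left shift: drop the first coordinate."""
    return x[1:]

def R(x):
    """Right shift: prepend a zero."""
    return [0] + x

x = [1, 2, 3, 4]
assert L(R(x)) == x    # LR = I: R is a right inverse for L
assert R(L(x)) != x    # RL != I: the first coordinate is lost for good
# So L has a right inverse yet is not invertible -- impossible in finite dimensions.
```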

Besides, this is a proof that applies to any linear map on a finite dimensional vector space, and does not require you to pick a basis (though I did pick one to make it clearer), and as such would generally be considered better by a pure mathematician, though if you aren't one of those you're free to think something else is better.
 
  • #8
Sorry for being so late in responding. Yes, Mattgrime, my professor gave a very similar proof to that one in class. It was very set-theory based as opposed to strictly linear-algebraic. Thank you all for your input; it has been enlightening :)
 
  • #9
As usual I am too late, but this seems easy enough to do in an elementary way.

I.e. AB = I implies that (BA)B = B(AB) = B, which implies that BA is the identity on the image of B, so you just need to show that multiplication by B is surjective. Since the determinant is nonzero this follows, but here is another simple argument without determinants:

If not, then every vector of the form Bx would have the form cv for some fixed vector v, since the image of B would be at most one-dimensional. Then every vector of the form ABx would have the form A(cv) = c(Av). But not every vector x has this form, so we cannot always have ABx = x, a contradiction.
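The contradiction above can be watched happening numerically (my own sketch, not the poster's text): for a non-surjective 2x2 matrix B, every image vector Bx lies on one fixed line.

```python
def matvec(M, x):
    """Apply a 2x2 matrix to a length-2 vector."""
    return [M[0][0]*x[0] + M[0][1]*x[1],
            M[1][0]*x[0] + M[1][1]*x[1]]

# A rank-1 matrix B: both columns are multiples of v = (1, 2)
B = [[1, 3], [2, 6]]
v = [1, 2]
for x in [[1, 0], [0, 1], [4, -7], [2, 5]]:
    y = matvec(B, x)
    assert y[0] * v[1] == y[1] * v[0]   # y is proportional to v

# Every ABx = A(c*v) = c*(Av) would then lie on a single line too,
# so ABx = x cannot hold for all x: AB = I forces B to be surjective.
```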
 

