A question on invertible matrices

  • Thread starter: kostas230
I'm writing notes for linear algebra, and I'm using the weaker definition of an invertible matrix:

"A matrix A is called invertible it there exists a matrix B such that BA = I"

How do we prove that ##AB = I## using only matrix theory?
 
You mean how one would prove that ##AB = I##, provided that ##BA = I##?

If you assume that ##BA = I##, and thus that ##A^{-1} = B## and ##A = B^{-1}##, you could try multiplying from the left by ##B^{-1}## and from the right by ##A^{-1}##, which gives you

##B^{-1}BAA^{-1} = B^{-1}IA^{-1}##.
 
Square, you are assuming what should be proved: you assume that there is a matrix ##A^{-1}## such that ##AA^{-1}=A^{-1}A=I##, but the existence of such a two-sided inverse is exactly what needs to be proved.

To prove that ##BA=I## implies ##AB=I## is not entirely easy. First of all, we must assume that ##A## and ##B## are square matrices; otherwise, the claim is false. For example, if

##B=[1\quad 1]## and ##A=[2\quad -1]^{T}##, then ##BA=[1]=I_1## and ##AB\neq I_2##.
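
A quick numerical check of this counterexample (a minimal sketch assuming NumPy is available; the matrices are exactly those above):

```python
import numpy as np

B = np.array([[1, 1]])      # 1x2 row vector
A = np.array([[2], [-1]])   # 2x1 column vector

print(B @ A)  # [[1]]            -> BA = I_1
print(A @ B)  # [[ 2  2]
              #  [-1 -1]]        -> AB is 2x2 and is not I_2
```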

So, assume that ##A## and ##B## are ##n\times n##-matrices, for some ##n\ge 1##, such that ##BA=I##.

Now, consider the homogeneous system

##A{\bf x} = {\bf 0}##  (1).

If ##\bf x## is a solution of (1), then

##{\bf x}=I{\bf x}=(BA){\bf x}=B(A{\bf x})=B{\bf 0}={\bf 0}##.

This means that the homogeneous system (1) has only the trivial solution ##{\bf x} ={\bf 0}##. Thus, if we solve (1) by elimination, transforming ##A## to reduced echelon form by a sequence of elementary row operations, the resulting reduced echelon form must be ##I## (otherwise, there would be non-pivot columns which correspond to free variables).
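
To illustrate the row-reduction step (a sketch only, not part of the proof: the ##2\times 2## matrix below is arbitrary, and SymPy's inv() is used merely to manufacture a ##B## with ##BA=I##):

```python
from sympy import Matrix, eye

A = Matrix([[1, 2], [3, 5]])   # an arbitrary square matrix with a left inverse
B = A.inv()                    # stand-in for a given B with BA = I
assert B * A == eye(2)

R, pivots = A.rref()           # reduced row echelon form of A
print(R)                       # Matrix([[1, 0], [0, 1]]) -- the identity
print(pivots)                  # (0, 1): every column is a pivot column
```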

Now, consider the inhomogeneous system

##A{\bf x} = {\bf b}##  (2),

where ##\bf b## is an arbitrary vector in ##R^n##.

If we perform the same sequence of row operations as in the solution of (1), the resulting coefficient matrix is again ##I##, since we started with the same coefficient matrix ##A##.
This means that (2) has a unique solution.

This holds for all vectors ##{\bf b}\in R^n##. In particular, it holds for the standard basis vectors ##{\bf e_1}, {\bf e_2}, \dots, {\bf e_n}##. Let us denote the solutions for these vectors by ##{\bf x_1}, {\bf x_2},\dots,{\bf x_n}##, respectively.

Let ##X## be the ##n\times n##-matrix with the vectors ##{\bf x_1}, {\bf x_2},\dots,{\bf x_n}## as columns, that is, ##X=[{\bf x_1}\; {\bf x_2}\,\dots\,{\bf x_n}]##. Also, notice that ##[{\bf e_1}\;{\bf e_2}\,\dots\,{\bf e_n}]=I##.

It follows that ##AX=I##, since the ##j##-th column of ##AX## is ##A{\bf x_j}={\bf e_j}## by construction.

Next, ##B=BI=B(AX)=(BA)X=IX=X##, that is, ##B=X##.
Hence, ##AB=I##.
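
The whole construction is easy to mirror numerically (a sketch assuming NumPy; the particular ##A## is arbitrary, and np.linalg.inv only stands in for a given left inverse ##B##):

```python
import numpy as np

A = np.array([[1., 2.], [3., 5.]])
B = np.linalg.inv(A)   # stand-in for a given B with BA = I

n = A.shape[0]
# Solve A x_j = e_j for each standard basis vector and collect the columns.
X = np.column_stack([np.linalg.solve(A, e) for e in np.eye(n)])

print(np.allclose(A @ X, np.eye(n)))  # True: AX = I
print(np.allclose(X, B))              # True: X = B, hence AB = I
```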
 
It is not true unless the matrices are square.
 