# Non-square matrices and inverses

1. Dec 1, 2012

### Bipolarity

I now know that inverses are only defined for square matrices. My question is: is this because inverses for non-square matrices do not exist, i.e. there is no (m by n) matrix A for which there exists an (n by m) matrix B such that both AB = I and BA = I hold?

Or is it just a matter of convenience? In that case, can we indeed find an (m by n) matrix A for which there exists an (n by m) matrix B such that both AB = I and BA = I hold?

Thanks!

BiP

2. Dec 1, 2012

### Michael Redei

AB will be an (m by m) matrix, and BA will be (n by n), so they can't both be equal to the same matrix I.

If n > m and the m rows of A are linearly independent, you can find several matrices B with AB = I (m by m), and these matrices B are "right inverses" of A. You can't, however, find a matrix B with BA = I (n by n), for the following reason:

Let rank(M) stand for the maximum number of linearly independent rows or columns of M. (The numbers can be proven to be equal.) Then rank(M) ≤ min{ columns of M, rows of M } and rank(AB) ≤ min{ rank(A), rank(B) }.

In your case, rank(AB) ≤ m, but rank(I(n by n)) = n.

(If n < m, you can find several "left inverse" matrices B with BA = I (n by n), but no matrix B with AB = I (m by m).)
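A quick numerical sketch of the point above (not from the thread, just an illustration using NumPy): for a wide 2 by 3 matrix A with independent rows, one right inverse is $A^\top (AA^\top)^{-1}$, and AB comes out as the identity while BA cannot, since rank(BA) ≤ 2 < 3.

```python
import numpy as np

# A is 2x3 (m = 2, n = 3) with linearly independent rows,
# so right inverses B with AB = I (2 by 2) exist.
A = np.array([[1., 0., 1.],
              [0., 1., 1.]])

# One particular right inverse: B = A^T (A A^T)^{-1}.
B = A.T @ np.linalg.inv(A @ A.T)

print(np.allclose(A @ B, np.eye(2)))   # True:  AB = I (2 by 2)
print(np.allclose(B @ A, np.eye(3)))   # False: rank(BA) <= 2, so BA != I (3 by 3)
```

There are infinitely many such right inverses; this formula just picks one of them.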

3. Dec 1, 2012

### Bipolarity

I see. What if we want an (m by n) matrix A such that AB = I where I is (m by m) and BA = I where I is (n by n)?

Does such a matrix exist?

BiP

4. Dec 1, 2012

5. Dec 1, 2012

### Bipolarity

Thanks! Could you provide an example of such a matrix, i.e. an (m by n) matrix A such that there is an (n by m) matrix B for which $AB = I_{m}$ and $BA = I_{n}$, where the subscript of I denotes its size?

BiP

6. Dec 1, 2012

### Michael Redei

You can't have both, unless $m=n$ and $A$ is an invertible square matrix. Consider this simple example:
$$\left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{array} \right) \cdot \left( \begin{array}{ccc} a & b & c \\ d & e & f \end{array} \right) = \left(\begin{array}{ccc} a & b & c \\ d & e & f \\ a + d & b + e & c + f \end{array} \right) = \left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right),$$i.e. both $c=f=0$ and $c+f=1$, which is impossible. But you can have
$$\left( \begin{array}{ccc} a & b & c \\ d & e & f \end{array} \right) \cdot \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{array} \right) = \left(\begin{array}{cc} a + c & b + c \\ d + f & e + f \end{array} \right) = \left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right)$$if you set
$$\left( \begin{array}{ccc} a & b & c \\ d & e & f \end{array} \right) = \left( \begin{array}{ccc} a & (a-1) & (1-a) \\ d & (d+1) & -d \end{array} \right).$$
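A small numerical check of the example above (not from the thread, just a NumPy sketch): the two-parameter family of left inverses really does satisfy BA = I for any choice of a and d, while AB can never equal the 3 by 3 identity.

```python
import numpy as np

# The 3x2 matrix A from the example above.
A = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])

# Try a couple of arbitrary values for the free parameters a and d.
for a, d in [(0.0, 0.0), (2.5, -3.0)]:
    B = np.array([[a, a - 1., 1. - a],
                  [d, d + 1., -d]])
    assert np.allclose(B @ A, np.eye(2))      # BA = I (2 by 2) for every a, d
    assert not np.allclose(A @ B, np.eye(3))  # AB is never I (3 by 3)

print("left-inverse family verified")
```

This matches the hand calculation: the bottom-right entries of AB force $c + f = 1$ while the identity requires $c = f = 0$, so AB = I is impossible no matter which left inverse you pick.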

7. Dec 2, 2012

### Bipolarity

Thanks! But is this true for general matrices of size (m by n)? How would one prove it in the general case? It seems really difficult to extend this proof to such general forms.

BiP

8. Dec 2, 2012

### Michael Redei

More generally, an (m by n) matrix A, with m rows and n columns, represents a linear transformation from an n-dimensional vector space U to an m-dimensional vector space V. If m > n, then A maps U into a subspace of V of dimension at most n. Since n < m, the mapping is not "onto": there exist vectors v in V that are not in that subspace. That means there is no u in U such that Au = v, and so no vector for "$A^{-1}$" to map v back to.
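The "not onto" argument can be seen numerically as well (a sketch, not from the thread): for a tall 3 by 2 matrix, the image is only a plane in $\mathbb{R}^3$, so a least-squares solve against a target outside that plane leaves a nonzero residual, confirming Au = v has no exact solution.

```python
import numpy as np

# A 3x2 matrix maps R^2 into a plane in R^3 (here the plane z = x + y).
A = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])
v = np.array([0., 0., 1.])   # not in the column space: 1 != 0 + 0

# lstsq finds the u minimizing ||Au - v||; a nonzero residual
# means no exact preimage u exists.
u, residual, rank, _ = np.linalg.lstsq(A, v, rcond=None)
print(rank)           # 2: the image is a 2-dimensional subspace of R^3
print(residual[0] > 0)  # True: Au = v has no exact solution
```

Here `rank` being 2 (not 3) is exactly the rank bound from post 2: the map cannot reach all of the 3-dimensional codomain.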