Non-square matrices and inverses

In summary, whether a matrix can have an inverse depends on its dimensions: only square matrices can have a two-sided inverse, and a non-square matrix has at most a one-sided (left or right) inverse. This follows from the properties of linear transformations and the rank inequalities for matrix products.
  • #1
Bipolarity
I now know that inverses are only defined for square matrices. My question is: is this because inverses for non-square matrices do not exist, i.e. there is no (m by n) matrix A, with m ≠ n, for which there exists an (n by m) matrix B such that both AB = I and BA = I hold?

Or is it just done for convenience? In that case, can we indeed find an (m by n) matrix A, with m ≠ n, for which there exists an (n by m) matrix B such that both AB = I and BA = I hold?

Thanks!

BiP
 
  • #2
AB will be an (m by m) matrix, and BA will be (n by n), so they can't both be equal to the same matrix I.

If n>m and the m rows of A are linearly independent (i.e. A has full rank m), you can find several matrices B with AB = I(m by m), and these matrices B are "right inverses" of A. You can't, however, find a matrix B with BA = I(n by n), for the following reason:

Let rank(M) stand for the maximum number of linearly independent rows or columns of M. (The numbers can be proven to be equal.) Then rank(M) ≤ min{ columns of M, rows of M } and rank(AB) ≤ min{ rank(A), rank(B) }.

In your case, rank(BA) ≤ min{ rank(B), rank(A) } ≤ m, but rank(I(n by n)) = n > m.

(If n<m and the n columns of A are linearly independent, you can find several "left inverse" matrices B with BA = I(n by n), but no matrix B with AB = I(m by m).)
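
For readers who want to check this numerically, here is a minimal NumPy sketch (an illustration added here, not part of the original post) for the case m = 2, n = 3: a wide matrix with independent rows has right inverses, but BA can never equal the 3 by 3 identity because its rank is at most 2.

[code]
import numpy as np

# A is 2 by 3 with linearly independent rows, so it has right inverses.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# One convenient right inverse: B = A^T (A A^T)^(-1).
B = A.T @ np.linalg.inv(A @ A.T)

print(np.allclose(A @ B, np.eye(2)))   # True: AB = I (2 by 2)
print(np.allclose(B @ A, np.eye(3)))   # False: BA has rank at most 2, but I (3 by 3) has rank 3
print(np.linalg.matrix_rank(B @ A))    # 2
[/code]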
 
  • #3
Michael Redei said:
AB will be an (m by m) matrix, and BA will be (n by n), so they can't both be equal to the same matrix I. [...]

I see. What if we want an (m by n) matrix A, with m ≠ n, so that AB = I where I is (m by m) and BA = I where I is (n by n)?

Does such a matrix exist?

BiP
 
  • #4
Robert1986
Yes, it is called the pseudoinverse: http://en.wikipedia.org/wiki/Moore–Penrose_pseudoinverse
 
  • #5
Robert1986 said:
Yes, it is called the pseudoinverse: http://en.wikipedia.org/wiki/Moore–Penrose_pseudoinverse

Thanks! Could you provide an example of such a matrix, i.e. an (m by n) matrix A such that there is an (n by m) matrix B for which [itex] AB = I_{m} [/itex] and [itex] BA = I_{n} [/itex], where the subscript of I denotes its size?

BiP
 
  • #6
Bipolarity said:
I see. What if we want an (m by n) matrix A, with m ≠ n, so that AB = I where I is (m by m) and BA = I where I is (n by n)?

You can't have both, unless [itex]m=n[/itex] and [itex]A[/itex] is an invertible square matrix. Consider this simple example:
[tex]
\left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{array} \right) \cdot
\left( \begin{array}{ccc} a & b & c \\ d & e & f \end{array} \right) =
\left(\begin{array}{ccc} a & b & c \\ d & e & f \\ a + d & b + e & c + f \end{array} \right) =
\left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right),
[/tex]i.e. both [itex]c=f=0[/itex] and [itex]c+f=1[/itex], which is impossible. But you can have
[tex]
\left( \begin{array}{ccc} a & b & c \\ d & e & f \end{array} \right) \cdot
\left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{array} \right) =
\left(\begin{array}{cc} a + c & b + c \\ d + f & e + f \end{array} \right) =
\left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right)
[/tex]if you set
[tex]
\left( \begin{array}{ccc} a & b & c \\ d & e & f \end{array} \right) =
\left( \begin{array}{ccc} a & (a-1) & (1-a) \\ d & (d+1) & -d \end{array} \right).
[/tex]
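
A quick numerical check of this example (a sketch added here, not from the original post), taking a = 0 and d = 0 in the family above:

[code]
import numpy as np

# The 3 by 2 matrix from the example above.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# One member of the family of left inverses [[a, a-1, 1-a], [d, d+1, -d]], with a = 0, d = 0.
B = np.array([[0.0, -1.0, 1.0],
              [0.0,  1.0, 0.0]])

print(np.allclose(B @ A, np.eye(2)))   # True: BA = I (2 by 2)
print(np.allclose(A @ B, np.eye(3)))   # False: AB can never equal I (3 by 3)
[/code]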
 
  • #7
Thanks! But is this true for general matrices of size (m by n)? How would one prove it in the general case? It seems really difficult to extend this argument to such general forms.

BiP
 
  • #8
Please have another look at what I wrote about ranks.
 
  • #9
More generally, an "m by n" matrix A, with m rows and n columns, represents a linear transformation from an n dimensional vector space, U, to an m dimensional vector space, V. If m > n, then A maps U into a subspace of V of dimension at most n. Since n < m, the mapping is not "onto": there exist vectors v in V that are not in that subspace. That means there is no u in U such that Au = v, and so no vector for "[itex]A^{-1}[/itex]" to map v back to.

If, on the other hand, n > m (and A has full rank), then A maps U onto all of V, but NOT "one to one". That is, given v in V, there must exist at least 2 (actually an infinite number of) vectors, u1 and u2, such that A(u1) = A(u2) = v. A single inverse matrix cannot map v back to both u1 and u2.
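
The same point can be checked numerically. Here is a small NumPy sketch (added as an illustration, not part of the original post), using a 3 by 2 and a 2 by 3 example:

[code]
import numpy as np

# A maps R^2 into R^3; its image is a 2-dimensional plane inside R^3, so A is not onto.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# v = (0, 0, 1) is not in that plane: Au = (u1, u2, u1 + u2) would need u1 = u2 = 0 and u1 + u2 = 1.
v = np.array([0.0, 0.0, 1.0])
u, residual, rank, _ = np.linalg.lstsq(A, v, rcond=None)
print(residual)          # non-zero: no exact solution of Au = v exists

# A full-rank 2 by 3 matrix maps R^3 onto R^2, but it is not one-to-one:
C = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
w = np.array([1.0, 1.0, -1.0])   # C @ w = 0, so C(u) = C(u + w) for every u
print(C @ w)                      # [0. 0.]
[/code]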
 

1. What is a non-square matrix?

A non-square matrix is a matrix whose number of rows is different from its number of columns. It can have any dimensions (m by n), as long as m ≠ n.

2. How are non-square matrices different from square matrices?

The main difference between non-square and square matrices is their shape: square matrices have an equal number of rows and columns, while non-square matrices do not. This affects the operations that can be performed on them; in particular, determinants and ordinary (two-sided) inverses are defined only for square matrices.

3. What is an inverse of a non-square matrix?

A non-square matrix never has a two-sided inverse, i.e. a matrix B with both AB = I and BA = I. At best, a full-rank non-square matrix has a one-sided inverse: a right inverse (AB = I) when its rows are linearly independent, or a left inverse (BA = I) when its columns are linearly independent. The closest general substitute for an inverse is the Moore-Penrose pseudoinverse, which exists for every matrix.
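
As a concrete illustration (added here, not part of the original answer), for a full-rank (m by n) matrix A the standard one-sided inverses can be written as
[tex]
A^{-1}_{\text{right}} = A^{T}(AA^{T})^{-1} \quad (m \le n,\ \text{independent rows}), \qquad
A^{-1}_{\text{left}} = (A^{T}A)^{-1}A^{T} \quad (n \le m,\ \text{independent columns}),
[/tex]
so that [itex]A A^{-1}_{\text{right}} = I_{m}[/itex] and [itex]A^{-1}_{\text{left}} A = I_{n}[/itex].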

4. How is the inverse of a non-square matrix calculated?

Since a non-square matrix has no ordinary inverse, one computes a one-sided inverse or the Moore-Penrose pseudoinverse instead. For a full-rank matrix, the formulas above reduce the problem to inverting the square matrix A^TA or AA^T, which can be done by Gauss-Jordan elimination. In the general case, the pseudoinverse is usually computed from the singular value decomposition.
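
A minimal NumPy sketch of this (an added illustration; the matrix is just an example), showing that for a full-row-rank matrix the pseudoinverse coincides with the right inverse A^T(AA^T)^(-1):

[code]
import numpy as np

# A 2 by 3 matrix with full row rank.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# Moore-Penrose pseudoinverse (NumPy computes it via the SVD).
A_pinv = np.linalg.pinv(A)

print(np.allclose(A @ A_pinv, np.eye(2)))                  # True: here pinv(A) is a right inverse
print(np.allclose(A_pinv, A.T @ np.linalg.inv(A @ A.T)))   # True: matches A^T (A A^T)^(-1)
[/code]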

5. Can non-square matrices be used to solve systems of equations?

Yes. Non-square coefficient matrices arise whenever a system has a different number of equations and unknowns. The system is written in matrix form Ax = b, with the coefficients of the variables as the elements of A. Since A has no ordinary inverse, the solution cannot be written as x = A^(-1)b; instead one multiplies b by the pseudoinverse (or, equivalently, uses Gaussian elimination or least squares). This gives the exact solution when one exists, the least-squares solution for an overdetermined system, and the minimum-norm solution for an underdetermined one.
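
A short NumPy sketch of this for an overdetermined system (the numbers are made up for illustration):

[code]
import numpy as np

# Overdetermined system: 3 equations, 2 unknowns (no exact solution in general).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 2.9])

# Least-squares solution via the pseudoinverse ...
x_pinv = np.linalg.pinv(A) @ b
# ... or, equivalently, via the dedicated least-squares solver.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(x_pinv)                        # approximately [0.967, 1.967]
print(np.allclose(x_pinv, x_lstsq))  # True
[/code]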
