MHB Proving statements regarding invertible matrices

  • Thread starter: Bruce Wayne1
  • Tags: Matrices
Bruce Wayne1
I've been watching the OCW Linear Algebra course and I have a textbook, and I came across something I find fascinating: the Invertible Matrix Theorem. I've seen some proofs, and I find that the statements are often linked in different orders; sometimes the author will cite one tiny theorem and justify five statements at once. I've been able to prove a few on my own, but I'm looking at some of them and don't understand exactly why they work. OK. I want to prove that if the columns of a matrix $A$ span $\mathbb R^n$, then the linear transformation $\mathbf x \mapsto A \mathbf x$ maps $\mathbb R^n$ onto $\mathbb R^n$. I would also like to prove that the converse is true.

So my thinking is:

Assume the columns of matrix $A$ span $\mathbb R^n$. By the definition of $A\mathbf x$, for each $\mathbf b$ in $\mathbb R^n$ the equation $A\mathbf x = \mathbf b$ has a solution. This implies that $T(\mathbf x) = \mathbf b$ has at least one solution.

That's where I get totally confused. I think I'm missing a few intermediate steps. How do I complete this proof?
 
Jameson

Hi there. How many other parts of the invertible matrix theorem may we use while doing this proof? Looking at the list of statements in this theorem, you are trying to prove that (h) implies (i) and the converse. Can we use (a)-(g)? If not, you'll have to define and prove some other things first, I believe.
 
Jameson said:
Hi there. How many other parts of the invertible matrix theorem may we use while doing this proof? Looking at the list of statements in this theorem, you are trying to prove that (h) implies (i) and the converse. Can we use (a)-(g)? If not, you'll have to define and prove some other things first, I believe.

Correct, I am trying to prove (h) $\Leftrightarrow$ (i) from that list. I'm trying to do it directly between those two, without using the others. I do understand that (g) can be used, as I tried, but how do I justify it? I'm not looking to just say "oh, statement blah is equivalent, done," because that's how the book does it, and I'm trying to justify each step.
 
Bruce Wayne said:
I've been watching the OCW Linear Algebra course and I have a textbook, and I came across something I find fascinating: the Invertible Matrix Theorem. I've seen some proofs, and I find that the statements are often linked in different orders; sometimes the author will cite one tiny theorem and justify five statements at once. I've been able to prove a few on my own, but I'm looking at some of them and don't understand exactly why they work. OK. I want to prove that if the columns of a matrix $A$ span $\mathbb R^n$, then the linear transformation $\mathbf x \mapsto A \mathbf x$ maps $\mathbb R^n$ onto $\mathbb R^n$. I would also like to prove that the converse is true.

So my thinking is:

Assume the columns of matrix $A$ span $\mathbb R^n$. By the definition of $A\mathbf x$, for each $\mathbf b$ in $\mathbb R^n$ the equation $A\mathbf x = \mathbf b$ has a solution. This implies that $T(\mathbf x) = \mathbf b$ has at least one solution.

That's where I get totally confused. I think I'm missing a few intermediate steps. How do I complete this proof?

I like Serena

Hail, mighty B! (Bow)

In any proof you should always start from the definitions.

Definition from wiki:
The span of a set $S$ may be defined as the set of all finite linear combinations of elements of $S$.

In your case that means that any vector $\mathbf v \in \mathbb R^n$ can be written as
$\qquad \mathbf v = \lambda_1 \mathbf a_1 + \lambda_2 \mathbf a_2 + ... + \lambda_n \mathbf a_n$
where the $\mathbf a_i$ are the column vectors of the matrix $A$, and the $\lambda_i$ are the coefficients of the linear combination with respect to these column vectors.

Definition from wiki of an onto (surjective) function:
A function $f$ with domain $X$ and codomain $Y$ is surjective if for every $y$ in $Y$ there exists at least one $x$ in $X$ with $f(x) = y$.

In your case that means that for any vector $\mathbf v \in \mathbb R^n$ there is at least one vector $\mathbf x \in \mathbb R^n$ such that $A \mathbf x = \mathbf v$.
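To fix ideas, here is a small illustration (the matrix is a hypothetical example, not taken from the thread): the columns of the $2 \times 2$ identity matrix span $\mathbb R^2$, since any $\mathbf v$ decomposes as
$$\begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = v_1 \begin{bmatrix} 1 \\ 0 \end{bmatrix} + v_2 \begin{bmatrix} 0 \\ 1 \end{bmatrix},$$
and the corresponding map $\mathbf x \mapsto I \mathbf x$ is onto because every $\mathbf v$ is its own preimage.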
So let's write the first half of your proof a bit more neatly.

Lemma 1
If the columns of matrix $A$ span $\mathbb R^n$, then the linear transformation $\mathbf x \mapsto A \mathbf x$ maps $\mathbb R^n$ onto $\mathbb R^n$.

Proof
Assume the columns of matrix $A$ span $\mathbb R^n$.

Then any vector $\mathbf v \in \mathbb R^n$ can be written as
$\qquad \mathbf v = \lambda_1 \mathbf a_1 + \lambda_2 \mathbf a_2 + ... + \lambda_n \mathbf a_n$
where the $\mathbf a_i$ are the column vectors of the matrix $A$, and the $\lambda_i$ are the coefficients of the linear combination with respect to these column vectors.

Using the definition of matrix multiplication, we can rewrite this as:
$$\mathbf v = A \begin{bmatrix}\lambda_1 \\ \lambda_2 \\ \vdots \\ \lambda_n\end{bmatrix}$$
In other words, for any vector $\mathbf v \in \mathbb R^n$ there is at least one vector $\mathbf x \in \mathbb R^n$ such that $A \mathbf x = \mathbf v$.

Therefore the linear transformation $\mathbf x \mapsto A \mathbf x$ maps $\mathbb R^n$ onto $\mathbb R^n$. $\qquad \blacksquare$
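To make the matrix-multiplication step concrete, here is the $2 \times 2$ case with a hypothetical matrix chosen only for illustration:
$$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \qquad A \begin{bmatrix} \lambda_1 \\ \lambda_2 \end{bmatrix} = \begin{bmatrix} \lambda_1 + 2\lambda_2 \\ 3\lambda_1 + 4\lambda_2 \end{bmatrix} = \lambda_1 \begin{bmatrix} 1 \\ 3 \end{bmatrix} + \lambda_2 \begin{bmatrix} 2 \\ 4 \end{bmatrix}.$$
The same column-by-column expansion works for any $n \times n$ matrix, which is exactly the identity used in the proof.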
Perhaps you can do the second (converse) half of the proof?
 
I like Serena said:
Hail, mighty B! (Bow)

In any proof you should always start from the definitions.

Definition from wiki:
The span of a set $S$ may be defined as the set of all finite linear combinations of elements of $S$.

In your case that means that any vector $\mathbf v \in \mathbb R^n$ can be written as
$\qquad \mathbf v = \lambda_1 \mathbf a_1 + \lambda_2 \mathbf a_2 + ... + \lambda_n \mathbf a_n$
where the $\mathbf a_i$ are the column vectors of the matrix $A$, and the $\lambda_i$ are the coefficients of the linear combination with respect to these column vectors.

Definition from wiki of an onto (surjective) function:
A function $f$ with domain $X$ and codomain $Y$ is surjective if for every $y$ in $Y$ there exists at least one $x$ in $X$ with $f(x) = y$.

In your case that means that for any vector $\mathbf v \in \mathbb R^n$ there is at least one vector $\mathbf x \in \mathbb R^n$ such that $A \mathbf x = \mathbf v$.
So let's write the first half of your proof a bit more neatly.

Lemma 1
If the columns of matrix $A$ span $\mathbb R^n$, then the linear transformation $\mathbf x \mapsto A \mathbf x$ maps $\mathbb R^n$ onto $\mathbb R^n$.

Proof
Assume the columns of matrix $A$ span $\mathbb R^n$.

Then any vector $\mathbf v \in \mathbb R^n$ can be written as
$\qquad \mathbf v = \lambda_1 \mathbf a_1 + \lambda_2 \mathbf a_2 + ... + \lambda_n \mathbf a_n$
where the $\mathbf a_i$ are the column vectors of the matrix $A$, and the $\lambda_i$ are the coefficients of the linear combination with respect to these column vectors.

Using the definition of matrix multiplication, we can rewrite this as:
$$\mathbf v = A \begin{bmatrix}\lambda_1 \\ \lambda_2 \\ \vdots \\ \lambda_n\end{bmatrix}$$
In other words, for any vector $\mathbf v \in \mathbb R^n$ there is at least one vector $\mathbf x \in \mathbb R^n$ such that $A \mathbf x = \mathbf v$.

Therefore the linear transformation $\mathbf x \mapsto A \mathbf x$ maps $\mathbb R^n$ onto $\mathbb R^n$. $\qquad \blacksquare$
Perhaps you can do the second (converse) half of the proof?


Thanks. Here's my attempt:

Assume the linear transformation $\mathbf x \mapsto A \mathbf x$ maps $\mathbb R^n$ onto $\mathbb R^n$.

Then for every $\mathbf v \in \mathbb R^n$ there is at least one vector $\mathbf x \in \mathbb R^n$ such that $A \mathbf x = \mathbf v$.

Writing the components of $\mathbf x$ as $\lambda_1, \lambda_2, \ldots, \lambda_n$, the equation $A \mathbf x = \mathbf v$ can be rewritten as $$\mathbf v = A \begin{bmatrix}\lambda_1 \\ \lambda_2 \\ \vdots \\ \lambda_n\end{bmatrix}$$

By the definition of matrix multiplication, this can further be rewritten as $\qquad \mathbf v = \lambda_1 \mathbf a_1 + \lambda_2 \mathbf a_2 + ... + \lambda_n \mathbf a_n$, so every $\mathbf v \in \mathbb R^n$ is a linear combination of the columns of $A$.

Therefore the columns of matrix $A$ span $\mathbb R^n$. $\qquad \blacksquare$
 
Bruce Wayne said:
Thanks. Here's my attempt:

Assume the linear transformation $\mathbf x \mapsto A \mathbf x$ maps $\mathbb R^n$ onto $\mathbb R^n$.

Then for every $\mathbf v \in \mathbb R^n$ there is at least one vector $\mathbf x \in \mathbb R^n$ such that $A \mathbf x = \mathbf v$.

Writing the components of $\mathbf x$ as $\lambda_1, \lambda_2, \ldots, \lambda_n$, the equation $A \mathbf x = \mathbf v$ can be rewritten as $$\mathbf v = A \begin{bmatrix}\lambda_1 \\ \lambda_2 \\ \vdots \\ \lambda_n\end{bmatrix}$$

By the definition of matrix multiplication, this can further be rewritten as $\qquad \mathbf v = \lambda_1 \mathbf a_1 + \lambda_2 \mathbf a_2 + ... + \lambda_n \mathbf a_n$, so every $\mathbf v \in \mathbb R^n$ is a linear combination of the columns of $A$.

Therefore the columns of matrix $A$ span $\mathbb R^n$. $\qquad \blacksquare$

Looks just fine! ;)
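For extra intuition about the converse (with a hypothetical matrix chosen only for illustration): if the columns of $A$ fail to span, the map cannot be onto. Take
$$A = \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}, \qquad A \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = (x_1 + 2x_2) \begin{bmatrix} 1 \\ 2 \end{bmatrix}.$$
Every image lies on the line through $\begin{bmatrix} 1 \\ 2 \end{bmatrix}$, so a vector such as $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ has no preimage. This is the contrapositive of the direction you just proved.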
 
Thanks! It was a lot simpler than I had anticipated. (Smile)
 