Proving statements regarding invertible matrices

  • Context: MHB
  • Thread starter: Bruce Wayne1
  • Tags: Matrices

Discussion Overview

The discussion revolves around proving statements related to the Invertible Matrix Theorem, specifically the equivalence between the columns of a matrix spanning R^n and the linear transformation defined by that matrix mapping R^n onto R^n. Participants explore the proof structure, definitions, and intermediary steps needed to establish this equivalence.

Discussion Character

  • Exploratory
  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant expresses confusion about proving that the columns of matrix A spanning R^n is equivalent to the linear transformation x ↦ Ax mapping R^n onto R^n, and seeks clarification on intermediary steps in the proof.
  • Another participant questions whether other parts of the Invertible Matrix Theorem can be used in the proof, specifically asking if components (a)-(g) are permissible.
  • Some participants emphasize the importance of starting from definitions, providing definitions of span and surjectivity to clarify the proof structure.
  • A participant attempts to outline a proof that if the columns of matrix A span Rn, then the transformation maps Rn onto Rn, and seeks feedback on the justification of each step.
  • Another participant presents a converse proof, assuming the transformation maps Rn onto Rn and concluding that the columns of matrix A must span Rn, while also seeking confirmation on the correctness of their reasoning.

Areas of Agreement / Disagreement

Participants generally agree on the definitions and the structure of the proofs, but differ on whether other components of the Invertible Matrix Theorem may be invoked and on how individual steps should be justified. The thread ultimately proves both directions directly from the definitions, without appealing to the other components.

Contextual Notes

Participants highlight the need for clear definitions and justifications in proofs, indicating that some steps may require additional clarification or proof of their own. There is an acknowledgment of potential gaps in understanding the relationships between the various statements of the Invertible Matrix Theorem.

Bruce Wayne1
I've been watching the OCW Linear Algebra course and I have a textbook, and I came across something that I think is fascinating: the Invertible Matrix Theorem. I've seen some proofs, and I find that a lot of the statements are linked in different orders; sometimes the author will cite one tiny little theorem and justify five statements at once. I've been able to prove a few on my own, but there are some where I don't understand exactly why they work. OK. I want to prove that the columns of matrix $A$ spanning $\mathbb R^n$ is equivalent to the linear transformation $\mathbf x \mapsto A \mathbf x$ mapping $\mathbb R^n$ onto $\mathbb R^n$, and I would like to prove both directions.

So my thinking is:

Assume the columns of matrix $A$ span $\mathbb R^n$. By the definition of $A \mathbf x$, for each $\mathbf b \in \mathbb R^n$ the equation $A \mathbf x = \mathbf b$ has a solution. This implies that $T(\mathbf x) = \mathbf b$ has at least one solution.

That's when I get myself totally confused. I think I'm missing a few intermediary steps in the proof. How do I do this proof?
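
Just as a sanity check (not a proof!), I tried verifying the claim numerically with NumPy; the matrix below is a full-rank example I made up:

```python
import numpy as np

# A made-up 3x3 matrix whose columns span R^3 (it has full rank).
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 0.0]])
assert np.linalg.matrix_rank(A) == 3  # the three columns span R^3

# If the columns span R^3, then Ax = b should be solvable for every b.
rng = np.random.default_rng(0)
b = rng.standard_normal(3)
x = np.linalg.solve(A, b)      # finds an x with Ax = b
print(np.allclose(A @ x, b))   # True: T(x) = b has at least one solution
```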
 
Hi there. How many other parts of the Invertible Matrix Theorem may we use while doing this proof? Looking at this list of components of the theorem, you are trying to prove that (h) implies (i) and the converse. Can we use (a)-(g)? If not, you'll have to define and prove some other things first, I believe.
 
Jameson said:
Hi there. How many other parts of the Invertible Matrix Theorem may we use while doing this proof? Looking at this list of components of the theorem, you are trying to prove that (h) implies (i) and the converse. Can we use (a)-(g)? If not, you'll have to define and prove some other things first, I believe.

Correct, I am trying to prove (h) $\Leftrightarrow$ (i) from that list, and I'm trying to do it directly between those two, without using the others. I do understand that (g) can be used, as I tried, but how do I justify it? I'm not looking to just say "oh, statement blah is equivalent, done", because that's how the book does it; I'm trying to justify each step.
 
Bruce Wayne said:
... I want to prove that the columns of matrix $A$ spanning $\mathbb R^n$ is equivalent to the linear transformation $\mathbf x \mapsto A \mathbf x$ mapping $\mathbb R^n$ onto $\mathbb R^n$. ... How do I do this proof?

Hail, mighty B! (Bow)

In any proof you should always start from the definitions.

Definition from wiki:
the span of a set S may be defined as the set of all finite linear combinations of elements of S.

In your case that means that any vector $\mathbf v \in \mathbb R^n$ can be written as
$\qquad \mathbf v = \lambda_1 \mathbf a_1 + \lambda_2 \mathbf a_2 + ... + \lambda_n \mathbf a_n$
where the $\mathbf a_i$ are the column vectors of the matrix $A$, and the $\lambda_i$ are the coefficients making up the linear combination with respect to these column vectors.

The definition from wiki of a function that maps onto (a surjection):
a function f with domain X and codomain Y is surjective if for every y in Y there exists at least one x in X with f(x)=y.

In your case that means that for any vector $\mathbf v \in \mathbb R^n$ there is at least one vector $\mathbf x \in \mathbb R^n$ such that $A \mathbf x = \mathbf v$.
So let's write the first half of your proof a bit more neatly.

Lemma 1
If the columns of matrix $A$ span $\mathbb R^n$, then the linear transformation $\mathbf x \mapsto A \mathbf x$ maps $\mathbb R^n$ onto $\mathbb R^n$.

Proof
Assume the columns of matrix $A$ span $\mathbb R^n$.

Then any vector $\mathbf v \in \mathbb R^n$ can be written as
$\qquad \mathbf v = \lambda_1 \mathbf a_1 + \lambda_2 \mathbf a_2 + ... + \lambda_n \mathbf a_n$
where the $\mathbf a_i$ are the column vectors of the matrix $A$, and the $\lambda_i$ are the coefficients making up the linear combination with respect to these column vectors.

Using the definition of matrix multiplication, we can rewrite this as:
$$\mathbf v = A \begin{bmatrix}\lambda_1 \\ \lambda_2 \\ \vdots \\ \lambda_n\end{bmatrix}$$
In other words, for any vector $\mathbf v \in \mathbb R^n$ there is at least one vector $\mathbf x \in \mathbb R^n$ such that $A \mathbf x = \mathbf v$.

Therefore the linear transformation $\mathbf x \mapsto A \mathbf x$ maps $\mathbb R^n$ onto $\mathbb R^n$. $\qquad \blacksquare$
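
Just to make that key step concrete, here is a minimal NumPy sketch (the matrix $A$ and the $\lambda_i$ below are arbitrary example values): the matrix-vector product is exactly the corresponding linear combination of the columns.

```python
import numpy as np

# Arbitrary example matrix and coefficients.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
lam = np.array([4.0, -1.0])

# The matrix-vector product A @ lam ...
v1 = A @ lam
# ... equals the linear combination lam_1 * a_1 + lam_2 * a_2 of the columns.
v2 = lam[0] * A[:, 0] + lam[1] * A[:, 1]
print(np.allclose(v1, v2))  # True
```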
Perhaps you can do the second (converse) half of the proof?
 
I like Serena said:
... Perhaps you can do the second (converse) half of the proof?


Thanks. Here's my attempt:

Assume the linear transformation $\mathbf x \mapsto A \mathbf x$ maps $\mathbb R^n$ onto $\mathbb R^n$.

Then for every $\mathbf v \in \mathbb R^n$ there is at least one vector $\mathbf x \in \mathbb R^n$ such that $A \mathbf x = \mathbf v$.

Writing the components of $\mathbf x$ as $\lambda_1, \lambda_2, \dots, \lambda_n$, the equation $A \mathbf x = \mathbf v$ becomes $$\mathbf v = A \begin{bmatrix}\lambda_1 \\ \lambda_2 \\ \vdots \\ \lambda_n\end{bmatrix}$$

By the definition of matrix multiplication, this can further be rewritten as
$\qquad \mathbf v = \lambda_1 \mathbf a_1 + \lambda_2 \mathbf a_2 + ... + \lambda_n \mathbf a_n$
where the $\mathbf a_i$ are the columns of $A$.

Since $\mathbf v$ was arbitrary, every vector in $\mathbb R^n$ is a linear combination of the columns of $A$. Therefore the columns of matrix $A$ span $\mathbb R^n$. $\qquad \blacksquare$
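
To convince myself numerically, I also ran a small NumPy check (the matrix is a made-up invertible example, so the map $\mathbf x \mapsto A \mathbf x$ is onto here):

```python
import numpy as np

# Made-up invertible 2x2 matrix, so x -> Ax maps R^2 onto R^2.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
rng = np.random.default_rng(1)
v = rng.standard_normal(2)

# Surjectivity hands us some x with Ax = v; its entries are the lambdas.
lam = np.linalg.solve(A, v)
# The same entries express v as a linear combination of the columns of A.
recon = lam[0] * A[:, 0] + lam[1] * A[:, 1]
print(np.allclose(recon, v))  # True: v lies in the span of the columns
```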
 
Bruce Wayne said:
Thanks. Here's my attempt: ...

Looks just fine! ;)
 
Thanks! It was a lot simpler than I had anticipated. (Smile)
 
