Linear Transformations: Explaining the Theorem

SUMMARY

Every linear transformation T: Rn -> Rm can be represented as a matrix transformation using a unique m x n matrix A, where the columns of A are formed by applying T to the standard basis vectors of Rn. The theorem states that A = [T(e1) T(e2) ... T(en)], which illustrates how linear transformations correspond to matrix multiplication. This relationship holds true specifically for standard bases in Rn and Rm, and different bases may yield different matrix representations of the same linear transformation.

PREREQUISITES
  • Understanding of linear transformations in vector spaces
  • Familiarity with matrix representation and multiplication
  • Knowledge of standard basis vectors in Rn
  • Basic concepts of vector spaces and their properties
NEXT STEPS
  • Study the properties of linear transformations in depth
  • Learn about matrix representation of linear transformations in different bases
  • Explore the concept of vector spaces and their dimensions
  • Investigate the relationship between linear transformations and eigenvalues/eigenvectors
USEFUL FOR

Students of linear algebra, mathematicians, and educators seeking to deepen their understanding of linear transformations and their matrix representations.

finkeljo
I don't quite understand the idea that (as my book says) every linear transformation with domain Rn and codomain Rm is a matrix transformation... I mean, I get the idea of what a linear transformation is (sort of like a function), but it gives the theorem:

Let T: Rn -> Rm be linear. Then there is a unique m x n matrix

A = [T(e1) T(e2) ... T(en)]

such that T(x) = Ax for every x in Rn.

Can someone just explain that a little bit? It may seem simple, but I don't think my book does a good job of providing enough background for the theorems it states.
 
finkeljo said:
Can someone just explain that a little bit?
See this post. Ask if there's something you don't understand.
 
Note that
[tex]\begin{bmatrix}a_{11} & a_{12} & ... & a_{1n} \\ a_{21} & a_{22} & ... & a_{2n} \\ ... & ... & ... & ... \\ a_{m1}& a_{m2} & ... & a_{mn}\end{bmatrix}\begin{bmatrix}1 \\ 0 \\ ... \\ 0\end{bmatrix}= \begin{bmatrix}a_{11} \\ a_{21} \\ ... \\a_{m1}\end{bmatrix}[/tex]

What do you get if you multiply that same matrix by
[tex]\begin{bmatrix} 0 \\ 1 \\ ... \\ 0\end{bmatrix}[/tex]
etc.?
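A quick numpy sketch of this pattern (the matrix entries below are arbitrary, chosen only for illustration): multiplying any matrix by the j-th standard basis vector picks out its j-th column.

```python
import numpy as np

# An arbitrary 3x4 matrix, just for illustration
A = np.arange(12).reshape(3, 4)

for j in range(4):
    # e_j: the j-th standard basis vector of R^4
    e = np.zeros(4)
    e[j] = 1.0
    # A @ e_j equals the j-th column of A
    assert np.array_equal(A @ e, A[:, j])
```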

Do you see that applying any linear transformation to the standard basis vectors of [itex]R^n[/itex] gives you the columns of the matrix representation?

(This is, by the way, "unique" only in the standard bases for [itex]R^n[/itex] and [itex]R^m[/itex]. If L is a linear transformation from vector space U to vector space V, you can get different matrix representations for every different choice of basis for U or V.)
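To make the theorem concrete, here is a small numpy sketch. The map T below is made up for illustration: it is a linear map from R^2 to R^3 defined by a formula, with no matrix in sight, and the matrix A is recovered by stacking T(e1) and T(e2) as columns.

```python
import numpy as np

# A hypothetical linear map T: R^2 -> R^3, defined by a formula:
# T(x, y) = (x + 2y, 3x, y)
def T(v):
    x, y = v
    return np.array([x + 2 * y, 3 * x, y])

# Standard basis vectors of R^2
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

# The theorem: column j of A is T(e_j)
A = np.column_stack([T(e1), T(e2)])   # a 3x2 matrix

# For any v, applying T agrees with multiplying by A
v = np.array([5.0, -7.0])
assert np.array_equal(A @ v, T(v))
```

Replacing v with any other vector gives the same agreement, which is exactly the claim that T is a matrix transformation.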
 
