Composition of Linear Transformation and Matrix Multiplication

SUMMARY

The discussion centers on Theorem 2.14 from the textbook "Linear Algebra" by Friedberg, Insel, and Spence, which states that for finite-dimensional vector spaces V and W with ordered bases B and C, respectively, a linear transformation T: V→W satisfies [T(u)]_C = [T]_B^C [u]_B for each u in V. Participants analyze the proof, particularly the claim that g = Tf, where g is defined by g(a) = aT(u). The conversation also touches on the implications of linearity and the representation of transformations in different bases, leading to questions about the notation and reasoning in the theorem.

PREREQUISITES
  • Understanding of linear transformations and their properties
  • Familiarity with finite-dimensional vector spaces and ordered bases
  • Knowledge of matrix representation of linear transformations
  • Basic concepts of matrix multiplication and vector notation
NEXT STEPS
  • Study the proof of Theorem 2.11 regarding composition of linear transformations
  • Explore the concept of isometries and their properties in linear algebra
  • Learn about the relationship between linear transformations and matrix representations
  • Investigate different bases for vector spaces and their impact on linear transformations
USEFUL FOR

Students of linear algebra, mathematicians, and educators seeking to deepen their understanding of linear transformations, matrix multiplication, and the theoretical foundations of vector spaces.

jeff1evesque
I am reading Theorem 2.14 from a textbook, and don't understand how g = Tf, or the line of reasoning labeled (#1). The theorem and proof are as follows:

Theorem 2.14: Let V and W be finite-dimensional vector spaces having ordered bases B and C, respectively, and let T: V-->W be linear. Then, for each u in V, we have
[T(u)]_C = [T]_B^C [u]_B.

Proof: Fix u in V, and define the linear transformations f: F --> V by f(a) = au and
g: F-->W by g(a) = aT(u) for all a in F. Let A = {1} be the standard
ordered basis for F. Notice that g = Tf. Identifying
column vectors as matrices and using Theorem 2.11, we obtain

(#1) [T(u)]_C = [g(1)]_C = [g]_A^C = [Tf]_A^C = [T]_B^C [f]_A^B = [T]_B^C [f(1)]_B = [T]_B^C [u]_B.

-------------------------------

Theorem 2.11: Let V, W, and Z be finite-dimensional vector spaces with ordered bases A, B, and C, respectively. Let T: V-->W and U: W-->Z be linear transformations. Then
[UT]_A^C = [U]_B^C [T]_A^B.
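
For concreteness, here is a small numerical check of what Theorem 2.14 asserts. This is only an illustrative sketch: the matrix for T, the ordered bases B and C, and the vector u below are made-up choices, not anything from the textbook.

```python
import numpy as np

# Illustrative check of Theorem 2.14 with V = R^3, W = R^2.
# T is given in *standard* coordinates by T_std; the ordered bases
# B (of V) and C (of W) are stored as the columns of the arrays below.
# All numbers are invented purely for demonstration.
T_std = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 3.0]])

B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])   # columns are the basis vectors of B

C = np.array([[2.0, 1.0],
              [1.0, 1.0]])        # columns are the basis vectors of C

u = np.array([1.0, -2.0, 4.0])    # an arbitrary vector of V (standard coords)

u_B  = np.linalg.solve(B, u)           # [u]_B : coordinates of u relative to B
T_BC = np.linalg.solve(C, T_std @ B)   # [T]_B^C : column j is [T(b_j)]_C
Tu_C = np.linalg.solve(C, T_std @ u)   # [T(u)]_C computed directly

# Theorem 2.14: [T(u)]_C = [T]_B^C [u]_B
print(np.allclose(Tu_C, T_BC @ u_B))   # True
```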
 
jeff1evesque said:
I am reading Theorem 2.14 from a textbook, and don't understand how g = Tf, or the line of reasoning labeled (#1). The theorem and proof are as follows:

Theorem 2.14: Let V and W be finite-dimensional vector spaces having ordered bases B and C, respectively, and let T: V-->W be linear. Then, for each u in V, we have
[T(u)]_C = [T]_B^C [u]_B.

Proof: Fix u in V, and define the linear transformations f: F --> V by f(a) = au and
g: F-->W by g(a) = aT(u) for all a in F. Let A = {1} be the standard
ordered basis for F. Notice that g = Tf.

The defining property of "linear transformation", T, is T(au + bv) = aT(u) + bT(v). In particular, T(au) = aT(u),

g(u)= aT(u)= T(au)= T(f(u))= Tf.
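
Written out with the argument kept explicit at each step, for an arbitrary scalar a in F, that computation reads

(Tf)(a) = T(f(a)) = T(au) = aT(u) = g(a).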


Identifying
column vectors as matrices and using Theorem 2.11, we obtain

(#1) [T(u)]_C = [g(1)]_C = [g]_A^C = [Tf]_A^C = [T]_B^C [f]_A^B = [T]_B^C [f(1)]_B = [T]_B^C [u]_B.

-------------------------------

Theorem 2.11: Let V, W, and Z be finite-dimensional vector spaces with ordered bases A, B, and C, respectively. Let T: V-->W and U: W-->Z be linear transformations. Then
[UT]_A^C = [U]_B^C [T]_A^B.
 
The defining property of "linear transformation", T, is T(au+ bv)= aT(u)+ bT(v). In particular, T(au)= aT(u),

g(u)= aT(u)= T(au)= T(f(u))= Tf.

Wouldn't it be:
g(a) = aT(u) = T(au) = T(f(a)) = Tf.
But why can we replace f(a) by f?
--------------------------------------------------------------

Lastly, how does this line happen:
[T(u)]_C = [g(1)]_C = [g]_A^C

Thanks a lot,

JL
 
Your textbook sucks. It is obscuring a simple principle behind mountains of useless notation.

To understand what is going on, consider the following example carefully:
Let U be a 3D vector space with basis e1, e2, e3.
Let V be a 2D vector space with basis b1, b2.
Let T:U->V be linear.

(The same reasoning generalizes to different numbers of dimensions)


Question: For some u in U, can we write T(u) in a simpler form?

Answer: Yes! If you express u in the basis given for U, then u = u1 e1 + u2 e2 + u3 e3 for some scalars u1, u2, u3.


But then by linearity,
T(u) = u1 T(e1) + u2 T(e2) + u3 T(e3)

But we could pull the same trick on each of those T(ei)'s by writing them in the basis of V (remember, if x is in U, then T(x) is in V by definition).

Doing this gets us:
T(e1) = T11 b1 + T21 b2
T(e2) = T12 b1 + T22 b2
T(e3) = T13 b1 + T23 b2

for 6 different scalars T11, T21 etc. You may think of these scalars Tij as "defining" the transformation T for when it is expressed in these bases.

Now we can substitute this back into the expression from before to get a very simple form for T(u):
T(u) = u1 (T11 b1 + T21 b2) + u2 (T12 b1 + T22 b2) + u3 (T13 b1 + T23 b2)

= (u1 T11 + u2 T12 + u3 T13)b1 + (u1 T21 + u2 T22 + u3 T23)b2

= (T11 T12 T13)*(u1 u2 u3)^T b1 + (T21 T22 T23)*(u1 u2 u3)^T b2

In other words, if we write u in a given basis (here e1, e2, e3), and we want to know T(u) in a given basis (here b1, b2), it is equivalent to computing the matrix multiplication:

T(\textbf{u})_{b_1, b_2 \text{ basis}} = \left[\begin{matrix} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23}\end{matrix}\right] \left[\begin{matrix} u_1 \\ u_2 \\ u_3\end{matrix}\right] = \left[\begin{matrix}T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23}\end{matrix}\right] \textbf{u}_{e_1, e_2, e_3 \text{ basis}}
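
The same computation in code, as a rough sketch: the six scalars Tij and the coordinates of u are made-up numbers, chosen only to show the pattern.

```python
import numpy as np

# Hypothetical expansion coefficients: T(e1) = 1*b1 + 4*b2,
# T(e2) = 2*b1 + 5*b2, T(e3) = 3*b1 + 6*b2.
T11, T12, T13 = 1.0, 2.0, 3.0
T21, T22, T23 = 4.0, 5.0, 6.0

# Column j of the matrix holds the (b1, b2)-coordinates of T(ej).
T_mat = np.array([[T11, T12, T13],
                  [T21, T22, T23]])

# Coordinates of u in the basis e1, e2, e3 (also made up).
u = np.array([7.0, 8.0, 9.0])

# (b1, b2)-coordinates of T(u), obtained by one matrix multiplication,
# exactly as in the displayed formula above.
Tu = T_mat @ u
print(Tu)   # [u1*T11 + u2*T12 + u3*T13,  u1*T21 + u2*T22 + u3*T23]
```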
 
If we know g(a) = aT(u) for all a in F, then g = T(u).

How is this computation performed, by cancellation?
g(a) = aT(u) <==> g(aa^{-1}) = aa^{-1}T(u)

If we can get rid of a by cancellation, did I place a^{-1} in the correct position, or do I place it after aT(u)?

Thanks,


JL
 
maze said:
Your textbook sucks. It is obscuring a simple principle behind mountains of useless notation.


Yeah, what book is that? I've seen it presented much more clearly.
 
Back to my last post: is my reasoning correct?

If we know g(a) = aT(u) for all a in F, then g = T(u).

How is this computation performed, by cancellation?
g(a) = aT(u) <==> g(aa^{-1}) = aa^{-1}T(u)

If we can get rid of a by cancellation, did I place a^{-1} in the correct position, or do I place it after aT(u)?

Thanks,


JL


The book is intended to present the theory side of linear algebra (with limited actual application). It is Friedberg, Insel, Spence, “Linear Algebra” (Prentice Hall, 4th edition).
 
I have a weird way of visualizing linear transformations involving a corn field as the basis, areas of corn turned into bushels as the e's, and taking them to market to get paid when multiplying the matrix.

I don't know if it would help or hurt anyone, or if it is even plausible as a way of looking at it, but it seems to help me. lol
 
I still am not sure how [g(1)]_C = [g]_A^C occurs. I understand everything else. Thanks, JL
 
Isometry

Hi, I have a question:
I am trying to find an isometry such that T(aU+bV)≠aT(U)+bT(V).
I have tried so many possibilities. I tried T(X) = MX, where M is a matrix that doesn't have an inverse, but I still can't find a nice matrix that will make the proposition possible.
Help please
 
