Composition of Linear Transformations and Matrix Multiplication


Discussion Overview

The discussion revolves around the understanding of linear transformations and their representation through matrices, specifically focusing on Theorem 2.14 from a linear algebra textbook. Participants explore the relationships between linear transformations, their definitions, and how they can be expressed in terms of matrix multiplication and bases of vector spaces.

Discussion Character

  • Technical explanation
  • Conceptual clarification
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants express confusion regarding the proof of Theorem 2.14, particularly the line of reasoning that leads to the conclusion that g = Tf.
  • There is a discussion about the defining property of linear transformations, with some participants noting that T(au) = aT(u) and questioning how this relates to the expressions for g.
  • One participant suggests that the notation and presentation in the textbook may obscure the underlying principles of linear transformations.
  • Another participant provides an illustrative example involving vector spaces of different dimensions to clarify how linear transformations can be expressed in simpler forms using matrix multiplication.
  • Some participants inquire about the mechanics of cancellation in the context of defining g and its relationship to T(u).
  • A participant shares a personal visualization of linear transformations using a metaphor involving corn fields and market transactions, suggesting that unique perspectives may aid understanding.
  • There is a request for clarification on the transition from [g(1)]_C to [g]_A^C, indicating ongoing uncertainty about specific steps in the proof.
  • One participant poses a separate question about finding an isometry that does not satisfy the linearity condition, indicating a broader exploration of linear transformations.

Areas of Agreement / Disagreement

Participants express varying levels of understanding regarding the theorem and its proof, with some agreeing that the textbook's notation is complex while others seek clarification on specific mathematical steps. The discussion remains unresolved on several points, particularly regarding the interpretation of g and its equivalence to Tf.

Contextual Notes

Participants highlight limitations in the textbook's presentation, suggesting that the notation may obscure fundamental concepts. There are also unresolved questions about specific mathematical manipulations and definitions related to linear transformations.

Who May Find This Useful

This discussion may be useful for students and educators in linear algebra, particularly those grappling with the concepts of linear transformations, matrix representations, and the implications of different bases in vector spaces.

jeff1evesque
I am reading Theorem 2.14 from a textbook, and I don't understand why g = Tf, nor the line of reasoning in (#1). The theorem and its proof are as follows:

Theorem 2.14: Let V and W be finite-dimensional vector spaces having ordered bases B and C, respectively, and let T: V-->W be linear. Then, for each u in V, we have
[T(u)]_C = [T]_B^C [u]_B.

Proof: Fix u in V, and define the linear transformations f: F --> V by f(a) = au and
g: F-->W by g(a) = aT(u) for all a in F. Let A = {1} be the standard
ordered basis for F. Notice that g = Tf. Identifying
column vectors as matrices and using Theorem 2.11, we obtain

(#1) [T(u)]_C = [g(1)]_C = [g]_A^C = [Tf]_A^C = [T]_B^C [f]_A^B = [T]_B^C [f(1)]_B = [T]_B^C [u]_B.

-------------------------------

Theorem 2.11: Let V, W, and Z be finite-dimensional vector spaces with ordered bases, A,B, and C, respectively. Let T: V-->W and U: W-->Z be linear transformations. Then
[UT]_A^C = [U]_B^C [T]_A^B.
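For a concrete sanity check of the statement, here is a minimal NumPy sketch (my own toy example, assuming V = R^3 and W = R^2 with B and C the standard bases, so that coordinate vectors coincide with the vectors themselves):

```python
import numpy as np

def T(v):
    """A concrete linear map T : R^3 -> R^2, written out directly."""
    x, y, z = v
    return np.array([x + 2*y, 3*x - y + 4*z])

# Build [T]_B^C column by column: column j is [T(e_j)]_C.
E = np.eye(3)
T_BC = np.column_stack([T(E[:, j]) for j in range(3)])

u = np.array([2.0, -1.0, 5.0])

# Theorem 2.14: [T(u)]_C = [T]_B^C [u]_B
print(T(u), T_BC @ u)                 # both print [ 0. 27.]
assert np.allclose(T(u), T_BC @ u)
```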
 
jeff1evesque said:
I am reading Theorem 2.14 from a textbook, and I don't understand why g = Tf, nor the line of reasoning in (#1). The theorem and its proof are as follows:

Theorem 2.14: Let V and W be finite-dimensional vector spaces having ordered bases B and C, respectively, and let T: V-->W be linear. Then, for each u in V, we have
[T(u)]_C = [T]_B^C [u]_B.

Proof: Fix u in V, and define the linear transformations f: F --> V by f(a) = au and
g: F-->W by g(a) = aT(u) for all a in F. Let A = {1} be the standard
ordered basis for F. Notice that g = Tf. [...]

The defining property of "linear transformation", T, is T(au + bv) = aT(u) + bT(v). In particular, T(au) = aT(u),

g(u) = aT(u) = T(au) = T(f(u)) = Tf.


Identifying
column vectors as matrices and using Theorem 2.11, we obtain

(#1) [T(u)]_C = [g(1)]_C = [g]_A^C = [Tf]_A^C = [T]_B^C [f]_A^B = [T]_B^C [f(1)]_B = [T]_B^C [u]_B.

-------------------------------

Theorem 2.11: Let V, W, and Z be finite-dimensional vector spaces with ordered bases, A,B, and C, respectively. Let T: V-->W and U: W-->Z be linear transformations. Then
[UT]_A^C = [U]_B^C [T]_A^B.
 
The defining property of "linear transformation", T, is T(au + bv) = aT(u) + bT(v). In particular, T(au) = aT(u),

g(u) = aT(u) = T(au) = T(f(u)) = Tf.

Wouldn't it be:
g(a) = aT(u) = T(au) = T(f(a)) = Tf.
But why can we replace f(a) by f?
--------------------------------------------------------------

Lastly, how does this line happen:
[T(u)]_C = [g(1)]_C = [g]_A^C

Thanks a lot,

JL
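As a sketch of the two steps being asked about (standard facts about matrix representations; my own phrasing, not the book's): g = Tf is an equality of functions from F to W, so nothing is "replaced" or cancelled; one only checks that both sides agree at every scalar a. And since A = {1} is a basis for the one-dimensional space F, the matrix [g]_A^C has exactly one column, namely [g(1)]_C.

```latex
% g and Tf agree at every point of F, hence are equal as functions:
\[
  g(a) = aT(u) = T(au) = T(f(a)) = (Tf)(a) \quad \text{for all } a \in F
  \quad\Longrightarrow\quad g = Tf.
\]
% A = {1} is a basis of the one-dimensional space F, so [g]_A^C is the
% single-column matrix whose only column is [g(1)]_C:
\[
  [g]_A^C = \bigl[\,[g(1)]_C\,\bigr].
\]
```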
 
Your textbook sucks. It is obscuring a simple principle behind mountains of useless notation.

To understand what is going on, consider the following example carefully:
Let U be a 3D vector space with basis e1, e2, e3.
Let V be a 2D vector space with basis b1, b2.
Let T:U->V be linear.

(The same reasoning generalizes to different numbers of dimensions)

http://img187.imageshack.us/img187/8246/r3tor21.png

Question: For some u in U, can we write T(u) in a simpler form?

Answer: Yes! If you express u in the basis given for U, then u = u1 e1 + u2 e2 + u3 e3 for some scalars u1, u2, u3.

http://img155.imageshack.us/img155/9341/r3tor32.png

But then by linearity,
T(u) = u1 T(e1) + u2 T(e2) + u3 T(e3)

But we could pull the same trick on each of those T(ei)'s by writing them in the basis of V (remember, if x is in U, then T(x) is in V by definition).
http://img249.imageshack.us/img249/6382/r3tor23.png

Doing this gets us:
T(e1) = T11 b1 + T21 b2
T(e2) = T12 b1 + T22 b2
T(e3) = T13 b1 + T23 b2

for 6 different scalars T11, T21 etc. You may think of these scalars Tij as "defining" the transformation T for when it is expressed in these bases.

Now we can substitute this back into the expression from before to get a very simple form for T(u):
T(u) = u1 (T11 b1 + T21 b2) + u2 (T12 b1 + T22 b2) + u3 (T13 b1 + T23 b2)

= (u1 T11 + u2 T12 + u3 T13)b1 + (u1 T21 + u2 T22 + u3 T23)b2

= (T11 T12 T13)·(u1 u2 u3)^T b1 + (T21 T22 T23)·(u1 u2 u3)^T b2

In other words, if we write u in a given basis (here e1, e2, e3), and we want to know T(u) in a given basis (here b1, b2), it is equivalent to computing the matrix multiplication:

T(\textbf{u})_{b_1, b_2 \text{ basis}} = \left[\begin{matrix} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23}\end{matrix}\right] \left[\begin{matrix} u_1 \\ u_2 \\ u_3\end{matrix}\right] = \left[\begin{matrix}T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23}\end{matrix}\right] \textbf{u}_{e_1, e_2, e_3 \text{ basis}}
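If it helps to see the same construction run, here is a small NumPy sketch of exactly this recipe (the particular scalars Tij are arbitrary values of my own choosing): the matrix is assembled from the b-coordinates of T(e1), T(e2), T(e3), and the hand expansion by linearity agrees with the matrix product.

```python
import numpy as np

# The six scalars T11..T23: column j holds the b-coordinates of T(e_j).
M = np.array([[1.0, 4.0, 2.0],    # T11 T12 T13
              [0.0, -3.0, 5.0]])  # T21 T22 T23

def T_by_linearity(u):
    """u1*T(e1) + u2*T(e2) + u3*T(e3), expanded exactly as in the post."""
    return u[0]*M[:, 0] + u[1]*M[:, 1] + u[2]*M[:, 2]

u = np.array([2.0, 1.0, -1.0])    # coordinates of u in the e-basis

print(T_by_linearity(u), M @ u)   # both print [ 4. -8.]
assert np.allclose(T_by_linearity(u), M @ u)
```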
 
If we know g(a) = aT(u) for all a in F,
then g = T(u).

How is this computation performed? By cancellation?
g(a) = aT(u) <==> g(aa^{-1}) = aa^{-1}T(u)

If we can get rid of a by cancellation, did I place a^{-1} in the correct position, or do I place it after aT(u)?

Thanks,


JL
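On the cancellation question, a brief sketch of my reading of the proof: g is a function from F to W, while T(u) is a single vector in W, so "g = T(u)" cannot be literally right, and no cancellation by a^{-1} is needed anywhere. The proof only evaluates g at the specific scalar a = 1:

```latex
% Evaluating g at a = 1; no division by a is involved:
\[
  g(1) = 1 \cdot T(u) = T(u).
\]
```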
 
maze said:
Your textbook sucks. It is obscuring a simple principle behind mountains of useless notation.


Yeah, what book is that? I've seen this presented much more clearly.
 
Back to my last post: is my reasoning correct?

If we know g(a) = aT(u) for all a in F,
then g = T(u).

How is this computation performed? By cancellation?
g(a) = aT(u) <==> g(aa^{-1}) = aa^{-1}T(u)

If we can get rid of a by cancellation, did I place a^{-1} in the correct position, or do I place it after aT(u)?

Thanks,


JL


The book is Friedberg, Insel, Spence, "Linear Algebra" (Prentice Hall, 4th edition); it is intended to present the theoretical side of linear algebra, with limited practical application.
 
I have a weird way of visualizing linear transformations involving a corn field as the basis, areas of corn turned into bushels as the e's, and taking them to market to get paid when multiplying the matrix.

I don't know if it would help or hurt anyone, or if it is even plausible as a way of looking at it, but it seems to help me. lol
 
I still am not sure how [g(1)]_C = [g]_A^C occurs. I understand everything else.

Thanks,
JL
 
Isometry

Hi, I have a question:
I am trying to find an isometry such that T(aU + bV) ≠ aT(U) + bT(V).
I have tried so many possibilities. I tried T(X) = MX, where M is a matrix that doesn't have an inverse, but I still can't find a matrix that makes the proposition work.
Help please
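One standard candidate worth trying (my suggestion, not something established in the thread, and assuming "isometry" here means distance-preserving): any map of the form T(X) = MX is automatically linear, so no choice of M can work; a translation T(X) = X + c with c ≠ 0 does work, since it preserves distances but moves the origin. A quick NumPy check:

```python
import numpy as np

c = np.array([1.0, 2.0])              # any fixed nonzero vector

def T(x):
    """Translation by c: an isometry of R^2 that is not linear."""
    return x + c

u = np.array([3.0, -1.0])
v = np.array([0.5, 4.0])

# Isometry: distances between points are preserved.
assert np.isclose(np.linalg.norm(T(u) - T(v)), np.linalg.norm(u - v))

# Not linear: T(u + v) and T(u) + T(v) differ (by c), and T(0) = c != 0.
print(T(u + v), T(u) + T(v))
```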
 
