
Composition of Linear Transformation and Matrix Multiplication

  1. Mar 18, 2009 #1
    I am reading Theorem 2.14 in a textbook, and I don't understand why g = Tf, or the line of reasoning labeled (#1) below. The theorem and proof are as follows:

    Theorem 2.14: Let V and W be finite-dimensional vector spaces having ordered bases B and C, respectively, and let T: V-->W be linear. Then, for each u in V, we have
    [tex][T(u)]_C = [T]_B^C [u]_B.[/tex]

    Proof: Fix u in V, and define the linear transformations f: F --> V by f(a) = au and
    g: F-->W by g(a) = aT(u) for all a in F. Let A = {1} be the standard
    ordered basis for F. Notice that g = Tf. Identifying
    column vectors as matrices and using Theorem 2.11, we obtain

    (#1) [tex][T(u)]_C = [g(1)]_C = [g]_A^C = [Tf]_A^C = [T]_B^C [f]_A^B = [T]_B^C [f(1)]_B = [T]_B^C [u]_B.[/tex]

    -------------------------------

    Theorem 2.11: Let V, W, and Z be finite-dimensional vector spaces with ordered bases A, B, and C, respectively. Let T: V-->W and U: W-->Z be linear transformations. Then
    [tex][UT]_A^C = [U]_B^C [T]_A^B.[/tex]
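    In case a concrete check helps, here is a minimal NumPy sanity check of Theorem 2.14 (a sketch only; the bases, the map T, and the vector u are arbitrary illustrative choices, not anything from the book). Column j of [T]_B^C is computed, per the definition, as the C-coordinates of T applied to the j-th vector of B.

    [code]
    import numpy as np

    # Sanity check of Theorem 2.14 with F = R, V = R^3, W = R^2.
    # B and C are stored as matrices whose columns are the basis vectors.
    B = np.array([[1., 1., 0.],
                  [0., 1., 1.],
                  [1., 0., 1.]])
    C = np.array([[2., 1.],
                  [1., 1.]])

    # T: R^3 -> R^2, written in the standard bases (arbitrary choice).
    T_std = np.array([[1., 2., 3.],
                      [4., 5., 6.]])

    def coords(M, v):
        # Coordinate vector of v relative to the basis given by the columns of M.
        return np.linalg.solve(M, v)

    # Column j of [T]_B^C is, by definition, [T(b_j)]_C.
    T_BC = np.column_stack([coords(C, T_std @ B[:, j]) for j in range(3)])

    u = np.array([1., -2., 5.])
    lhs = coords(C, T_std @ u)     # [T(u)]_C
    rhs = T_BC @ coords(B, u)      # [T]_B^C [u]_B
    print(np.allclose(lhs, rhs))   # True
    [/code]

    The same scaffolding also checks Theorem 2.11: build [U]_B^C and [T]_A^B the same way and compare [UT]_A^C with their product.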
     
  3. Mar 19, 2009 #2

    HallsofIvy



    The defining property of a "linear transformation" T is T(au + bv) = aT(u) + bT(v). In particular, T(au) = aT(u),

    g(u) = aT(u) = T(au) = T(f(u)) = Tf.
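    More carefully, writing a for the scalar argument: for every a in F,

    [tex]g(a) = a\,T(u) = T(au) = T(f(a)) = (Tf)(a),[/tex]

    so g and Tf agree on every input and are therefore equal as functions.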


     
  4. Mar 19, 2009 #3
    Wouldn't it be:
    g(a) = aT(u) = T(au) = T(f(a)) = Tf.
    But why can we replace f(a) by f?
    --------------------------------------------------------------

    Lastly, how does this step come about:
    [tex][T(u)]_C = [g(1)]_C = [g]_A^C[/tex]

    Thanks a lot,

    JL
     
  5. Mar 19, 2009 #4
    Your textbook sucks. It is obscuring a simple principle behind mountains of useless notation.

    To understand what is going on, consider the following example carefully:
    Let U be a 3D vector space with basis e1, e2, e3.
    Let V be a 2D vector space with basis b1, b2.
    Let T:U->V be linear.

    (The same reasoning generalizes to different numbers of dimensions)

    [diagram: T mapping the 3D space U into the 2D space V; original image link broken]

    Question: For some u in U, can we write T(u) in a simpler form?

    Answer: Yes! If you express u in the basis given for U, then u = u1 e1 + u2 e2 + u3 e3 for some scalars u1, u2, u3.

    [diagram: u expressed in the basis e1, e2, e3; original image link broken]

    But then by linearity,
    T(u) = u1 T(e1) + u2 T(e2) + u3 T(e3)

    But we could pull the same trick on each of those T(ei)'s by writing them in the basis of V (remember, if x is in U, then T(x) is in V by definition).
    [diagram: each T(ei) expressed in the basis b1, b2; original image link broken]

    Doing this gets us:
    T(e1) = T11 b1 + T21 b2
    T(e2) = T12 b1 + T22 b2
    T(e3) = T13 b1 + T23 b2

    for 6 different scalars T11, T21 etc. You may think of these scalars Tij as "defining" the transformation T when it is expressed in these bases.

    Now we can substitute this back into the expression from before to get a very simple form for T(u):
    T(u) = u1 (T11 b1 + T21 b2) + u2 (T12 b1 + T22 b2) + u3 (T13 b1 + T23 b2)

    = (u1 T11 + u2 T12 + u3 T13)b1 + (u1 T21 + u2 T22 + u3 T23)b2

    = (T11 T12 T13)(u1 u2 u3)^T b1 + (T21 T22 T23)(u1 u2 u3)^T b2

    In other words, if we write u in a given basis (here e1, e2, e3) and we want T(u) in a given basis (here b1, b2), computing T(u) amounts to the matrix multiplication:

    [tex][T(\textbf{u})]_{b_1, b_2 \text{ basis}} = \left[\begin{matrix} T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23}\end{matrix}\right] \left[\begin{matrix} u_1 \\ u_2 \\ u_3\end{matrix}\right] = \left[\begin{matrix}T_{11} & T_{12} & T_{13} \\ T_{21} & T_{22} & T_{23}\end{matrix}\right] \textbf{u}_{e_1, e_2, e_3 \text{ basis}}[/tex]
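    A quick numeric version of this example (a sketch; the six scalars Tij and the coordinates of u are arbitrary illustrative values):

    [code]
    import numpy as np

    # The six scalars T_ij from above, with column j holding the b1,b2-coordinates
    # of T(e_j).
    T = np.array([[1., 2., 3.],    # T11 T12 T13
                  [4., 5., 6.]])   # T21 T22 T23

    u = np.array([7., 8., 9.])     # u1, u2, u3: coordinates of u in e1, e2, e3

    via_matrix = T @ u             # one matrix multiplication

    # Hand expansion by linearity: u1 T(e1) + u2 T(e2) + u3 T(e3),
    # coordinate-wise in b1, b2.
    via_linearity = u[0]*T[:, 0] + u[1]*T[:, 1] + u[2]*T[:, 2]

    print(np.allclose(via_matrix, via_linearity))  # True
    [/code]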
     
  6. Mar 19, 2009 #5
    If we know g(a) = aT(u) for all a in F,
    then g = T(u).

    How is this computation performed? By cancellation?
    g(a) = aT(u) <==> g(aa^(-1)) = aa^(-1)T(u)

    If we can get rid of a by cancellation, did I place a^(-1) in the correct position, or do I place it after aT(u)?

    Thanks,


    JL
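    A note on the step being asked about here: no cancellation is involved, and none would be valid. Two functions are equal when they agree on every input, which is exactly what g(a) = (Tf)(a) for all a in F says. Also, g = T(u) cannot be literally correct: g is a function from F to W, while T(u) is a single vector in W. What is true is that evaluating g at the basis vector 1 recovers that vector:

    [tex]g(1) = 1 \cdot T(u) = T(u).[/tex]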
     
  7. Mar 21, 2009 #6

    Yeah, what book is that? I've seen this presented much more clearly.
     
  8. Mar 21, 2009 #7
    Back to my last post: is my reasoning correct?

    If we know g(a) = aT(u) for all a in F,
    then g = T(u).

    How is this computation performed? By cancellation?
    g(a) = aT(u) <==> g(aa^(-1)) = aa^(-1)T(u)

    If we can get rid of a by cancellation, did I place a^(-1) in the correct position, or do I place it after aT(u)?

    Thanks,


    JL


    Also, this book is intended to present the theory side of linear algebra (with limited practical application). The book is Friedberg, Insel, Spence, “Linear Algebra” (Prentice Hall, 4th edition).
     
  9. Mar 23, 2009 #8
    I have a weird way of visualizing linear transformations involving a corn field as the basis, areas of corn turned into bushels as the e's, and taking them to market to get paid when multiplying the matrix.

    I don't know if it would help or hurt anyone, or if it is even plausible as a way of looking at it, but it seems to help me. lol
     
  10. Apr 2, 2009 #9
    I am still not sure how [tex][g(1)]_C = [g]_A^C[/tex] comes about. I understand everything else.


    thanks,


    JL
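    For anyone stuck on the same step: this is just the definition of the matrix of a linear map, unwound for a one-dimensional domain. Since A = {1} has a single basis vector, the matrix [g]_A^C has a single column, and by definition that column is the C-coordinate vector of the image of that basis vector:

    [tex][g]_A^C = \left[\; [g(1)]_C \;\right].[/tex]

    Identifying a one-column matrix with the column vector itself (as the proof says), this is [g(1)]_C, which equals [T(u)]_C because g(1) = T(u).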
     
  11. Oct 19, 2009 #10
    Isometry

    Hi, I have a question:
    I am trying to find an isometry such that T(aU + bV) ≠ aT(U) + bT(V).
    I have tried so many possibilities. I tried T(X) = MX with M a matrix that doesn't have an inverse, but I still can't find a matrix that makes the proposition work.
    Help please
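    If it helps: no map of the form T(X) = MX can ever work, because every such map is linear regardless of M. The standard example of a non-linear isometry is a translation T(X) = X + v with v ≠ 0: it preserves all distances but sends 0 to v, so it cannot be linear. A minimal NumPy check (the vectors here are arbitrary illustrative values):

    [code]
    import numpy as np

    v = np.array([1., 2.])   # arbitrary nonzero translation vector

    def T(x):
        # Translation by v: an isometry of R^2 that is not linear.
        return x + v

    x = np.array([3., 4.])
    y = np.array([-1., 0.5])

    # Distances are preserved ...
    print(np.isclose(np.linalg.norm(T(x) - T(y)), np.linalg.norm(x - y)))  # True
    # ... but additivity fails, so T is not linear.
    print(np.allclose(T(x + y), T(x) + T(y)))  # False
    [/code]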
     