
Linear operators & mappings between vector spaces

  1. Aug 22, 2014 #1
    Hi,

    I'm having a bit of difficulty with the following definition of a linear mapping between two vector spaces:

    Suppose we have two [itex]n[/itex]-dimensional vector spaces [itex]V[/itex] and [itex]W[/itex] and a set of linearly independent vectors [itex]\mathcal{S} = \lbrace \mathbf{v}_{i}\rbrace_{i=1, \ldots , n}[/itex] which forms a basis for [itex]V[/itex]. We define the linear operator [itex]T[/itex] which maps the basis vectors [itex]\mathbf{v}_{j}[/itex] to their representations in [itex] W[/itex], i.e. [itex]T:V \rightarrow W[/itex], as [tex] T\left(\mathbf{v}_{j}\right)= \sum_{i=1}^{n} T_{ij}\mathbf{w}_{i} [/tex] where [itex]\mathcal{B}= \lbrace\mathbf{w}_{i} \rbrace_{i=1, \ldots , n}[/itex] is a basis for [itex]W[/itex].

    What I'm struggling with is, why is the transformation expressed as [itex]\sum_{i=1}^{n} T_{ij}\mathbf{w}_{i}[/itex] and not [itex]\sum_{j=1}^{n} T_{ij}\mathbf{w}_{j}[/itex] ? Is it purely definition, or is there some deeper meaning behind it? (and if so, is there any way of deriving this expression?).

    Sorry to ask a probably very trivial question, but it's been bugging me, and I can't seem to find a satisfactory answer from trawling the internet.
     
  3. Aug 23, 2014 #2

    Stephen Tashi

    Science Advisor

    The left hand side [itex] T(v_j) [/itex] has a [itex] j [/itex] in it. If you summed over [itex] j [/itex] on the right hand side you wouldn't have any [itex] j [/itex] remaining on the right hand side. So the equation would make a claim that [itex] T(v_j) [/itex] is a sum involving subscripts that are constant except for the index [itex] i [/itex]. How would we interpret that?

    Mathematics is traditionally written in a sloppy way from the viewpoint of logic. An equation with unknowns like [itex] i [/itex] and [itex] j [/itex] is used with the preceding quantifiers "for each [itex] i [/itex] and for each [itex] j [/itex]" omitted. So the statement [itex] T(v_j) = T_{i\ 1} w_1 + T_{i\ 2} w_2 + \cdots + T_{i\ n} w_n [/itex] would be interpreted as a condition that is true for each pair of indices [itex] i, j [/itex]. However, this wouldn't make sense as a definition unless terms like [itex] T_{i\ 1} w_1 [/itex] had the same value for all indices [itex] i [/itex]. We don't want that restriction in the definition of a linear transformation.

    You could also ask why we don't define the transformation by [itex] T(v_j) = \sum_{i=1}^n T_{j\ i} w_i [/itex].

    It's just a matter of tradition. Linear transformations are associated with matrices. The traditional way to think of this is to think of a column vector v being transformed by being left-multiplied by a matrix T. If you wanted to think of a transformation as a row vector v being right-multiplied by a matrix T, you'd swap the order of T's indices [itex] i,j [/itex] in the definition, as the sketch below illustrates.
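    Here's a minimal numerical sketch of that last point in Python/numpy, with a made-up 2×2 matrix (nothing from the thread itself):
    [code]
    import numpy as np

    # A made-up matrix T and vector v, purely for illustration.
    T = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    v = np.array([5.0, 6.0])

    # Column-vector convention: y_i = sum_j T_ij v_j (v left-multiplied by T).
    y_col = T @ v

    # Row-vector convention: the same map, with v right-multiplied,
    # needs the transposed index order T_ji.
    y_row = v @ T.T

    print(np.allclose(y_col, y_row))  # True: same map, index order swapped
    [/code]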
     
  4. Aug 23, 2014 #3

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    That first definition ensures that ##(Tv_j)_i## (the ith component of ##Tv_j## in the ##\mathcal B## basis) is ##T_{ij}##.

    Recall that when we're given a linear operator T, we define the matrix [T] corresponding to it by ##[T]_{ij}=(Tv_j)_i##. (See the FAQ post I linked to in your other thread). The reason that we use this convention rather than ##[T]_{ij}=(Tv_i)_j## is that it's nice to have the matrix equation that corresponds to y=Tx be [y]=[T][x], rather than ##[y]^T=[x]^T[T]##.

    The convention you're asking about ensures that ##[T]_{ij}=T_{ij}##.
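    Here's a small sketch of how that definition works in practice (Python/numpy, with a hypothetical operator I made up): it builds [T] column by column from the images of the basis vectors and recovers [y] = [T][x].
    [code]
    import numpy as np

    # A hypothetical linear operator on R^2, defined by its action on vectors.
    def T(x):
        return np.array([2.0 * x[0] + x[1], 3.0 * x[1]])

    n = 2
    basis = np.eye(n)  # standard basis vectors e_1, e_2 as columns

    # [T]_{ij} = (T e_j)_i: the j-th column of [T] is T applied to e_j.
    M = np.column_stack([T(basis[:, j]) for j in range(n)])

    x = np.array([1.0, -1.0])
    print(np.allclose(M @ x, T(x)))  # True: [y] = [T][x]
    [/code]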
     
  5. Aug 23, 2014 #4
    Thanks. Sorry, when I wrote it with the sum over the other index, I had meant it in the form that you've put above.

    I guess I was just trying to make sense of it. Usually, one expresses a vector as a column matrix with respect to a given ordered basis, for example
    [tex] \left[ \mathbf{v}\right]_{ \mathcal{B}} = \left( \begin{matrix} a_{1} \\ \vdots \\ a_{n} \end{matrix} \right) [/tex]
    where [itex]\mathcal{B} [/itex] is a basis for the [itex]n[/itex]-dimensional space [itex]V[/itex] and [itex]\mathbf{v} \in V [/itex]. The components [itex] a_{i} [/itex] of the column matrix are the components of the vector [itex] \mathbf{v}[/itex] with respect to the basis vectors [itex] \mathbf{v}_{i} [/itex]. ( [itex] \left[\, \cdot \,\right]_{\mathcal{ B}} [/itex] defines an isomorphism between [itex] V[/itex] and [itex] \mathbb{R}^{n}[/itex] ).

    Then, for some linear operator [itex]\mathcal{A}[/itex], we have that, with respect to the basis [itex]\mathcal{B}[/itex]: [tex] \left[\mathcal{ A} \left( \mathbf{v} \right) \right]_{\mathcal{ B}} = \left( \sum_{j=1}^{n} A_{ij} a_{j} \right) [/tex]
    where the [itex] a_{j} [/itex] are the components of [itex] \mathbf{v} [/itex] with respect to the basis vectors [itex] \mathbf{v}_{j} \in \mathcal{B} [/itex] (the right-hand side of the equation denotes a column matrix whose [itex]i^{th}[/itex] component is [itex]\sum_{j=1}^{n} A_{ij} a_{j} [/itex] ).
    In my mind, I rationalised it by using the fact that the basis vectors are represented by the standard basis in the basis that they define, i.e. [tex] \left[ \mathbf{v}_{i}\right]_{ \mathcal{B}} = \left( \begin{matrix} 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{matrix} \right) [/tex]
    where [itex] \mathbf{v}_{i} \in \mathcal{B} [/itex]. And in that sense, for the [itex] j^{th}[/itex] basis vector [itex] \mathbf{v}_{j} \in \mathcal{B} [/itex], the vector "picks off" the elements of the [itex] j^{th} [/itex] column of the matrix representation of [itex] \mathcal{A} [/itex] in the basis [itex] \mathcal{B} [/itex], such that [tex] \left[ \mathcal{A} \left( \mathbf{v}_{j} \right) \right]_{\mathcal{B} } = \left( \begin{matrix} A_{1j} \\ \vdots \\ A_{nj} \end{matrix} \right) = A_{1j} \left( \begin{matrix} 1 \\ 0 \\ \vdots \\ 0 \end{matrix} \right) + \cdots + A_{nj} \left( \begin{matrix} 0 \\ \vdots \\ 0 \\ 1 \end{matrix} \right) = A_{1j} \left[ \mathbf{v}_{1}\right]_{ \mathcal{B}} + \cdots + A_{nj} \left[ \mathbf{v}_{n}\right]_{ \mathcal{B}} = \left[ A_{1j} \mathbf{v}_{1} + \cdots + A_{nj} \mathbf{v}_{n} \right]_{ \mathcal{ B}} = \left[ \sum_{i=1}^{n} A_{ij} \mathbf{v}_{i} \right]_{\mathcal{B}} [/tex]
    from which one can infer that [tex] \mathcal{A} \left( \mathbf{v}_{j} \right) = \sum_{i=1}^{n} A_{ij} \mathbf{v}_{i} [/tex] However, I'm not convinced that I've got this right, and it didn't seem to be the most general way of doing it.
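    As a sanity check on the "picking off a column" step, here's a tiny Python/numpy example (the 3×3 matrix is arbitrary, just something to compute with):
    [code]
    import numpy as np

    # An arbitrary 3x3 matrix standing in for the representation of A.
    A = np.arange(9.0).reshape(3, 3)

    j = 1
    e_j = np.zeros(3)
    e_j[j] = 1.0  # [v_j]_B: the j-th basis vector in its own basis

    # Applying A to e_j picks off the j-th column, i.e. the components A_ij.
    print(np.allclose(A @ e_j, A[:, j]))  # True
    [/code]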
     
  6. Aug 23, 2014 #5
    Cheers Fredrik. My question actually arose after reading your FAQ; I was just a bit unsure. So, is the way that an operator acts on the basis vectors defined that way so that one recovers the matrix equation [itex] \left[ \mathbf{y}\right] = \left[\mathcal{T} \right] \left[\mathbf{x} \right] [/itex] instead of [itex] \left[ \mathbf{y}\right]^{T} = \left[\mathbf{x} \right]^{T} \left[\mathcal{T} \right] [/itex]?
    Also, is it correct to deduce it the way I did in the above post?
     
  7. Aug 23, 2014 #6

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    Your calculation looks OK, but there are some inaccuracies. I see nothing in the calculation that indicates a misunderstanding, but the formula ##\left[\mathcal A(\mathbf v)\right]_{\mathcal B} = \left(\sum_{j=1}^{n} A_{ij} a_{j}\right)## is weird, because it says that a matrix is equal to a number. The right-hand side is the ith component of the left-hand side.

    It's also a little confusing to use the same letter of the alphabet (v) for the basis vectors and the arbitrary vector. I will write ##\mathbf x=\sum x_i\mathbf v_i## instead.

    If you want to prove that ##A\mathbf v_j=\sum_{i=1}^n A_{ij}\mathbf v_i##, where ##A_{ij}## is the row i, column j component of the matrix representation of A, you really only have to realize the left-hand side is a vector, and can therefore be expressed as a linear combination of the basis vectors: ##A\mathbf v_j = \sum_{i=1}^n (A\mathbf v_j)_i \mathbf v_i##. Then you recall that ##A_{ij}## is defined by ##A_{ij}=(A\mathbf v_j)_i##.

    Then you can use this result to evaluate ##A\mathbf x##, where ##\mathbf x## is an arbitrary vector.
    $$A\mathbf x =A\bigg(\sum_j x_j \mathbf v_j\bigg) =\sum_j x_j A\mathbf v_j =\sum_j x_j\bigg(\sum_i A_{ij}\mathbf v_i\bigg) =\sum_i\bigg(\sum_j A_{ij}x_j\bigg)\mathbf v_i.$$ This implies that the ith component of ##A\mathbf x## is
    $$(A\mathbf x)_i =\sum_j A_{ij} x_j.$$
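    That component formula is easy to check numerically; here's a quick Python/numpy sketch with random data (just an illustration of the double sum above, not anything more):
    [code]
    import numpy as np

    # Check (Ax)_i = sum_j A_ij x_j on a random example.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    x = rng.standard_normal(4)

    # The double sum from the derivation, written out explicitly:
    Ax = np.array([sum(A[i, j] * x[j] for j in range(4)) for i in range(4)])

    print(np.allclose(Ax, A @ x))  # True
    [/code]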
     
  8. Aug 23, 2014 #7
    Cheers Fredrik, that's most helpful!

    Yes, the formula [itex] \left[ \mathcal{A}\left(\mathbf{v} \right) \right]_{\mathcal{B}} = \left(\sum_{j=1}^{n} A_{ij} a_{j}\right) [/itex] was just laziness on my part - I didn't want to write out all the components of the column vector, so I implied it, albeit a bit ambiguously, through the parentheses around the sum. Apologies for the confusion though, and thank you very much for your help once again.
     
  9. Aug 31, 2014 #8

    FactChecker

    Science Advisor
    Gold Member

    Either way would work; the definition just picks one of them. This choice makes the matrix notation simpler.
     
  10. Sep 3, 2014 #9
    The question features basis vectors, whereas the routine operation is calculation of components; one rarely refers to the basis directly. For components we have, in standard notation, an intuitive order of indices: ##(Tv)^k = T^k{}_\ell\, v^\ell##, where ##k## is the row index and ##\ell## enumerates both the columns of the matrix ##T^k{}_\ell## and the components ##v^\ell##.

    The original poster wants to see how the images of the basis vectors of the first space are expressed in terms of the basis vectors of the second space. Yes, the index order is reversed, because bases are a concept dual to components, namely ##(e_i)^k = \delta^k{}_i##.
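    To make that duality concrete, here's a small Python/numpy sketch (the change-of-basis matrix S is made up; none of this is from the thread):
    [code]
    import numpy as np

    # Under a change of basis e'_j = sum_i S_ij e_i, the components of a
    # fixed vector transform with the inverse: v' = S^{-1} v.
    rng = np.random.default_rng(1)
    S = rng.standard_normal((3, 3))  # invertible with probability 1
    v = rng.standard_normal(3)       # components in the old (standard) basis

    v_new = np.linalg.solve(S, v)    # v' = S^{-1} v

    # The vector itself is unchanged: sum_j v'_j e'_j = sum_i v_i e_i.
    # With the old basis standard, e'_j is the j-th column of S.
    reconstructed = sum(v_new[j] * S[:, j] for j in range(3))
    print(np.allclose(reconstructed, v))  # True
    [/code]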
     