
I Coefficients of a vector regarded as a function of a vector

  1. Mar 5, 2016 #1
    I am reading Sergei Winitzki's book: Linear Algebra via Exterior Products ...

    I am currently focused on Section 1.6: Dual (conjugate) vector space ... ...

    I need help in order to get a clear understanding of the notion or concept of coefficients of a vector [itex]v[/itex] as linear functions (covectors) of the vector [itex]v[/itex] ...

    The relevant part of Winitzki's text reads as follows:


    [Image: scanned excerpt from Winitzki, Section 1.6, on the coefficients of a vector as linear functions (covectors)]



    In the above text we read:


    " ... ... So the coefficients [itex]v_k, \ 1 \leq k \leq n[/itex], are linear functions of the vector [itex]v[/itex] ; therefore they are covectors ... ... "


    Now, how and in what way exactly are the coefficients [itex]v_k[/itex] a function of the vector [itex]v[/itex] ... ... ?


    To indicate my confusion ... if the coefficient [itex]v_k[/itex] is a linear function of the vector [itex]v[/itex] then [itex]v_k(v)[/itex] must be equal to something ... but what? ... indeed what does [itex]v_k(v)[/itex] mean? ... further, what, if anything, would [itex]v_k(w)[/itex] mean where [itex]w[/itex] is any other vector? ... and further yet, how do we formally and rigorously prove that [itex]v_k[/itex] is linear? ... what would the formal proof look like?... ...

    Hope someone can help ...

    Peter

    ============================================================================

    *** NOTE ***

    To indicate Winitzki's approach to the dual space and his notation I am providing the text of his introduction to Section 1.6 on the dual or conjugate vector space ... ... as follows ... ...


    [Images: scanned excerpts from Winitzki's introduction to Section 1.6: Dual (conjugate) vector space]
     


    Last edited: Mar 5, 2016
  3. Mar 5, 2016 #2

    Buzz Bloom

    Gold Member

    Hi Math:

    One way to look at the components ##v_i## of ##v## WRT the basis vectors ##e_i## is:
    ##v_i = v \cdot e_i##​
    where ##\cdot## is the dot product operator.
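    For example (assuming the standard, orthonormal basis of ##\mathbb{R}^3##, which is what makes the dot-product formula work): taking ##v = (2, 5, 7)## gives ##v_2 = v \cdot e_2 = 2\cdot 0 + 5\cdot 1 + 7\cdot 0 = 5##, the second coefficient of ##v##.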

    Hope that helps.

    Regards,
    Buzz
     
  4. Mar 5, 2016 #3
    Thanks for the help Buzz Bloom ...

    ... BUT ... while your interpretation [itex] v_i (v) = v \cdot e_i [/itex] works in a way ...

    ... it then defines [itex]v_i[/itex] as a function with only one domain value, namely [itex]v[/itex] ... and only one image namely [itex]v \cdot e_i = v_i[/itex] ...

    Is that right?

    Peter
     
  5. Mar 6, 2016 #4

    Buzz Bloom

    Gold Member

    Hi Math:

    I am not sure I understand what is puzzling you. What other domain value than v do you think might be plausible? Your use of the term "image" also seems odd.
    The following are quotes from Wikipedia.
    In mathematics, an image is the subset of a function's codomain which is the output of the function on a subset of its domain.
    In mathematics, the codomain or target set of a function is the set Y into which all of the output of the function is constrained to fall.​
    As I interpret these definitions, for a single-valued function the image is always unique. Do you think it might be possible for ##v_i(v)## to be a multi-valued function?

    Regards,
    Buzz
     
    Last edited by a moderator: May 7, 2017
  6. Mar 6, 2016 #5

    lavinia

    Science Advisor
    Gold Member
    2017 Award

    A function on a vector space is linear if ##L(aV + bW) = aL(V) + bL(W)## for arbitrary scalars ##a## and ##b## and arbitrary vectors ##W## and ##V##.

    If one has a basis for the vector space then a linear function is determined completely by its values on the basis vectors. For instance the function that assigns zero to all but the i'th basis vector and 1 to the i'th is an example. It just picks out the i'th coefficient of a vector with respect to this basis.
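    As a concrete instance of that last function (a small example of my own, in ##\mathbb{R}^2## with ##i = 1##): the linear map ##L## determined by ##L(e_1) = 1## and ##L(e_2) = 0## satisfies ##L(a e_1 + b e_2) = a L(e_1) + b L(e_2) = a##, so on any vector it returns the first coefficient with respect to this basis.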
     
  7. Mar 7, 2016 #6

    Erland

    Science Advisor

    Given a basis, each coordinate ##v_k## is uniquely determined by ##\mathbf v##. This means that it is a function of ##\mathbf v##. The purpose of the ##\mathbf u+\lambda \mathbf v##-line is to prove that each of these functions is linear. The text actually only proves this for the first coordinate, but the same argument works for each ##k##.

    The author either defines a linear transformation ##T:U\to V## by the condition ##T(\mathbf u + \lambda \mathbf v)=T(\mathbf u)+\lambda T(\mathbf v)##, for all ##\mathbf u,\mathbf v\in U## and ##\lambda \in \Bbb C## (or ##\Bbb R##), or assumes it is known that this condition is equivalent to ##T## being a linear transformation.
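    For concreteness, that argument spelled out for the first coordinate (using ##f_1## as an ad hoc name for the map ##\mathbf v \mapsto v_1##): if ##\mathbf u = \sum_k u_k \mathbf e_k## and ##\mathbf v = \sum_k v_k \mathbf e_k##, then ##\mathbf u + \lambda \mathbf v = \sum_k (u_k + \lambda v_k)\mathbf e_k##, and since the coefficients with respect to a basis are unique, ##f_1(\mathbf u + \lambda \mathbf v) = u_1 + \lambda v_1 = f_1(\mathbf u) + \lambda f_1(\mathbf v)##, which is exactly the condition above.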
     
  8. Mar 8, 2016 #7
    Thanks Buzz, Lavinia and Erland ... you have helped me gain an understanding of the issue that was bothering me ...

    I also had a helpful post from Deveno on MHB ... so I thought I'd share part of it with you ...

    The start of Deveno's post which contains the essence of his post reads as follows:


    " ... ... The way I am used to seeing this "co-vector" defined is like so:

    Suppose [itex]v = \sum\limits_j v_je_j [/itex], where [itex]\{e_j\}[/itex] is a basis (perhaps the standard basis, perhaps not). We define:

    [itex]\pi_i(v) = v_i[/itex]

    (Note we have as many [itex]\pi[/itex]-functions, as we have coordinates).

    Thus [itex]\pi_i: V \to F[/itex], since [itex]v[/itex] is a vector, and [itex]v_i[/itex] is a scalar.... ... "


    Another important point is made later in his post ... where he writes:

    " ... ... Note that Winitzki is just naming the function by its image, something that is often done with functions (we often talk about "the function [itex]x^2[/itex]" when what we really MEAN is "the squaring function"). What he really means is the function:

    [itex]v \mapsto v_i[/itex] (function that returns the [itex]i[/itex]-th coordinate of [itex]v[/itex] in some basis).

    It is also important to note here that the function(s) we have defined here *depend on a choice of basis*, because the CO-ORDINATES of a vector depend on the basis used. ... ... "
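    To illustrate this basis dependence with a small example of my own (not from Deveno's post): in [itex]\mathbb{R}^2[/itex], take [itex]v = (3, 1)[/itex]. With respect to the standard basis, [itex]\pi_1(v) = 3[/itex]; but with respect to the basis [itex]\{ (1,1), (1,-1) \}[/itex] we have [itex]v = 2(1,1) + 1(1,-1)[/itex], so the "first coordinate" covector for that basis returns [itex]2[/itex] instead.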

    There is more to Deveno's post, but I have mentioned the main two points ...

    Peter
     
    Last edited: Mar 8, 2016
  9. Mar 8, 2016 #8

    Erland

    Science Advisor

    On closer thought, I realize that this condition is in fact not equivalent to the standard definition of a linear transformation (for example the one given by Lavinia in Post #5). The condition does not imply that ##T(\lambda \mathbf u)=\lambda T(\mathbf u)##, which is included in the ordinary definition. So either the author quoted in the OP made a mistake or used some more advanced reasoning.
     
  10. Mar 8, 2016 #9

    Samy_A

    Science Advisor
    Homework Helper

    Are you sure that they are not equivalent?

    Taking ##u=0, v=0:\ T(0)=T(0+\lambda 0)=T(0)+\lambda T(0)=(1+\lambda )T(0)## for any ##\lambda##.
    So ##T(0)=0##.
    Then ##T(\lambda v)=T(0 +\lambda v)=T(0)+\lambda T(v)=\lambda T(v)## for all ##v \in U## and all scalars ##\lambda##.
     
  11. Mar 8, 2016 #10

    Erland

    Science Advisor

    Yes, you are right... I guess I was tired :oops:
     
  12. Mar 8, 2016 #11

    lavinia

    Science Advisor
    Gold Member
    2017 Award

    This description is the same as already explained. IMO the best way to think of a co-vector is as a linear map from a vector space into the field of scalars. This idea is independent of any basis.

    However, if one has a basis then any covector is determined by its values on the basis vectors. This follows directly from the condition that the covector is a linear map.

    If one writes the vector ##v## in terms of a basis as ##v = Σ_{i}v_{i}e_{i}## and if ##L## is a covector, then ##L(v) = Σv_{i}L(e_{i})## and this shows that if one knows the values of ##L## on the ##e_{i}##'s one knows ##L## on any vector, ##v##.
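    For a quick numerical illustration of this (in ##\mathbb{R}^2##): if ##L(e_1) = 4## and ##L(e_2) = -1##, then for ##v = 2e_1 + 3e_2## one gets ##L(v) = 2\cdot 4 + 3\cdot (-1) = 5##; the two numbers ##L(e_1)## and ##L(e_2)## determine ##L## on every vector.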

    It is important to notice that covectors form a vector space of their own - often called the dual space. If ##L## and ##H## are covectors then any linear combination of them ##aL + bH## is also a covector.

    If one has a basis ##e_{i}## for the vector space, then a basis for the vector space of covectors - called the dual basis - is given by the linear maps ##π_{i}## defined by ##π_{i}(e_{j}) = δ_{ij}##. This is the covector that assigns 1 to the i'th basis vector and zero to all of the others - as mentioned already above. For each choice of basis ##e_{i}## one has a corresponding choice of basis ##π_{i}## for the vector space of covectors.
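    If it helps to see the dual basis concretely, here is a minimal numerical sketch (an illustration of my own using numpy, not anything from Winitzki's text): take a basis of ##\mathbb{R}^3## given as the columns of an invertible matrix ##B##. Then the covectors ##π_{i}## can be represented by the rows of ##B^{-1}##, since row ##i## of ##B^{-1}## applied to column ##j## of ##B## gives ##δ_{ij}##, and ##π_{k}(v)## is simply the ##k##-th entry of ##B^{-1}v##.

    Code (Python):
import numpy as np

# Basis vectors of R^3 as the columns of B (any invertible matrix will do).
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# The dual basis covectors pi_i are represented by the rows of B^{-1}:
# row i of B_inv applied to column j of B gives delta_ij.
B_inv = np.linalg.inv(B)
print(np.round(B_inv @ B, 10))   # identity matrix, i.e. pi_i(e_j) = delta_ij

# For any vector v, the k-th coefficient of v in this basis is pi_k(v) = (B^{-1} v)_k.
v = np.array([2.0, 5.0, 7.0])
coeffs = B_inv @ v               # the values pi_1(v), pi_2(v), pi_3(v)
print(coeffs)
print(B @ coeffs)                # recovers v, confirming v = sum_k pi_k(v) e_k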

    The covectors ##v_{i}## mentioned above are the same as the covectors ##π_{i}##. So the function that picks out the i'th coordinate of a vector with respect to a basis is a covector.

    The dual space to the space of covectors is also a vector space. One might call it the space of covectors of covectors. If one writes a covector as ##Σ_{i}l_{i}π_{i}##, then the ##l_{i}##'s, regarded as linear functions of the covector (just as the ##v_{i}##'s were regarded as functions of ##v##), are a basis for the space of covectors of covectors. A standard theorem says that (for finite dimensional spaces) this space is naturally isomorphic to the original vector space. Otherwise said, the dual space of the dual space of a vector space is naturally isomorphic to the vector space itself. One can see this by observing that the vector ##v## defines a linear map on covectors by ##v(L) = L(v)##.
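    For instance, continuing the small numerical example above: with ##v = 2e_1 + 3e_2## and ##L = 4π_{1} - π_{2}##, the double-dual element determined by ##v## sends ##L## to ##v(L) = L(v) = 2\cdot 4 + 3\cdot (-1) = 5## - the same number as before, now read as a function of ##L## with ##v## held fixed.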

    One final but crucial point: A finite dimensional vector space and its dual space (space of covectors) are isomorphic but not naturally isomorphic. There is no handy isomorphism between them the way that there is a natural isomorphism between the vector space and its double dual. One way to define an isomorphism is with an inner product. The covector corresponding to the vector ##v## is ##L_{v}(w) = <v,w>##.
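    As a simple illustration of this last point (assuming the standard inner product on ##\mathbb{R}^2##): the vector ##v = (3, 1)## corresponds to the covector ##L_{v}(w) = 3w_1 + w_2##. Choosing a different inner product would attach a different covector to the same ##v##, which is one way to see that this identification depends on extra structure and is not natural.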
     
    Last edited: Mar 8, 2016
  13. Mar 8, 2016 #12
    Thanks Lavinia ... very clear and VERY helpful ...

    Peter
     
  14. Mar 8, 2016 #13
    I'll just add that if you ever have occasion to deal with an infinite-dimensional vector space V (for instance, a countable-dimensional vector space having as basis

    B = {e_j | j = 1, 2, 3, ...}​

    ), then the (ordinary algebraic) dual is not isomorphic to the original vector space. Instead, the dimension of the dual has a larger cardinality than the dimension of V:

    dim(V*) > dim(V).​
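    A standard example along these lines: if V is the space of real sequences with only finitely many nonzero terms (so that B = {e_j} above is a basis), then every sequence (a_1, a_2, a_3, ...) whatsoever defines a linear function on V by f(v) = Σ_j a_j v_j (a finite sum for each v). So V* can be identified with the space of all sequences, which has no countable basis, and hence dim(V*) > dim(V).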

    Also, note that in many cases when an infinite-dimensional vector space V has a topology, the only dual space one is interested in is the vector space of continuous linear functions on V. In this case, the continuous dual V_c* might have the same dimension as the original vector space.

    For details on both the algebraic dual and the continuous dual, this is a good reference: https://en.wikipedia.org/wiki/Dual_space.
     
  15. Mar 8, 2016 #14
    Thanks for the post zinq ... definitely helpful and interesting as I do want to try to cover the case of infinite dimensional vector spaces ... ...

    Thanks for the useful reference ...

    Peter
     