# Coefficients of a vector regarded as a function of a vector

1. Mar 5, 2016

### Math Amateur

I am reading Sergei Winitzki's book: Linear Algebra via Exterior Products ...

I am currently focused on Section 1.6: Dual (conjugate) vector space ... ...

I need help in order to get a clear understanding of the notion or concept of coefficients of a vector $v$ as linear functions (covectors) of the vector $v$ ...

The relevant part of Winitzki's text reads as follows:

In the above text we read:

" ... ... So the coefficients $v_k, \ 1 \leq k \leq n$, are linear functions of the vector $v$ ; therefore they are covectors ... ... "

Now, how and in what way exactly are the coefficients $v_k$ a function of the vector $v$ ... ... ?

To indicate my confusion ... if the coefficient $v_k$ is a linear function of the vector $v$ then $v_k(v)$ must be equal to something ... but what? ... indeed what does $v_k(v)$ mean? ... further, what, if anything, would $v_k(w)$ mean where $w$ is any other vector? ... and further yet, how do we formally and rigorously prove that $v_k$ is linear? ... what would the formal proof look like?... ...

Hope someone can help ...

Peter

============================================================================

*** NOTE ***

To indicate Winitzki's approach to the dual space and his notation I am providing the text of his introduction to Section 1.6 on the dual or conjugate vector space ... ... as follows ... ...

#### Attached Files:

• ###### Winitzki - 2 - Section 1.6 - PART 2 ....png (one of three attached images)
Last edited: Mar 5, 2016
2. Mar 5, 2016

### Buzz Bloom

Hi Math:

One way to look at the components $v_i$ of $v$ with respect to the basis vectors $e_i$ (assuming the basis is orthonormal) is:

$v_i = v \cdot e_i$,

where $\cdot$ is the dot product.
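As a concrete illustration of this (a minimal sketch, assuming the standard orthonormal basis of $\mathbb{R}^3$, so that the dot product really does pick out coordinates):

```python
# Sketch: extracting the i-th coordinate of v as a dot product with
# the i-th standard basis vector (valid because the standard basis
# of R^3 is orthonormal).

def dot(u, w):
    """Euclidean dot product of two same-length sequences."""
    return sum(a * b for a, b in zip(u, w))

def coordinate(i, v):
    """v_i = v . e_i, where e_i is the i-th standard basis vector."""
    e_i = [1.0 if j == i else 0.0 for j in range(len(v))]
    return dot(v, e_i)

v = [2.0, -1.0, 5.0]
print([coordinate(i, v) for i in range(3)])  # recovers each coordinate of v
```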

Hope that helps.

Regards,
Buzz

3. Mar 5, 2016

### Math Amateur

Thanks for the help Buzz Bloom ...

... BUT ... while your interpretation $v_i (v) = v \cdot e_i$ works in a way ...

... it then defines $v_i$ as a function with only one domain value, namely $v$ ... and only one image namely $v \cdot e_i = v_i$ ...

Is that right?

Peter

4. Mar 6, 2016

### Buzz Bloom

Hi Math:

I am not sure I understand what is puzzling you. What other domain value than $v$ do you think might be plausible? Your use of the term "image" also seems odd.

The following are quotes from Wikipedia:

"In mathematics, an image is the subset of a function's codomain which is the output of the function on a subset of its domain."

"In mathematics, the codomain or target set of a function is the set $Y$ into which all of the output of the function is constrained to fall."

As I interpret these definitions, for a single-valued function the image is always unique. Do you think it might be possible for $v_i(v)$ to be a multi-valued function?

Regards,
Buzz

Last edited by a moderator: May 7, 2017
5. Mar 6, 2016

### lavinia

A function on a vector space is linear if $L(aV + bW) = aL(V) + bL(W)$ for arbitrary scalars $a$ and $b$ and arbitrary vectors $W$ and $V$.

If one has a basis for the vector space, then a linear function is completely determined by its values on the basis vectors. For instance, the function that assigns zero to all but the i'th basis vector and 1 to the i'th is linear; it just picks out the i'th coefficient of a vector with respect to this basis.
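In code (a sketch, representing vectors by their coefficient lists with respect to a fixed basis), the "pick out the i'th coefficient" functional and a spot-check of the linearity condition might look like:

```python
# Sketch: the coordinate functional pi_i, plus a numerical check of
# the linearity condition L(a*V + b*W) = a*L(V) + b*L(W).
# Vectors are represented by their coefficient lists in a fixed basis.

def pi(i):
    """Return the covector that picks out the i-th coefficient."""
    return lambda v: v[i]

def combine(a, v, b, w):
    """The vector a*v + b*w, computed coefficient-wise."""
    return [a * x + b * y for x, y in zip(v, w)]

L = pi(1)
v, w = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
a, b = 2.0, -3.0

# Linearity: L(a*v + b*w) == a*L(v) + b*L(w)
assert L(combine(a, v, b, w)) == a * L(v) + b * L(w)
```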

6. Mar 7, 2016

### Erland

Given a basis, each coordinate $v_k$ is uniquely determined by $\mathbf v$. This means that it is a function of $\mathbf v$. The purpose of the $\mathbf u+\lambda \mathbf v$ line is to prove that each of these functions is linear. The author actually only proves this for the first coordinate, but the same argument works for each $k$.

The author either defines a linear transformation $T:U\to V$ by the condition $T(\mathbf u + \lambda \mathbf v)=T(\mathbf u)+\lambda T(\mathbf v)$, for all $\mathbf u,\mathbf v\in U$ and $\lambda \in \Bbb C$ (or $\Bbb R$), or assumes it is known that this condition is equivalent to $T$ being a linear transformation.
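As a quick numerical sanity check of that single condition (a sketch, using a hypothetical 2x2 matrix as the linear map $T$):

```python
# Sketch: checking the condition T(u + lam*v) = T(u) + lam*T(v)
# for a sample linear map T given by a (hypothetical) 2x2 matrix.

def T(v):
    """A sample linear map R^2 -> R^2 given by the matrix [[1, 2], [3, 4]]."""
    return [1 * v[0] + 2 * v[1], 3 * v[0] + 4 * v[1]]

def add_scaled(u, lam, v):
    """u + lam*v, componentwise."""
    return [x + lam * y for x, y in zip(u, v)]

u, v, lam = [1.0, -2.0], [0.5, 3.0], 4.0
lhs = T(add_scaled(u, lam, v))
rhs = add_scaled(T(u), lam, T(v))
assert lhs == rhs
```

Of course, code can only check the condition for particular inputs; the equivalence with the usual two-part definition is a matter of proof.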

7. Mar 8, 2016

### Math Amateur

Thanks Buzz, Lavinia and Erland ... you have helped me gain an understanding of the issue that was bothering me ...

I also had a helpful post from Deveno on MHB ... so I thought I'd share part of it with you ...

The start of Deveno's post which contains the essence of his post reads as follows:

" ... ... The way I am used to seeing this "co-vector" defined is like so:

Suppose $v = \sum\limits_j v_je_j$, where $\{e_j\}$ is a basis (perhaps the standard basis, perhaps not). We define:

$\pi_i(v) = v_i$

(Note we have as many $\pi$-functions, as we have coordinates).

Thus $\pi_i: V \to F$, since $v$ is a vector, and $v_i$ is a scalar.... ... "

Another important point is made later in his post ... where he writes:

" ... ... Note that Winitzki is just naming the function by its image, something that is often done with functions (we often talk about "the function $x^2$" when what we really MEAN is "the squaring function"). What he really means is the function:

$v \mapsto v_i$ (function that returns the $i$-th coordinate of $v$ in some basis).

It is also important to note here that the function(s) we have defined here *depend on a choice of basis*, because the CO-ORDINATES of a vector depend on the basis used. ... ... "
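The basis dependence Deveno notes can be sketched numerically (assuming $\mathbb{R}^2$ and a second, hypothetical basis $\{(1,1), (1,-1)\}$): the same vector has different coordinates in different bases, so the coordinate functionals are genuinely different functions.

```python
# Sketch: the coordinate functions depend on the choice of basis.
# Coordinates of v in a basis {b1, b2} of R^2 via Cramer's rule.

def coords(v, b1, b2):
    """Coordinates (c1, c2) with v = c1*b1 + c2*b2."""
    det = b1[0] * b2[1] - b2[0] * b1[1]
    c1 = (v[0] * b2[1] - b2[0] * v[1]) / det
    c2 = (b1[0] * v[1] - v[0] * b1[1]) / det
    return (c1, c2)

v = (3.0, 1.0)
print(coords(v, (1.0, 0.0), (0.0, 1.0)))   # standard basis: (3.0, 1.0)
print(coords(v, (1.0, 1.0), (1.0, -1.0)))  # other basis: (2.0, 1.0)
```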

There is more to Deveno's post, but I have mentioned the main two points ...

Peter

Last edited: Mar 8, 2016
8. Mar 8, 2016

### Erland

On closer thought, I realize that this condition is in fact not equivalent to the standard definition of a linear transformation (for example, the one given by Lavinia in post #5). The condition does not imply that $T(\lambda \mathbf u)=\lambda T(\mathbf u)$, which is included in the ordinary definition. So either the author quoted in the OP made a mistake, or he used some more advanced reasoning.

9. Mar 8, 2016

### Samy_A

Are you sure that they are not equivalent?

Taking $u=0, v=0:\ T(0)=T(0+\lambda 0)=T(0)+\lambda T(0)=(1+\lambda )T(0)$ for any $\lambda$.
So $T(0)=0$.
Then $T(\lambda v)=T(0 +\lambda v)=T(0)+\lambda T(v)=\lambda T(v)$ for all $v \in U$ and all scalars $\lambda$.

10. Mar 8, 2016

### Erland

Yes, you are right... I guess I was tired

11. Mar 8, 2016

### lavinia

This description is the same as already explained. IMO the best way to think of a co-vector is as a linear map from a vector space into the field of scalars. This idea is independent of any basis.

However, if one has a basis then any covector is determined by its values on the basis vectors. This follows directly from the condition that the covector is a linear map.

If one writes the vector $v$ in terms of a basis as $v = \sum_{i}v_{i}e_{i}$ and if $L$ is a covector, then $L(v) = \sum_{i}v_{i}L(e_{i})$, and this shows that if one knows the values of $L$ on the $e_{i}$'s, one knows $L$ on any vector $v$.

It is important to notice that covectors form a vector space of their own - often called the dual space. If $L$ and $H$ are covectors then any linear combination of them $aL + bH$ is also a covector.

If one has a basis $e_{i}$ for the vector space, then a basis for the vector space of covectors, called the dual basis, is given by the linear maps $\pi_{i}$ defined by $\pi_{i}(e_{j}) = \delta_{ij}$. This is the covector that assigns 1 to the i'th basis vector and zero to all of the others, as mentioned already above. For each choice of basis $e_{i}$ one has a corresponding choice of basis $\pi_{i}$ for the vector space of covectors.
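A minimal sketch of the dual-basis property $\pi_{i}(e_{j}) = \delta_{ij}$ (assuming the standard basis of $\mathbb{R}^3$, with vectors given by coordinate tuples):

```python
# Sketch: the dual basis pi_i satisfies pi_i(e_j) = delta_ij,
# where e_j is the j-th standard basis vector of R^3.

n = 3
basis = [[1.0 if k == j else 0.0 for k in range(n)] for j in range(n)]

def pi(i):
    """The i-th dual basis covector: picks out the i-th coordinate."""
    return lambda v: v[i]

for i in range(n):
    for j in range(n):
        delta = 1.0 if i == j else 0.0
        assert pi(i)(basis[j]) == delta
```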

The covectors $v_{i}$ mentioned above are the same as the covectors $\pi_{i}$. So the function that picks out the i'th coordinate of a vector with respect to a basis is a covector.

The dual space to the space of covectors is also a vector space. One might call it the space of covectors of covectors. If one writes a covector as $\sum_{i}l_{i}\pi_{i}$, then the coordinate functions $L \mapsto l_{i}$ form a basis for the space of covectors of covectors. A standard theorem says that this space is naturally isomorphic to the original vector space. Otherwise said, the dual space of the dual space of a vector space is naturally isomorphic to the vector space itself. One can see this by observing that the vector $v$ defines a linear map on covectors by $v(L) = L(v)$.

One final but crucial point: A vector space and its dual space (the space of covectors) are isomorphic but not naturally isomorphic. There is no canonical isomorphism between them the way there is a natural isomorphism between the vector space and its double dual. One way to define an isomorphism is with an inner product: the covector corresponding to the vector $v$ is $L_{v}(w) = \langle v,w \rangle$.
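The inner-product construction can be sketched as follows (assuming the standard inner product on $\mathbb{R}^3$): each vector $v$ yields a covector $L_v(w) = \langle v,w \rangle$, and that covector is linear in $w$.

```python
# Sketch: the covector L_v(w) = <v, w> induced by the standard
# inner product on R^3, with a spot-check of its linearity in w.

def inner(v, w):
    """Standard inner product on R^3."""
    return sum(a * b for a, b in zip(v, w))

def L(v):
    """The covector corresponding to v under the inner product."""
    return lambda w: inner(v, w)

v = [1.0, 2.0, 3.0]
w1, w2 = [4.0, 0.0, -1.0], [0.0, 5.0, 2.0]
a, b = 2.0, -1.0

Lv = L(v)
combo = [a * x + b * y for x, y in zip(w1, w2)]
assert Lv(combo) == a * Lv(w1) + b * Lv(w2)
```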

Last edited: Mar 8, 2016
12. Mar 8, 2016

### Math Amateur

Thanks Lavinia ... very clear and VERY helpful ...

Peter

13. Mar 8, 2016

### zinq

I'll just add that if you ever have occasion to deal with an infinite-dimensional vector space $V$ (for instance, a countable-dimensional vector space having as basis $B = \{e_j \mid j = 1, 2, 3, \dots\}$), then the (ordinary algebraic) dual is not isomorphic to the original vector space. Instead, the dimension of the dual has a larger cardinality than the dimension of $V$:

$\dim(V^*) > \dim(V)$.

Also, note that in many cases when an infinite-dimensional vector space $V$ has a topology, the only dual space one is interested in is the vector space of continuous linear functions on $V$. In this case, the continuous dual $V_c^*$ might have the same dimension as the original vector space.

For details on both the algebraic dual and the continuous dual, this is a good reference: https://en.wikipedia.org/wiki/Dual_space.

14. Mar 8, 2016

### Math Amateur

Thanks for the post zinq ... definitely helpful and interesting as I do want to try to cover the case of infinite dimensional vector spaces ... ...

Thanks for the useful reference ...

Peter