# The Dual Space

This confuses me. Say we have $\pi_1(x,y)=x$ and $\pi_2(x,y)=y$. Then every function $\varphi = \alpha \pi_1+\beta \pi_2$ is a vector of the dual space. So we get all functions $\varphi(x,y)=\alpha \cdot x + \beta \cdot y$. You can now prescribe values for $\varphi$ at given points and see whether you can find values $\alpha,\beta$ which will do.
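As a hedged numerical aside (function names are my own, not from the thread): the "find $\alpha,\beta$" step can be carried out explicitly. Every linear functional on $\mathbb{R}^2$ has the form $\varphi(x,y)=\alpha x+\beta y$, and given desired values at two linearly independent points we can always solve for $(\alpha,\beta)$.

```python
# Sketch: recover the unique linear functional phi(x, y) = alpha*x + beta*y
# taking prescribed values v1, v2 at two independent points p1, p2.

def solve_functional(p1, v1, p2, v2):
    """Return (alpha, beta) with phi(p1) = v1 and phi(p2) = v2,
    via Cramer's rule on the 2x2 system."""
    (x1, y1), (x2, y2) = p1, p2
    det = x1 * y2 - x2 * y1          # nonzero iff p1, p2 are independent
    if det == 0:
        raise ValueError("points are linearly dependent")
    alpha = (v1 * y2 - v2 * y1) / det
    beta = (x1 * v2 - x2 * v1) / det
    return alpha, beta

alpha, beta = solve_functional((1, 0), 7.0, (0, 1), -2.0)
phi = lambda x, y: alpha * x + beta * y
assert (alpha, beta) == (7.0, -2.0)
assert phi(1, 0) == 7.0 and phi(0, 1) == -2.0
```

The two asserts confirm the recovered functional really takes the prescribed values at the chosen points.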
Yes, your explanation is clearer than my attempt. Thanks, I am seeing it a little more clearly now.

Can I keep this thread going? (I feel I have made some progress)

So the dual vector bases are chosen (from countless possibilities) precisely for the property that they are orthogonal to the vector bases?

And what about spaces with higher dimensions... are all the dual bases likewise orthogonal to the vector bases, or does it get more complex? (Is that where the Kronecker delta function applies?)

lavinia
Gold Member
Can I keep this thread going? (I feel I have made some progress)

So the dual vector bases are chosen (from countless possibilities) precisely for the property that they are orthogonal to the vector bases?

And what about spaces with higher dimensions... are all the dual bases likewise orthogonal to the vector bases, or does it get more complex? (Is that where the Kronecker delta function applies?)
Dual spaces are not orthogonal to the underlying vector space. This in fact makes no sense since they are different vector spaces. It only makes sense to say that two vectors are orthogonal if they are in the same vector space and this vector space has an inner product.

Dual spaces are simply the space of linear maps from a vector space into the field of scalars e.g. the real numbers in the case of real vector spaces. They do not have inner products by themselves so there is no intrinsic idea of angle or length. An inner product must be chosen as an additional feature. It is extra.

If one chooses a basis for a vector space then there is a dual basis whose values on the basis of the vector space can be described by the Kronecker delta. But this use of the Kronecker delta does not come from an inner product. It is merely a shorthand way of describing the values of the dual basis on the underlying basis vectors.

However, given a basis one can define an inner product on the vector space by declaring the basis vectors to be orthogonal and of length 1. So every choice of a basis determines an inner product. But this inner product is on the vector space, not on the dual space. However, one can use this to define a second inner product on the dual space by declaring the dual basis to be orthogonal and of length 1.

In general, given two vector spaces, one can consider the space of all linear maps from one of the vector spaces into the other. These linear maps themselves form a vector space, since one can add them and multiply them by scalars and still get linear maps. The dual space is the special case where the target vector space is the one-dimensional vector space of scalars.
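As a small numerical aside (the basis $b_1=(2,1)$, $b_2=(1,1)$ is my own example, not from the thread), the Kronecker-delta relation between a basis and its dual basis can be checked directly: the dual basis covectors are the rows of the inverse of the matrix whose columns are the basis vectors.

```python
# Sketch: for basis b1=(2,1), b2=(1,1) of R^2, build the dual basis and
# verify ei*(bj) = delta_ij -- bookkeeping only, no inner product involved.

basis = [(2, 1), (1, 1)]                        # b1, b2 as columns
det = basis[0][0]*basis[1][1] - basis[1][0]*basis[0][1]
dual = [( basis[1][1]/det, -basis[1][0]/det),   # covector e1*
        (-basis[0][1]/det,  basis[0][0]/det)]   # covector e2*

def apply(cov, vec):                            # a covector acting on a vector
    return cov[0]*vec[0] + cov[1]*vec[1]

for i, cov in enumerate(dual):
    for j, b in enumerate(basis):
        assert apply(cov, b) == (1.0 if i == j else 0.0)
```

The nested loop is exactly the Kronecker delta statement: each dual basis covector gives 1 on its own basis vector and 0 on the other.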

So what is the connection between the vector space and its dual space?

Do they share the same origin?

If, as a simple example, the vector space is the set of all real number pairs, is its dual space perhaps the set of all number pairs where one of the pair is zero? (Meaning [x,y] would be mapped to either [x,0] or [0,y].)

And, if the origin is shared by both spaces (the vector space and its dual), is it possible to take any vector and, starting from the origin, reach it either by adding vectors or (the scenic route) by adding covectors?

PeterDonis
Mentor
So what is the connection between the vector space and its dual space?
The definition of the dual space.

You appear to be confused by the fact that the dual space is also a vector space. But I think that's because you are confused about what a "vector space" is, mathematically. Mathematically, a vector space is not "a space of little arrows". It is anything that satisfies the axioms of a vector space.

The dual space to a vector space is a space of linear maps. And it just so happens that a space of linear maps satisfies the vector space axioms, so it is a vector space. But that doesn't mean a space of linear maps is a space of little arrows, or points, or that it is somehow "the same space" as the vector space it's a dual of.

With that in mind, let's look at the questions you are asking:

Do they share the same origin?
What does this even mean? Suppose the original vector space is $\mathbb{R}^2$--roughly speaking, the set of points in a plane viewed as a vector space (so each vector can be thought of as an arrow from the origin to a given point). (Your "the set of all number pairs" is basically the same thing.) The dual space is then the space of linear maps from $\mathbb{R}^2$ to $\mathbb{R}$. The "origin" of the dual space, as a member of the dual space, is then a linear map, not a point in $\mathbb{R}^2$. So what does it even mean to ask whether it is "the same" as the origin of the original vector space?

is perhaps its Dual Space the set of all number pairs where one of the pair is zero?
No. See above.

(meaning [x,y] would be mapped to either [x,0] or [0,y])
Neither. See above.

fresh_42
Mentor
So what is the connection between the vector space and its dual space?
One is a set of arrows and the other a set of functions, which attach a number to the arrows.
Do they share the same origin?
No. Only the dimension. $(x,y)$ is a vector, $(x,y)\longrightarrow f(x,y)=\alpha x+\beta y$ is a dual vector.
If, as a simple example, the vector space is the set of all real number pairs, is its dual space perhaps the set of all number pairs where one of the pair is zero? (Meaning [x,y] would be mapped to either [x,0] or [0,y].)
No. These are all vectors. But if you define a function $f\, : \,(x,y) \longmapsto x$ then you have a dual vector, namely $f$, which has no second component at all, zero or otherwise.
And ,if the origin is shared by both spaces (the vector and its dual) is it possible to take any vector and ,starting from the origin go to it by either adding vectors or (the scenic route ) by adding covectors?
No. That's confusing. You are confusing a domain (vectors) with the functions on that domain (covectors).

Suppose we have any vector in the vector space; say (3,4), just to have a concrete example to keep it simple for me....

If a covector acts on (3,4), are there an infinite number of possible results of covectors acting on (3,4)?

Can the result be the number 88?

Would another covector give a different number, say 56?

PeterDonis
Mentor
If a covector acts on (3,4), are there an infinite number of possible results of covectors acting on (3,4)?
Since a covector maps pairs (x, y) to numbers, and since there are an infinite number of numbers, the answer to this would be yes.

Can the result be the number 88?
Sure, why not?

Would another covector give a different number, say 56?
Meaning, another covector acting on the same pair (3, 4)? Yes, there will be a covector that gives 56 acting on that pair.

fresh_42
Mentor
Sure, no problem. If we define $f(3,4)=\alpha\cdot 3+ \beta \cdot 4 = 88$ then we still have infinitely many possibilities to choose $\alpha,\beta$.

At the risk of confusing you even more, let me explain where the names come from. Let us choose $(\alpha,\beta) = (22, 5.5)$. Then $$f(3,4)=22\cdot 3+ 5.5 \cdot 4 = (22,5.5) \cdot \begin{bmatrix}3\\ 4 \end{bmatrix} = 88$$ Hence the covector $(22,5.5)$ maps the vector $(3,4)$ onto the real number $88$. The same is true for the covector $(0,22)$, which results in $88$, too: $$g(3,4)=0\cdot 3+ 22 \cdot 4 = (0,22) \cdot \begin{bmatrix}3\\ 4 \end{bmatrix} = 88$$

So in a way our functions are represented by the covectors $(\alpha,\beta)=(22,5.5)$ for the function $f$ and by $(\alpha,\beta)=(0,22)$ for the function $g$. Now both, $(3,4)$ as well as $(\alpha,\beta)$ are written as vectors. However, there is a small but important difference:

Vector $(x,y) = (3,4)$ is an arrow in the real Euclidean plane, starting at the origin and ending at $(3,4)$. We only write the endpoint, because it is more convenient and indicates the direction, so that we can attach this arrow to any point other than the origin, too.

Covector $(\alpha,\beta)=(22,5.5)$ is also just a brief notation, but this time it abbreviates something different, namely an instruction rather than an arrow. And the instruction reads: multiply any vector $(x,y)$ by $(22,5.5)$. The entire instruction, with the order to multiply, is the covector. That it can be written like a vector is only a matter of notation.
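A quick numerical check of the two covectors in this example (a sketch of mine, not part of the original post): both send the vector $(3,4)$ to $88$, illustrating that many different covectors can agree at a single vector.

```python
# Sketch: represent a covector as the function it abbreviates and evaluate
# both f = (22, 5.5) and g = (0, 22) on the vector (3, 4).

def covector(alpha, beta):
    """Build the linear map (x, y) -> alpha*x + beta*y."""
    return lambda x, y: alpha * x + beta * y

f = covector(22, 5.5)
g = covector(0, 22)
assert f(3, 4) == 88.0   # 22*3 + 5.5*4
assert g(3, 4) == 88     # 0*3 + 22*4
```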

Thanks for your help and patience.

Yes, it is confusing, but I feel I am laying the groundwork for removing some of the preconceptions I have in this area.

It will take a while for these lessons to sink in and I will have to familiarize myself gradually with this new territory :)

robphy
Homework Helper
Gold Member
This might be useful:

A covector can be visualized as a stack of equally spaced planes;
it maps a vector to the number of planes pierced by the vector.

A physical example is the electric field as a field of covectors, which could be thought of as linear approximations to the equipotential surfaces. Infinitesimal displacement vectors are mapped to voltage differences.
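As a rough numerical sketch of this picture (my own illustration, assuming unit spacing between the planes): the level sets $l(x,y)=n$ for integer $n$ are equally spaced parallel lines in the plane, and the number of whole such lines a vector crosses is $\lfloor |l(v)| \rfloor$.

```python
# Sketch: count the whole integer level sets of a covector l = (alpha, beta)
# crossed by a vector v drawn from the origin.
import math

def planes_pierced(cov, vec):
    value = cov[0]*vec[0] + cov[1]*vec[1]   # l(v)
    return math.floor(abs(value))           # whole level sets crossed

l = (1.0, 0.0)                 # the covector "take the x-coordinate"
assert planes_pierced(l, (3.5, 10.0)) == 3   # crosses x=1, x=2, x=3
```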

mathwonk
Homework Helper
what lavinia says is of course correct, but some of us stretch the meaning of the word "orthogonal" so it applies to a relation between elements of a given vector space and elements of its dual space. I.e. given a vector space V and its dual space V*, we say that a linear functional f in V* is orthogonal to a vector x in V precisely when f(x) = 0. Then if we are also given an inner product on V, which can be viewed as a map from V to V*, it turns out that a vector y in V is orthogonal to x in V, in the sense that their dot product is zero, precisely when the image fy of y in V* is orthogonal to x as above, i.e. when fy(x) = 0. This is obvious since by definition, fy(x) = <y,x>.

In this setting, even without an inner product on V, a subspace W of V has an orthogonal complement $W^\perp$ in V*, where $W^\perp$ consists of those functionals in V* which vanish identically on W. I.e. the orthocomplement of a subspace of V is intrinsically defined as a subspace of V*. But maybe the books you are using don't use this abstraction.
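A small sketch of this annihilator construction, with $W$ the line spanned by $(1,2)$ in $\mathbb{R}^2$ (example mine): the covectors $(\alpha,\beta)$ vanishing on $W$ satisfy $\alpha + 2\beta = 0$, so they form the line spanned by $(2,-1)$.

```python
# Sketch: W^perp for W = span{(1, 2)} consists of covectors (alpha, beta)
# with alpha*1 + beta*2 = 0, i.e. scalar multiples of (2, -1).

w = (1, 2)

def annihilates(cov, vec, tol=1e-12):
    """True if the covector vanishes on the vector."""
    return abs(cov[0]*vec[0] + cov[1]*vec[1]) < tol

perp_basis = (2.0, -1.0)
assert annihilates(perp_basis, w)
# any scalar multiple also annihilates W, since the condition is linear:
assert annihilates((5 * perp_basis[0], 5 * perp_basis[1]), w)
# a covector outside W^perp does not:
assert not annihilates((1.0, 0.0), w)
```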

This might be useful:

A covector can be visualized as a stack of equally spaced planes;
it maps a vector to the number of planes pierced by the vector.

A physical example is the electric field as a field of covectors, which could be thought of as linear approximations to the equipotential surfaces. Infinitesimal displacement vectors are mapped to voltage differences.
Is there a way that it can be said that a vector can give a real number when acting on a covector?

Can a vector also be considered as a function acting on a covector?

Does the same visualization apply (in reverse?)

lavinia
Gold Member
Is there a way that it can be said that a vector can give a real number when acting on a covector?

Can a vector also be considered as a function acting on a covector?

Does the same visualization apply (in reverse?)
If $v$ is a vector and $l$ is a covector then $l(v)$ is a number. Read the other way around, $v$ assigns the number $l(v)$ to each covector $l$, so $v$ acts as a linear map on covectors: a "cocovector".

If $v$ is a vector and $l$ is a covector then $l(v)$ is a number. Read the other way around, $v$ assigns the number $l(v)$ to each covector $l$, so $v$ acts as a linear map on covectors: a "cocovector".
So, are we saying that all the number pairs in a given vector space can be treated entirely interchangeably as either a vector or a covector, depending entirely on the way they are used?

If the number pairs are both, say, (3,4), can we use (3,4) as a covector to operate on the vector (3,4)? And if we use the vector (3,4) to operate on the covector (3,4), do we then say that the vector (3,4) is now being used as a covector, and so is a covector and no longer a vector?

fresh_42
Mentor
So, are we saying that all the number pairs in a given vector space can be treated entirely interchangeably as either a vector or a covector, depending entirely on the way they are used?
Yes, but this is misleading. Better to say: a tuple of numbers represents a vector according to some coordinate system, i.e. a choice of basis. Now vector spaces can consist of various types of objects: the commonly imagined arrows, functions, covectors, operators, differential forms, or whatever can be added and stretched.

It is not the pair which is used differently, it is the pair which is meant / defined differently. Usage comes from context, not the other way around.
If the number pairs are both, say, (3,4), can we use (3,4) as a covector to operate on the vector (3,4)? And if we use the vector (3,4) to operate on the covector (3,4), do we then say that the vector (3,4) is now being used as a covector, and so is a covector and no longer a vector?
Covectors are elements of the dual vector space, and as such they are again vectors. For finite dimensional vector spaces $V$ we have $V\cong V^*$ and $(V^*)^*\cong V$ (the latter canonically). But you should first learn what a vector space is! Take continuous functions as an example: $V=C^0(\mathbb{R})$.

PeterDonis
Mentor
If the number pairs are both, say, (3,4), can we use (3,4) as a covector to operate on the vector (3,4)? And if we use the vector (3,4) to operate on the covector (3,4), do we then say that the vector (3,4) is now being used as a covector, and so is a covector and no longer a vector?
You need to stop thinking in terms of "number pairs" and start thinking of what the number pairs represent.

First, as several of us have told you, you need to understand what a vector space is. As I said in post #30, a vector space is anything that satisfies the vector space axioms. What are those axioms? Different sources might organize them differently, but in a nutshell, you have the following:

(1) A field, which for this discussion we will take to be the real numbers, $\mathbb{R}$. Elements of the field are called "scalars".

(2) A set defined over this field. In this discussion, we have been using the set of ordered pairs of reals, i.e., $\mathbb{R}^2$. Elements of the set are called "vectors".

(3) Two operations on the set, called "addition" and "scalar multiplication". For $\mathbb{R}^2$, these operations are obvious: addition just adds the pairs, so $(a, b) + (c, d) = (a + c, b + d)$, and scalar multiplication just multiplies each number in the pair by the scalar, so $m (a, b) = (ma, mb)$.

(4) A set of properties that the operations must satisfy: we don't need to delve too deeply into this here, but they are basically the obvious properties that we expect addition and scalar multiplication to have for our examples, e.g., associativity, identity, inverse, commutativity of addition, multiplication distributive over addition, etc.
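As a toy illustration (a sketch only, using small integer samples so the equality checks are exact in machine arithmetic), the operations in (3) and a few of the axioms in (4) can be spot-checked directly:

```python
# Sketch: the two vector space operations on R^2 and spot checks of
# commutativity, distributivity, and additive inverses on random samples.
import random

def add(u, v):   return (u[0] + v[0], u[1] + v[1])
def smul(m, u):  return (m * u[0], m * u[1])

random.seed(0)
for _ in range(100):
    u = (random.randint(-9, 9), random.randint(-9, 9))
    v = (random.randint(-9, 9), random.randint(-9, 9))
    m = random.randint(-9, 9)
    assert add(u, v) == add(v, u)                              # commutativity
    assert smul(m, add(u, v)) == add(smul(m, u), smul(m, v))   # distributivity
    assert add(u, smul(-1, u)) == (0, 0)                       # additive inverse
```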

The Wikipedia article on vector spaces [1] discusses all this in more detail.

Now, given the above definition of a vector space, what is a "covector"? A covector is a linear map from a vector space into its underlying field. So in the case of the example we have been using, it is a linear map from $\mathbb{R}^2$ into $\mathbb{R}$. Now, it should be clear that, as has already been said in this discussion, any such linear map can be written as follows: $(x, y) \rightarrow \alpha x + \beta y$. Notice that we have just characterized the linear map by an ordered pair of real numbers. In other words, if our vector space is the set $\mathbb{R}^2$, then the space of all covectors--all linear maps from our vector space into its underlying field--is also the set $\mathbb{R}^2$. What's more, if we think about what it means to add two linear maps, or multiply a linear map by a real number, we will see that the space of all linear maps from $\mathbb{R}^2$ into $\mathbb{R}$ satisfies all the axioms of a vector space. So the space of all covectors is also a vector space.

What all this means is that the set $\mathbb{R}^2$, considered as a vector space, can be interpreted in two different ways: it can be interpreted as a set of ordered pairs $(x, y)$ that describe the locations of points in a plane, given an origin; or it can be interpreted as a set of linear maps $\alpha x + \beta y$ from ordered pairs $(x, y)$ to real numbers. So if we are talking about vector spaces, we can't just talk about $\mathbb{R}^2$ as a set of "number pairs". We have to be clear about whether we are using the number pairs to represent points, or linear maps.

And we can go even further. Suppose we take $\mathbb{R}^2$ to represent the set of linear maps $\alpha x + \beta y$ from ordered pairs to real numbers; i.e., each member of $\mathbb{R}^2$ is interpreted as the pair $(\alpha, \beta)$ that defines a linear map. Now pick some ordered pair $(x, y)$. This ordered pair will give us a real number $\alpha x + \beta y$ for every pair $(\alpha, \beta)$. In fact, since multiplication of reals is commutative, we could just as well write this number as $x \alpha + y \beta$, and we could write the linear map as $(\alpha, \beta) \rightarrow x \alpha + y \beta$. This looks just like the covector definition we gave above! All that has changed is that we have switched $(x, y)$ and $(\alpha,\beta)$. In other words, we have now used the ordered pair $(x, y)$ to define a linear map from the space of linear maps $(\alpha, \beta)$ to the real numbers! In other words, the set of ordered pairs $(x, y)$ can be viewed as the set of covectors of the vector space $(\alpha, \beta)$. This is what @lavinia was talking about in post #39.

What this is telling us is that, if we have two interpretations of $\mathbb{R}^2$ as a vector space, which interpretation we call "vectors" and which interpretation we call "covectors" is a matter of choice. Each interpretation--ordered pairs describing points, and ordered pairs describing linear maps--is "dual" to the other, and both satisfy all the vector space axioms so each one is a vector space, and each one is a covector space with respect to the other one. There is no "fact of the matter" about which one is the "real" vector space and which one is the "real" covector space. It all depends on what specific problem you are trying to solve and how you want to use these spaces and interpretations to solve it.

[1] https://en.wikipedia.org/wiki/Vector_space#Definition
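The vector/covector symmetry described above can be illustrated in a few lines (a sketch of mine): the same pairing $\alpha x + \beta y$ can be read as the covector $(\alpha,\beta)$ acting on the vector $(x,y)$, or as $(x,y)$ acting on $(\alpha,\beta)$.

```python
# Sketch: one symmetric pairing, two readings -- covector acting on vector,
# or vector acting on covector.

def pair(p, q):
    """Evaluate p[0]*q[0] + p[1]*q[1]; symmetric in its two arguments."""
    return p[0]*q[0] + p[1]*q[1]

v = (3, 4)       # "point" interpretation
c = (22, 5.5)    # "linear map" interpretation
assert pair(c, v) == pair(v, c) == 88.0
```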

WWGD
Gold Member
Ultimately, as a meta-comment, @geordief, you cannot always readily visualize mathematical concepts, and sometimes you just have to deal with the abstractions as best you can until they hopefully sink in some day. These concepts are not simple, nor readily comparable to things you may be familiar with.

You need to stop thinking in terms of "number pairs" and start thinking of what the number pairs represent. [...]
Thanks, you have gone to a lot of trouble. I think I am getting there.

Ultimately, as a meta-comment, @geordief, you cannot always readily visualize mathematical concepts, and sometimes you just have to deal with the abstractions as best you can until they hopefully sink in some day. These concepts are not simple, nor readily comparable to things you may be familiar with.
Yes, I have read that mathematics is often developed without a real-world application, and that the application, if it shows up, may come a long time after the mathematics was first formulated.

This covector/vector formulation does appear to me to have real-world applications, and I think that is what makes me dissatisfied if I cannot connect the mathematics with its application in as direct a way as possible (visually would be ideal, but I appreciate it may not always be possible).

And of course I also understand that it can be a very gradual process for an understanding of these concepts to sink in and become something like second nature.

I still feel I have made some progress, with everyone's help.
