# Need general algorithm for finding dual bases

hi,

This is my first post on this site, so please excuse me (and correct me) if I'm not posting according to the guidelines.

I'm currently studying linear transformations, and I'm a bit shaky on dual vector spaces. I understand the definition but am not sure how to apply it. What exactly is the connection between a regular basis and its dual basis? I know the $\delta_{ij}$ definition, but using it directly seems to require guessing the right functionals, and that's not always an option.

So, in short: is there a general algorithm for computing a dual basis, assuming the original vector space and a particular basis are known?

thanks

## Answers and Replies

Fredrik
Staff Emeritus
Science Advisor
Gold Member
What do you mean by "computing" a dual basis? Finding the components of the dual basis vectors in some other basis? Which basis would that be?

HallsofIvy
Science Advisor
Homework Helper
Here's my interpretation: given an n-dimensional vector space V, its "dual space" V* is the set of all linear functions from V to its underlying field. Then, given any basis for V, there exists a corresponding "dual basis", that is, a basis for V*.

If the basis for V is $\{v_1, v_2, \cdot\cdot\cdot, v_n\}$, then the corresponding "dual basis" is $\{f_1, f_2, \cdot\cdot\cdot, f_n\}$, where each $f_i$ is defined by "$f_i(v_i)= 1$ and, if $j\ne i$, $f_i(v_j)= 0$"; that is, $f_i(v_j)= \delta_{ij}$.

I have no idea why donkeykong123 thinks that definition requires an "intuitive" solution, nor what he means by a "general algorithm". The definition looks as "algorithmic" as any to me!
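That definition does in fact yield a general algorithm, which may be what the question is asking for: write the basis vectors as the columns of a matrix $B$; the rows of $B^{-1}$ are then the coordinate rows of the dual functionals, since $B^{-1}B = I$ says precisely that $f_i(v_j) = \delta_{ij}$. A minimal sketch of the 2D case (the function names are my own, and the cofactor inverse is used only to keep the example self-contained; for general $n$ one would invert $B$ by Gaussian elimination):

```python
# Recipe: put the basis vectors as COLUMNS of B; the rows of B^{-1}
# are the dual basis functionals, because (B^{-1} B)_{ij} = delta_{ij}
# is exactly the condition f_i(v_j) = delta_{ij}.

def dual_basis_2d(v1, v2):
    """Return rows f1, f2 with f_i(v_j) = delta_ij (2D case only)."""
    a, c = v1                      # B = [[a, b],
    b, d = v2                      #      [c, d]], columns v1, v2
    det = a * d - b * c
    if det == 0:
        raise ValueError("the given vectors are not a basis")
    # Cofactor formula: B^{-1} = (1/det) * [[d, -b], [-c, a]]
    f1 = (d / det, -b / det)       # row 1 of B^{-1}
    f2 = (-c / det, a / det)       # row 2 of B^{-1}
    return f1, f2

def evaluate(f, v):
    """Apply the functional f (a coordinate row) to the vector v."""
    return sum(fi * vi for fi, vi in zip(f, v))

v1, v2 = (1.0, 1.0), (1.0, -1.0)
f1, f2 = dual_basis_2d(v1, v2)
print(f1, f2)                              # (0.5, 0.5) (0.5, -0.5)
print(evaluate(f1, v1), evaluate(f1, v2))  # 1.0 0.0
print(evaluate(f2, v1), evaluate(f2, v2))  # 0.0 1.0
```

The same recipe works in any dimension: invert the matrix whose columns are the given basis vectors and read the dual basis off its rows.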


There is an important idea here that hasn't come out yet.

A finite-dimensional vector space is linearly isomorphic to its dual space (in particular, they have the same dimension), where the dual space is the space of linear maps from the vector space into its base field.

But there is no natural or canonical isomorphism: every isomorphism requires extra information to define. If one has a basis, one can define an isomorphism by mapping each basis element to its dual functional. More generally, if one defines an inner product on the vector space, one gets an isomorphism by mapping each $v$ to the linear map $\langle v, \cdot \rangle$. Under this rule, an orthonormal basis is mapped to its dual basis.

Conversely, if one has a basis, one gets an inner product by declaring that basis to be orthonormal. So isomorphisms of V with its dual correspond to choices of an inner product.
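In coordinates, the map $v \mapsto \langle v, \cdot \rangle$ just described can be sketched as follows (an illustrative assumption on my part: the inner product is represented by its Gram matrix $G_{ij} = \langle e_i, e_j \rangle$ in the chosen basis):

```python
# If v has coordinate column x, the functional <v, .> has coordinate
# row (G x)^T, since <v, w> = x^T G y for w with coordinates y.
# With G = I (an orthonormal basis) each basis vector maps exactly
# to its dual functional, as stated above.

def matvec(G, x):
    return tuple(sum(g * xj for g, xj in zip(row, x)) for row in G)

def flat(G, x):
    """Coordinate row of the functional <v, .>, for v with coordinates x."""
    return matvec(G, x)            # G is symmetric, so (G x)^T = x^T G

I2 = ((1.0, 0.0), (0.0, 1.0))      # Gram matrix of an orthonormal basis
e1, e2 = (1.0, 0.0), (0.0, 1.0)
print(flat(I2, e1), flat(I2, e2))  # (1.0, 0.0) (0.0, 1.0): basis -> dual basis

G = ((2.0, 1.0), (1.0, 3.0))       # a different positive-definite inner product
print(flat(G, e1))                 # (2.0, 1.0): e1 lands somewhere else in V*
```

Changing $G$ changes where each vector lands in $V^*$, which is exactly the point made above: the isomorphism depends on the chosen inner product.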

And if there's one, and only one, inner product defined, then that isomorphism is natural (canonical), isn't it?


Not sure what you mean. But there is never only one inner product.

I'm basing this on Bowen & Wang, pp. 205-207:

"Given any inner product on V, there exists a unique isomorphism

$$G : V \to V^*$$

"which is induced by the inner product in such a way that

$$\left \langle G \textbf{u},\textbf{v} \right \rangle = \textbf{u} \cdot \textbf{v}, \;\forall \, \textbf{u},\textbf{v} \in V$$

"Because of this theorem, if a particular inner product is assigned on V, then we can identify V with V* by suppressing the notation for the isomorphisms G and $G^{-1}$. In other words we regard a vector v also as a linear function on V

$$\left \langle \textbf{u},\textbf{v} \right \rangle \equiv \textbf{u} \cdot \textbf{v}$$

"According to this rule the reciprocal basis is identified with the dual basis and the inner product becomes the scalar product. However, since a vector space can be equipped with many inner products, unless a particular inner product is chosen, we cannot identify V with V* in general."

http://repository.tamu.edu/handle/1969.1/2502

I stand to be corrected, but I thought the Euclidean translation space (or whatever it's called) of elementary vector analysis was an example of a space with only one inner product defined on it, namely:

$$\textbf{u} \cdot \textbf{v} = \sum_{k=1}^{n} u_k v_k = \left \| \textbf{u} \right \| \left \| \textbf{v} \right \| \cos \left ( \theta \right )$$

where theta is the angle between u and v.
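That identity is easy to check numerically for the standard dot product (the particular vectors here are my own illustrative choice):

```python
import math

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

u, v = (3.0, 0.0), (1.0, 1.0)
# Recover theta from the dot product, then confirm the identity closes.
theta = math.acos(dot(u, v) / (norm(u) * norm(v)))
print(math.degrees(theta))   # approximately 45 degrees for these vectors
# Both sides of u . v = ||u|| ||v|| cos(theta) agree to rounding error:
print(abs(dot(u, v) - norm(u) * norm(v) * math.cos(theta)) < 1e-12)  # True
```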

In post #22 of this thread, DrGreg writes:

"The point is that an inner product space is a vector space with an inner product. In that context it is understood there is just one inner product, the one used to define the space in the first place. There's no reason why you can't take the same vector space and give it a different inner product, and thus define a different inner product space."

https://www.physicsforums.com/showthread.php?t=341621&page=2

As far as I knew, in the most commonly encountered applications, only one inner product is defined.

By natural/canonical, I mean an isomorphism which is unique, an obvious and unambiguous choice, by virtue of something intrinsic to the structure of the space, such as (in this case) the one and only inner product function defined on it. I'm paraphrasing Bowen & Wang, p. 214.


As I said, given an inner product you get an isomorphism, defined as above.
Uniqueness here is a little pedantic: it just says that the formula determines the isomorphism.

But in mathematics inner products are usually not considered intrinsic to the underlying vector space. The algebraic structure of the vector space is intrinsic, but the inner product is an add-on; there is no unique or preferred inner product.

For instance, the differential of a smooth function defines at each point a linear map from the tangent space into the base field. But there is no preferred way to identify the differential with a gradient vector; for that you would need a smooth inner product on each tangent space.


I could be mistaken, but the impression I got was that these authors take a slightly different perspective, one in which the inner product is part of the given, intrinsic structure of an inner product space; and if only one inner product is defined, then it induces (if that's the right word) this natural isomorphism between V and V*. I think D. H. Griffel also describes it as a natural isomorphism.

The sense in which vector spaces have "never only one inner product" seems rather different in three cases: (1) a vector space on which no inner product has been defined; (2) an inner product space on which multiple inner products have been defined; and (3) an inner product space on which only one inner product has been defined (even if others could potentially have been).

OK, what you are saying is fair. An inner product space is a vector space together with an inner product, and that inner product determines an isomorphism between the vector space and its dual.

There is no disagreement here.