Simple question about the dual space

  • Thread starter: etotheipi
  • Tags: Dual Space
In summary: a vector space ##V## with basis ##v_1, v_2, \dots, v_n## has a dual space ##V^*## which consists of linear functionals ##\varphi: V \rightarrow \mathbb{R}##. The dual basis is defined by ##\varphi^i(v_j) = \delta_{ij}##. The action of any dual vector on a vector in ##V## can be written as a pairing ##\langle \varphi, v \rangle = \varphi(v)##, and a choice of basis turns this into an inner product on ##V##. However, this induced inner product need not agree with a given inner product on ##V## unless the vector space is finite-dimensional and the basis is orthonormal.
  • #1
etotheipi
Rookie question; for a vector space ##V##, with basis ##v_1, v_2, \dots, v_n##, the dual space ##V^*## is the set of linear functionals ##\varphi: V \rightarrow \mathbb{R}##. Dual basis will satisfy ##\varphi^i(v_j) = \delta_{ij}##. Is the action of any dual vector on any vector always an inner product ##\varphi(v) = \langle \varphi, v \rangle##?
 
Last edited by a moderator:
  • #2
I'm a little confused by your question: the equation ##\langle\varphi,v\rangle=\varphi(v)## cannot represent an inner product since the two inputs to ##\langle\cdot,\cdot\rangle## come from different vector spaces.

For finite-dimensional ##V##, picking a basis for ##V## determines an isomorphism ##V\cong V^*## (take the basis to the corresponding dual basis). If ##x\mapsto\varphi_x## is this isomorphism taking a vector to its corresponding dual vector, then ##\langle x,y\rangle := \varphi_x(y)## is an inner product on ##V.## You can check that this is symmetric and positive-definite.

In fact, it is the same inner product that you get by declaring your basis to be orthonormal, and extending to all of ##V## by linearity.
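
Here is a quick NumPy sketch of that construction (a minimal illustration, not from the original post; the basis matrix ##B## below is an arbitrary choice). The rows of ##B^{-1}## act as the dual basis functionals, and the induced pairing ##\langle x,y\rangle = \varphi_x(y)## is just the dot product of coordinate vectors with respect to ##B##:

```python
import numpy as np

B = np.array([[1.0, 1.0],
              [0.0, 2.0]])        # columns: an arbitrary (non-orthonormal) basis of R^2
B_inv = np.linalg.inv(B)          # rows: the dual basis functionals, since B_inv @ B = I

x, y = np.array([3.0, 1.0]), np.array([-1.0, 2.0])

coords = lambda v: B_inv @ v      # coordinates of v in the basis B
induced = coords(x) @ coords(y)   # <x, y> := phi_x(y)

# Same inner product as declaring the basis orthonormal: Gram matrix (B^-1)^T (B^-1).
G = B_inv.T @ B_inv
assert np.isclose(induced, x @ G @ y)
print(induced)
```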

Does this answer your question?
 
  • Like
Likes Abhishek11235 and etotheipi
  • #3
An inner product would require both arguments to come from the same vector space.

In any field, e.g. ##\mathbb{R}##, we have ##0\neq 1##, hence we can always define the ##\varphi^i##, and if the vector space is finite dimensional, those linear forms span ##V^*##. Things get more interesting when the dimension is no longer finite. I assume that we will need the axiom of choice a lot, and I don't see how definiteness will work.
 
  • Like
Likes etotheipi
  • #4
If ##V, W## are finite-dimensional ##k##-vector spaces of the same dimension, a function ##\langle\cdot,\cdot\rangle: V\times W \to k## is called a non-singular pairing if it is symmetric, bilinear, and "positive definite". (By that I mean that the only ##w## that pairs to zero with every ##v## is ##w=0##, and vice versa.) There is always a natural such pairing when ##W = V^*##, as you know. In fact, this is essentially the only case where there is such a pairing, in the following sense: if ##V\times W \to k## is a non-singular pairing, then the map taking a vector ##w \in W## to the linear function ##\langle\cdot, w\rangle: V \to k## is an isomorphism from ##W## to ##V^*##.

If ##W = V##, a non-singular pairing ##V\times V \to k## is called an inner product on ##V##, and thus defines an isomorphism from ##V## to ##V^*##. Conversely, any isomorphism from ##V## to ##V^*## defines a non-singular pairing, i.e. an inner product, ##V\times V \to k##.

Edit: oops, this doesn't make sense. What do I mean by "symmetric" for a pairing between different spaces? I need to think about this and repost tomorrow after I get some sleep on it. Shows what happens when I claim "plausible" things without proving them.

Maybe I just want bilinear and non-singular ("positive definite"). But then why would it be symmetric when ##V=W##? Maybe it won't have to be. But the point is supposed to be that an inner product is a special case of a bilinear pairing, and defines an isomorphism with the dual space. But where does symmetry come in?
 
Last edited:
  • Like
Likes etotheipi
  • #5
Infrared said:
For finite-dimensional ##V##, picking a basis for ##V## determines an isomorphism ##V\cong V^*## (take the basis to the corresponding dual basis). If ##x\mapsto\varphi_x## is this isomorphism taking a vector to its corresponding dual vector, then ##\langle x,y\rangle := \varphi_x(y)## is an inner product on ##V.## You can check that this is symmetric and positive-definite.

Ah, okay, I think I understand. For something like a Hilbert space ##V## and its dual ##V^*##, we have an isomorphism ##u \mapsto \varphi_u##. Then, ##\varphi_u(v)## is the same as taking the inner product of ##u## with the vector ##v##, both of which lie in the space ##V##; i.e. ##\varphi_u(v) = \langle u | v \rangle##. I do suppose this is limited to finite-dimensional spaces, and we of course need ##V## to be an inner product space.
 
  • #6
Well, first of all, I wouldn't use the term "Hilbert space" when you only mean to refer to finite-dimensional vector spaces.

Anyway, I think you're still mixing some concepts.

If you have a basis ##v_1,\ldots,v_n## for ##V##, then you have a corresponding dual basis ##\varphi^1,\ldots,\varphi^n## for ##V^*,## defined by the rule ##\varphi^i(v_j)=\delta^i_j,## and by linearity you get an isomorphism ##V\to V^*, v\mapsto \varphi_v.## You do not need an inner product to start with to define this. If you already have an inner product ##\langle\cdot,\cdot\rangle## on ##V##, there is no guarantee that ##\langle u,v\rangle=\varphi_u(v).## The RHS is an inner product, but might have no relation to the inner product you start with (indeed, it is the inner product you get from demanding that your basis ##v_1,\ldots,v_n## is orthonormal).
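
To see the "no guarantee" concretely, here is a minimal NumPy sketch (the same arbitrary non-orthonormal basis as in the earlier snippet, with the standard dot product on ##\mathbb{R}^2## playing the role of the inner product you start with):

```python
import numpy as np

B = np.array([[1.0, 1.0],
              [0.0, 2.0]])                 # arbitrary non-orthonormal basis (columns)
B_inv = np.linalg.inv(B)

# Basis-dependent map u -> phi_u: phi_u(v) is the dot product of coordinates in B.
phi = lambda u: (lambda v: (B_inv @ u) @ (B_inv @ v))

u, v = np.array([3.0, 1.0]), np.array([-1.0, 2.0])
print(phi(u)(v), u @ v)                    # -4.5 vs -1.0: not the standard dot product

# If the basis were orthonormal (e.g. B = identity), B_inv would be the identity
# and phi_u(v) would reduce to the standard dot product u @ v.
```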

On the other hand, if you do start with a finite-dimensional inner product space ##(V,\langle\cdot,\cdot\rangle)##, then we can define an isomorphism ##V\to V^*, v\mapsto \varphi_v## by the rule ##\varphi_v(u)=\langle u,v\rangle.## This map is more natural than the map in the previous paragraph (also called ##\varphi##) since it doesn't depend on a choice of basis. And then the equation you want is true by definition.
mathwonk said:
maybe i just want bilinear and non singular ("positive definite"). but then why would it be symmetric when V=W? maybe it won't have to be. but the point is supposed to be that an inner product is a special case of a bilinear pairing, and defines an isomorphism with the dual space. but where does symmetry come in?
Forgive me if I'm only saying things that you already know.
Positive-definiteness of a bilinear form (symmetry is usually assumed) usually means that ##(x,x)\geq 0## with equality only for ##x=0##, which is stronger than non-degeneracy. You're also right that symmetry doesn't follow from bilinearity and non-degeneracy. Thinking concretely, the form ##(x,y):=x\cdot Ay## on ##\mathbb{R}^n## is non-degenerate when ##A## is nonsingular, (anti-)symmetric when ##A## is, and positive-definite when ##A## is symmetric with positive eigenvalues.
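
A small numerical check of those three cases (toy matrices of my own choosing, nothing canonical about them):

```python
import numpy as np

def form(A, x, y):
    return x @ (A @ y)           # the bilinear form (x, y) := x . (A y)

x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Nonsingular but not symmetric: non-degenerate, yet the form is not symmetric.
A1 = np.array([[1.0, 2.0],
               [0.0, 1.0]])
print(form(A1, x, y), form(A1, y, x))          # 2.0 vs 0.0

# Antisymmetric A gives an antisymmetric form, so (v, v) = 0 for every v.
A2 = np.array([[0.0, 1.0],
               [-1.0, 0.0]])
print(form(A2, x, x), form(A2, x + y, x + y))  # both 0.0

# Symmetric with positive eigenvalues (1 and 3): an honest inner product.
A3 = np.array([[2.0, 1.0],
               [1.0, 2.0]])
print(form(A3, x, x), form(A3, x + y, x + y))  # both positive
```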
 
  • Informative
Likes etotheipi
  • #7
Infrared said:
If you have a basis ##v_1,\ldots,v_n## for ##V##, then you have a corresponding dual basis ##\varphi^1,\ldots,\varphi^n## for ##V^*,## defined by the rule ##\varphi^i(v_j)=\delta^i_j,## and by linearity you get an isomorphism ##V\to V^*, v\mapsto \varphi_v.## You do not need an inner product to start with to define this. If you already have an inner product ##\langle\cdot,\cdot\rangle## on ##V##, there is no guarantee that ##\langle u,v\rangle=\varphi_u(v).## The RHS is an inner product, but might have no relation to the inner product you start with (indeed, it is the inner product you get from demanding that your basis ##v_1,\ldots,v_n## is orthonormal).

On the other hand, if you do start with a finite-dimensional inner product space ##(V,\langle\cdot,\cdot\rangle)##, then we can define an isomorphism ##V\to V^*, v\mapsto \varphi_v## by the rule ##\varphi_v(u)=\langle u,v\rangle.## This map is more natural than the map in the previous paragraph (also called ##\varphi##) since it doesn't depend on a choice of basis. And then the equation you want is true by definition.

Thanks for taking the time to explain! What you wrote makes sense. I think I'll need to study a bit more before asking any more questions :smile:
 
  • #8
etotheipi said:
Ah, okay, I think I understand. For something like a Hilbert space ##V## and its dual ##V^*##, we have an isomorphism ##u \mapsto \varphi_u##. Then, ##\varphi_u(v)## is the same as taking the inner product of ##u## with the vector ##v##, both of which lie in the space ##V##; i.e. ##\varphi_u(v) = \langle u | v \rangle##. I do suppose this is limited to finite-dimensional spaces, and we of course need ##V## to be an inner product space.

This isomorphism holds for every real Hilbert space, including infinite-dimensional ones, but then you consider the topological dual instead of the algebraic one (in the finite-dimensional case they coincide). I think this result is called the Riesz representation theorem.

For complex vector spaces, the map no longer preserves scalar multiplication; instead it sends a scalar to its complex conjugate, i.e. it is conjugate-linear.
 
  • Like
Likes etotheipi
  • #9
Math_QED said:
This isomorphism holds for every real Hilbert space, including infinite-dimensional ones, but then you consider the topological dual instead of the algebraic one (in the finite-dimensional case they coincide). I think this result is called the Riesz representation theorem.

For complex vector spaces, the map no longer preserves scalar multiplication; instead it sends a scalar to its complex conjugate, i.e. it is conjugate-linear.

Thanks! Funnily enough, I was trying to learn this from Chapter 3; however, when I searched for "Riesz" I found that Axler goes into a lot more detail in Chapter 6 (page 188 in my book). He says exactly what you and @Infrared did:

"Riesz Representation Theorem: Suppose ##V## is finite-dimensional and ##\varphi## is a linear functional on ##V##. Then there is a unique vector ##u \in V## s.t. $$\varphi(v) = \langle v, u \rangle$$for every ##v \in V##"

and the proof is given below. Interesting about the complex vector spaces, though! I'll see if I can find anything about that in the book... thanks!
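
For what it's worth, here is a small numerical illustration of the statement (not from Axler; the Gram matrix ##G## and the functional below are arbitrary choices). For the inner product ##\langle x, y\rangle = x^T G y## on ##\mathbb{R}^3## with ##G## symmetric positive-definite, the Riesz vector of the functional ##\varphi(v) = c \cdot v## is ##u = G^{-1}c##:

```python
import numpy as np

G = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.0],
              [0.0, 0.0, 3.0]])           # arbitrary symmetric positive-definite Gram matrix
inner = lambda x, y: x @ G @ y            # <x, y> := x^T G y

c = np.array([1.0, -2.0, 0.5])            # phi(v) := c . v, an arbitrary linear functional
phi = lambda v: c @ v

u = np.linalg.solve(G, c)                 # unique Riesz vector: <v, u> = v^T G (G^-1 c) = c . v
for v in np.eye(3):                       # checking on a basis suffices, by linearity
    assert np.isclose(phi(v), inner(v, u))
print(u)
```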
 
  • Like
Likes member 587159
  • #10
etotheipi said:
Thanks! Funnily enough, I was trying to learn this from Chapter 3; however, when I searched for "Riesz" I found that Axler goes into a lot more detail in Chapter 6 (page 188 in my book). He says exactly what you and @Infrared did:

"Riesz Representation Theorem: Suppose ##V## is finite-dimensional and ##\varphi## is a linear functional on ##V##. Then there is a unique vector ##u \in V## s.t. $$\varphi(v) = \langle v, u \rangle$$for every ##v \in V##"

and the proof is given below. Interesting about the complex vector spaces, though! I'll see if I can find anything about that in the book... thanks!

You can verify it for yourself. Assuming the inner product is linear in the first component (as Axler's version of Riesz seems to suggest), we have ##\varphi_{\lambda u}=\overline{\lambda}\varphi_u##. Indeed, given ##v\in V##, we have
$$\varphi_{\lambda u}(v)= \langle v,\lambda u\rangle =\overline{\lambda}\langle v,u\rangle =\overline{\lambda}\varphi_u(v)$$ from which the claim follows.
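
A quick numerical confirmation of that computation (a sketch using the standard inner product on ##\mathbb{C}^2##, linear in the first slot as above):

```python
import numpy as np

# <x, y> := sum_i x_i * conj(y_i), linear in the first slot.
inner = lambda x, y: np.vdot(y, x)        # np.vdot conjugates its *first* argument
phi = lambda u: (lambda v: inner(v, u))   # phi_u(v) = <v, u>

u = np.array([1 + 2j, -1j])
v = np.array([3 - 1j, 2 + 2j])
lam = 2 - 3j

assert np.isclose(phi(lam * u)(v), np.conj(lam) * phi(u)(v))
print(phi(lam * u)(v))
```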
 
  • Like
Likes etotheipi
  • #11
Math_QED said:
You can verify it for yourself. Assuming the inner product is linear in the first component (as Axler's version of Riesz seems to suggest), we have ##\varphi_{\lambda u}=\overline{\lambda}\varphi_u##. Indeed, given ##v\in V##, we have
$$\varphi_{\lambda u}(v)= \langle v,\lambda u\rangle =\overline{\lambda}\langle v,u\rangle =\overline{\lambda}\varphi_u(v)$$ from which the claim follows.

Thanks! I did have a follow-up question... why does the inner product need to be linear in one argument but conjugate-linear in the other? And furthermore, is it purely by convention that it is linear in the first argument and conjugate-linear in the second? I don't see any reason why it couldn't be defined the other way around.
 
  • #12
Suppose that your form is linear in both components. Then ##\langle iv,iv\rangle=-\langle v,v\rangle ## would pose a problem for the condition ##\langle x,x\rangle\geq 0## for all ##x##. For example, think about the standard form ##\langle x,y\rangle=\sum x_i\overline{y_i}## in ##\mathbb{C}^n##.

It is a matter of convention which component you choose to be conjugate-linear.
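
A one-line numerical check of that counterexample, using the bilinear (non-conjugated) form ##\langle x,y\rangle=\sum_i x_i y_i## on ##\mathbb{C}^2##; note that ##\langle v,v\rangle## isn't even real here:

```python
import numpy as np

bilinear = lambda x, y: np.sum(x * y)              # no conjugation anywhere
v = np.array([1 + 1j, 2 - 1j])
print(bilinear(1j * v, 1j * v), -bilinear(v, v))   # the two printed values agree
```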
 
  • Like
Likes etotheipi and member 587159
  • #13
etotheipi said:
Thanks! I did have a follow-up question... why does the inner product need to be linear in one argument but conjugate-linear in the other? And furthermore, is it by convention always linear in the first argument and conjugate-linear in the second? I don't see any reason why it couldn't be the other way around.

It is a convention. It can be the other way around as well. I have seen both conventions being used, although I feel linearity in the first component is more popular among mathematicians.

The answer to your other question is that bilinearity and positive definiteness don't go well together over the complex field, as @Infrared excellently demonstrated.
 
  • Like
Likes etotheipi
  • #14
Infrared said:
Suppose that your form is linear in both components. Then ##\langle iv,iv\rangle=-\langle v,v\rangle ## would pose a problem for the condition ##\langle x,x\rangle\geq 0## for all ##x##.
That's a neat counter-example! Now I understand why it needs to take the form ##\langle x,y\rangle=\sum x_i\overline{y_i}##.

Math_QED said:
It is a convention. It can be the other way around as well. I have seen both conventions being used, although I feel linearity in the first component is more popular among mathematicians.

Cool, in any case I suppose it's probably clear from context.

I've run out of questions... thanks for your help! You guys are awesome :smile:
 
  • Like
Likes Infrared and member 587159
  • #15
Infrared said:
Well, first of all, I wouldn't use the term "Hilbert space" when you only mean to refer to finite-dimensional vector spaces.
I think that, mathematically, a Hilbert space can be finite-dimensional. I had never seen a physical system with a finite-dimensional Hilbert space until just a few years ago. But a professor of mathematics writing mathematical physics models started using one to solve new physics problems. It took me a while to understand it, and then the technique seemed brilliant. If you want a challenge, look at causal-fermion-system.com. Very few physicists seem to be fluent in, or even aware of, this re-framing of physics in these new mathematical forms. Everyone I have talked to, even here in southern California, just says that it is incomprehensible, not mainstream, or "not my specialty."
 
Last edited by a moderator:
  • #16
Math_QED said:
It is a convention. It can be the other way around as well. I have seen both conventions being used, although I feel linearity in the first component is more popular among mathematicians.

The Dirac notation relies on linearity in the second argument, which is why physicists prefer that!
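
Concretely, with the physicists' convention the (anti)linearity reads
$$\langle u \,|\, \lambda v\rangle = \lambda\,\langle u\,|\,v\rangle, \qquad \langle \lambda u\,|\,v\rangle = \overline{\lambda}\,\langle u\,|\,v\rangle,$$
i.e. linear in the ket and conjugate-linear in the bra.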
 
  • Like
Likes member 587159, fresh_42 and etotheipi
  • #17
@Physics4Funn

Yes, of course a Hilbert space ##\textit{can}## be finite-dimensional, but there's no reason to say "Hilbert space" as opposed to just "inner product space" when your spaces are necessarily finite-dimensional, since then the extra condition of completeness is vacuous.

Anyway, I don't know enough physics to comment on the rest of your post.
 
  • Like
Likes member 587159
  • #18
Infrared said:
@Physics4Funn

Yes, of course a Hilbert space ##\textit{can}## be finite-dimensional, but there's no reason to say "Hilbert space" as opposed to just "inner product space" when your spaces are necessarily finite-dimensional, since then the extra condition of completeness is vacuous.

Hilbert space is much shorter to say and write.
 

1. What is the dual space?

The dual space is a mathematical concept that refers to the set of all linear functionals on a vector space. It is essentially a space of linear maps from the original vector space to its underlying field (usually the real numbers or complex numbers).

2. How is the dual space related to the original vector space?

The dual space is closely related to the original vector space; it is essentially a "mirror image" of it, containing the same information in a different form. For a finite-dimensional space, the dual space has the same dimension as the original space, but its elements are linear functionals instead of vectors.

3. What is the importance of the dual space in linear algebra?

The dual space is an important concept in linear algebra because it allows us to study and understand vector spaces in a more abstract and general way. It also provides a way to define and manipulate linear functionals, which are important in many areas of mathematics and physics.

4. How is the dual space defined mathematically?

The dual space is defined as the set of all linear functionals on a vector space ##V##, denoted ##V^*##. A linear functional is a linear map from ##V## to its underlying field, such as the real numbers or complex numbers. For a finite-dimensional ##V##, a basis ##v_1, \dots, v_n## of ##V## determines a dual basis ##\varphi^1, \dots, \varphi^n## of ##V^*## via ##\varphi^i(v_j) = \delta_{ij}##, and every linear functional is a linear combination of these dual basis elements.

5. Can the dual space be visualized?

The dual space is a mathematical concept and cannot be visualized in the same way as a vector space. However, each nonzero linear functional can be pictured through its level sets. In a two-dimensional vector space, for example, the level sets of a functional form a family of parallel lines, and different functionals correspond to different families of lines, i.e. to different "directions" of measurement on the original space.
