How Do Dual Vectors Differ From Regular Vectors in Physics?

  • Thread starter: pixel
  • Tags: Dual Vectors
Summary
Dual vectors, or covectors, are distinct from regular vectors in that they map vectors to scalars, which is a fundamental aspect of their function in physics. Covariant vectors and dual vectors transform similarly, but they belong to different vector spaces, making them incompatible for direct addition. The electric field serves as a practical example of a dual vector, as it can be expressed as the gradient of a potential field and produces a scalar when paired with a vector. The distinction between contravariant and covariant vectors is significant, especially in non-orthogonal bases, where they cannot be directly combined. Understanding the role of metrics is crucial for connecting these different types of vectors and facilitating their interactions mathematically.
  • #31
fresh_42 said:
Yes, but there is a major difference between physically meaningful and the requirement of a metric or even impossible.

But I think that, for the purposes of building an intuition about the difference between tangent vectors and their duals, it helps to consider a case where there is no obvious physically meaningful way to convert one to the other.
 
  • #32
George Jones said:
Natural, basis-independent linear bijection.
"Natural" is a matter of definition. I don't understand your need to put "basis-independent" in there. Nowhere did we mention a basis. You can also use any basis to define a basis-independent linear bijection - it just will not have the same components. As soon as you define your non-degenerate linear map between ##V## and ##V^*## you have defined your non-degenerate 2-form that determines what "natural" means.

pixel said:
To put things in concrete terms, can you give an example of a vector representing a physical quantity and its corresponding dual vector, where the latter was derived by applying the metric to the former?
The phase differential ##d\phi## and the wave vector ##N## of a plane wave in Minkowski space (or any other 3+1-spacetime).

fresh_42 said:
Yes, but there is a major difference between physically meaningful and the requirement of a metric or even impossible. The algorithm I mentioned allows a unique identification ##\varphi_i \longleftrightarrow v_i## of basis vectors with the property ##\varphi_j(v_i)=\delta_{ij}##. No metric anywhere near, only fundamental properties of the category of vector spaces.
I do not agree with the bold part. You just defined a type (0,2) tensor in your bijection between ##V## and ##V^*## with exactly the properties of a metric so your bijection in itself is an actual metric.

George Jones said:
Now, let ##\phi_1 = c\psi_1##, and run the procedure again. When this is done, ##\phi_1 = c \psi_1 \sim \frac{1}{c} v##. Consequently, this procedure does not lead to a natural isomorphism.
Simply because this defines a different metric on your vector space ##V##. By choosing that your particular bases should be orthonormal you have imposed a metric on the space. If you choose another basis to be orthonormal you will obviously get a different metric, but it will still define a metric. Whether or not the metric is "natural" is a matter of semantics.
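George Jones's rescaling argument can be checked numerically. Here is a small NumPy sketch (my own illustration, not from the thread): the basis vectors of ##V## are the columns of a matrix ##B##, the dual basis covectors are the rows of ##B^{-1}## (so that ##\varphi_i(v_j)=\delta_{ij}##), and rescaling one basis vector by ##c## rescales its dual partner by ##1/c##, which is why the identification ##\varphi_i \leftrightarrow v_i## depends on the chosen basis.

```python
import numpy as np

# Basis vectors of V as columns of B (3-dimensional toy example).
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# Dual basis: row i of inv(B) is the covector phi_i with phi_i(b_j) = delta_ij.
D = np.linalg.inv(B)
assert np.allclose(D @ B, np.eye(3))

# Rescale the first basis vector by c and recompute the dual basis.
c = 2.0
B2 = B.copy()
B2[:, 0] *= c
D2 = np.linalg.inv(B2)

# The corresponding dual covector is rescaled by 1/c, not c:
print(np.allclose(D2[0], D[0] / c))  # True
```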
 
  • #33
Orodruin said:
I do not agree with the bold part. You just defined a type (0,2) tensor in your bijection between ##V## and ##V^*## with exactly the properties of a metric, so your bijection in itself is an actual metric.
But I do not require one. I only need to be able to write a vector space as a direct sum of a kernel and a non-zero vector (a line). I can do this as soon as I have defined ##V^*##, regardless of whether a metric is given or whether the dimension is finite. The only thing is that there is no unique (or canonical) way to do it. Once a metric is given, one would adjust the algorithm accordingly.

But the debate doesn't fit very well into this physics forum. (I recognized too late that this is the SR/GR forum, where metrics are a crucial ingredient.)
 
  • #34
fresh_42 said:
But I do not require one.
No, you do not require one in order to define the map, but as soon as you have defined it, you have one: the map itself. So why did you define it if you did not require one?

fresh_42 said:
The only thing is that there is no unique (or canonical) way to do it.
Agreed. But once you have defined your map, that in itself provides you with a metric.
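Orodruin's point can be made concrete in components. A linear bijection ##V \to V^*## is an invertible matrix ##A## (a hypothetical example below, not from the thread), and it induces the non-degenerate bilinear form ##g(u,w) = (Au)(w)##, i.e., ##g_{ij} = A_{ij}##; strictly speaking a metric also requires symmetry, and the dual-basis identification corresponds to ##A = I##, the symmetric form that declares the chosen basis orthonormal.

```python
import numpy as np

# A linear bijection V -> V* in components: an invertible matrix A.
# The covector A @ u eats vectors, inducing g(u, w) = (A u) . w,
# a non-degenerate bilinear form with components g_ij = A_ij.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # hypothetical invertible matrix

def g(u, w):
    return (A @ u) @ w

u = np.array([1.0, 2.0])
w = np.array([3.0, -1.0])

# Bilinearity follows from linearity of A, non-degeneracy from invertibility.
# The dual-basis identification phi_i <-> v_i is the special case A = identity,
# i.e., the metric that makes the chosen basis orthonormal.
print(g(u, w))  # (A u) . w = [4, 6] . [3, -1] = 6
```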
 
  • #35
geordief said:
Can I ask a question in this thread? I have been looking into this as well over the past few weeks.

Does the distinction between contravariant and covariant vectors only apply when the basis vectors are not orthogonal (i.e., are they identical otherwise)?

If the bases are not orthogonal what happens when a covariant vector is added to a contravariant vector?

Is the result the same irrespective of the order of the operation?
A lot of the mysteries of tensor analysis immediately go away when you are just slightly more precise with the language (which physicists usually are not, which is a pity, but you can't help it :-( ): vectors and dual vectors are not "variant" at all but independent of any chosen basis/reference frame. It is the components that change when changing from one basis to another.

Now, if you just have a finite-dimensional real vector space, there are first the vectors ##\vec{v}## and bases of the vector space ##\vec{b}_j##. Any vector can be uniquely represented by its components with respect to this basis,
$$\vec{v}=v^j \vec{b}_j.$$
Here and in the following the Einstein summation convention is used, i.e., one has to sum over repeated indices.

Now the change from one basis to another is defined by an invertible matrix,
$$\vec{b}_j={T^k}_j \vec{b}_k'.$$
The transformation of the vector components is immediately derived by the invariance of the vector, i.e.,
$$\vec{v}=v^j \vec{b}_j = v^j {T^k}_j \vec{b}_k' \; \Rightarrow \; v^{\prime k}={T^k}_j v^j.$$
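This transformation rule can be verified with a small numerical sketch (my own illustration, using an arbitrary invertible ##T## in two dimensions): the components change, but the vector ##v^j \vec{b}_j## itself does not.

```python
import numpy as np

# New basis vectors b'_k as columns of Bp; T encodes b_j = T^k_j b'_k.
Bp = np.array([[1.0, 1.0],
               [0.0, 1.0]])
T = np.array([[2.0, 1.0],
              [1.0, 1.0]])        # arbitrary invertible matrix
B = Bp @ T                        # old basis: column j is sum_k T[k, j] * b'_k

v_comp = np.array([3.0, -1.0])    # components v^j in the old basis
v_comp_new = T @ v_comp           # v'^k = T^k_j v^j

# The vector itself is unchanged: v^j b_j == v'^k b'_k
print(np.allclose(B @ v_comp, Bp @ v_comp_new))  # True
```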

The next very natural notion is that of linear maps from vectors to the real numbers, so-called linear forms (also called 1-forms, or covariant tensors of rank 1): maps ##\vec{v} \mapsto M(\vec{v}) \in \mathbb{R}## that are linear. Because of the linearity you only need to know how the basis vectors map, i.e.,
$$M(\vec{b}_j)=M_j,$$
because then you know how all vectors map, given their components with respect to the basis:
$$M(\vec{v})=M(v^j \vec{b}_j) = v^j M(\vec{b}_j)=v^j M_j.$$
Now the linear form itself must not change under a mere change of basis. This again uniquely defines the transformation properties of its components:
$$M_j=M(\vec{b}_j) = M({T^k}_j \vec{b}_k') = {T^k}_j M(\vec{b}_k')={T^k}_j M_k',$$
or using the inverse matrix ##{U^j}_k={(T^{-1})^j}_k##,
$$M_k' = {U^{j}}_{k} M_j.$$
One says that the components of a linear form transform contragrediently to the vector components.

It's clear that the linear forms themselves form a vector space, called the dual space.
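The contragredient rule is exactly what keeps the number ##M(\vec{v}) = M_j v^j## basis independent, which a quick numerical sketch confirms (my own illustration, same hypothetical ##T## notation as above):

```python
import numpy as np

T = np.array([[2.0, 1.0],
              [1.0, 1.0]])      # invertible basis-change matrix
U = np.linalg.inv(T)            # U^j_k = (T^{-1})^j_k

v = np.array([3.0, -1.0])       # vector components v^j
M = np.array([0.5, 2.0])        # linear-form components M_j

v_new = T @ v                   # v'^k = T^k_j v^j   (cogredient)
M_new = M @ U                   # M'_k = U^j_k M_j   (contragredient)

# The scalar M(v) = M_j v^j is invariant under the basis change:
print(np.isclose(M_new @ v_new, M @ v))  # True
```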
 
  • #36
I always thought it would be a good idea to explain tensors in terms of bra and ket vectors, so I have tried to write it up. I wonder if this is helpful in answering the above questions.
(It is work in progress, by the way.)

My opinion is that every measurement introduces a dual structure, since you need a number to tell you how much of something you have, and you need a link to reality, a unit. If you change your units, you have to change the numbers in the inverse way. Say you measure something to be 10 m long. If you replace m with (2 m), then you have 5 of (2 m).
For vectors you have the same structure whenever the object you want to describe is invariant, and therefore describes an object in nature.
I think for vector spaces one has two such structures. A vector itself can describe a distance in nature and hence an invariant object. That is why basis vectors and the corresponding components transform in inverse ways. To reflect this in the notation, basis vectors get lower indices and components upper indices. So if you combine these two objects, one with lower and one with upper indices, you get something invariant.
The length of a vector is invariant too, if it describes a length in nature. So the computation of a length has to involve a dual structure as well, or in other words, one structure with upper and one with lower indices, which are combined. This leads to the introduction of dual vectors, which combine with vectors to produce a scalar. These dual vectors are as different from vectors as components are from basis vectors.
Going back to the first example, 10 · (m) = 5 · (2 m): we could give the units a lower index and the coefficients an upper index as well.
Therefore, upper and lower indices are just a reminder of which things you can combine to create something invariant and therefore observer independent. To make a mathematical structure invariant, you will always combine two structures that transform inversely to each other.
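The unit example above can be spelled out in a few lines (my own sketch of the post's 10 m example): rescaling the unit (the "basis") by a factor rescales the coefficient by the inverse factor, and the product stays invariant.

```python
# Measuring the same 10 m length with different units: the unit plays the
# role of a basis vector, the coefficient that of a vector component.
unit_m = 1.0                  # metre
coeff = 10.0                  # 10 * (1 m)

scale = 2.0
unit_2m = scale * unit_m      # new "basis": (2 m)
coeff_new = coeff / scale     # coefficient transforms inversely: 5

# The invariant combination coefficient * unit is unchanged:
print(coeff * unit_m == coeff_new * unit_2m)  # True -> 10 m == 5 * (2 m)
```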
 

