Orthogonality & Dual Spaces: Exploring Physical Quantities in Vector Spaces

  • Thread starter Rasalhague
In summary: Vectors themselves can be "dualized" to form covectors, and there are two approaches to thinking about vector spaces in relation to physical quantities and physical units. One approach is to treat the dual spaces as separate entities, each with its own vector space and inner product. The other is to identify the dual space with the original space through an inner product and regard them as different representations of the same vector space. Which view is preferable depends on the context and on what is most useful for the problem at hand.
  • #1
Rasalhague
Many examples of orthogonality are really about vectors orthogonal to covectors. For example, the work done by a force on a moving object is zero when its velocity is perpendicular to the force. It is unreasonable to interpret orthogonality here in terms of an inner product, because force and velocity belong to different vector spaces: it makes no kind of sense to add a force to a velocity (Griffel: Linear Algebra and its Applications, Vol. 2, p. 180).
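To make the point at issue concrete (this spelled-out formula is my own illustration, not Griffel's): the rate at which the force does work is

[tex]P = \mathbf{F}\cdot\mathbf{v} = |\mathbf{F}|\,|\mathbf{v}|\cos\theta ,[/tex]

which vanishes exactly when the angle between force and velocity is 90°; the question is what structure justifies writing that dot product at all.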

The dual space of V may then be identified with the space [itex]F^A[/itex] of functions from A to F: a linear functional T on V is uniquely determined by the values [itex]\theta_\alpha = T(e_\alpha)[/itex] it takes on the basis [itex](e_\alpha)_{\alpha \in A}[/itex] of V (Wikipedia: Dual space).
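To spell out why those values determine T (my own added step, using the fact that any vector is a finite linear combination of basis vectors):

[tex]T(v) = T\Big(\sum_\alpha v^\alpha e_\alpha\Big) = \sum_\alpha v^\alpha\, T(e_\alpha) = \sum_\alpha v^\alpha \theta_\alpha .[/tex]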

If Griffel is arguing that force and velocity are vectors of different vector spaces because they can't be added, this would suggest that displacement vectors belong to a third vector space, because they can't be added to forces or to velocities either, having different units. Yet it's meaningful to ask whether a force is orthogonal to a displacement, so displacement vectors must also belong to a space dual to that of force vectors. Griffel's argument seems to lead to a situation where there are two vector spaces dual to that of force vectors. Is that how people generally think about vector spaces which represent quantities with physical units? Can we only be sure that the dual space is unique if both it and the primary vector space are unitless?

Another viewpoint I've come across is that vector quantities in physics can each be categorised as inherently "vectors" or "covectors" according to their physical nature, specifically whether they have a capacity-like quality or a density-like quality: whether their magnitudes increase or decrease when smaller units are chosen. Weinreich, in Geometrical Vectors, contrasts them as "arrow vectors" and "stack vectors". People will state that such-and-such a quantity "is" a vector, or "is" a covector, because of how its components vary with a change of basis. This relies on the convention whereby displacement or velocity vectors are taken as primary (and given the simple name "vectors"), and density-like vectors are taken as secondary (and so given the name "covectors"), since mathematically the relationship between primary and dual spaces is symmetrical (each being the dual of the other, or at least the first is naturally isomorphic to the dual of the second, so that the relationship can be thought of as reciprocal). Or to put it another way, it relies on the convention of how coordinates are defined in relation to space. But this view also seems to assume that there are really only two vector spaces involved here, that of what are called "vectors" and its dual, that of what are called "covectors", each dual only to the other (regardless of how they might be used on a given occasion to describe a given physical quantity), rather than a profusion of vector spaces, one for each physical quantity, some possibly with more than one dual space.
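For concreteness, the transformation behaviour alluded to is the usual pair of rules (my own statement of the standard convention): under a change of coordinates, the components of what get called "vectors" and "covectors" transform as

[tex]v'^i = \frac{\partial x'^i}{\partial x^j}\, v^j , \qquad \omega'_i = \frac{\partial x^j}{\partial x'^i}\, \omega_j ,[/tex]

with the implied sums over j.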

Are these really two different approaches? How do folks here think about vector spaces, and other mathematical structures such as the real numbers, in relation to physical quantities and physical units?

EDIT: On further reading, I see that the context of the Wikipedia quote was specifically infinite-dimensional vector spaces. But I really just quoted it to illustrate my feeling that there seems to be something odd about the idea of multiple, non-identical vector spaces all dual to the same vector space.
 
  • #2
Take a vector space V endowed with a scalar product. Then take two 1-dimensional spaces, call one AppleUnits, the other one OrangeUnits. Now construct

[tex]\text{Apples} = \text{AppleUnits} \otimes V, \quad \text{Oranges} = \text{OrangeUnits} \otimes V.[/tex]

With this construction it is still possible to see whether a given apple is orthogonal to a given orange or not.
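To sketch how (my own spelling-out, assuming AppleUnits and OrangeUnits are 1-dimensional with chosen unit elements [itex]u_A[/itex] and [itex]u_O[/itex]): a typical apple is [itex]a\,(u_A \otimes v)[/itex] and a typical orange is [itex]b\,(u_O \otimes w)[/itex] with [itex]v, w \in V[/itex], and one can declare

[tex]u_A \otimes v \;\perp\; u_O \otimes w \quad\Longleftrightarrow\quad \langle v, w\rangle_V = 0 ,[/tex]

so orthogonality between an apple and an orange is inherited from the scalar product on V alone.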

In fact AppleUnits and OrangeUnits can be just "positive half-spaces", though such a concept is rarely discussed in algebra textbooks (but it is used in some publications and it works fine).
 
  • #3
Rasalhague said:
It is unreasonable to interpret orthogonality here in terms of an inner product, because force and velocity belong to different vector spaces: it makes no kind of sense to add a force to a velocity (Griffel: Linear Algebra and its Applications, Vol. 2, p. 180).
IMO, Griffel is creating a strawman argument here. Nowhere in the inner product F·v are we adding speed to force (magnitude). We physicists know better than that, be it with vectors or even simple scalars. Addition can only be performed on quantities with commensurable units. One watt plus one second makes zero sense. On the other hand, one watt times one second does make sense -- and so does force times velocity. Addition is of course taking place in computing F·v, but the terms being added consistently have units of force times velocity, i.e. power.
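Spelled out in components (my own illustration of the point), each term in the sum already carries the combined units:

[tex]\mathbf{F}\cdot\mathbf{v} = F_x v_x + F_y v_y + F_z v_z , \qquad \left[ F_i v_i \right] = \text{N}\cdot\frac{\text{m}}{\text{s}} = \text{W} ,[/tex]

so the addition only ever combines quantities of the same kind (here, power).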

How else are you going to interpret orthogonality other than in terms of an inner product?
 
  • #4
Adding to the above: nothing prevents us from adding apples to oranges - provided we know what we are doing. Given two different vector spaces V, W of arbitrary dimensions you can always form [itex]V \oplus W[/itex]. This is what is being done in the tensor algebra: we add scalars to vectors and to bivectors there. We also know that scalars are orthogonal to vectors.
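A minimal way to see the last statement (my own sketch, assuming each summand keeps its own inner product): equip [itex]V \oplus W[/itex] with

[tex]\big\langle (v_1, w_1), (v_2, w_2) \big\rangle = \langle v_1, v_2\rangle_V + \langle w_1, w_2\rangle_W ,[/tex]

under which every element of the form (v, 0) is orthogonal to every element of the form (0, w).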

Just know what you are doing! If you do not know - then you may get into trouble.
 
  • #5
Rasalhague said:

Are these really two different approaches? How do folks here think about vector spaces, and other mathematical structures such as the real numbers, in relation to physical quantities and physical units?

Whenever one has a vector space, one has a second vector space, the dual vector space of linear maps from the vector space into the base field. In physics a vector space and its dual are often identified through an inner product. The theorem that allows this identification says that any one of these dual linear maps can be obtained as the inner product with some vector.

However, a different inner product will give you a different vector. There is no natural or canonical way to think of a dual map as a vector. It really is different. So forces really are not the same type of vector as velocities, but they can be pictured in the same vector space once an inner product is chosen.
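For reference, the theorem alluded to (stated in my own words, for the finite-dimensional case) says that for every linear functional [itex]\varphi[/itex] on V there is a unique vector u with

[tex]\varphi(v) = \langle u, v \rangle \quad \text{for all } v \in V ,[/tex]

and the u you get depends on which inner product you chose, which is exactly why the identification is not canonical.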

On a manifold, the notion of displacement as a vector only makes rigorous sense infinitesimally. It is then a linear map from the tangent space into the base field and is thus just a dual vector. A displacement though must be dual to the tangent space. Displacements do not make sense in an arbitrary vector bundle.
 
  • #6
lavinia said:
On a manifold, the notion of displacement as a vector only makes rigorous sense infinitesimally. It is then a linear map from the tangent space into the base field and is thus just a dual vector. A displacement though must be dual to the tangent space. Displacements do not make sense in an arbitrary vector bundle.

How does this fit with the idea that vectors of the tangent spaces can be distinguished from those of the cotangent spaces by the different ways that their components in coordinate bases (and dual coordinate bases) change when the coordinates are changed? Since the components of velocity transform, in one dimension, for example like this:

[tex]\frac{\text{mile}}{\text{hour}} = \frac{\text{mile}}{\text{km}}\frac{\text{km}}{\text{hour}}[/tex]

and the components of displacement transform in the same way:

[tex]\text{mile} = \frac{\text{mile}}{\text{km}} \; \text{km}[/tex]

doesn't this make them, in some sense, both tangent vectors (however the issue of units is dealt with)?
 
  • #7
Rasalhague said:
How does this fit with the idea that vectors of the tangent spaces can be distinguished from those of the cotangent spaces by the different ways that their components in coordinate bases (and dual coordinate bases) change when the coordinates are changed? Since the components of velocity transform, in one dimension, for example like this:

[tex]\frac{\text{mile}}{\text{hour}} = \frac{\text{mile}}{\text{km}}\frac{\text{km}}{\text{hour}}[/tex]

and the components of displacement transform in the same way:

[tex]\text{mile} = \frac{\text{mile}}{\text{km}} \; \text{km}[/tex]

doesn't this make them, in some sense, both tangent vectors (however the issue of units is dealt with)?

Yeah. I was worried about this.

I guess what I meant was that a displacement is a change in a position function.
So if we track, say, the x-coordinate of a moving particle over time, then the displacement is approximately [itex](t - t_0)\,dx(c'(t_0))[/itex], where c(t) is the particle's path. So the displacement per unit time is just the value of the dual vector dx on the velocity vector [itex]c'(t_0)[/itex]. I was thinking of dx as the infinitesimal displacement in the x coordinate. But maybe this is wrong.
 
  • #8
Often the same symbol is used for a function as for its value, as when people write [itex]y = y(x)[/itex]. And often a tensor is represented by its coefficients in an arbitrary basis, with the basis vectors not explicitly written. I'm just guessing here, but perhaps when people write [itex]\text{d}x^i[/itex] for an infinitesimal displacement in the [itex]x^i[/itex] direction, they're combining these two ambiguous customs. In other words, considering how its coefficients transform with a change of basis, maybe an infinitesimal displacement in a general direction would be a tangent vector

[tex]\frac{\text{d}}{ds}[/tex]

And an infinitesimal displacement in a direction tangent to one of the coordinate curves would be:

[tex]\text{d}x^i \left ( \frac{\text{d}}{ds} \right ) \frac{\partial }{\partial x^i} = s^i \; \frac{\partial }{\partial x^i}[/tex]

(no sum) which people might write informally [itex]\text{d}x^i[/itex], using the symbol for the function as shorthand for its value, which they use as shorthand for the value times the corresponding tangent vector of the coordinate basis.

Or maybe it's a relic notation for an infinitesimal displacement, from a time before the idea of exterior derivatives had been invented. When Roger Penrose introduces 1-forms in The Road to Reality, he writes, "In Cartan's scheme we do not think of [itex]\text{d}x[/itex] as representing an 'infinitesimal quantity', however, but as providing us with the appropriate kind of density (1-form) that one may integrate over a curve" (Vintage 2005, p. 230), as if there's been a shift in viewpoint over time. On the other hand, on pp. 224-225 he depicts tangent vectors as tiny arrows, and 1-forms as tiny squares, suggesting that both are to be thought of as infinitesimal in their different ways. In Geometrical Vectors, Weinreich advises that all of his "menagerie" of vector types should be thought of as infinitesimal except for macroscopic displacement vectors. His way of visualising 1-forms as stacks of planes makes the density aspect clearer.

A few months older than when I first read this, and ever so slightly wiser, I still find Penrose's note 12.8 hilarious:

Confusion easily arises between the 'classical' idea that a thing like [itex]\mathrm{d}x^r[/itex] should stand for an infinitesimal displacement (vector), whereas we here seem to be viewing it as a covector. In fact, the notation is consistent, but it needs a clear head to see this! The quantity [itex]\mathrm{d}x^r[/itex] seems to have a vectorial character because of its upper index r, and this would indeed be the case if r is treated as an abstract index, in accordance with § 12.8. On the other hand, if r is taken as a numerical index, say r = 2, then we do get a covector, namely [itex]\mathrm{d}x^2[/itex], the gradient of the scalar quantity [itex]y = x^2[/itex] ('x-two, not x squared'). But this depends on the interpretation of 'd' as standing for the gradient rather than as denoting an infinitesimal, as it would have done in the classical tradition. In fact, if we treat both the r as abstract and the d as gradient, then [itex]\mathrm{d}x^r[/itex] simply stands for the (abstract) Kronecker delta!

I think in the last part, he means [itex]\mathrm{d}x^r[/itex] to stand for

[tex]\text{d}x^r \left ( \frac{\partial }{\partial x^s} \right ) = \delta^r_s[/tex]

whose (r, s) entry is the s'th coefficient of the exterior derivative of the r'th coordinate function.
 
  • #9
For instance, the formula

[tex](df)(x)=\frac{\partial f}{\partial x^r}\,dx^r[/tex]

makes sense whatever meaning you attach to it.
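To spell out the two readings (my own gloss): taken in the covector sense, this is just the expansion of the 1-form df in the coordinate cobasis, with r-th component [itex]\partial f/\partial x^r[/itex]; taken classically, it is the leading-order change in f under a small displacement,

[tex]\Delta f \approx \frac{\partial f}{\partial x^r}\,\Delta x^r ,[/tex]

and the two readings transform consistently under a change of coordinates.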
 
  • #10
Even if [itex]\mathrm{d}x^r[/itex] is taken in the first sense I guessed at, as the r'th coefficient of an unnamed tangent vector representing an infinitesimal displacement, together with the corresponding tangent basis vector? Wouldn't we then have a covariant tensor on the left equal to a contravariant tensor on the right?

Hmm, on second thoughts, I suppose anyone taking that approach could read in whatever implicit things need doing to the left hand side to make it the same. Magic!
 

1. What is orthogonality in vector spaces?

Orthogonality in vector spaces refers to the property of two vectors being perpendicular to each other. This means that the angle between the two vectors is 90 degrees, and their dot product is equal to zero.

2. How is orthogonality related to physical quantities?

In physics, physical quantities such as force, velocity, and displacement can be represented as vectors in a vector space. Orthogonality between these vectors allows us to analyze and understand the relationships between these quantities in a geometric way.

3. What is a dual space in vector spaces?

A dual space is the set of all linear functionals on a vector space, that is, the space of linear maps from the original vector space to the field of scalars. Pairing a functional (covector) with a vector produces a scalar.

4. How are dual spaces used in physics?

In physics, dual spaces are used to analyze physical quantities represented in vector spaces. Pairing a vector with a covector (a linear functional) produces a scalar, such as the work done by a force over a displacement, which lets us express relationships between quantities in a coordinate-independent way.

5. Can orthogonality and dual spaces be applied to other fields besides physics?

Yes, orthogonality and dual spaces are fundamental concepts in linear algebra and can be applied to various fields, such as computer graphics, signal processing, and machine learning. These concepts have wide-ranging applications in different areas of mathematics and science.
