Axes of the 2-d coordinate system used in vector resolution

AI Thread Summary
The discussion centers on the interpretation of vector components in non-orthogonal coordinate systems as presented in Anthony French's "Newtonian Mechanics." It clarifies that in a non-orthogonal basis the expansion coefficients ##a_x## and ##a_y## are not the projections ##A_x = A\cos\alpha## and ##A_y = A\cos\beta##, and the identity ##\cos^2\alpha+\cos^2\beta = 1## no longer holds. The conversation introduces the concept of a reciprocal basis to derive these coefficients, emphasizing the need for a more general approach when dealing with non-orthogonal systems. The dual space and its relationship to vector components are also discussed, highlighting how to obtain components using dual basis vectors. The participants express appreciation for the collaborative learning environment, underscoring the importance of understanding the underlying theory.
KedarMhaswade
TL;DR Summary
Must the axes of the 2-d coordinate system used in vector resolution be perpendicular to each other?
Hello,

This question concerns the discussion around page 56 (1971 edition) of Anthony French's Newtonian Mechanics. He is discussing the choice of a coordinate system whose axes are not necessarily perpendicular to each other. Here is a summary of what I read (as applied to vectors in a two-dimensional plane):

Vector ##\vec{A}## makes the angles ##\alpha## and ##\beta## with two coordinate axes (we'll still call them the ##x##-axis and ##y##-axis respectively) that are not necessarily perpendicular to each other (i.e. ##\alpha+\beta## is not necessarily ##\pi/2##). Then ##A_x = A\cdot\cos\alpha## and ##A_y=A\cdot\cos\beta##, where ##A_x, A_y## are the magnitudes of the ##x##- and ##y##-components of ##\vec{A}## respectively. In the generalized two-dimensional case, we have the relationship ##\cos^2\alpha+\cos^2\beta = 1##.

I have redrawn the accompanying figure:
[Redrawn figure: ##\vec{A}## making angles ##\alpha## and ##\beta## with the (not necessarily perpendicular) ##x##- and ##y##-axes.]


How does French arrive at the *generalized* relationship: ##\cos^2\alpha+\cos^2\beta = 1## in two dimensions and ##\cos^2\alpha+\cos^2\beta+\cos^2\gamma = 1## in three dimensions? Am I misreading the text, or summarizing it wrong (sorry, you need the book to ascertain that)?

In the case of perpendicular axes in two dimensions, it is clear why it would hold, since ##\alpha+\beta=\frac{\pi}{2}##. But I am not sure how it holds in general.
 
What you called the generalised relationship, i.e. ##\cos^2 \alpha + \cos^2 \beta = 1##, is not true for a non-orthogonal basis!

In general, the components of a vector written with respect to a non-orthogonal basis ##(\mathbf{e}_x , \mathbf{e}_y)## are not the projections you called ##A_x## and ##A_y##, rather they are the numbers ##a_x## and ##a_y## such that$$\mathbf{A} = a_x \mathbf{e}_x + a_y \mathbf{e}_y$$and since ##\mathbf{e}_i \cdot \mathbf{e}_j \neq \delta_{ij}## these coefficients are no longer simply direction cosines.

The question is then, how do you find these coefficients ##a_x## and ##a_y##... do you have any ideas? Have you come across the concept of a reciprocal basis?
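To make this concrete, here is a minimal NumPy sketch with an arbitrary non-orthogonal unit basis and an arbitrary vector (all values illustrative only): it checks that ##\cos^2\alpha + \cos^2\beta \neq 1##, and that the expansion coefficients differ from the projections ##A\cos\alpha## and ##A\cos\beta##.

```python
import numpy as np

# Arbitrary non-orthogonal unit basis vectors in the plane, 60 degrees apart
# (illustrative values only).
theta = np.deg2rad(60)
e_x = np.array([1.0, 0.0])
e_y = np.array([np.cos(theta), np.sin(theta)])

# An arbitrary vector A, and its expansion coefficients a_x, a_y such that
# A = a_x e_x + a_y e_y, found by solving the 2x2 linear system.
A = np.array([2.0, 1.5])
a_x, a_y = np.linalg.solve(np.column_stack([e_x, e_y]), A)

# Direction cosines of A with each axis.
A_mag = np.linalg.norm(A)
cos_alpha = A @ e_x / A_mag
cos_beta = A @ e_y / A_mag

print(cos_alpha**2 + cos_beta**2)   # about 1.49, not 1, for these axes
print(a_x, A_mag * cos_alpha)       # expansion coefficient != projection
print(a_y, A_mag * cos_beta)
```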
 
Thank you, @etotheipi for nudging me in the right direction (it's vectors, after all)!

I did not know about the concept of a basis; I think I understand it now. However, so as not to get ahead of myself, I redrew the figure as you suggested:

[Redrawn figure: ##\vec{A}## resolved along ##\hat{e}_x## and ##\hat{e}_y## by completing the parallelogram.]

Then, I figured that the following holds:
$$
a_x = A\cos\alpha-a_y\cos(\alpha+\beta)
$$
$$
a_y = A\cos\beta-a_x\cos(\alpha+\beta)
$$
where ##A, a_x, a_y## are the magnitudes of the vector, its ##x##-component, and its ##y##-component respectively, and ##\hat{e}_x, \hat{e}_y## are the unit vectors along the axes of the coordinate system of our choice.

Algebraically, these two linear equations can be solved to find ##a_x, a_y##.

Geometrically, it can be easily done by completing the parallelogram, of course.
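As a quick numerical check, here is a short NumPy sketch (with arbitrary illustrative values for ##A##, ##\alpha## and ##\beta##) that solves these two equations as a ##2\times 2## linear system and confirms that the reconstructed vector has the right magnitude and makes the angle ##\alpha## with ##\hat{e}_x##.

```python
import numpy as np

# Illustrative values only: the magnitude of A and the angles it makes with
# the x- and y-axes; the angle between the axes is alpha + beta.
A_mag = 2.5
alpha = np.deg2rad(25)
beta = np.deg2rad(35)
c = np.cos(alpha + beta)

# Rewrite a_x = A cos(alpha) - a_y c and a_y = A cos(beta) - a_x c as a
# 2x2 linear system and solve it.
M = np.array([[1.0, c],
              [c, 1.0]])
rhs = np.array([A_mag * np.cos(alpha), A_mag * np.cos(beta)])
a_x, a_y = np.linalg.solve(M, rhs)

# Cross-check: with unit vectors along the two axes, the reconstructed vector
# should have magnitude A_mag and make the angle alpha with e_x.
e_x = np.array([1.0, 0.0])
e_y = np.array([np.cos(alpha + beta), np.sin(alpha + beta)])
A_vec = a_x * e_x + a_y * e_y

print(a_x, a_y)
print(np.isclose(np.linalg.norm(A_vec), A_mag))           # True
print(np.isclose(np.arccos(A_vec @ e_x / A_mag), alpha))   # True
```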
 
Yes, nice! Your diagram and parallelogram constructions are perfect, and your equations look correct. I'll explain the more general theory, to show how to deal with non-orthogonal bases more easily. There are a few new concepts coming here, so keep your wits about you... :wink:

First let's just consider a 3-dimensional Euclidean vector space ##V##, where we have access to an inner product and a cross product, and let's choose a basis ##(\mathbf{e}_x, \mathbf{e}_y, \mathbf{e}_z)## which is not necessarily orthogonal or even normalised. The arbitrary vector ##\mathbf{A}## can always be written as a unique linear combination of the basis vectors$$\mathbf{A} = a_x \mathbf{e}_x + a_y \mathbf{e}_y + a_z \mathbf{e}_z$$Now, let's introduce the vectors ##{\mathbf{e}_x}^*, {\mathbf{e}_y}^*, {\mathbf{e}_z}^*##, defined by

$${\mathbf{e}_x}^* = \frac{\mathbf{e}_y \times \mathbf{e}_z}{\mathbf{e}_x \cdot \mathbf{e}_y \times \mathbf{e}_z}, \quad {\mathbf{e}_y}^* = \frac{\mathbf{e}_z \times \mathbf{e}_x}{\mathbf{e}_x \cdot \mathbf{e}_y \times \mathbf{e}_z}, \quad {\mathbf{e}_z}^* = \frac{\mathbf{e}_x \times \mathbf{e}_y}{\mathbf{e}_x \cdot \mathbf{e}_y \times \mathbf{e}_z}$$where ##\mathbf{e}_x \cdot \mathbf{e}_y \times \mathbf{e}_z## is the scalar triple product of the original basis vectors. These strange new starred vectors we've defined actually help us to very easily determine the numbers ##a_x##, ##a_y## and ##a_z##; simply, take the inner product of ##\mathbf{A}## with the corresponding starred vector, e.g.

$$\mathbf{A} \cdot {\mathbf{e}_x}^* = a_x \underbrace{\frac{\mathbf{e}_x \cdot \mathbf{e}_y \times \mathbf{e}_z}{\mathbf{e}_x \cdot \mathbf{e}_y \times \mathbf{e}_z}}_{=1} + a_y \underbrace{\frac{\mathbf{e}_y \cdot \mathbf{e}_y \times \mathbf{e}_z}{\mathbf{e}_x \cdot \mathbf{e}_y \times \mathbf{e}_z}}_{=0} + a_z \underbrace{\frac{\mathbf{e}_z \cdot \mathbf{e}_y \times \mathbf{e}_z}{\mathbf{e}_x \cdot \mathbf{e}_y \times \mathbf{e}_z}}_{=0} = a_x$$due to the property that the scalar triple product of three vectors vanishes if any two of them are the same. Now, it turns out that if the basis ##(\mathbf{e}_x, \mathbf{e}_y, \mathbf{e}_z)## is orthonormal, then ##\mathbf{e}_i = {\mathbf{e}_i}^*## and you can simply obtain the components ##a_i## by taking the scalar product with the original basis vectors ##\mathbf{e}_i## as you are likely used to. But in the general case ##\mathbf{e}_i \neq {\mathbf{e}_i}^*##, and you must be more careful.
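Here is a short NumPy sketch of this recipe, using an arbitrary non-orthogonal, non-normalised basis (illustrative values only): it builds the starred vectors from the cross-product formulas above and checks that dotting them with ##\mathbf{A}## recovers the coefficients ##a_x, a_y, a_z##.

```python
import numpy as np

# An arbitrary non-orthogonal, non-normalised basis of R^3 (illustrative only).
e_x = np.array([1.0, 0.2, 0.0])
e_y = np.array([0.3, 1.0, 0.1])
e_z = np.array([0.0, 0.4, 1.0])

# Scalar triple product e_x . (e_y x e_z); nonzero precisely when the three
# vectors really do form a basis.
triple = e_x @ np.cross(e_y, e_z)

# The reciprocal ("starred") basis vectors.
e_x_star = np.cross(e_y, e_z) / triple
e_y_star = np.cross(e_z, e_x) / triple
e_z_star = np.cross(e_x, e_y) / triple

# Build A from known coefficients, then recover them with the starred vectors.
a = np.array([2.0, -1.0, 0.5])                  # (a_x, a_y, a_z)
A = a[0] * e_x + a[1] * e_y + a[2] * e_z

recovered = np.array([A @ e_x_star, A @ e_y_star, A @ e_z_star])
print(np.allclose(recovered, a))                # True
```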

-------------------

Maybe you are satisfied with that, but if you're not put off then there's some more hiding under the surface! Consider an ##n##-dimensional vector space ##V## and suppose we've chosen a general basis ##\beta = (\mathbf{e}_1, \dots, \mathbf{e}_n)##. Now, consider an arbitrary vector ##\mathbf{v} \in V##; again, by the theory of linear algebra it's possible to write ##\mathbf{v}## as a unique linear combination of the basis,
$$\mathbf{v} = v^1 \mathbf{e}_1 + \dots + v^n \mathbf{e}_n = \sum_{i=1}^n v^i \mathbf{e}_i$$It's very important to note that the superscripts, e.g. ##v^2##, do not represent exponentiation here; they are just labels, and why we have chosen superscripts will hopefully become clear soon. To the vector space ##V## you may associate the dual space ##V^*##, which is just a space of objects ##\boldsymbol{\omega} \in V^*## which act linearly on vectors to give real numbers, for example ##\boldsymbol{\omega}(\mathbf{v}) = c \in \mathbb{R}##. ##V^*## is itself a vector space, and we are free to choose a basis for it. It turns out that we can define quite naturally a basis ##\beta^* = (\mathbf{f}^1, \dots, \mathbf{f}^n)## of ##V^*## associated to our basis ##\beta = (\mathbf{e}_1, \dots, \mathbf{e}_n)## of ##V## by the rule$$\mathbf{f}^i(\mathbf{e}_j) = \delta^i_j$$where ##\delta^i_j## is ##1## if ##i=j## and ##0## if ##i \neq j##. And of course, we can then write any element ##\boldsymbol{\omega} \in V^*## as a linear combination of the ##\beta^*## basis,$$\boldsymbol{\omega} = \omega_1 \mathbf{f}^1 + \dots + \omega_n \mathbf{f}^n = \sum_{i=1}^n \omega_i \mathbf{f}^i$$[exercise: prove linear independence of the ##\mathbf{f}^i##, and show that the actions of both sides on an arbitrary vector ##\mathbf{v}## agree!].

Anyway, we don't want to go too far into the properties of the dual space, instead we just want to know how to find the components ##v^i## of ##\mathbf{v} \in V##! Luckily, with this machinery in place, it's quite straightforward. Consider acting the ##i^{\mathrm{th}}## basis vector of ##\beta^*##, ##\mathbf{f}^i##, on ##\mathbf{v}## and using the linearity:$$\mathbf{f}^i(\mathbf{v}) = \mathbf{f}^i \left(\sum_{j=1}^n v^j \mathbf{e}_j \right) = \sum_{j=1}^n v^j \mathbf{f}^i(\mathbf{e}_j) = \sum_{j=1}^n v^j \delta^i_j = v^i$$in other words, acting the associated dual basis vector on ##\mathbf{v}## yields the relevant component.
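In coordinates this is easy to make concrete. A minimal sketch, assuming we represent vectors of ##\mathbb{R}^3## by columns of components and collect the basis vectors ##\mathbf{e}_j## as the columns of a matrix ##B## (the numbers are arbitrary illustrative choices): the dual basis functionals ##\mathbf{f}^i## are then represented by the rows of ##B^{-1}##, and acting them on ##\mathbf{v}## returns exactly the ##v^i##.

```python
import numpy as np

# Illustrative non-orthogonal basis of R^3, stored as the columns of B.
B = np.array([[1.0, 0.3, 0.0],
              [0.2, 1.0, 0.4],
              [0.0, 0.1, 1.0]])

# The dual basis functionals f^i are represented by the rows of B^{-1}:
# applying row i of B^{-1} to e_j gives (B^{-1} B)_{ij} = delta^i_j.
F = np.linalg.inv(B)
print(np.allclose(F @ B, np.eye(3)))       # True: f^i(e_j) = delta^i_j

# Acting the dual basis on a vector returns its components in the e-basis.
components = np.array([1.5, -2.0, 0.7])    # the v^i
v = B @ components                         # v = sum_i v^i e_i
print(np.allclose(F @ v, components))      # True: f^i(v) = v^i
```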

You might be wondering, then, how do the ##{\mathbf{e}_i}^* \in V## we introduced right back at the beginning relate to the ##\mathbf{f}^i \in V^*## in the dual space? Both of them "act" on vectors, and give back the ##i^{\mathrm{th}}## component of the vector, except in the first case the ##{\mathbf{e}_i}^*## live in an inner-product space ##V## and act via the inner product, whilst in the second case the ##\mathbf{f}^i## live in ##V^*## and act as functionals.

The answer is that if ##V## is an inner-product space, the ##{\mathbf{e}_i}^*## can indeed be defined, and the connection is made precise by the Riesz representation theorem. This states that given any vector ##\mathbf{u} \in V##, we may associate to it some ##\boldsymbol{\omega}_{\mathbf{u}} \in V^*## such that for any ##\mathbf{v} \in V##,$$\boldsymbol{\omega}_{\mathbf{u}}(\mathbf{v}) = \mathbf{u} \cdot \mathbf{v}$$In other words, "##\boldsymbol{\omega}_{\mathbf{u}}## acts on ##\mathbf{v}## just like ##\mathbf{u}## acts on ##\mathbf{v}##". So there is naturally a bijective map ##\varphi: V \rightarrow V^*## which takes ##\mathbf{u} \mapsto \boldsymbol{\omega}_{\mathbf{u}}##. And with this theory in mind, maybe you have now noticed that$$\mathbf{f}^i = \varphi({\mathbf{e}_i}^*)$$i.e. that ##\mathbf{f}^i## is the so-called dual of ##{\mathbf{e}_i}^*##.

[It should be noted that a vector space ##V## is not necessarily an inner-product space, and in such cases you can only work with the ##\mathbf{f}^i## and cannot define the ##{\mathbf{e}_i}^*##.]
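For the standard dot product on ##\mathbb{R}^3##, a quick numerical sketch (same style of arbitrary illustrative basis as above) shows this correspondence explicitly: the reciprocal vectors ##{\mathbf{e}_i}^*## built from the cross-product formulas coincide with the rows of ##B^{-1}##, i.e. with the coordinate representation of the dual basis functionals ##\mathbf{f}^i## under the Riesz map.

```python
import numpy as np

# Arbitrary non-orthogonal basis (illustrative values), as the columns of B.
e_x = np.array([1.0, 0.2, 0.0])
e_y = np.array([0.3, 1.0, 0.1])
e_z = np.array([0.0, 0.4, 1.0])
B = np.column_stack([e_x, e_y, e_z])

# Reciprocal vectors from the cross-product recipe...
triple = e_x @ np.cross(e_y, e_z)
starred = np.array([np.cross(e_y, e_z),
                    np.cross(e_z, e_x),
                    np.cross(e_x, e_y)]) / triple

# ...coincide with the rows of B^{-1}, i.e. with the coordinate representation
# of the dual basis functionals under the standard dot product (Riesz map).
print(np.allclose(starred, np.linalg.inv(B)))   # True
```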
 
Well, in a general vector space without a scalar product, you can still define the dual space, i.e., the vector space of the linear forms, which are linear maps from the vector space to the real numbers (I assume that we discuss vector spaces over the field of real numbers here). The dual space of a finite-dimensional vector space has the same dimension as the vector space.

Now, given a basis ##\vec{e}_i## (##i \in \{1,\ldots,d\}##), i.e., a complete set of linearly independent vectors, you can expand any vector uniquely in terms of these basis vectors,
$$\vec{V}=V^i \vec{e}_i,$$
where the ##i## in ##V^i## is an index (not a power!), and here and in the following, in all expressions where a pair of equal indices occurs, you have to sum over this index, i.e., here the ##i## runs from ##1## to ##d##.

Now let ##\omega## be a linear form, i.e., a map from the vector space to the real numbers which obeys
$$\omega (\lambda \vec{V} + \mu \vec{W})=\lambda \omega(\vec{V}) + \mu \omega(\vec{W}).$$
Then this map is fully defined if you know the numbers
$$\omega_i=\omega(\vec{e}_i)$$
the basis vectors get mapped to, because
$$\omega(\vec{V})=\omega(V^i \vec{e}_i)=V^i \omega(\vec{e}_i)=\omega_i V^i.$$
Now obviously the linear forms themselves also form a vector space, because for any two such linear forms ##\alpha## and ##\beta##, any linear combination ##\gamma=\lambda \alpha + \mu \beta## is again a linear form, defined simply by
$$\gamma(\vec{V})=\lambda \alpha(\vec{V})+\mu \beta(\vec{V}).$$
Now you can define the co-basis ##\eta^i## as a basis in this vector space of linear forms, which is called the dual space, defined by
$$\eta^i(\vec{e}_j)=\delta_i^j:=\begin{cases} 1 &\text{for} \quad i=j, \\ 0 & \text{for} \quad i \neq j. \end{cases}$$
Now it's clear that you can write any linear form ##\omega## as a linear combination of this dual basis:
$$\omega=\omega_i \eta^i,$$
because for any vector you have
$$\omega_i \eta^i(\vec{V})=\omega_i \eta^{i}(V^j \vec{e}_j)=\omega_i V^j \delta_i^j=\omega_i V^i=\omega(V^i \vec{e}_i)=\omega(\vec{V}).$$
This now tells you how to get the vector components wrt. the basis ##\vec{e}_i##. You just have to use the dual basis vectors ##\eta^j##, because
$$\eta^j(\vec{V})=\eta^j(V^i \vec{e}_i)=V^i \eta^j(\vec{e}_i)=V^i \delta_i^j=V^j.$$
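To tie this back to the original two-dimensional question, here is a symbolic sketch in SymPy (with illustrative symbols ##V##, ##\alpha## and ##\theta## for the magnitude, the angle to ##\vec{e}_1##, and the angle between the axes): it computes the components via the inverse of the basis matrix, i.e. via the dual basis, and checks that they satisfy the projection equations written down earlier in the thread.

```python
import sympy as sp

# Illustrative symbols: |V|, the angle alpha between V and e_1, and the
# angle theta between the two (non-perpendicular) axes, so beta = theta - alpha.
V_mag, alpha, theta = sp.symbols('V alpha theta', positive=True)

# Non-orthogonal unit basis of the plane, collected as the columns of B.
e1 = sp.Matrix([1, 0])
e2 = sp.Matrix([sp.cos(theta), sp.sin(theta)])
B = sp.Matrix.hstack(e1, e2)

# The vector and its components V^i, obtained by acting with the dual basis,
# i.e. with the rows of B^{-1}.
V = V_mag * sp.Matrix([sp.cos(alpha), sp.sin(alpha)])
comps = B.inv() * V                    # (V^1, V^2)

# Check against the projection equations from earlier in the thread:
# V^1 + V^2 cos(theta) = |V| cos(alpha),  V^1 cos(theta) + V^2 = |V| cos(beta).
eq1 = sp.simplify(sp.expand_trig(comps[0] + comps[1]*sp.cos(theta) - V_mag*sp.cos(alpha)))
eq2 = sp.simplify(sp.expand_trig(comps[0]*sp.cos(theta) + comps[1] - V_mag*sp.cos(theta - alpha)))
print(eq1, eq2)                        # 0 0
```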
 
Thank you, @etotheipi (is that the same as ##-1##? :wink:) and @vanhees71 for your encouragement, thoroughness, and kindness. While it will take me some time to understand the theory better, I felt that my question was greeted with compassion. Such human values are as important as the physical science itself.

I have derived the necessary equations following the more elementary treatment in French's book. If you have time and interest, please take a look at my notes (page 40).
 