Confusion between vector components, basis vectors, and scalars

AI Thread Summary
The discussion centers on the relationship between vector components and basis vectors, clarifying that vector components are scalars, while basis vectors are indeed vectors. It emphasizes that to reconstruct a vector, one must multiply the scalar components by their corresponding basis vectors and then sum these products vectorially. The conversation also highlights the importance of understanding that while components can be represented as scalars, they are dependent on the chosen basis, making them not true scalars in an invariant sense. Additionally, it touches on the transformation properties of vector components and the distinction between covariant and contravariant tensors. This nuanced understanding is essential for accurately interpreting vector mathematics in different coordinate systems.
e2m2a
TL;DR Summary
Confused about vector components, basis vectors and scalars.
There is an ambiguity for me about vector components and basis vectors. I think this is how to interpret it and clear it all up, but I could be wrong. I understand a vector component is not a vector itself but a scalar. Yet we break a vector into its "components" and then add them vectorially to get the vector. But if vector components are scalars, we cannot add scalars to get a vector. So is the solution to this confusion as follows? The vector component is indeed only a scalar, but when we multiply the component (the scalar) by the basis vector, we get a vector. (A scalar times a vector is a vector, and I assume basis vectors are vectors.) Hence, to get the vector we are not adding the vector components themselves; we multiply each component by its basis vector and then add those products vectorially. Is this how it's done?
 
Maybe it is clearer if you think of vectors as invariant objects. The components of the vector then transform contravariantly, while the basis vectors transform covariantly:
$$\vec{v} = V^{i}\,\vec{e}_{i}.$$

You don't add the components to get the vector; you multiply the components by the basis vectors and then sum to get the vector. I think it is a little dangerous to think of the components of a vector separately. It is like supposing a matrix is a scalar because ##a_{11}## is a complex number, say ##3i##. Now, while it is true that ##3i## is ##3i## in any space/reference frame, it is not true that ##a_{11}## is the same in every reference frame.

So, in the case here, say ##V = (3,4,3)## in the usual Cartesian coordinates. In the same way, while it is true that ##3=3## in another frame, it is not true that the first component stays the same, ##V^{1}=V^{\prime 1}##, in another frame. So while it is true that we use scalars to write down the components of a vector, this does not mean the components are scalars.
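Here is a minimal numpy sketch of that point (my own illustration, not from the post above; the rotated basis is an arbitrary choice): the same vector has different component values in different bases, even though each component is written down as an ordinary number.

```python
import numpy as np

v = np.array([3.0, 4.0, 3.0])            # components in the Cartesian basis

# A second basis: the Cartesian one rotated by 45 degrees about the z-axis.
# The columns of B are the new basis vectors.
c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
B = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])

v_prime = np.linalg.solve(B, v)          # components w.r.t. the new basis

print(v)         # [3. 4. 3.]
print(v_prime)   # approx [4.95 0.71 3.]: different numbers, same vector,
                 # since B @ v_prime gives back v
```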
 
Your thinking is correct. Basis vectors are usually normalized, so you can think of them as unit vectors. In the simple case of 3d Cartesian space when you write ##\vec A=A_1~\hat x_1+A_2~\hat x_2+A_3~\hat x_3,## the ##\hat x_i## form an orthonormal basis set, ##\hat x_i \cdot \hat x_j=\delta_{ij}##. The components are scalars and are obtained from the scalar product, ##A_i=\vec A \cdot \hat x_i##.

And yes, to get vector ##\vec A## in this case, you are adding components times basis vectors which you can think of as the addition of three vectors having the form ##(A_i~\hat x_i)##.
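A quick numerical check of this decomposition (my own sketch; the vector and the rotated orthonormal basis are arbitrary choices): the components come from dot products with the basis vectors, and summing components times basis vectors rebuilds the vector.

```python
import numpy as np

A = np.array([1.0, -2.0, 5.0])

# An orthonormal basis (the Cartesian one rotated about the z-axis);
# the rows are the unit vectors x1, x2, x3.
c, s = np.cos(0.3), np.sin(0.3)
basis = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])

components = basis @ A                     # A_i = A . x_i (dot products)
A_rebuilt = sum(a * x for a, x in zip(components, basis))

print(np.allclose(A, A_rebuilt))           # True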
 
e2m2a said:
I understand a vector component is not a vector itself but a scalar.
It is under a certain interpretation of the term "scalar", but you have to be careful.

Strictly speaking, a "scalar" is a number that is invariant, i.e., it doesn't depend on your choice of coordinates (or, equivalently, on your choice of basis vectors). But a vector component, as normally understood, is dependent on your choice of basis vectors, which would mean it would not be a scalar.

However, if you think of a vector component in terms of the sum you describe, where a particular vector ##V## is expressed as the sum of its components times their corresponding basis vectors, then we can think of the components as scalars because we have fixed the choice of basis vectors when we construct the sum. Or, to put it another way, we can think of a vector component, as @kuruman says, as the scalar product (inner product, or dot product) of the vector ##V## and the specific basis vector corresponding to the component. The dot product of two specific vectors is an invariant--it will be the same regardless of your choice of basis. (But of course the "basis" vector in the dot product will only be a basis vector for one choice of basis.) So in this sense, yes, vector components are scalars.
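To make the invariance claim concrete, here is a small sketch (my addition; the random vectors and the rotation are arbitrary): the dot product of two fixed vectors comes out the same after expressing both in a rotated orthonormal basis.

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(size=3)
W = rng.normal(size=3)

# A random orthogonal matrix (via QR) plays the role of a change to
# another orthonormal basis.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
V_new, W_new = Q @ V, Q @ W                # components in the new basis

print(np.isclose(V @ W, V_new @ W_new))    # True: the number is invariant
```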
 
This works as follows: Let ##\vec{b}_k## (##k \in \{1,\ldots,d\}##, where ##d## is the dimension of the (real) vector space) be a basis. Then you can define a matrix
$$g_{jk} = \vec{b}_j \cdot \vec{b}_k.$$
By assumption this is a symmetric non-degenerate matrix with only positive eigenvalues, because the scalar product is by definition a positive definite bilinear form.

In this context the basis vectors with lower indices are called co-variant basis vectors. If you have another basis ##\vec{b}_k'##, then there is an invertible matrix ##{T^j}_k## such that
$$\vec{b}_k' = {T^j}_k \vec{b}_j,$$
where the Einstein summation convention is used. According to this convention, any index which occurs twice in a formula is summed over, and of each such pair one index must be lower and the other upper (two equal indices at the same vertical position are strictly forbidden; if such a thing occurs in the formulas of this so-called Ricci calculus, you have made a mistake!).
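As an aside, np.einsum mirrors this summation convention quite directly. A small sketch (my own, with an arbitrary basis and transformation matrix), where T[j, k] plays the role of ##{T^j}_k##:

```python
import numpy as np

b = np.array([[1.0, 0.0],                  # row j is the basis vector b_j
              [1.0, 1.0]])
T = np.array([[2.0, 1.0],                  # an invertible matrix T^j_k
              [0.0, 1.0]])

# b'_k = T^j_k b_j: einsum sums over the repeated index j automatically
b_prime = np.einsum('jk,jn->kn', T, b)
print(b_prime)                             # row k is the new basis vector b'_k
```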

Now vectors are invariant objects, i.e., you can decompose any vector uniquely wrt. any basis with the corresponding vector components,
$$\vec{V}=V^j \vec{b}_j = V^{\prime k} \vec{b}_k' = {T^{j}}_{k} V^{\prime k} \vec{b}_j.$$
Comparing coefficients of ##\vec{b}_j## shows that the components transform "contravariantly", i.e., with the inverse of the matrix that transforms the basis vectors:
$$V^j={T^j}_k V^{\prime k},$$
or defining
$${\tilde{T}^k}_j={(T^{-1})^k}_j \; \Rightarrow \; {\tilde{T}^k}_j {T^j}_l=\delta_l^k = \begin{cases} 1 & \text{if} \quad k=l, \\ 0 & \text{if} \quad k \neq l. \end{cases}$$
Then you have
$${\tilde{T}^l}_j V^j ={\tilde{T}^l}_j {T^j}_k V^{\prime k}=\delta_k^l V^{\prime k}=V^{\prime l}.$$
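Continuing the same sketch numerically (again my own illustration, with the same arbitrary example): the components go over with the inverse matrix, so the reconstructed vector is unchanged.

```python
import numpy as np

b = np.array([[1.0, 0.0], [1.0, 1.0]])     # rows are the b_j
T = np.array([[2.0, 1.0], [0.0, 1.0]])     # T^j_k
b_prime = np.einsum('jk,jn->kn', T, b)     # b'_k = T^j_k b_j

V_comp = np.array([3.0, 4.0])              # V^j
V_comp_prime = np.linalg.inv(T) @ V_comp   # V'^l = tilde(T)^l_j V^j

v_old = np.einsum('j,jn->n', V_comp, b)              # V^j b_j
v_new = np.einsum('k,kn->n', V_comp_prime, b_prime)  # V'^k b'_k
print(np.allclose(v_old, v_new))           # True: the same invariant vector
```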
For the scalar products of vectors you find
$$\vec{V} \cdot \vec{W} =(V^j \vec{b}_j) \cdot (W^k \vec{b}_k) = V^j W^k \vec{b}_j \cdot \vec{b}_k = g_{jk} V^j W^k.$$
The same holds for the components wrt. the other basis,
$$\vec{V} \cdot \vec{W}=g_{jk}' V^{\prime j} W^{\prime k},$$
where the transformation law for the metric components reads
$$g_{jk}' = \vec{b}_j' \cdot \vec{b}_{k}' = (\vec{b}_l {T^l}_j) \cdot (\vec{b}_m {T^m}_k) = {T^l}_j {T^m}_k \vec{b}_l \cdot \vec{b}_m = {T^l}_j {T^m}_k g_{lm},$$
i.e., you have to apply the rule for covariant transformations to each lower index of ##g_{lm}##. The inverse of this formula is of course
$$g_{lm} = {\tilde{T}^j}_l {\tilde{T}^k}_m g_{jk}'.$$
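A numerical check of this transformation law (my sketch, same arbitrary basis and matrix as above): in matrix form it is ##g' = T^{\mathsf{T}} g T##, and the scalar product of two vectors comes out the same in either basis.

```python
import numpy as np

b = np.array([[1.0, 0.0], [1.0, 1.0]])     # rows are the b_j
T = np.array([[2.0, 1.0], [0.0, 1.0]])     # T^j_k

g = b @ b.T                                # g_jk = b_j . b_k
g_prime = T.T @ g @ T                      # g'_jk = T^l_j T^m_k g_lm

V = np.array([3.0, 4.0])                   # V^j
W = np.array([-1.0, 2.0])                  # W^k
V_p, W_p = np.linalg.inv(T) @ V, np.linalg.inv(T) @ W

print(np.isclose(V @ g @ W, V_p @ g_prime @ W_p))   # True
```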
Now you can also introduce a contravariant basis ##\vec{b}^j## for each given covariant basis by demanding that
$$\vec{b}^j \cdot \vec{b}_k=\delta_{k}^j.$$
To find them we need the inverse of the matrix ##g_{jk}##, which we denote as ##g^{lm}##, i.e., we have
$$g_{jk} g^{kl}=\delta_j^l.$$
The matrix ##g_{jk}## is invertible because the scalar product is non-degenerate: it is positive definite, i.e., the matrix has only positive eigenvalues and thus its determinant is non-zero. Indeed, defining
$$\vec{b}^{j}=g^{jk} \vec{b}_k$$
does the job, because
$$(g^{jk} \vec{b}_k) \cdot \vec{b}_l = g^{jk} \vec{b}_k \cdot \vec{b}_l = g^{jk} g_{kl}=\delta_l^j.$$
Then you have
$$\vec{V} \cdot \vec{b}^j=(V^k \vec{b}_k) \cdot \vec{b}^j = V^k \vec{b}_k \cdot \vec{b}^j = V^k \delta_k^j = V^j.$$
So you get the contravariant vector components by multiplying with the co-basis vectors. On the other hand you have
$$\vec{V} = V^j \vec{b}_j = V^j g_{jk} \vec{b}^k=V_k \vec{b}^k.$$
So you get covariant components of ##\vec{V}## as
$$V_k = g_{jk} V^j.$$
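Numerically (my sketch, same example basis as before): build the co-basis from the inverse metric, check the duality relation, and lower an index.

```python
import numpy as np

b = np.array([[1.0, 0.0], [1.0, 1.0]])     # covariant basis, rows are b_j
g = b @ b.T                                # g_jk
g_inv = np.linalg.inv(g)                   # g^jk

b_up = g_inv @ b                           # co-basis: row j is b^j
print(np.allclose(b_up @ b.T, np.eye(2)))  # b^j . b_k = delta^j_k -> True

V_up = np.array([3.0, 4.0])                # contravariant components V^j
V_down = g @ V_up                          # covariant components V_k
print(V_down)
```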

The ##\vec{b}^j## are contravariant, i.e., they transform analogously to the contravariant vector components. To see this, start from the defining relation in the primed basis,
$$\vec{b}^{\prime l} \cdot \vec{b}_m'=\delta_m^l.$$
Inserting ##\vec{b}_m' = {T^j}_m \vec{b}_j## gives
$$\vec{b}^{\prime l} \cdot \vec{b}_m' = \vec{b}^{\prime l} \cdot ({T^j}_m \vec{b}_j)={T^j}_m \, \vec{b}^{\prime l} \cdot \vec{b}_j =\delta_m^l.$$
From this we have
$$\vec{b}^{\prime l} \cdot \vec{b}_j={\tilde{T}^l}_j,$$
because the inverse matrix of ##{T^{j}}_k## is uniquely given by the matrix ##{\tilde{T}^l}_m##. So we have
$$\vec{b}^{\prime l} = (\vec{b}^{\prime l} \cdot \vec{b}_j)\, \vec{b}^j={\tilde{T}^l}_j \vec{b}^j,$$
i.e., they transform contravariantly, as claimed above.
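And a final numerical check of this claim (my sketch, same example data): computing the co-basis of the primed basis directly from its own metric agrees with ##{\tilde{T}^l}_j \vec{b}^j##.

```python
import numpy as np

b = np.array([[1.0, 0.0], [1.0, 1.0]])     # rows are b_j
T = np.array([[2.0, 1.0], [0.0, 1.0]])     # T^j_k
T_inv = np.linalg.inv(T)                   # tilde(T)

b_prime = np.einsum('jk,jn->kn', T, b)     # b'_k = T^j_k b_j

# Co-bases computed independently from each metric:
b_up = np.linalg.inv(b @ b.T) @ b
b_up_prime = np.linalg.inv(b_prime @ b_prime.T) @ b_prime

print(np.allclose(b_up_prime, T_inv @ b_up))   # True: b'^l = tilde(T)^l_j b^j
```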
 