Trouble understanding contravariant transformations for vectors

AI Thread Summary
The discussion centers on the confusion surrounding contravariant transformations of vectors under coordinate changes. A specific transformation example is given, where the coordinates change from (x^1, x^2, ..., x^n) to (2x^1, 2x^2, ..., 2x^n). It is clarified that the transformation of contravariant components involves the partial derivatives of the new coordinates with respect to the old ones, while the basis vectors pick up the inverse factor of 1/2. So although the components are multiplied by 2, they transform oppositely to the basis vectors, which is precisely what "contravariant" means. Understanding this relationship is key to grasping the nature of contravariant transformations.
whisperzone
TL;DR Summary
I don't understand the equation for the contravariant transformation of vector components.
Hey, so I've been studying some math on my own and I'm really confused by this one bit. I understand what contravariant components of a vector are, but I don't understand the ways in which they transform under a change of coordinate system.

For instance, let's say we have two coordinate systems ##(x^1, x^2, ..., x^n)## and ##(\overline{x}^1, \overline{x}^2, ..., \overline{x}^n)## and a coordinate transformation relating the two systems: ##\overline{x}^j = \overline{x}^j (x^1, x^2, ..., x^n)##. Everything I've read says that

$$\overline{a}^j = \sum_{k = 1}^{n} \frac{\partial \overline{x}^j}{\partial x^k} a^k, \text{ where } j = 1, 2, ... n$$

but intuitively this makes no sense to me at all. For instance, consider the coordinate transformation ##(x^1, x^2, ..., x^n) \rightarrow (2x^1, 2x^2, ..., 2x^n)##. Then the partial derivative of ##\overline{x}^j## with respect to ##x^k## should be 2 when ##j = k## (and 0 otherwise), right (or am I missing something here)? And if the partial derivative is 2, is that really a contravariant transformation, since whatever you're applying the transformation to would also get multiplied by 2?

Apologies if anything was too unclear, I've just been struggling with this for a while.
 
Contravariant refers to transforming the opposite way of the tangent basis vectors, which are defined as
$$
\vec E_i = \frac{\partial \vec x}{\partial x^i}.
$$
By the chain rule (with summation over the repeated index ##j## implied), this means that
$$
\vec E_i ' = \frac{\partial \vec x}{\partial x^j} \frac{\partial x^j}{\partial x'^i}.
$$
In the case of your transformation you therefore have ##\vec E_i' = \vec E_i / 2##, so there is a factor of 1/2 in the transformation of the basis vectors. The components (which are multiplied by 2) therefore transform in the opposite fashion, i.e., contravariantly.
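As a quick numerical sanity check of this cancellation (my own sketch, not from the thread), here is the ##n = 2## case of the scaling transformation ##\overline{x}^j = 2x^j## worked out in plain Python: the components pick up the Jacobian factor 2, the basis vectors pick up 1/2, and the vector ##a^k \vec E_k## itself is unchanged.

```python
# Scaling transformation x̄^j = 2 x^j in n = 2 dimensions.
# The Jacobian ∂x̄^j/∂x^k is 2 on the diagonal, 0 elsewhere.
n = 2
J = [[2.0 if j == k else 0.0 for k in range(n)] for j in range(n)]

# Old contravariant components a^k (arbitrary example values).
a = [3.0, 4.0]

# Transform the components: ā^j = Σ_k (∂x̄^j/∂x^k) a^k
a_bar = [sum(J[j][k] * a[k] for k in range(n)) for j in range(n)]
print(a_bar)  # components are doubled: [6.0, 8.0]

# Take the old basis vectors E_k to be the standard basis; the new
# basis vectors pick up the *inverse* factor: Ē_j = E_j / 2.
E = [[1.0, 0.0], [0.0, 1.0]]
E_bar = [[e / 2.0 for e in E_j] for E_j in E]

# The vector itself, v = Σ_k a^k E_k, is unchanged: the doubled
# components cancel against the halved basis vectors.
v_old = [sum(a[k] * E[k][i] for k in range(n)) for i in range(n)]
v_new = [sum(a_bar[j] * E_bar[j][i] for j in range(n)) for i in range(n)]
print(v_old == v_new)  # True
```

The point of the check is the last line: "contravariant" describes how the components move relative to the basis, so that the geometric vector they describe stays the same.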
 