Coordinate Transformation & Jacobian Matrix

Rasalhague
Is the following correct, as far as it goes?

Suppose I have a vector space V and I'm making a transformation from one coordinate system, "the old system", with coordinates ##x^i##, to another, "the new system", with coordinates ##y^i##, where ##i## is an index that runs from 1 to ##n##.

Let ##\textbf{e}_i## denote the coordinate basis for the old system, and ##\textbf{e}'_i## the coordinate basis for the new system.

I can define matrices ##B_L## and ##B_R## (where the subscripts L and R stand for "left" and "right") such that

$$B_L \begin{bmatrix} \vdots \\ \textbf{e}_i \\ \vdots \end{bmatrix} = \begin{bmatrix} \vdots \\ \textbf{e}'_i \\ \vdots \end{bmatrix}$$

$$\begin{bmatrix} \cdots & \textbf{e}_i & \cdots \end{bmatrix} B_R = \begin{bmatrix} \cdots & \textbf{e}'_i & \cdots \end{bmatrix}$$

and likewise matrices ##C_L## and ##C_R##, replacing the basis vectors in the above definitions with components of vectors in (the underlying set of) V.

And

$$C_L = \left ( C_R \right )^T = \begin{bmatrix} \frac{\partial y^1}{\partial x^1} & \cdots & \frac{\partial y^1}{\partial x^n} \\ \vdots & \ddots & \vdots \\ \frac{\partial y^n}{\partial x^1} & \cdots & \frac{\partial y^n}{\partial x^n} \end{bmatrix}$$

and

$$\left ( C_L \right )^{-1} = B_L = \left ( B_R \right )^T = \begin{bmatrix} \frac{\partial x^1}{\partial y^1} & \cdots & \frac{\partial x^1}{\partial y^n} \\ \vdots & \ddots & \vdots \\ \frac{\partial x^n}{\partial y^1} & \cdots & \frac{\partial x^n}{\partial y^n} \end{bmatrix}.$$
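As a sanity check on the claim that ##B_L = (C_L)^{-1}##, here is a quick numeric sketch in Python (my own illustration, not from any of the cited books), using the 2d Cartesian-to-polar transformation ##y^1 = r##, ##y^2 = \theta##:

```python
import numpy as np

# Sample point: Cartesian (x1, x2) and the corresponding polar (r, theta).
x1, x2 = 3.0, 4.0
r = np.hypot(x1, x2)           # r = 5
theta = np.arctan2(x2, x1)

# C_L = d(y^i)/d(x^j): Jacobian of the polar coordinates w.r.t. Cartesian.
C_L = np.array([[x1 / r,      x2 / r],
                [-x2 / r**2,  x1 / r**2]])

# B_L = d(x^i)/d(y^j): Jacobian of the Cartesian coordinates w.r.t. polar.
B_L = np.array([[np.cos(theta), -r * np.sin(theta)],
                [np.sin(theta),  r * np.cos(theta)]])

# The two matrices should be inverses of each other at this point.
print(np.allclose(C_L @ B_L, np.eye(2)))  # True
```

Note that both Jacobians are evaluated at the same point; the inverse relationship holds pointwise, not as a single constant matrix.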

And some people (e.g. Wolfram MathWorld, Berkey & Blanchard: Calculus) define the Jacobian matrix of this transformation as

$$J \equiv C_L \equiv \frac{\partial \left ( y^1,...,y^n \right )}{\partial \left ( x^1,...,x^n \right )}$$

while others (e.g. Snider & Davis: Vector Analysis) define it as

$$J \equiv \left ( C_L \right )^{-1} \equiv \frac{\partial \left ( x^1,...,x^n \right )}{\partial \left ( y^1,...,y^n \right )}.$$
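One consequence of the two conventions being matrix inverses is that their determinants are reciprocals, which is why change-of-variables formulas work out the same either way as long as one convention is used consistently. A small sketch (my own, again using the polar example):

```python
import numpy as np

x1, x2 = 3.0, 4.0
r = np.hypot(x1, x2)
theta = np.arctan2(x2, x1)

# Convention 1: J = d(r, theta)/d(x1, x2); its determinant is 1/r.
J_fwd = np.array([[x1 / r,      x2 / r],
                  [-x2 / r**2,  x1 / r**2]])

# Convention 2: J = d(x1, x2)/d(r, theta); its determinant is r.
J_bwd = np.array([[np.cos(theta), -r * np.sin(theta)],
                  [np.sin(theta),  r * np.cos(theta)]])

# The determinants multiply to 1: (1/r) * r = 1.
print(np.isclose(np.linalg.det(J_fwd) * np.linalg.det(J_bwd), 1.0))  # True
```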
 
They are really the same thing. Your ##C_L## transforms from the ##x^i## coordinate system to the ##y^i## coordinate system, while ##C_L^{-1}##, of course, goes the other way, transforming from the ##y^i## coordinate system to the ##x^i## coordinate system.
 
I don't understand how they're "really the same thing". For a given coordinate transformation, won't ##C_L## generally be a different matrix from its inverse ##B_L##? Also, changing a subscript on one of these matrices from L to R or vice versa transposes it, and in general a matrix is not the same thing as its transpose.

Experimenting with the transformation from 2d Cartesian to plane polar coordinates confirms that using the wrong matrices, or the right ones in the wrong order, gives the wrong answer. In fact in this thread, I did make a mistake (see #4), and if I'd done the multiplication correctly it wouldn't have given the required answer.
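To illustrate the kind of experiment described above (a sketch of my own, not the exact calculation from post #4): applying ##C_L## to a vector's old components gives its new components, and substituting the transpose gives a genuinely different answer, since ##C_L## is not symmetric here.

```python
import numpy as np

# Polar example again: point (x1, x2) = (3, 4), so r = 5.
x1, x2 = 3.0, 4.0
r = np.hypot(x1, x2)

# C_L = d(r, theta)/d(x1, x2): maps old (Cartesian) vector components
# to new (polar) components.
C_L = np.array([[x1 / r,      x2 / r],
                [-x2 / r**2,  x1 / r**2]])

v_old = np.array([1.0, 0.0])   # unit vector along x1, in old components

v_new = C_L @ v_old            # correct transformation of components
v_bad = C_L.T @ v_old          # using the transpose instead

# The transpose gives a different result, confirming the order/choice matters.
print(np.allclose(v_new, v_bad))  # False
```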

I'm thinking if they were literally the same, there'd be no need for Griffel's "Warning. There are two ways to define the change matrix. In our definition, the columns are the B'-components of the B vectors. Some authors define it with B and B' interchanged, giving a matrix which is the inverse of ours. The two versions of the theory look slightly different, but they are equivalent. It does not matter which version is used, provided it is used consistently. Using formulas from one version in a calculation from the other version will give the wrong answers" (Linear Algebra and its Applications, Vol. 2, p. 11).

But maybe you only meant that they're the same sort of thing, or that if we swapped the labels "old" and "new", the same matrices would be playing opposite roles.
 
Rasalhague said:
But maybe you only meant that they're the same sort of thing, or that if we swapped the labels "old" and "new", the same matrices would be playing opposite roles.
Yes, this is correct.
 