Developing the Inner Product in Polar Coordinates via the Metric

Main Question or Discussion Point

Hey all,

I've never taken a formal class on tensor analysis, but I've been trying to learn a few things about it. I was looking at the metric tensor in curvilinear coordinates. This Wikipedia article claims that you can formulate a dot product in curvilinear coordinates through the following:

$\textbf{u} \cdot \textbf{v} = g_{ij} u^i v^j$ where $g_{ij}$ is the covariant metric tensor for the coordinate system.

In particular, I'm interested in 2-D polar coordinates, for which $g_{ij} = \left( \begin{array}{cc} 1 & 0 \\ 0 & r^2 \end{array} \right)$. This makes sense: polar coordinates are orthogonal, so $g_{ij}$ is diagonal, and in the line element $ds^2 = dr^2 + r^2 \, d\theta^2$ the $dr$ contribution is direct while the $d\theta$ contribution scales with $r$. Now I expand the expression for the dot product using Einstein summation:

$\textbf{u} \cdot \textbf{v} = u^1 v^1 + (r)^2 u^2 v^2$

Now this really doesn't make much sense to me. To me, this implies that the length of a vector in polar coordinates depends on $\theta$, which isn't true; the length is contained entirely within the $r$ component. If we take two position vectors written in polar coordinates (magnitude $u^1$, angle $u^2$), transform them to Cartesian, and then write our dot product, we find $\textbf{u} \cdot \textbf{v} = u^1 v^1 \cos(u^2 - v^2)$, but I don't see how to get that from this metric and the given dot product formulation. Is there any way to get the same dot product using the metric of the coordinate space? If not, why isn't this working? Thanks!

chiro
Hey gordon831 and welcome to the forums.

Let's consider going from Euclidean (Cartesian) to polar coordinates.

$x = r \cos \theta, \quad y = r \sin \theta$

$r = \sqrt{x^2 + y^2}, \quad \theta = \arctan(y/x)$

$\frac{\partial r}{\partial x} = \frac{x}{\sqrt{x^2+y^2}}, \quad \frac{\partial r}{\partial y} = \frac{y}{\sqrt{x^2+y^2}}, \quad \frac{\partial \theta}{\partial x} = -\frac{y}{x^2+y^2}, \quad \frac{\partial \theta}{\partial y} = \frac{x}{x^2+y^2}$

So from this, our Jacobian for the transformation is

$J = \left( \begin{array}{cc} \partial r/\partial x & \partial r/\partial y \\ \partial \theta/\partial x & \partial \theta/\partial y \end{array} \right)$

from which $J J^{\mathsf{T}}$ gives the contravariant metric $g^{ij}$, the inverse of $g_{ij}$.
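As a quick numerical sketch (my own addition, not part of the original post), one can evaluate this Jacobian at an arbitrary sample point and check that $J J^{\mathsf{T}}$ comes out as $\mathrm{diag}(1, 1/r^2)$:

```python
import math

# Arbitrary sample point, chosen only for illustration
x, y = 1.0, 2.0
r = math.hypot(x, y)

# Jacobian of (r, theta) with respect to (x, y)
J = [[x / r,      y / r],
     [-y / r**2,  x / r**2]]

# Compute J J^T explicitly
JJt = [[sum(J[i][k] * J[j][k] for k in range(2)) for j in range(2)]
       for i in range(2)]

# J J^T = diag(1, 1/r^2): the contravariant (inverse) metric
assert abs(JJt[0][0] - 1.0) < 1e-12
assert abs(JJt[1][1] - 1.0 / r**2) < 1e-12
assert abs(JJt[0][1]) < 1e-12
```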

Is this what you got for your tensor where the transformation is from (x,y) to (r,theta)?

So here's how I developed the metric. One way to develop a covariant metric of a coordinate space is to dot the covariant bases:

$g_{ij} = \textbf{e}_i \cdot \textbf{e}_j$
(see eq. (9) in the Source)

To do the dot product, I use my bases defined in Cartesian coordinates so the dot product is simply $a_i b_i$. My bases in Cartesian coordinates are:

$\vec{e}_r = \frac{\partial x}{\partial r} \hat{e}_x + \frac{\partial y}{\partial r} \hat{e}_y = \cos \theta \hat{e}_x + \sin \theta \hat{e}_y$
$\vec{e}_\theta = \frac{\partial x}{\partial \theta} \hat{e}_x + \frac{\partial y}{\partial \theta} \hat{e}_y = -r \sin \theta \hat{e}_x + r \cos \theta \hat{e}_y$
With $\vec{e}_\theta$ remaining un-normalized.

This gives the covariant metric tensor for polar coordinates:

$g_{ij} = \left( \begin{array}{cc} 1 & 0 \\ 0 & r^2 \end{array} \right)$
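As a numerical sketch of my own (not in the original post), one can build $\vec{e}_r$ and $\vec{e}_\theta$ in Cartesian components at an arbitrary point and dot them, recovering this matrix:

```python
import math

# Arbitrary sample point (r, theta); any choice gives the same structure
r, theta = 2.0, 0.7

# Covariant basis vectors expressed in Cartesian components
e_r     = (math.cos(theta),      math.sin(theta))
e_theta = (-r * math.sin(theta), r * math.cos(theta))

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

# g_ij = e_i . e_j
g = [[dot(e_r, e_r),     dot(e_r, e_theta)],
     [dot(e_theta, e_r), dot(e_theta, e_theta)]]

# g = diag(1, r^2), matching the covariant metric above
assert abs(g[0][0] - 1.0) < 1e-12
assert abs(g[1][1] - r**2) < 1e-12
assert abs(g[0][1]) < 1e-12
```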

I went through using the Jacobian you described, and I got the inverse (contravariant) tensor:

$g^{ij} = \left( \begin{array}{cc} 1 & 0 \\ 0 & \frac{1}{r^2} \end{array} \right)$

which makes sense. Let Cartesian components be denoted with $x$ and polar ones with $q$. Your Jacobian is expressed as $\frac{\partial q^i}{\partial x^j}$, but the basis vectors dotted above are built from $\frac{\partial x^i}{\partial q^j}$ - the inverse.

gordon831 said:
$\textbf{u} \cdot \textbf{v} = u^1 v^1 + (r)^2 u^2 v^2$

Now this really doesn't make much sense to me. To me, this implies that the length of a vector in polar coordinates depends on $\theta$ [...] Is there any way to get the same dot product using the metric of the coordinate space? If not, why isn't this working?
What you need to understand is that position vectors transform according to the full nonlinear transformation; notions of length for them are not handled by the metric.

The metric does, however, allow you to take dot products of vectors that themselves depend on position, i.e. vector fields. That is, at some $(r,\theta)$, the vectors $u$ and $v$ point in some directions.

Simple case: let $u = 2e_r + 3e_\theta$, and let's find $u \cdot u$. The metric (and common logic) tells you that $u \cdot u = 4 + 9r^2$. Here $r$ refers to where this vector is located in our coordinate system, so quite the contrary: it's not $\theta$ that enters into anything but $r$. The reason this happens is that $e_\theta$ is not a unit vector; its magnitude is $r$ everywhere. So $u \cdot u = 4 + 9r^2$ is a perfectly sensible result.
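As a quick numerical check (a sketch of my own, assuming the Cartesian expressions for the basis vectors used elsewhere in the thread), converting $u$ to Cartesian components at a sample point reproduces $4 + 9r^2$:

```python
import math

r, theta = 1.5, 0.3  # arbitrary location of the vector's tail

# Coordinate basis vectors at (r, theta), in Cartesian components
e_r     = (math.cos(theta),      math.sin(theta))
e_theta = (-r * math.sin(theta), r * math.cos(theta))

# u = 2 e_r + 3 e_theta, converted to Cartesian components
u = (2 * e_r[0] + 3 * e_theta[0],
     2 * e_r[1] + 3 * e_theta[1])

# The ordinary Cartesian dot product agrees with the metric formula 4 + 9 r^2
u_dot_u = u[0]**2 + u[1]**2
assert abs(u_dot_u - (4 + 9 * r**2)) < 1e-12
```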

Okay, so when I first read your response I was really confused about what it means for a vector to be "located" at $(r,\theta)$, and I think I'm still a little confused. By specifying $(r, \theta)$, we are defining our basis vectors. If we normalize $e_r$ and $e_\theta$, we see that varying $\theta$ is the Cartesian equivalent of rotating the basis about the positive z-axis by $\theta$. But what does it mean to specify an $r$ at which a vector is located?

Yes, the basis vectors depend on $(r,\theta)$. This is precisely what I meant. Hence, you cannot just say you have a vector with polar components. One must specify the location $(r,\theta)$ at which the basis vectors are evaluated. To me, that's simply saying what the vector's position in the coordinate space is. Unlike Cartesian coordinates, in order to make sense of a vector in a polar coordinate basis, you must know where the vector lies.

Let me give a concrete example. Let's say you have $u = u^1 e_r + u^2 e_\theta$ and similarly for $v$. The metric tells us that

$$u \cdot v = u^1 v^1 + (r)^2 u^2 v^2$$

We can verify this by writing out the expressions for $e_r$ and $e_\theta$ in a Cartesian basis.

$$e_r = e_x \cos \theta + e_y \sin \theta \\ e_\theta = r(-e_x \sin \theta + e_y \cos \theta)$$

Do you follow this so far? I hope so. In particular, pay attention to $e_\theta$. It increases in magnitude as you get further away from the origin. This leads to some strange results. For example, if you had the vector $e_y$ and wanted to move it smoothly along the x-axis, its expression in polar would be $e_y = (1/r) e_\theta$. Its apparent component of $e_\theta$ would decrease as the vector is transported along the axis, even though the full magnitude of the vector is not changing.
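That transport can be sketched numerically (my own illustration, not from the original post): along the positive x-axis we have $\theta = 0$, so $e_\theta = r\,e_y$ and the fixed Cartesian vector $e_y$ has polar components $(0, 1/r)$, whose metric length stays 1:

```python
# Along the positive x-axis (theta = 0), e_theta = r * e_y in Cartesian terms,
# so the constant vector e_y has polar components (u^r, u^theta) = (0, 1/r).
for r in (1.0, 2.0, 5.0):
    u_r, u_theta = 0.0, 1.0 / r
    # Metric length squared: g_ij u^i u^j = (u^r)^2 + r^2 (u^theta)^2
    length_sq = u_r**2 + r**2 * u_theta**2
    # The theta-component shrinks with r, but the metric length stays 1
    assert abs(length_sq - 1.0) < 1e-12
```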

At any rate, you can compute the dot products of these vectors.

$$e_r \cdot e_r = \cos^2 \theta + \sin^2 \theta = 1\\ e_r \cdot e_\theta = r(-\cos \theta \sin \theta + \cos \theta \sin \theta) = 0 \\ e_\theta \cdot e_\theta = r^2 (\sin^2 \theta + \cos^2 \theta) = r^2$$

Exactly like you'd expect.

Anyway, I'm not exactly sure what's confusing you. I think you still don't grasp that this can't be used for position vectors. You need to picture, say, a vector with its tail fixed to a specified point (like $r=1, \theta=\pi/4$, say) and pointing in some arbitrary direction (not necessarily the radial). That vector's components are evaluated in terms of the basis vectors at that point where the tail is fixed. Hopefully that helps.

Okay, so let me see if I've got this.

Let's say I have two vectors: $u$, which has length 1 and is parallel to the Cartesian x-axis, and $v$, which has length 1 and is rotated $\frac{\pi}{4}$ radians counter-clockwise from the x-axis. These are position vectors, so the metric doesn't really apply here.

Now, I'm going to write these vectors in terms of the basis. For convenience, I'll write both vectors as if they were located at $(1,0)$ on the polar coordinate system. I will find: $$u = e_r$$ $$v = \frac{1}{\sqrt{2}}e_r + \frac{1}{\sqrt{2}}e_\theta$$ Which leads to: $$u \cdot v = \frac{1}{\sqrt{2}}$$ Which is correct. But isn't it arbitrary what I choose for $(r,\theta)$? I mean, depending on the choice the vector components change because the basis changes. At a more abstract level, I can see that $\theta$ will never be significant in the length of a vector, because it's not a factor in the dot product.
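Just to convince myself numerically (a sketch of my own, using the standard conversion of a Cartesian vector into polar-basis components), the same two vectors give $u \cdot v = \frac{1}{\sqrt{2}}$ no matter which $(r,\theta)$ I pick for their tails:

```python
import math

# Cartesian vectors: u along x, v rotated pi/4, both of unit length
u_cart = (1.0, 0.0)
v_cart = (math.cos(math.pi / 4), math.sin(math.pi / 4))

def to_polar_components(w, r, theta):
    # w = w^r e_r + w^theta e_theta, where e_theta has magnitude r
    w_r = w[0] * math.cos(theta) + w[1] * math.sin(theta)
    w_t = (-w[0] * math.sin(theta) + w[1] * math.cos(theta)) / r
    return w_r, w_t

# The metric dot product is the same wherever the tails sit
for r, theta in [(1.0, 0.0), (2.0, 1.1), (0.5, -2.3)]:
    ur, ut = to_polar_components(u_cart, r, theta)
    vr, vt = to_polar_components(v_cart, r, theta)
    dot = ur * vr + r**2 * ut * vt          # g_ij u^i v^j
    assert abs(dot - 1.0 / math.sqrt(2)) < 1e-12
```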

Sorry to be so bothersome, I've never considered that the tail of a vector might be important - I thought vectors were considered "free" or that their initial points held no significance.

Yes, it's arbitrary what position you choose for this example, but the choice becomes meaningful when you're talking about vector fields, which naturally carry a notion of the position they're at.