# Origin translation tensor

1. Sep 13, 2009

### espen180

Let's say I have two coordinate systems in first-rank tensor form:
$$x^{\mu}=\left[\begin{matrix} x \\ y \\ z \end{matrix}\right]$$ and $$x^{\mu^\prime}=\left[\begin{matrix} x^\prime \\ y^\prime \\ z^\prime \end{matrix}\right]$$

and I want to translate the origin of $$x^{\mu^\prime}$$ to the point (a,b,c) in $$x^{\mu}$$. I can do this by using a second-rank transformation tensor $${T^{\mu^\prime}}_{\mu}$$ such that $$x^{\mu^\prime}={T^{\mu^\prime}}_{\mu}x^{\mu}$$.

Would I be "cheating" if I said that $${T^{\mu^\prime}}_{\mu}$$ can be written as $${T^{\mu^\prime}}_{\mu}=\left[\begin{matrix} 1+\frac{a}{x} & 0 & 0 \\ 0 & 1+\frac{b}{y} & 0 \\ 0 & 0 & 1+\frac{c}{z}\end{matrix}\right]$$? Because sure enough, if you carry out the multiplication, you find that $$x^\prime=x+a$$ and so on, but since $${T^{\mu^\prime}}_{\mu}$$ includes the original coordinates, it is not independent of $$x^{\mu}$$.

Is it therefore necessary to represent the three-vector above as a four-vector $$x^{\mu}=[x,y,z,1]^T$$ and
$${T^{\mu^\prime}}_{\mu}=\left[\begin{matrix} 1 & 0 & 0 & a \\ 0 & 1 & 0 & b \\ 0 & 0 & 1 & c \\ 0 & 0 & 0 & 1\end{matrix}\right]$$
in order for the transformation to be "rigorous" enough?
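As a quick numerical check (a sketch using numpy, with arbitrary values chosen for a, b, c), the 4×4 homogeneous matrix above does reproduce $$x^\prime=x+a$$ and so on componentwise:

```python
import numpy as np

# Hypothetical translation offsets (a, b, c) -- any values work.
a, b, c = 2.0, -1.0, 3.5

# Homogeneous translation matrix acting on (x, y, z, 1) vectors.
T = np.array([
    [1.0, 0.0, 0.0, a],
    [0.0, 1.0, 0.0, b],
    [0.0, 0.0, 1.0, c],
    [0.0, 0.0, 0.0, 1.0],
])

x = np.array([1.0, 2.0, 3.0, 1.0])  # the point (1, 2, 3) in homogeneous form
x_prime = T @ x
print(x_prime)  # -> [3.  1.  6.5 1. ], i.e. (x+a, y+b, z+c, 1)
```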

Thanks for any help.

2. Sep 13, 2009

### dx

You want to translate the origin to some point. What about the rest of the points, what do you want to do to them?

3. Sep 13, 2009

### espen180

In this transformation, I want all the points of $$x^{\mu^\prime}$$ to undergo the same translation as the origin, such that the result is an orthogonal coordinate system with "grid-increments" the same size as in $$x^{\mu}$$.

4. Sep 13, 2009

### dx

A translation is simply $$x^{\mu^\prime} = x^{\mu} + a^{\mu}$$. A translation cannot be done by matrix multiplication because matrix multiplication always takes the origin to itself.

5. Sep 13, 2009

### espen180

I don't understand. Which origin are you talking about? What the multiplication does is relate one set of coordinates to another, right?

6. Sep 13, 2009

### dx

Only if the coordinates are related by a linear transformation. A translation is not a linear transformation, and therefore it cannot be represented by matrix multiplication. It is easy to see that matrix multiplication always sends (0, 0, 0) to (0, 0, 0).
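This point can be illustrated numerically (a minimal sketch with numpy and an arbitrarily chosen matrix and offset): any matrix sends the zero vector to the zero vector, whereas a translation by a nonzero vector does not, so no fixed 3×3 matrix can represent it.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))   # an arbitrary 3x3 matrix
origin = np.zeros(3)

# Any matrix sends the origin to the origin...
print(M @ origin)                 # -> [0. 0. 0.]

# ...but a translation by a nonzero vector moves the origin,
# so it cannot be written as M @ x for any fixed matrix M.
a = np.array([2.0, -1.0, 3.5])
print(origin + a)                 # -> [ 2.  -1.   3.5]
```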

7. Sep 13, 2009

### espen180

Unless I am mistaken, I would say that $$\left[\begin{matrix} 1 & 0 & 0 & a \\ 0 & 1 & 0 & b \\ 0 & 0 & 1 & c \\ 0 & 0 & 0 & 1\end{matrix}\right]\left[\begin{matrix} x \\ y \\ z \\ 1 \end{matrix}\right]$$ sends (0,0,0) to (a,b,c).

Then why doesn't this multiplication work?

8. Sep 13, 2009

### dx

It sends (0, 0, 0, 1) to (a, b, c, 1).

But yes, what you did is ok as long as you interpret (x, y, z, 1) as the point (x, y, z). Basically, you are using a linear transformation of a 4-dimensional space to generate a translation of a 3-dimensional subset. But why would you do that when you can just add the vector aμ to xμ?

9. Sep 13, 2009

### Ben Niehoff

That works just fine. I think dx might be confused. Translation can be represented by matrix multiplication by extending your space by one more dimension as you have done above. In effect, a translation in D dimensions is just a projection of a linear transformation in D+1 dimensions onto a D-dimensional subspace (the hyperplane w=1, in your case).

Note, however, that your transformation matrix is no longer a D-dimensional tensor.

10. Sep 14, 2009

### John Creighto

You might do this if you wanted to compose several operations and then maybe invert them.
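A sketch of that use case (my own example, using numpy): in homogeneous form a rotation and a translation compose into a single 4×4 matrix, and the composite can be inverted in one step.

```python
import numpy as np

def translation(a, b, c):
    """4x4 homogeneous matrix translating by (a, b, c)."""
    T = np.eye(4)
    T[:3, 3] = [a, b, c]
    return T

def rotation_z(theta):
    """4x4 homogeneous matrix rotating by theta about the z axis."""
    R = np.eye(4)
    R[0, 0] = R[1, 1] = np.cos(theta)
    R[0, 1] = -np.sin(theta)
    R[1, 0] = np.sin(theta)
    return R

# Compose: rotate, then translate -- a single matrix does both.
M = translation(2.0, -1.0, 0.0) @ rotation_z(np.pi / 2)

p = np.array([1.0, 0.0, 0.0, 1.0])   # the point (1, 0, 0)
q = M @ p
print(q[:3])                          # (1,0,0) rotates to (0,1,0), then shifts to (2, 0, 0)

# The whole chain is undone by a single matrix inverse.
print(np.linalg.inv(M) @ q)           # recovers p
```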

11. Sep 14, 2009

### dx

Does the addition of $$a^{\mu}$$ to $$x^{\mu}$$ prevent you from doing further operations on the vector or inverting them?

12. Sep 14, 2009

### espen180

Not really, but by expressing the transformation as a second-order, (D+1)-dimensional tensor it is possible to compress all the necessary operations into a single tensor $${T^{\mu^\prime}}_{\mu}$$. This is of course also possible after doing the vector addition, which leaves one to decide which of
$$x^{\mu^\prime}={T^{\mu^\prime}}_{\mu}x^{\mu}$$ (1)
and
$$x^{\mu^\prime}={T^{\mu^\prime}}_{\mu}\left(x^{\mu}+a^{\mu}\right)$$ (2)
he/she prefers.

The difference in $${T^{\mu^\prime}}_{\mu}$$ between the two is that in (2) it is a D-dimensional tensor, while in (1) it is a (D+1)-dimensional tensor.

13. Sep 14, 2009

### dx

It's not a question of preference, but of utility. A translation is a translation, and the simplest way to translate a point (x, y, z) is to add a vector (a, b, c) to it. Or you could do something more complicated, like you did, but I still don't see why you would want to do that. What it looks like to me from your OP is that you think that any transformation between two coordinate systems must be a matrix multiplication, which is not true. Based on this misconception, you tried to get the translation into matrix form. Is that correct, or did you have some other reason you wanted it in that form?

Last edited: Sep 14, 2009
14. Sep 14, 2009

### espen180

I don't believe that a transformation from one coordinate system to another must be a matrix multiplication, but I believe that any such transformation may be expressed as one. Is that correct?

If not, to what extent is it true?

15. Sep 14, 2009

### dx

A general smooth change of coordinates on a patch of space (or some other manifold) is of the form

$$x^{\mu}(P) = f^{\mu}(y^1(P), y^2(P), ..., y^n(P))$$​

where the $$f^{\mu}$$ are smooth functions. A complete description of the transformation requires you to specify these functions. At each point, the coordinate system gives you a basis for the tangent and cotangent spaces. The components of a vector in a given coordinate system refer to this basis. The components of the same vector at a point A in the two coordinate systems are related in the following way: $$V'^{\mu} = T^{\mu}_{\nu}V^{\nu}$$, where

$$T^{\mu}_{\nu} = \frac{\partial y^{\mu}}{\partial x^{\nu}}(A)$$​

So locally, a general smooth change of coordinate is a linear transformation of the coordinates on the tangent space. This is the only way linear transformations naturally arise from general coordinate transformations: they describe the local effect on the tangent and cotangent spaces and more general tensors constructed from them.
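A concrete 2D instance of this (my own example, a sketch in numpy): take the x-system to be Cartesian coordinates and the y-system to be polar coordinates, with $$r=\sqrt{x^2+y^2}$$ and $$\theta=\operatorname{atan2}(y,x)$$. The Jacobian evaluated at a point A transforms the Cartesian components of a vector at A into components in the polar coordinate basis.

```python
import numpy as np

def jacobian_polar_wrt_cartesian(x, y):
    """T = d(r, theta)/d(x, y), evaluated at the point (x, y)."""
    r = np.hypot(x, y)
    return np.array([
        [x / r,     y / r],      # dr/dx,     dr/dy
        [-y / r**2, x / r**2],   # dtheta/dx, dtheta/dy
    ])

# A vector with Cartesian components (1, 0), attached at the point A = (1, 1):
A = (1.0, 1.0)
V_cart = np.array([1.0, 0.0])

T = jacobian_polar_wrt_cartesian(*A)   # the local linear transformation at A
V_polar = T @ V_cart                   # components in the polar coordinate basis
print(V_polar)                         # [dr-component, dtheta-component]
```

Note that T depends on the point A: the transformation is linear on each tangent space, not on the coordinates globally.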

Last edited: Sep 14, 2009
16. Sep 14, 2009

### espen180

From what I have read about tangent spaces, I understand that they are (D−1)-dimensional manifolds (in 2 and 3 dimensions: lines and planes) which are tangent to a given function at a given point, though that's as far as my understanding goes.

I assume $$x^{\mu}$$ is a first-order tensor representing a coordinate system. What is $$y$$ in your equation?

17. Sep 14, 2009

### dx

x and y are two coordinate systems, the coordinates in the systems being (x1, x2, ..., xn) and (y1, y2, ..., yn) respectively. They are not tensors. The tangent space to a surface at a point is the set of tangent vectors at that point, and has the same dimension as the surface.

18. Sep 14, 2009

### espen180

Okay. Here is my interpretation of your equation. Please tell me if it is correct or not:

P is a point in n-dimensional space. $$x^{\mu}(P)$$ is the point given in the coordinates of $$x^{\mu}$$.

The function $$f^{\mu}$$ relates each individual coordinate of $$x^{\mu}$$ with the coordinates of $$y^{\mu}$$.

A smooth function is a function which has derivatives of all orders. I think this means that $$f^{(n)}$$ is never 0.

Last edited: Sep 14, 2009
19. Sep 14, 2009

### dx

All correct except that last line. Derivatives of all orders must exist, but they can be zero.

20. Sep 14, 2009

### espen180

Okay. Still one question remains.

Why is the tangent space important in this context?