How can we represent tensors in the conformal model?


Discussion Overview

The discussion revolves around the representation of tensors in a conformal model, particularly focusing on the transformation of coordinate systems and the implications of using matrix multiplication versus vector addition for translations. The scope includes theoretical aspects of tensor representation, transformations, and the mathematical foundations underlying these concepts.

Discussion Character

  • Debate/contested
  • Technical explanation
  • Mathematical reasoning

Main Points Raised

  • One participant proposes using a second-rank transformation tensor to translate coordinates, questioning if this approach is valid given that the tensor includes original coordinates.
  • Another participant challenges the idea of using matrix multiplication for translations, stating that it cannot be done as matrix multiplication always keeps the origin fixed.
  • A different viewpoint suggests that a translation can be represented in a higher-dimensional space, allowing for the use of matrix multiplication by adding an extra dimension.
  • Some participants discuss the implications of representing transformations as either D-dimensional or D+1-dimensional tensors, noting the differences in utility and complexity.
  • There is a mention of the necessity of specifying smooth functions for general coordinate transformations, highlighting the relationship between tangent spaces and coordinate systems.
  • One participant seeks clarification on the notation used for transformations and the meaning of differentiating coordinates.

Areas of Agreement / Disagreement

Participants express differing views on the validity and utility of using matrix multiplication for translations, with some arguing against it while others defend the approach under certain conditions. The discussion remains unresolved regarding the best method for representing these transformations.

Contextual Notes

Participants note that translations are not linear transformations and that the representation of transformations may depend on the dimensionality of the tensors involved. There is also an emphasis on the need for clarity in defining the functions that relate different coordinate systems.

espen180
Let's say I have two coordinate systems in first-rank tensor form:
x^{\mu}=\left[\begin{matrix} x \\ y \\ z \end{matrix}\right] and x^{\mu^\prime}=\left[\begin{matrix} x^\prime \\ y^\prime \\ z^\prime \end{matrix}\right]

and I want to translate the origin of x^{\mu^\prime} to the point (a,b,c) in x^{\mu}. I can do this by using a second-rank transformation tensor {T^{\mu^\prime}}_{\mu} such that x^{\mu^\prime}={T^{\mu^\prime}}_{\mu}x^{\mu}.

Would I be "cheating" if I said that {T^{\mu^\prime}}_{\mu} can be written as {T^{\mu^\prime}}_{\mu}=\left[\begin{matrix} 1+\frac{a}{x} & 0 & 0 \\ 0 & 1+\frac{b}{y} & 0 \\ 0 & 0 & 1+\frac{c}{z}\end{matrix}\right]? Because sure enough, if you carry out the multiplication, you find that x^\prime=x+a and so on, but since {T^{\mu^\prime}}_{\mu} includes the original coordinates, it is not independent of x^{\mu}.

Is it therefore necessary to represent the three-vector above as the four-vector x^{\mu}=[x,y,z,1]^T and
{T^{\mu^\prime}}_{\mu}=\left[\begin{matrix} 1 & 0 & 0 & a \\ 0 & 1 & 0 & b \\ 0 & 0 & 1 & c \\ 0 & 0 & 0 & 1\end{matrix}\right]
in order for the transformation to be "rigorous" enough?

Thanks for any help.
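(Not part of the thread, but the four-vector construction above is easy to check numerically. This is a minimal NumPy sketch; the variable names and the specific offset (a, b, c) are mine.)

```python
import numpy as np

# Homogeneous 4x4 translation matrix for the offset (a, b, c)
a, b, c = 2.0, -1.0, 5.0
T = np.array([
    [1, 0, 0, a],
    [0, 1, 0, b],
    [0, 0, 1, c],
    [0, 0, 0, 1],
], dtype=float)

# Represent the 3D point (x, y, z) as the 4-vector (x, y, z, 1)
x = np.array([3.0, 4.0, 7.0, 1.0])

x_prime = T @ x
print(x_prime)  # -> (5, 3, 12, 1), i.e. (x+a, y+b, z+c, 1)
```

Carrying out the multiplication confirms x' = x + a and so on, with the constant last row keeping the trailing 1 intact.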
 
You want to translate the origin to some point. What about the rest of the points, what do you want to do to them?
 
In this transformation, I want all the points of x^{\mu^\prime} to undergo the same translation as the origin, such that the result is an orthogonal coordinate system with "grid-increments" the same size as in x^{\mu}.
 
A translation is simply x'μ = xμ + aμ. A translation cannot be done by matrix multiplication because matrix multiplication always takes the origin to itself.
 
I don't understand. Which origin are you talking about? What the multiplication does is relate one set of coordinates to another, right?
 
Only if the coordinates are related by a linear transformation. A translation is not a linear transformation, and therefore it cannot be represented by matrix multiplication. It is easy to see that matrix multiplication always sends (0, 0, 0) to (0, 0, 0).
 
Unless I am mistaken, I would say that \left[\begin{matrix} 1 & 0 & 0 & a \\ 0 & 1 & 0 & b \\ 0 & 0 & 1 & c \\ 0 & 0 & 0 & 1\end{matrix}\right]\left[\begin{matrix} x \\ y \\ z \\ 1 \end{matrix}\right] sends (0,0,0) to (a,b,c).

Then why doesn't this multiplication work?
 
It sends (0, 0, 0, 1) to (a, b, c, 1).

But yes, what you did is ok as long as you interpret (x, y, z, 1) as the point (x, y, z). Basically, you are using a linear transformation of a 4-dimensional space to generate a translation of a 3-dimensional subset. But why would you do that when you can just add the vector aμ to xμ?
 
That works just fine. I think dx might be confused. Translation can be represented by matrix multiplication by extending your space by one more dimension as you have done above. In effect, a translation in D dimensions is just a projection of a linear transformation in D+1 dimensions onto a D-dimensional subspace (the hyperplane w=1, in your case).

Note, however, that your transformation matrix is no longer a D-dimensional tensor.
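(A numerical aside, not from the thread: the "projection of a linear transformation in D+1 dimensions" view can be checked directly. The helper below is my own; it builds the 4x4 translation matrix described above.)

```python
import numpy as np

def translate(v):
    """4x4 homogeneous matrix translating by the 3-vector v."""
    T = np.eye(4)
    T[:3, 3] = v
    return T

Ta = translate([1.0, 2.0, 3.0])
Tb = translate([4.0, 5.0, 6.0])

# Composing two translations is now plain matrix multiplication in 4D...
Tc = Ta @ Tb
# ...and it projects back to the single translation by the summed vector
assert np.allclose(Tc, translate([5.0, 7.0, 9.0]))

# The 4D map is linear (it fixes the 4D origin), yet on the w = 1
# hyperplane it shifts every point, including (0, 0, 0, 1):
print(Ta @ np.array([0.0, 0.0, 0.0, 1.0]))  # -> (1, 2, 3, 1)
```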
 
  • #10
dx said:
It sends (0, 0, 0, 1) to (a, b, c, 1).

But yes, what you did is ok as long as you interpret (x, y, z, 1) as the point (x, y, z). Basically, you are using a linear transformation of a 4-dimensional space to generate a translation of a 3-dimensional subset. But why would you do that when you can just add the vector aμ to xμ?

You might do this if you wanted to compose several operations and then maybe invert them.
 
  • #11
Does the addition of aμ to xμ prevent you from doing further operations on the vector or inverting them?
 
  • #12
Not really, but by expressing the transformation as a second-order (D+1)-dimensional tensor it is possible to compress all the necessary operations into a single tensor {T^{\mu^\prime}}_{\mu}. This is of course also possible after doing the vector addition, which leaves one to decide which of
x^{\mu^\prime}={T^{\mu^\prime}}_{\mu}x^{\mu} (1)
and
x^{\mu^\prime}={T^{\mu^\prime}}_{\mu}\left(x^{\mu}+a^{\mu}\right) (2)
he/she prefers.

The difference in {T^{\mu^\prime}}_{\mu} between the two is that in (2) it is a D-dimensional tensor, while in (1), it is a D+1-dimensional tensor.
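(An illustration of the "compress everything into a single tensor" point, not from the thread: in the (D+1)-dimensional form, a whole pipeline of rotations and translations collapses into one matrix, whereas the D-dimensional form interleaves matrix products and vector additions. The helpers and the particular pipeline are mine.)

```python
import numpy as np

def translate(v):
    T = np.eye(4)
    T[:3, 3] = v
    return T

def rotate_z(theta):
    R = np.eye(4)
    R[0, 0] = R[1, 1] = np.cos(theta)
    R[0, 1] = -np.sin(theta)
    R[1, 0] = np.sin(theta)
    return R

# Option (1): one (D+1)-dimensional matrix holding the whole pipeline
# (rightmost factor is applied first)
M = translate([1.0, 0.0, 0.0]) @ rotate_z(np.pi / 2) @ translate([0.0, 2.0, 0.0])

p = np.array([1.0, 1.0, 0.0])

# Option (2): the same operations applied step by step in D dimensions
q = p + np.array([0.0, 2.0, 0.0])                               # translate
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
q = np.array([c * q[0] - s * q[1], s * q[0] + c * q[1], q[2]])  # rotate
q = q + np.array([1.0, 0.0, 0.0])                               # translate again

assert np.allclose((M @ np.append(p, 1.0))[:3], q)
```

A single matrix M also makes inverting the whole pipeline a single matrix inversion.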
 
  • #13
It's not a question of preference, but of utility. A translation is a translation, and the simplest way to translate a point (x, y, z) is to add a vector (a, b, c) to it. Or you could do something more complicated, like you did, but I still don't see why you would want to do that. What it looks like to me from your OP is that you think that any transformation between two coordinate systems must be a matrix multiplication, which is not true. Based on this misconception, you tried to get the translation into matrix form. Is that correct, or did you have some other reason you wanted it in that form?
 
Last edited:
  • #14
I don't believe that a transformation from one coordinate system to another must be a matrix multiplication, but I believe that any such transformation may be expressed as one. Is that correct?

If not, to what extent is it true?
 
  • #15
A general smooth change of coordinates on a patch of space (or some other manifold) is of the form

x^{\mu}(P) = f^{\mu}(y^1(P), y^2(P), ..., y^n(P))​

where the f^{\mu} are smooth functions. A complete description of the transformation requires you to specify these functions. At each point, the coordinate system gives you a basis for the tangent and cotangent spaces. The components of a vector in a given coordinate system refer to this basis. The components of the same vector at a point A in the two coordinate systems are related in the following way: V'^{\mu} = T^{\mu}_{\nu}V^{\nu}, where

T^{\mu}_{\nu} = \frac{\partial y^{\mu}}{\partial x^{\nu}}(A)​

So locally, a general smooth change of coordinate is a linear transformation of the coordinates on the tangent space. This is the only way linear transformations naturally arise from general coordinate transformations: they describe the local effect on the tangent and cotangent spaces and more general tensors constructed from them.
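(A concrete check of the statement above, not from the thread: for the polar-to-Cartesian change of coordinates, the Jacobian matrix evaluated at a point A maps small displacements linearly, matching the actual nonlinear map to first order. The example coordinates are mine.)

```python
import numpy as np

# Coordinate change: polar (r, phi) -> Cartesian (x, y)
def f(r, phi):
    return np.array([r * np.cos(phi), r * np.sin(phi)])

def jacobian(r, phi):
    # T^mu_nu = partial x^mu / partial (r, phi)^nu, evaluated at (r, phi)
    return np.array([
        [np.cos(phi), -r * np.sin(phi)],
        [np.sin(phi),  r * np.cos(phi)],
    ])

# A small displacement expressed in polar components...
A = (2.0, np.pi / 3)
dv_polar = np.array([1e-6, 2e-6])

# ...maps to Cartesian components linearly via the Jacobian at A,
# agreeing with the actual change of f to first order
lin = jacobian(*A) @ dv_polar
exact = f(A[0] + dv_polar[0], A[1] + dv_polar[1]) - f(*A)
assert np.allclose(lin, exact, atol=1e-10)
```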
 
Last edited:
  • #16
From what I have read about tangent spaces, I understand that they are (D-1)-dimensional manifolds (in 2 and 3 dimensions: lines and planes) which are tangential to a given function at a given point, but that's as far as my understanding goes.

I assume x^{\mu} is a first-order tensor representing a coordinate system. What is y in your equation?
 
  • #17
x and y are two coordinate systems, the coordinates in the systems being (x1, x2, ..., xn) and (y1, y2, ..., yn) respectively. They are not tensors. The tangent space to a surface at a point is the set of tangent vectors at that point, and has the same dimension as the surface.
 
  • #18
Okay. Here is my interpretation of your equation. Please tell me if it is correct or not:

P is a point in n-dimensional space. x^{\mu}(P) is the point given in the coordinates of x^{\mu}.

The function f^{\mu} relates each individual coordinate of x^{\mu} with the coordinates of y^{\mu}.

A smooth function is a function which has derivatives of all orders. I think this means that f^{(n)} is never 0.
 
Last edited:
  • #19
All correct except the last line. Derivatives of all orders must exist, but they can be zero.
 
  • #20
Okay. Still one question remains.

Why is the tangent space important in this context?
 
  • #21
Because we want to understand how linear transformations arise from non-linear transformations. The tangent space is, in a sense, the local linear approximation of a space, and a non-linear change of coordinates on the space induces a linear change on the tangent spaces.
 
  • #22
dx said:
T^{\mu}_{\nu} = \frac{\partial y^{\mu}}{\partial x^{\nu}}(A)​

What does this notation mean? You differentiate the old coordinates with respect to the new, and multiply by A? What is A, and how does this differentiation work?
 
  • #23
A is a point in the space. The notation just means that I'm evaluating the derivative at A.
 
  • #24
And you sum the derivatives over all the superscript values, right?
 
  • #25
No sum. The convention is to sum over repeated indices, one of which appears upstairs and one of which appears downstairs.
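(A side note, not from the thread: NumPy's einsum spells out exactly this convention, summing over the repeated index. einsum itself has no upstairs/downstairs distinction; in strict index notation the summed pair would be one upper and one lower index. The example values are mine.)

```python
import numpy as np

T = np.arange(9.0).reshape(3, 3)  # T^mu_nu
V = np.array([1.0, 2.0, 3.0])     # V^nu

# V'^mu = T^mu_nu V^nu: sum ONLY over the repeated index nu ("n"),
# which appears once on T and once on V
V_prime = np.einsum('mn,n->m', T, V)

assert np.allclose(V_prime, T @ V)
print(V_prime)  # -> (8, 26, 44)
```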
 
  • #26
Hello,
I don't know if this can be useful at all, but please remember that there is also a way to express translations as orthogonal transformations. This is accomplished in the Conformal Model of Geometric Algebra. However, the price to pay is that your originally 3D space must be embedded into a 5D space with a Minkowski metric. Still, this is, as far as I know, the only way to model all the Euclidean transformations as linear transformations (matrix multiplication).
 
  • #27
I googled the Minkowski metric and found this for the Minkowski tensor:

[image of the Minkowski metric tensor \eta_{\mu\nu}]

It said to take c=1. I have two questions:

1. How will it look if we omit the c=1 convention?

2. Why does SR play a role in linear tensor analysis? (This may be a misconception on my part)
 
Last edited:
  • #28
I may not be the best person to answer your question, since I am not a physicist and I am a newbie with Geometric Algebra.

However, as you already noticed, the Minkowski space, as physicists use it, has a basis with 4 orthogonal unit vectors, among which only one has the property <e,e> = -1.

In the conformal model, you have to use 5 dimensions (not 4), so the metric will look like a 5x5 identity matrix but with one negative element.

I suggest you take a look here if you are interested:
http://www.euclideanspace.com/maths/geometry/space/nonEuclid/conformal/index.htm

Regarding your last question, I would almost say that it is tensor analysis which plays a role in SR, as tensor analysis is the right "tool" for studying curved spaces.
 
