# Understanding Basis Vectors for Tensor Differentiation

• snoopies622
In summary: Manifolds don't come with a standard basis, but given a coordinate chart, the standard basis on R^n gives a basis for the vector fields defined on the range of the coordinates. The partial derivative operators associated with the coordinate system are basis vectors of the tangent space. This relationship can be seen in the components of the metric tensor, which are given by the dot products of the basis vectors. However, exercises restricted to embedded surfaces are not the best way to build intuition for the general treatment of manifolds.

#### snoopies622

This is still rather new to me so please pardon my ignorance. My introduction to tensor differentiation involved only manifolds that were embedded in higher-dimensional Euclidean spaces. To describe them, I was instructed to find basis vectors using partial derivatives, as in

$$e_\theta = <\frac {\partial x}{\partial \theta},\frac {\partial y}{\partial \theta},\frac {\partial z}{\partial \theta}>$$

then to use those and the dot product to find both the metric tensor and the Christoffel symbols.

$$g_{\phi\theta} = e_\phi \cdot e_\theta$$

$$g_{\theta\theta} = e_\theta \cdot e_\theta$$

$$\vdots$$

$$\Gamma^\theta{}_{\theta\phi} = e^\theta \cdot \frac {\partial e_\theta}{\partial \phi}$$

etc.
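For concreteness, here is a small SymPy sketch of this embedded-surface procedure, using the unit 2-sphere in R^3 as an assumed example (not one from the thread): the basis vectors are partial derivatives of the embedding map, the metric components are their dot products, and one Christoffel symbol follows from the derivative of a basis vector.

```python
import sympy as sp

# Assumed example: the unit 2-sphere embedded in R^3,
# parameterized by (theta, phi).
theta, phi = sp.symbols('theta phi', positive=True)
X = sp.Matrix([sp.sin(theta)*sp.cos(phi),
               sp.sin(theta)*sp.sin(phi),
               sp.cos(theta)])

# Basis vectors: partial derivatives of the embedding map
e_theta = X.diff(theta)
e_phi   = X.diff(phi)

# Metric components from dot products of the basis vectors
g_tt = sp.simplify(e_theta.dot(e_theta))   # -> 1
g_pp = sp.simplify(e_phi.dot(e_phi))       # -> sin(theta)**2
g_tp = sp.simplify(e_theta.dot(e_phi))     # -> 0

# One Christoffel symbol: Gamma^theta_{phi phi} = e^theta . d(e_phi)/d(phi);
# in these orthogonal coordinates e^theta = e_theta / g_tt.
Gamma_t_pp = sp.simplify((e_theta / g_tt).dot(e_phi.diff(phi)))
print(Gamma_t_pp)   # -> -sin(theta)*cos(theta)
```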

Since then I've noticed that GR doesn't seem to bother with higher-dimensional flat spaces, and instead begins with the metric tensor, using that alone to compute the Christoffel symbols. That's fine with me, but what about basis vectors? Can those somehow also be computed using the metric tensor? Or should I regard them simply as

$$e_0 = <1,0,0,0>$$
$$e_1 =<0,1,0,0>$$
$$e_2 =<0,0,1,0>$$
$$e_3 = <0,0,0,1>$$

and move on? Does it matter?

snoopies622 said:
That's fine with me, but what about basis vectors?
Manifolds don't come with a standard basis. However, if you are using a coordinate chart, then the standard basis on R^n does give you a basis for the vector fields defined on the range of your coordinates.

If M is a manifold, U is an open subset of M, p is a point in U, and $x:U\rightarrow \mathbb{R}^n$ is a coordinate system, then the partial derivative operators

$$\frac{\partial}{\partial x^\mu}\bigg|_p$$

are basis vectors of the tangent space TpM of M at p.

These operators are defined by their action on functions $f:M\rightarrow\mathbb{R}$.

$$\frac{\partial}{\partial x^\mu}\bigg|_p f=(f\circ x^{-1}),_\mu(x(p))$$

where $,_\mu$ denotes the partial derivative of the function with respect to the $\mu$th variable.
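This definition can be made concrete with a small SymPy sketch (an assumed example): take M = R^2 with a polar chart y(p) = (r, phi), whose inverse maps (r, phi) to (r cos phi, r sin phi); then the operator d/dr applied to a function f on M is the ordinary first partial derivative of f composed with the inverse chart.

```python
import sympy as sp

# Assumed example: M = R^2, polar chart y with inverse
# (r, phi) -> (r*cos(phi), r*sin(phi))
r, phi = sp.symbols('r phi', positive=True)

# A function f: M -> R, here f(a, b) = a^2 + b
f = lambda a, b: a**2 + b

# f o y^{-1}: a real-valued function of the coordinates (r, phi)
f_chart = f(r*sp.cos(phi), r*sp.sin(phi))

# (d/dr)|_p f = (f o y^{-1}),_1 evaluated at y(p)
df_dr = sp.diff(f_chart, r)
print(df_dr)   # -> 2*r*cos(phi)**2 + sin(phi)
```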

So, no need to involve the metric. The components $g_{\mu\nu}(p)$ of the metric at p, are given by

$$g_{\mu\nu}(p)=g\bigg(\frac{\partial}{\partial x^\mu}\bigg|_p,\frac{\partial}{\partial x^\nu}\bigg|_p\bigg)$$

When the manifold is $\mathbb{R}^n$, it's convenient to take the coordinate system to be the identity map I on $\mathbb{R}^n$, defined by I(p)=p for all p in $\mathbb{R}^n$.

Thanks guys, I appreciate that.

Fredrik, can you provide (or direct me to) an example of this procedure? I don't understand why

$$\frac{\partial}{\partial x^\mu}\bigg|_p f=(f\circ x^{-1}),_\mu(x(p))$$

rather than

$$\frac{\partial}{\partial x^\mu}\bigg|_p f=(f\circ x),_\mu(x(p))$$

and it might help if I see it in action.

The reason for that is that x (the coordinate system) maps a subset of the manifold onto a subset of R^n, and f maps the manifold to R. So $f\circ x^{-1}$ is a real-valued function of n real variables, and we know how to take partial derivatives of those. $f\circ x$ doesn't make sense, since the range of x is not a subset of the domain of f.

I'm not sure what kind of example I should show you. Here's the relationship between the partial derivative operators associated with two different coordinate systems, x and y:

$$\frac{\partial}{\partial y^\mu}\bigg|_p f=(f\circ y^{-1}),_\mu(y(p)) =(f\circ x^{-1}\circ x\circ y^{-1}),_\mu(y(p))$$

$$=(f\circ x^{-1}),_\nu(x\circ y^{-1}(y(p)))\cdot(x\circ y^{-1})^\nu{},_{\mu}(y(p))$$

$$=(f\circ x^{-1}),_\nu(x(p))\cdot (x\circ y^{-1})^\nu{},_{\mu}(y(p))$$

$$=(x\circ y^{-1})^\nu{},_{\mu}(y(p)) \frac{\partial}{\partial x^\nu}\bigg|_p f$$

You probably won't see it right away, but all I did to get from the first line to the second was to use the chain rule. (I'm also using the Einstein summation convention. We don't write any sigma symbols for summation, because we know that we're supposed to take a sum when we see the same index appear twice).

So the relationship between these two sets of basis vectors is

$$\frac{\partial}{\partial y^\mu}\bigg|_p = \Lambda^\nu{}_\mu\frac{\partial}{\partial x^\nu}\bigg|_p$$

where the $\Lambda^\nu{}_\mu$ are the components of the Jacobian matrix of $x\circ y^{-1}$.
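This change-of-basis relation can be checked symbolically. The sketch below (an assumed example, not from the thread) takes x to be the Cartesian chart and y the polar chart on R^2, builds the Jacobian matrix of $x\circ y^{-1}$, and verifies that applying $\partial/\partial r$ directly agrees with the Jacobian-weighted combination of the Cartesian partials.

```python
import sympy as sp

# Assumed example: M = R^2, x = Cartesian chart, y = polar chart
r, phi = sp.symbols('r phi', positive=True)
u, v = sp.symbols('u v', real=True)

# Transition map x o y^{-1}: (r, phi) -> (r cos phi, r sin phi)
transition = sp.Matrix([r*sp.cos(phi), r*sp.sin(phi)])

# Jacobian matrix: Lambda^nu_mu = (x o y^{-1})^nu ,_mu
Lam = transition.jacobian([r, phi])

# A test function expressed in Cartesian coordinates
f = u**3 + u*v
to_polar = {u: r*sp.cos(phi), v: r*sp.sin(phi)}

# Left side: (d/dr) f, computed directly in the polar chart
lhs = sp.diff(f.subs(to_polar), r)

# Right side: Lambda^nu_r times (df/dx^nu), pulled back to polar
df = sp.Matrix([sp.diff(f, u), sp.diff(f, v)]).subs(to_polar)
rhs = (Lam[:, 0].T * df)[0]

print(sp.simplify(lhs - rhs))   # -> 0, the two sides agree
```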

I just wanted to point out something important about what Fredrik said: notice that his coordinate system (chart) was from U to R^n, not from M to R^n. That's no coincidence; you usually need multiple charts to cover a manifold (i.e. there is no global coordinate system). In general, a manifold can only be locally mapped to R^n.

I also don't like that the exercise seems to teach a false lesson that basis vectors need to be (a) constructed from coordinate systems and (b) live in the manifold (as opposed to the tangent space where they belong). There is nothing technically wrong with the exercise but working out something that is particular to E^n (n dimensional Euclidean space) doesn't help build intuition for the more general treatment.

DavidWhitbeck said:
I also don't like that the exercise seems to teach a false lesson that basis vectors need to be (a) constructed from coordinate systems
This is a good point. The tangent space can be defined as the vector space of all operators that take functions to numbers and are (i) linear and (ii) satisfy the Leibniz rule: v(fg)=v(f)g(p)+f(p)v(g). The result is the same though, because we can show that the partial derivative operators I defined are basis vectors of this vector space.
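As a quick sanity check of the Leibniz property (an assumed example, not from the thread), one can verify symbolically that a partial derivative operator evaluated at a point satisfies v(fg) = v(f)g(p) + f(p)v(g):

```python
import sympy as sp

# Assumed example: check the Leibniz rule for v = d/dx at a point p
x, y = sp.symbols('x y', real=True)
f = x**2 + y
g = sp.sin(x)
p = {x: 1, y: 2}                       # an arbitrary point

v = lambda h: sp.diff(h, x).subs(p)    # v = (d/dx)|_p, a derivation at p

lhs = v(f*g)
rhs = v(f)*g.subs(p) + f.subs(p)*v(g)
print(sp.simplify(lhs - rhs))          # -> 0
```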

I actually prefer the coordinate dependent method in this particular case, even though I prefer coordinate independence in all other definitions.

DavidWhitbeck said:
(b) live in the manifold (as opposed to the tangent space where they belong). There is nothing technically wrong with the exercise but working out something that is particular to E^n (n dimensional Euclidean space) doesn't help build intuition for the more general treatment.
There's nothing in the calculation that depends on the manifold being Euclidean. It makes no reference to a metric, and it doesn't suggest that the tangent vectors live in the manifold.

I misread the problem, I thought it said that it was Euclidean, and now I see that the OP was saying that his previous exposure was only Euclidean. Sorry.

I'd like to ask a slight variation on my original question (one which will probably seem trivial at this point), if that's OK.

Suppose I have a three-dimensional manifold covered by coordinate system (a,b,c) with metric

$$ds^2 = g_{aa}\,da^2 + g_{ab}\,da\,db + g_{ac}\,da\,dc + \cdots$$

and I know nothing else about it. Is it alright to define my basis vectors

$$e_a = <1,0,0>, e_b = <0,1,0>, e_c = <0,0,1>$$

so long as I define my basis covectors as

$$\omega ^a = g^{ap}e_p, \omega ^b = g^{bp}e_p, \omega ^c = g^{cp}e_p$$?
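The index-raising in the question can be checked directly: with $\omega^a = g^{ap}e_p$, the pairing $g(\omega^a, e_b) = g^{ap}g_{pb} = \delta^a_b$, so the $\omega^a$ are indeed dual to the $e_b$. A minimal SymPy sketch, using the 2-sphere metric diag(1, sin²θ) as an assumed example:

```python
import sympy as sp

# Assumed example: g^{ap} g_{pb} = delta^a_b for the 2-sphere metric
theta = sp.symbols('theta', positive=True)
g = sp.diag(1, sp.sin(theta)**2)

# The inverse metric times the metric is the Kronecker delta
delta = sp.simplify(g.inv() * g)
print(delta)   # -> Matrix([[1, 0], [0, 1]])
```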

If you embed your manifold in some higher R^n, then once you pick your basis vectors and your "induced metric" (the pullback of the Euclidean R^n metric onto the manifold), any calculation you can do with this manifold can be done solely with that metric and the basis vectors. The fact that it was embedded can be forgotten, and you can talk purely about intrinsic quantities. So next time you want to consider a curved manifold, forget about embedding it (which can always be done, btw), and just write down a metric and some coordinates =)

Thanks, Ibrits (lbrits?). Unfortunately, I don’t know how to embed a curved manifold in a higher-dimensional Euclidean space in the first place. Is there a standard technique, or is it a matter of trial-and-error?

If it isn't obvious how to, don't try, since you don't need to. You're either given an intrinsic metric, or need to calculate it (e.g., Einstein's equations).

If you want some practice, though, you can parameterize various objects like 2-spheres and cylinders and cones and whatnot in 3-space and find their induced metrics. You can also do the same for 3-spheres and embed them in 4 dimensions. But the general case is covered by Nash's embedding theorem, and I think it assumes you already have some geometric structure, like a metric.
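Following that practice suggestion, here is a short SymPy sketch (an assumed example) computing the induced metric of a cylinder of radius R in 3-space by pulling back the Euclidean metric, i.e. taking dot products of the parameterization's partial derivatives:

```python
import sympy as sp

# Assumed example: a cylinder of radius R in R^3,
# parameterized by (phi, z)
R = sp.symbols('R', positive=True)
phi, z = sp.symbols('phi z', real=True)

X = sp.Matrix([R*sp.cos(phi), R*sp.sin(phi), z])
params = [phi, z]

# Induced metric: g_ij = e_i . e_j with e_i = dX/d(param_i)
g = sp.Matrix(2, 2, lambda i, j:
              sp.simplify(X.diff(params[i]).dot(X.diff(params[j]))))
print(g)   # -> Matrix([[R**2, 0], [0, 1]]), i.e. ds^2 = R^2 dphi^2 + dz^2
```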

OK; I'll see if I can find a proof of Nash's theorem out there.

## 1. What are basis vectors in tensor differentiation?

In tensor differentiation, basis vectors are a set of linearly independent vectors that span the tangent space at each point of the manifold on which the tensor is defined. They are used to express the components of the tensor and are essential in understanding how the tensor behaves under transformations.

## 2. How do basis vectors affect tensor differentiation?

The basis vectors determine how a tensor's components are expressed, and their own rates of change along the coordinates (encoded in the Christoffel symbols) are what distinguish covariant differentiation from ordinary partial differentiation. They also define the coordinate system in which the tensor is being differentiated.

## 3. Can basis vectors be non-orthogonal in tensor differentiation?

Yes, basis vectors can be non-orthogonal in tensor differentiation. In fact, the coordinate basis vectors of a curvilinear coordinate system are generally not orthogonal, and working with them directly is often more natural than constructing an orthonormal frame.

## 4. How do you choose the appropriate basis vectors for tensor differentiation?

The choice of basis vectors depends on the problem at hand and the desired outcome. Some common choices include the canonical basis vectors (i.e., unit vectors along the coordinate axes) and the basis vectors aligned with the principal axes of the tensor.

## 5. Are basis vectors unique for tensor differentiation?

No, the choice of basis vectors is not unique. Different choices give different components for the same tensor, but the results of tensor differentiation are equivalent: the components computed in one basis transform into those of another under the corresponding change of basis.