# Several covariant derivatives

How does one solve a problem like this?

Suppose we have
$$(e_\theta + f(\theta)e_\varphi) (e_\theta + f(\theta)e_\varphi)$$
What is the result of the above operation? As I remember it from the theory of covariant derivatives, the above relation would look like this
$$e_\theta[e_\theta] + e_\theta[f(\theta)e_\varphi] + f(\theta)e_\varphi[e_\theta] + f(\theta)e_\varphi[f(\theta)e_\varphi] = \nabla_\theta e_\theta + \nabla_\theta (f(\theta)e_\varphi) + f(\theta)\nabla_\varphi e_\theta + f(\theta)\nabla_\varphi (f(\theta)e_\varphi)$$
Now suppose the metric is Minkowskian, in which case all the ##\Gamma##s vanish. Then the last equality above would read
$$0 + \partial_\theta f(\theta) e_\varphi + 0 + 0 = \partial_\theta f(\theta) e_\varphi$$
Am I getting this correctly?


PeterDonis
Mentor
2020 Award
Suppose we have
$$(e_\theta + f(\theta)e_\varphi) (e_\theta + f(\theta)e_\varphi)$$
What is the result of the above operation? As I remember it from the theory of covariant derivatives

There are no covariant derivatives in that expression.

There are no covariant derivatives in that expression.
Why isn't the operation (for instance) ##e_\theta[f(\theta)e_\varphi]## a covariant derivative? ##f(\theta)## plays the role of a component of the vector.

PeterDonis
Why isn't the operation (for instance) ##e_\theta[f(\theta)e_\varphi]## a covariant derivative?

It doesn't look like one to me. It just looks like multiplication. Where are you getting all this from?

It doesn't look like one to me. It just looks like multiplication. Where are you getting all this from?
I'm not getting it from a specific source. I just want to compute ##(e_\theta + f(\theta)e_\varphi) (e_\theta + f(\theta)e_\varphi)## and I wonder whether that is a valid way of doing it. Of course, I'm not creating any new rules; I'm trying to use the general rules already known.

PeterDonis
I just want to compute ##(e_\theta + f(\theta)e_\varphi) (e_\theta + f(\theta)e_\varphi)##

Which, as I said, is just multiplication. There are no covariant derivatives anywhere. So I don't understand what you are having a problem with.

Which, as I said, is just multiplication
The issue here is: how do we evaluate such a multiplication if we cannot use the laws of covariant derivatives?

PeterDonis
how do we evaluate this multiplication if we cannot use covariant derivative laws?

Why do you need covariant derivative laws to do a simple multiplication in which there are no covariant derivatives?

Why do you need covariant derivative laws to do a simple multiplication in which there are no covariant derivatives?
Well, because it is not an inner product, and it is not a vector product. It looks like an ordinary product of numbers, but that kind of product is not defined for vectors. What is the meaning of, say, ##e_\theta e_\varphi##?

PeterDonis
What is the meaning of, say, ##e_\theta e_\varphi##?

Um, the product of multiplying the two numbers ##e_\theta## and ##e_\varphi##?

Um, the product of multiplying the two numbers ##e_\theta## and ##e_\varphi##?
But these aren't numbers; they are vectors.

PeterDonis
these aren't numbers, they are vectors.

Are they? They aren't written in the usual vector notation. That's why I asked you where you were getting this from.

If you want to be understood, you need to learn standard notation. Can you write whatever it is you are trying to ask in standard vector notation? Or at least explain what you mean by the notation ##e_\theta##, ##e_\varphi##, and ##f(\theta)##?

The notation I have been using is one of the most standard notations that I'm aware of. In this notation, a general vector ##V## is written in a basis ##(e_\theta, e_\varphi)## as ##V = V^\theta e_\theta + V^\varphi e_\varphi##. With that, in my original question we identify $$V^\theta = 1, \qquad V^\varphi = f(\theta).$$

PeterDonis
The notation I have been using is one of the most standard notations that I'm aware of.

Standard notation for vectors puts either an arrow or a hat over them. Also, you can't just assume that people know which vectors you're talking about if you just write down symbols. See below.

a basis ##(e_\theta, e_\varphi)##

A basis in what space, using what coordinate chart?

For example, if you were using spherical coordinates in 3-dimensional Euclidean space, there would be three basis vectors, not two, and they would be written, given standard spherical coordinates ##r, \theta, \varphi##, as ##\hat{e}_r##, ##\hat{e}_\theta##, and ##\hat{e}_\varphi##. (Note the hats over the vectors.) But you are saying there are only two basis vectors, so it does not appear that you are using spherical coordinates in 3-dimensional Euclidean space. So you still need to clarify what you mean.
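For reference, these orthonormal basis vectors, expressed in Cartesian components, are
$$\hat{e}_r = (\sin\theta\cos\varphi,\ \sin\theta\sin\varphi,\ \cos\theta), \quad \hat{e}_\theta = (\cos\theta\cos\varphi,\ \cos\theta\sin\varphi,\ -\sin\theta), \quad \hat{e}_\varphi = (-\sin\varphi,\ \cos\varphi,\ 0).$$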

Also, if you are saying that the ##\varphi## component of this vector is ##f(\theta)##, does this mean it is a function of ##\theta## only?

Also, if you are multiplying some vector ##V## by itself, there is more than one way of multiplying vectors. Which one are you using? Your notation does not make that clear.

Ok. Sorry, I should have made these points clear at the beginning.
A basis in what space, using what coordinate chart?
Three-dimensional Euclidean space, using spherical coordinates.
But you are saying there are only two basis vectors, so it does not appear that you are using spherical coordinates in 3-dimensional Euclidean space
That is because the ##r##-component of the vector in question is null. But, indeed, I should have stated in my last post that the basis is ##(e_r, e_\theta, e_\varphi)##.
if you are saying that the ##\varphi## component of this vector is ##f(\theta)##, does this mean it is a function of ##\theta## only?
Exactly.
Also, if you are multiplying some vector ##V## by itself, there is more than one way of multiplying vectors. Which one are you using? Your notation does not make that clear.
To tell you which one, I first need your answer to the following question (my answer will be clearer depending on what you say).

Can we define a vector as being any quantity which transforms like ##V^{' \mu} (x') = (\partial x^{' \mu} / \partial x^\nu) V^\nu (x)## under a change of coordinate system ##x \longrightarrow x'##?

PeterDonis
Three-dimensional Euclidean space, using spherical coordinates.

Ok.

That is because the ##r##-component of the vector in question is null.

Ok. But, as you note, you should still make that clear by including ##\hat{e}_r## in the basis.

Can we define a vector as being any quantity which transforms like ##V^{' \mu} (x') = (\partial x^{' \mu} / \partial x^\nu) V^\nu (x)## under a change of coordinate system ##x \longrightarrow x'##?

No, because vectors, as mathematical objects, exist in the absence of any choice of coordinates at all, so they are not defined in terms of their coordinate transformation law. The law you state is certainly valid. But I don't see why you would need to view it as a definition, instead of just a valid law that applies to vectors.
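As a concrete check that the law works, here is a sketch with sympy, using a made-up example: the plane rotation field ##V = (-y, x)## transformed from Cartesian ##(x, y)## to polar ##(r, \varphi)## components.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True, positive=True)

# Primed coordinates: plane polar (r, phi) as functions of (x, y)
r = sp.sqrt(x**2 + y**2)
phi = sp.atan2(y, x)

# Example vector field in Cartesian components: the rotation field V = (-y, x)
V = sp.Matrix([-y, x])

# Jacobian matrix dx'^mu / dx^nu of the map (x, y) -> (r, phi)
J = sp.Matrix([[sp.diff(r, x), sp.diff(r, y)],
               [sp.diff(phi, x), sp.diff(phi, y)]])

# Transformed components: V'^mu = (dx'^mu / dx^nu) V^nu
Vp = sp.simplify(J * V)
print(Vp)  # the rotation field has polar components (V^r, V^phi) = (0, 1)
```

The result ##(V^r, V^\varphi) = (0, 1)## says the rotation field is just ##\partial_\varphi##, as expected.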

Ok. It turns out that my basis vectors are derivative operators! Since they transform like vectors and I'm interested in evaluating the product in the opening post, I assumed it would be valid to use the vector rules. Now, as you said,
if you are multiplying some vector ##V## by itself, there is more than one way of multiplying vectors. Which one are you using?
there is more than one way of evaluating that product. It should be clear from the context which kind of product one is dealing with. In this case, the product will act on a function. I mean:
##(e_\theta + f(\theta)e_\varphi) (e_\theta + f(\theta)e_\varphi)## acting on a function ##g(\theta, \varphi)## is expected to give the same result as applying ##(e_\theta + f(\theta)e_\varphi)## once to ##g(\theta, \varphi)## and then applying ##(e_\theta + f(\theta)e_\varphi)## once more to the result. Also, I'm dealing with this in the context of General Relativity, which is why I opened the thread in the relativity section.

So, given all this information, what kind of product should we use to first evaluate ##(e_\theta + f(\theta)e_\varphi) (e_\theta + f(\theta)e_\varphi)## and then apply the resulting quantity to the function ##g(\theta, \varphi)##?
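For concreteness, the repeated application I described can be sketched with sympy, treating ##e_\theta## and ##e_\varphi## as the coordinate derivative operators ##\partial_\theta## and ##\partial_\varphi## (an assumption on my part, just for this sketch):

```python
import sympy as sp

theta, varphi = sp.symbols('theta varphi')
f = sp.Function('f')(theta)
g = sp.Function('g')(theta, varphi)

def D(h):
    # The first-order operator e_theta + f(theta) e_varphi, read as the
    # directional derivative d/d(theta) + f(theta) * d/d(varphi)
    return sp.diff(h, theta) + f * sp.diff(h, varphi)

# Apply the operator twice to g, as described above
result = sp.expand(D(D(g)))
print(result)
```

Note that, under this reading, the composition produces second-order derivative terms in addition to the ##\partial_\theta f(\theta)\,\partial_\varphi g## term.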

PeterDonis
It turns out that my basis vectors are derivative operators!

This is true of all vectors, not just basis vectors. More precisely, there is an isomorphism between vectors and directional derivatives.
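In coordinates, the identification reads
$$V = V^\mu \partial_\mu, \qquad V[g] = V^\mu \partial_\mu g,$$
so, for example, the basis vector ##e_\theta## corresponds to the operator ##\partial_\theta##.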

in the opening post, I have assumed that it would be valid to use the vector rules

It is. Directional derivative operators are elements of a vector space--that's what "there is an isomorphism between vectors and directional derivatives" means. You can use vector rules on elements of any vector space.

there is more than one way of evaluating that product.

Evaluating what product? You have a product of two vectors, but you haven't said whether it's a scalar product (dot product) or a vector product (cross product) or something else. You have to specify that before we can evaluate anything.

It turns out that in this case the product will act on a function.

Really? Why? The fact that vectors can be treated as directional derivative operators does not mean you have to treat them that way. Before you can know how you want to treat them, you have to know what problem you are trying to solve.

I'm dealing with this in the context of General Relativity

That's way too vague. What specific problem are you trying to solve?

If your answer is "I don't know", then you need to find a specific problem that raises whatever issue it is that you are asking about. I still don't know what that is.

what kind of product should we use

You're supposed to tell me that; you wrote down the expression. It now appears that you don't even know what you were trying to write down.

At this point I am closing the thread since I can't tell what the actual question is. Please PM me if you have further information, so I can consider whether it justifies reopening the thread.