
Several covariant derivatives

  1. Aug 6, 2017 #1
    How does one solve a problem like this?

    Suppose we have
    $$(e_\theta + f(\theta)e_\varphi) (e_\theta + f(\theta)e_\varphi)$$
    What is the result of the above operation? As I remember it from the theory of covariant derivatives, the above relation would look like this
    $$e_\theta[e_\theta] + e_\theta[f(\theta)e_\varphi] + f(\theta)e_\varphi[e_\theta] + f(\theta)e_\varphi[f(\theta)e_\varphi] = \nabla_\theta e_\theta + \nabla_\theta \left(f(\theta)e_\varphi\right) + f(\theta)\nabla_\varphi e_\theta + f(\theta)\nabla_\varphi \left(f(\theta)e_\varphi\right)$$
    Now suppose the metric is Minkowskian, in which case all the ##\Gamma## vanish. Then the last equality above would read
    $$0 + \partial_\theta f(\theta) e_\varphi + 0 + 0 = \partial_\theta f(\theta) e_\varphi $$
    Am I getting this correctly?
     
    Last edited: Aug 6, 2017
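    [For reference, the middle step above uses the Leibniz rule that a covariant derivative obeys on a function times a vector (a standard identity, stated here explicitly):
    $$\nabla_\theta\left(f(\theta)\,e_\varphi\right) = \left(\partial_\theta f(\theta)\right) e_\varphi + f(\theta)\,\nabla_\theta e_\varphi,$$
    so when all the connection coefficients vanish, only the ##\partial_\theta f(\theta)\, e_\varphi## term survives, which is the term kept in the last line of post #1.]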
  3. Aug 6, 2017 #2

    PeterDonis


    Staff: Mentor

    There are no covariant derivatives in that expression.
     
  4. Aug 6, 2017 #3
    Why isn't the operation (for instance) ##e_\theta[f(\theta)e_\varphi]## a covariant derivative? ##f(\theta)## plays the role of a component of the vector.
     
  5. Aug 6, 2017 #4

    PeterDonis


    Staff: Mentor

    It doesn't look like one to me. It just looks like multiplication. Where are you getting all this from?
     
  6. Aug 6, 2017 #5
    I'm not getting it from a specific source. I just want to compute ##(e_\theta + f(\theta)e_\varphi) (e_\theta + f(\theta)e_\varphi)## and I wonder whether that is a valid way of doing the job. Of course, I'm not creating any new rules; I'm just trying to apply the general rules already known.
     
    Last edited: Aug 6, 2017
  7. Aug 6, 2017 #6

    PeterDonis


    Staff: Mentor

    Which, as I said, is just multiplication. There are no covariant derivatives anywhere. So I don't understand what you are having a problem with.
     
  8. Aug 6, 2017 #7
    The issue here is, how do we evaluate such a multiplication if we cannot use the rules for covariant derivatives?
     
    Last edited: Aug 6, 2017
  9. Aug 6, 2017 #8

    PeterDonis


    Staff: Mentor

    Why do you need covariant derivative laws to do a simple multiplication in which there are no covariant derivatives?
     
  10. Aug 6, 2017 #9
    Well, because it is not an inner product, and it is not a vector product; it looks like an ordinary product of scalars, but that kind of product is not defined for vectors. What is the meaning of, say, ##e_\theta e_\varphi##?
     
    Last edited: Aug 6, 2017
  11. Aug 6, 2017 #10

    PeterDonis


    Staff: Mentor

    Um, the product of multiplying the two numbers ##e_\theta## and ##e_\varphi##?
     
  12. Aug 6, 2017 #11
    But these aren't numbers, they are vectors. :biggrin:
     
  13. Aug 6, 2017 #12

    PeterDonis


    Staff: Mentor

    Are they? They aren't written in the usual vector notation. That's why I asked you where you were getting this from.

    If you want to be understood, you need to learn standard notation. Can you write whatever it is you are trying to ask in standard vector notation? Or at least explain what you mean by the notation ##e_\theta##, ##e_\varphi##, and ##f(\theta)##?
     
  14. Aug 6, 2017 #13
    The notation I have been using is one of the most standard notations that I'm aware of. In this notation, a general vector ##V## is written in a basis ##(e_\theta, e_\varphi)## as ##V = V^\theta e_\theta + V^\varphi e_\varphi##. That said, we identify in my original question $$f(\theta) = V^\varphi \\ 1 = V^\theta$$
     
  15. Aug 6, 2017 #14

    PeterDonis


    Staff: Mentor

    Standard notation for vectors puts either an arrow or a hat over them. Also, you can't just assume that people know which vectors you're talking about if you just write down symbols. See below.

    A basis in what space, using what coordinate chart?

    For example, if you were using spherical coordinates in 3-dimensional Euclidean space, there would be three basis vectors, not two, and they would be written, given standard spherical coordinates ##r, \theta, \varphi##, as ##\hat{e}_r##, ##\hat{e}_\theta##, and ##\hat{e}_\varphi##. (Note the hats over the vectors.) But you are saying there are only two basis vectors, so it does not appear that you are using spherical coordinates in 3-dimensional Euclidean space. So you still need to clarify what you mean.

    Also, if you are saying that the ##\varphi## component of this vector is ##f(\theta)##, does this mean it is a function of ##\theta## only?

    Also, if you are multiplying some vector ##V## by itself, there is more than one way of multiplying vectors. Which one are you using? Your notation does not make that clear.
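    [For reference, a standard fact about spherical coordinates, added here as context: the hatted unit basis is related to the coordinate derivative operators by
    $$\hat{e}_r = \partial_r, \qquad \hat{e}_\theta = \frac{1}{r}\,\partial_\theta, \qquad \hat{e}_\varphi = \frac{1}{r\sin\theta}\,\partial_\varphi,$$
    so the hats signal normalization as well as vector character.]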
     
  16. Aug 6, 2017 #15
    Ok. Sorry, I should have made these points clear at the beginning.
    Three dimensional Euclidean Space, using spherical coordinates.
    That is because the ##r##-component of the vector in question is null. But, indeed, I should have stated in the last post the basis as being ##(e_r, e_\theta, e_\varphi)##.
    Exactly.
    To tell you which one, I need first your answer to the following question (because my answer will be more clarifying depending on what you say).

    Can we define a vector as being any quantity which transforms like ##V^{' \mu} (x') = (\partial x^{' \mu} / \partial x^\nu) V^\nu (x)## under a change of coordinate system ##x \longrightarrow x'##?
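    [As a side check of the transformation law quoted above, here is a minimal SymPy sketch, illustrative only: the field ##V = x\,\partial_x + y\,\partial_y## is a hypothetical example chosen so the transformed components are easy to verify by hand.]

```python
import sympy as sp

# Check the transformation law V'^mu = (dx'^mu/dx^nu) V^nu for the
# 2D Cartesian -> polar coordinate change.
x, y = sp.symbols('x y', positive=True)
r = sp.sqrt(x**2 + y**2)
theta = sp.atan2(y, x)

# Jacobian of the new coordinates (r, theta) w.r.t. the old (x, y)
J = sp.Matrix([[sp.diff(r, x), sp.diff(r, y)],
               [sp.diff(theta, x), sp.diff(theta, y)]])

# Cartesian components of a sample (purely radial) vector field
V = sp.Matrix([x, y])

# Transformed components: V'^mu = J^mu_nu V^nu
Vp = sp.simplify(J * V)
# A radial field should come out with V'^r = r and V'^theta = 0.
```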
     
  17. Aug 6, 2017 #16

    PeterDonis


    Staff: Mentor

    Ok.

    Ok. But, as you note, you should still make that clear by including ##\hat{e}_r## in the basis.

    No, because vectors, as mathematical objects, exist in the absence of any choice of coordinates at all, so they are not defined in terms of their coordinate transformation law. The law you state is certainly valid. But I don't see why you would need to view it as a definition, instead of just a valid law that applies to vectors.
     
  18. Aug 6, 2017 #17
    Ok. It turns out that my basis vectors are derivative operators! Since they transform like vectors and I'm interested in evaluating the product in the opening post, I have assumed that it would be valid to use the vector rules. Now, as you said,
    there is more than one way of evaluating that product. It should be clear from the context what kind of product one is dealing with. It turns out that in this case the product will act on a function. I mean,
    ##(e_\theta + f(\theta)e_\varphi) (e_\theta + f(\theta)e_\varphi)## acting on a function ##g(\theta, \varphi)## is expected to give the same result as applying ##(e_\theta + f(\theta)e_\varphi)## once to ##g(\theta, \varphi)## and then applying ##(e_\theta + f(\theta)e_\varphi)## once more to the result. Also, I'm dealing with this in the context of General Relativity, which is the reason for opening the thread in the relativity section.

    So, given all this information, what kind of product should we use to evaluate ##(e_\theta + f(\theta)e_\varphi) (e_\theta + f(\theta)e_\varphi)## first, and then apply the resulting quantity to the function ##g(\theta, \varphi)##?
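    [Taking this post's reading at face value, i.e. treating ##e_\theta, e_\varphi## as the coordinate derivative operators ##\partial_\theta, \partial_\varphi## (an assumption made explicit only here), a short SymPy sketch composes the operator with itself by applying it twice to a test function:]

```python
import sympy as sp

# Treat X = e_theta + f(theta) e_varphi as the differential operator
# d/d(theta) + f(theta) d/d(varphi) and apply it twice to g(theta, varphi).
theta, varphi = sp.symbols('theta varphi')
f = sp.Function('f')(theta)
g = sp.Function('g')(theta, varphi)

def X(h):
    """The vector field X = d_theta + f(theta) d_varphi acting on h."""
    return sp.diff(h, theta) + f * sp.diff(h, varphi)

XXg = sp.expand(X(X(g)))

# By hand: X(X g) = g_tt + f' g_p + 2 f g_tp + f^2 g_pp
expected = (sp.diff(g, theta, 2)
            + sp.diff(f, theta) * sp.diff(g, varphi)
            + 2 * f * sp.diff(g, theta, varphi)
            + f**2 * sp.diff(g, varphi, 2))
```

    [The first-order piece ##\partial_\theta f(\theta)\,\partial_\varphi g## reproduces the ##\partial_\theta f(\theta)\, e_\varphi## term of post #1, but second-derivative terms survive as well, so the composition is a second-order differential operator rather than another vector field.]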
     
  19. Aug 6, 2017 #18

    PeterDonis


    Staff: Mentor

    This is true of all vectors, not just basis vectors. More precisely, there is an isomorphism between vectors and directional derivatives.

    It is. Directional derivative operators are elements of a vector space--that's what "there is an isomorphism between vectors and directional derivatives" means. You can use vector rules on elements of any vector space.

    Evaluating what product? You have a product of two vectors, but you haven't said whether it's a scalar product (dot product) or a vector product (cross product) or something else. You have to specify that before we can evaluate anything.

    Really? Why? The fact that vectors can be treated as directional derivative operators does not mean you have to treat them that way. Before you can know how you want to treat them, you have to know what problem you are trying to solve.

    That's way too vague. What specific problem are you trying to solve?

    If your answer is "I don't know", then you need to find a specific problem that raises whatever issue it is that you are asking about. I still don't know what that is.

    You're supposed to tell me that; you wrote down the expression. It now appears that you don't even know what you were trying to write down.

    At this point I am closing the thread since I can't tell what the actual question is. Please PM me if you have further information, so I can consider whether it justifies reopening the thread.
     