Is Covariant Derivative Notation Misleading in Vector Calculus?

The discussion centers on the notation for covariant derivatives in vector calculus, with a focus on the potential confusion arising from the representation of vectors and their components. The original poster argues that the notation ##\nabla_\mu V^\nu## misleadingly suggests a direct operation on the components of a vector, while they propose a clearer alternative, ##(\nabla_\mu V)^\nu##. Participants highlight that the covariant derivative of a vector is a (1, 1) tensor, and there is debate over whether the notation adequately distinguishes between vectors and their components. The conversation also touches on the implications of using different notations for directional derivatives and covariant derivatives. Overall, the thread emphasizes the need for clarity in mathematical notation to avoid misinterpretation.
  • #31
PeterDonis said:
The components of a vector in a specific, fixed basis are scalars, yes, because they are simply contractions (or perhaps "inner products" would be a better term) of vectors. But "the components of a vector" without specifying a basis is, strictly speaking, not even a well-defined expression, although most sources use it in a somewhat sloppy fashion when talking about coordinate transformations. @stevendaryl is correct that the proper way to describe what a coordinate transformation does is to change which set of basis vectors you contract a given vector with to get components. But most sources don't describe it that way and don't really make it clear that that is what is going on.
Sure: from linear algebra, the isomorphism ##\varphi_{\mathcal{B}} : V \longrightarrow \mathbf{R}^n## taking ##v \in V## to its coordinate vector ##[v]_{\mathcal{B}}## with respect to a basis ##\mathcal{B}## of ##V## of course depends on ##\mathcal{B}##, and by linearity it is completely determined by its action on the basis, e.g. ##\varphi_{\mathcal{B}}(e_i) = (0, \dots, 1, \dots, 0)## with the ##1## in the ##i^\mathrm{th}## position. The transformation from ##\mathcal{A}##-coordinates to ##\mathcal{B}##-coordinates is just application of the transition function ##\varphi_{\mathcal{B}} \circ \varphi_{\mathcal{A}}^{-1}##. Here this function can be found explicitly by considering ##v = v^{\mu} e_{\mu} = \bar{v}^{\mu} \bar{e}_{\mu}##: it follows that ##v^{\nu} = v^{\mu} \delta^{\nu}_{\mu} = e^{\nu}(v^{\mu} e_{\mu}) = \bar{v}^{\mu} e^{\nu}(\bar{e}_{\mu})##, so the transformation is defined completely by the numbers ##{T^{\nu}}_{\mu} := e^{\nu}(\bar{e}_{\mu})##.

It doesn't even make sense to mention vector components without specifying a basis!
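The point above (components are basis-dependent, the vector itself is not) can be checked concretely. The following is a minimal NumPy sketch, not anything from the thread; the basis vectors and the vector ##V## with standard components ##(3, 4)## are illustrative choices. The transition from one coordinate tuple to another is exactly the ##{T^{\nu}}_{\mu}## matrix described above.

```python
import numpy as np

# A fixed vector v, given by its components in the standard basis A = {e_1, e_2}.
v_A = np.array([3.0, 4.0])

# A second basis B, stored as columns written in A-coordinates:
# b_1 = e_1, b_2 = e_1 + 2 e_2 (an arbitrary, non-orthonormal choice).
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])

# B maps B-coordinates to A-coordinates, so the B-components of v
# solve the linear system B @ v_B = v_A.
v_B = np.linalg.solve(B, v_A)

# The vector itself is unchanged: reconstructing it from either set of
# components gives the same element of R^2.
assert np.allclose(B @ v_B, v_A)
print(v_B)  # different numbers, same vector
```

The same vector thus has components ##(3, 4)## in one basis and ##(1, 2)## in the other, which is exactly why quoting "the components of a vector" without naming the basis is ill-defined.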
 
  • #32
PeterDonis said:
But on that interpretation of "the components of a vector" (as opposed to the more specific "the components of a vector in this particular fixed basis"), the components of a vector do not change when you change coordinates, because scalars don't change when you change coordinates.

Just saying "the components of a vector" implicitly leaves out the important bit: "with respect to basis A". The components relative to a different basis B are obtained by contracting with a different set of basis vectors and will therefore generally differ from those relative to basis A. Components are useless without knowledge of which basis they refer to.

PeterDonis said:
Yes, and, as above, the way any discussion of coordinate transformations in a GR textbook will describe this process is "transforming the components of V". But on your interpretation, the components of V are 3 and 4, period. They don't change when I transform to another basis. Is that the interpretation you are defending?
They do change because the components depend on the basis you insert into the tensor.
 
  • #33
Jufa said:
What is the derivation one should follow to compute ##\nabla_\mu (e_\nu)^\sigma ##?

That's just a different notation for the connection coefficient ##\Gamma^\sigma_{\mu \nu}##.

There are two different ways to compute ##\Gamma^\sigma_{\mu \nu}##. If you know the components of the metric tensor, ##g_{\mu \nu}##, then ##\Gamma^\sigma_{\mu \nu} = \dfrac{1}{2} g^{\sigma \lambda}(\partial_\mu g_{\nu \lambda} + \partial_\nu g_{\lambda \mu} - \partial_\lambda g_{\nu \mu})##

Or, if you know what the relationship is between your basis vectors ##e_\mu## and a set of local Cartesian basis vectors ##e_j##, then you can use:

##\nabla_\mu e_\nu = \Gamma^\sigma_{\mu \nu} e_\sigma##

To compute ##\nabla_\mu e_\nu## you can re-express ##e_\nu## in terms of the Cartesian basis:

##e_\nu = {L_\nu}^j e_j##

where the ##{L_\nu}^j## are the coefficients of the transformation. Then you can use the fact that (by definition) Cartesian basis vectors are covariantly constant to write:
##\nabla_\mu e_\nu = \nabla_\mu ({L_\nu}^j e_j) = (\partial_\mu {L_\nu}^j) e_j##

Let me work out an example with 2-D polar coordinates:

##e_r = \dfrac{\partial x}{\partial r} e_x + \dfrac{\partial y}{\partial r} e_y = \cos(\theta) e_x + \sin(\theta) e_y##
##e_\theta = \dfrac{\partial x}{\partial \theta} e_x + \dfrac{\partial y}{\partial \theta} e_y = - r \sin(\theta) e_x + r \cos(\theta) e_y##

##\nabla_r e_r = 0##
##\nabla_\theta e_r = -\sin(\theta) e_x + \cos(\theta) e_y = \dfrac{1}{r} e_\theta##
##\nabla_r e_\theta = -\sin(\theta) e_x + \cos(\theta) e_y = \dfrac{1}{r} e_\theta##
##\nabla_\theta e_\theta = -r \cos(\theta) e_x - r \sin(\theta) e_y = -r e_r##

So ##\Gamma^\theta_{r \theta} = \Gamma^\theta_{\theta r} = 1/r##
##\Gamma^r_{\theta \theta} = -r##

All the other coefficients are zero.
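The worked example above can be cross-checked against the first method (the metric formula). Here is a small symbolic sketch using SymPy; the list layout `Gamma[sigma][mu][nu]` for ##\Gamma^\sigma_{\mu\nu}## is my own convention, not anything from the thread.

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = [r, th]
n = 2

# Euclidean plane in polar coordinates: ds^2 = dr^2 + r^2 dtheta^2
g = sp.Matrix([[1, 0], [0, r**2]])
g_inv = g.inv()

# Gamma^sigma_{mu nu} = 1/2 g^{sigma lam} (d_mu g_{nu lam} + d_nu g_{lam mu} - d_lam g_{nu mu})
Gamma = [[[sp.simplify(sum(g_inv[s, l] * (sp.diff(g[nu, l], coords[mu])
                                          + sp.diff(g[l, mu], coords[nu])
                                          - sp.diff(g[nu, mu], coords[l])) / 2
                           for l in range(n)))
           for nu in range(n)] for mu in range(n)] for s in range(n)]

print(Gamma[1][0][1])  # Gamma^theta_{r theta} = 1/r
print(Gamma[0][1][1])  # Gamma^r_{theta theta} = -r
```

Both methods agree: only ##\Gamma^\theta_{r\theta} = \Gamma^\theta_{\theta r} = 1/r## and ##\Gamma^r_{\theta\theta} = -r## are nonzero.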
 
  • #34
stevendaryl said:
That's just a different notation for the connection coefficient ##\Gamma^\sigma_{\mu \nu}##.

There are two different ways to compute ##\Gamma^\sigma_{\mu \nu}##. If you know the components of the metric tensor, ##g_{\mu \nu}##, then ##\Gamma^\sigma_{\mu \nu} = \dfrac{1}{2} g^{\sigma \lambda}(\partial_\mu g_{\nu \lambda} + \partial_\nu g_{\lambda \mu} - \partial_\lambda g_{\nu \mu})##

Or, if you know what the relationship is between your basis vectors ##e_\mu## and a set of local Cartesian basis vectors ##e_j##, then you can use:

##\nabla_\mu e_\nu = \Gamma^\sigma_{\mu \nu} e_\sigma##

To compute ##\nabla_\mu e_\nu## you can re-express ##e_\nu## in terms of the Cartesian basis:

##e_\nu = {L_\nu}^j e_j##

where the ##{L_\nu}^j## are the coefficients of the transformation. Then you can use the fact that (by definition) Cartesian basis vectors are covariantly constant to write:
##\nabla_\mu e_\nu = \nabla_\mu ({L_\nu}^j e_j) = (\partial_\mu {L_\nu}^j) e_j##

Let me work out an example with 2-D polar coordinates:

##e_r = \dfrac{\partial x}{\partial r} e_x + \dfrac{\partial y}{\partial r} e_y = \cos(\theta) e_x + \sin(\theta) e_y##
##e_\theta = \dfrac{\partial x}{\partial \theta} e_x + \dfrac{\partial y}{\partial \theta} e_y = - r \sin(\theta) e_x + r \cos(\theta) e_y##

##\nabla_r e_r = 0##
##\nabla_\theta e_r = -\sin(\theta) e_x + \cos(\theta) e_y = \dfrac{1}{r} e_\theta##
##\nabla_r e_\theta = -\sin(\theta) e_x + \cos(\theta) e_y = \dfrac{1}{r} e_\theta##
##\nabla_\theta e_\theta = -r \cos(\theta) e_x - r \sin(\theta) e_y = -r e_r##

So ##\Gamma^\theta_{r \theta} = \Gamma^\theta_{\theta r} = 1/r##
##\Gamma^r_{\theta \theta} = -r##

All the other coefficients are zero.
Many thanks, I think I understand it now.
 
  • #35
PeterDonis said:
As I said (and as @etotheipi pointed out just now), the expression "the components of a vector" is not even well-defined since it doesn't specify a basis. The numbers ##3## and ##4## in your previous example are not "the components of ##V##", they are "the components of ##V## in the ##e_x##, ##e_y## basis".

Yes, it should always be "The components of ##V## in the basis ##e_\mu##"
 
  • #36
stevendaryl said:
Or, if you know what the relationship is between your basis vectors eμ and a set of local Cartesian basis
The OP is working on manifolds though. There may be no such thing as a Cartesian basis.
 
  • #37
Orodruin said:
To me this is one of those cases where practicality trumps cumbersome notation. I can agree that there is no a priori clear indication as to whether ##\nabla_\nu V^\mu## means ##(\nabla_\nu V)^\mu## or ##\nabla_\nu(V^\mu)##. However, we already have the notation ##\partial_\nu V^\mu## for ##\nabla_\nu (V^\mu)##, so we do not really need another one. With this in mind, the notation is pretty consistent. As for the basis ##e_\mu##, the connection coefficient would be most easily expressed as ##\nabla_\mu e_\nu = \Gamma_{\mu\nu}^\lambda e_\lambda## - I don't really see a problem with this. Alternatively, ##\Gamma_{\mu\nu}^\lambda = e^\lambda(\nabla_\mu e_\nu)##, where ##e^\lambda## is the dual basis. The only notational rule is that when you have an expression of the form ##\nabla_\nu T^{\ldots}_{\ldots}##, the indices represented by ##\ldots## refer to the additional indices of the expression ##\nabla_\nu T##, i.e., it is understood as ##(\nabla_\nu T)_\ldots^\ldots##. I do not really see any possible misinterpretation here.

Regardless, I think this entire discussion is distracting from the OP's inquiry. It should perhaps be moved to a thread of its own?
When in doubt, just always write the parentheses in place.
Also, in mathematical logic there's a rule for eliminating parentheses, but I can't remember it.
I just keep writing all the parentheses when needed.
 
  • #38
Orodruin said:
The OP is working on manifolds though. There may be no such thing as a Cartesian basis.

Yes, but it seems that understanding covariant derivatives in flat space using non-Cartesian coordinates is good practice for understanding the case of curved space.
 
  • #39
stevendaryl said:
Yes, but it seems that understanding covariant derivatives in flat space using non-Cartesian coordinates is good practice for understanding the case of curved space.
This is fine and usually how I introduce my own students to covariant derivatives. However, the differences in the manifold case need to be pointed out. I have seen too many examples of what happens to students who somehow miss that part.
 
  • #40
PeterDonis said:
The covariant derivative operator takes any (p, q) tensor and produces a (p, q + 1) tensor. The directional derivative operator is not formed by "fixing" ##\mu## on ##\nabla_\mu##; it's formed by contracting ##\nabla_\mu## with a vector ##u^\mu##, i.e., it's the operator ##u^\mu \nabla_\mu##. This operator takes a (p, q) tensor and produces another (p, q) tensor.
To me it is clear that the covariant derivative operator ##\nabla## is actually an operator and not a tensor itself -- I believe it would be better to write it as ##\nabla()##. The argument inside the brackets () is a tensor field; thus if we "pass" it a vector field ##X##, it actually returns a (1,1) tensor.

To me it makes no sense to put indexes "inside" the term ##\nabla(X) = \nabla X##; nevertheless, it makes perfect sense to apply upper and lower indexes to the overall ##\nabla X##, such as ##\left( \nabla X \right)_\mu{}^\nu##, since it is a tensor.
 
  • #41
cianfa72 said:
covariant derivative operator ##\nabla## is actually an operator and not a tensor itself

Yes, that's correct; the common notation ##\nabla_\mu## is really a shorthand for saying that ##\nabla## is an operator that takes a (p, q) tensor and produces a (p, q + 1) tensor, i.e., it "adds" one lower index. As you note, it makes more sense to put the indexes on the entire expression ##\nabla V## instead of on the ##\nabla## and the ##V## separately.
 
  • #42
PeterDonis said:
As you note, it makes more sense to put the indexes on the entire expression ##\nabla V## instead of on the ##\nabla## and the ##V## separately.
Well, yes and no. On the one hand it is true that the object as such is a (p,q+1) tensor. On the other hand the usual notation makes it clearer which index is which.
 
  • #43
Orodruin said:
the usual notation makes it clearer which index is which.

But there is no "which index is which"; that's the point. It's not like one index "belongs" to the ##\nabla## operator and one index "belongs" to the vector ##V##. Both indexes "belong" to the object ##\nabla V##; you can't separate them out. That's the issue that prompted the original complaint about the usual notation.
 
  • #44
PeterDonis said:
But there is no "which index is which";
Of course there is. If not, all tensors would be symmetric. In the particular case of this operator, the singled-out argument is the one that, when filled, turns the operator into a directional derivative.
 
  • #45
Orodruin said:
Of course there is. If not, all tensors would be symmetric.

I don't understand.

Orodruin said:
In the particular case of this operator, the singled-out argument is the one that, when filled, turns the operator into a directional derivative.

I don't understand how the term "argument" applies here. I see what you are getting at: if I want to obtain a directional derivative from the object ##\left( \nabla V \right)_\mu{}^\nu##, I have to contract with the ##\mu## index, not the ##\nu## index. Or, if I look at just the operator by itself, I can contract ##\nabla_\mu## with a vector ##u^\mu## to obtain the directional derivative operator ##u^\mu \nabla_\mu##; I can't contract anything with the vector ##V## to get a directional derivative operator. But the argument of the operator in either case is the same: the vector ##V##.
 
  • #46
PeterDonis said:
But there is no "which index is which"; that's the point.
Orodruin said:
Of course there is. If not, all tensors would be symmetric.
PeterDonis said:
I don't understand.

I think the point being made here is that the horizontal positioning is important [well, for any tensor that is not totally symmetric :wink:] and of course cannot be ignored. In slot notation, the covariant derivative of a type ##(k,l)## tensor ##T## along a vector ##U## is$$\nabla_{U} T := \nabla T(\, \cdot \, , \dots, \, \cdot \, , U)$$with the ##U## in the final slot.

With the usual conventions of tensor notation, the components of the covariant derivative, in some chosen basis, would be written $${(\nabla T)^{\alpha_1, \dots, \alpha_k}}_{\beta_1, \dots, \beta_{l+1}}$$but for covariant derivatives it seems much more natural to write this last lower index ##\beta_{l+1}## on the nabla instead,$$\nabla_{\beta_{l+1}} {T^{\alpha_1, \dots, \alpha_k}}_{\beta_1, \dots, \beta_{l}}$$because this makes especially clear that the ##\beta_{l+1}## index, corresponding to the final slot on the tensor, corresponds to the basis vector along which the derivative is being taken.
 
  • #47
PeterDonis said:
Yes, that's correct; the common notation ##\nabla_\mu## is really a shorthand for saying that ##\nabla## is an operator that takes a (p, q) tensor and produces a (p, q + 1) tensor, i.e., it "adds" one lower index. As you note, it makes more sense to put the indexes on the entire expression ##\nabla V## instead of on the ##\nabla## and the ##V## separately.

That’s the convention that I greatly dislike. It is true that ##\nabla V## is a (1,1) tensor, but I don’t understand what is wrong with considering ##\nabla_\mu V## to be a vector.
 
  • #48
##\nabla_\mu## makes perfect sense as an operator on vectors, with the effect that it takes a vector with components ##V^\nu## and returns a new vector with components

##\partial_\mu V^\nu + \Gamma^\nu_{\mu \lambda} V^\lambda##
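The component formula above is easy to sanity-check symbolically. A minimal SymPy sketch (my own, not from the thread): take the constant Cartesian field ##V = e_x##, express it in the polar coordinate basis as ##V^r = \cos\theta##, ##V^\theta = -\sin\theta/r##, and verify that ##\partial_\mu V^\nu + \Gamma^\nu_{\mu\lambda} V^\lambda## vanishes, as it must for a covariantly constant field.

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = [r, th]

# Constant Cartesian field V = e_x, written in the polar coordinate basis:
# e_x = cos(theta) e_r - (sin(theta)/r) e_theta
V = [sp.cos(th), -sp.sin(th) / r]

# Nonzero polar connection coefficients, laid out as Gamma[nu][mu][lam]
# for Gamma^nu_{mu lam} (values from the thread):
Gamma = [[[0, 0], [0, -r]],        # Gamma^r_{theta theta} = -r
         [[0, 1/r], [1/r, 0]]]     # Gamma^theta_{r theta} = Gamma^theta_{theta r} = 1/r

# (nabla_mu V)^nu = partial_mu V^nu + Gamma^nu_{mu lam} V^lam
nablaV = [[sp.simplify(sp.diff(V[nu], coords[mu])
                       + sum(Gamma[nu][mu][lam] * V[lam] for lam in range(2)))
           for nu in range(2)] for mu in range(2)]

print(nablaV)  # every entry simplifies to 0: the field is covariantly constant
```

The partial derivatives and the connection terms cancel pairwise, which is exactly the job the ##\Gamma## term does in the formula.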
 
  • #49
stevendaryl said:
I don’t understand what is wrong with considering ##\nabla_\mu V## to be a vector.

If by ##\nabla_\mu V## you mean ##\left( e_\mu \right)^\mu \nabla_\mu V## (which shows the abuse of notation involved in confusing "which vector" indexes with component indexes), then, yes, that's a vector. I just don't think that's what the notation ##\nabla_\mu## usually means.

If by ##\nabla_\mu V## you mean ##\nabla_\mu V^\nu##, i.e., ##\left( \nabla V \right)_\mu{}^\nu##, that's not a vector, that's a (1, 1) tensor.

stevendaryl said:
##\nabla_\mu## makes perfect sense as an operator on vectors

Yes, an operator that takes vectors as input and gives (1, 1) tensors as output. The thing that gets output has 1 upper and 1 lower index. The expression you wrote down for the output shows that.
 
  • #50
stevendaryl said:
a new vector with components

##\partial_\mu V^\nu + \Gamma^\nu_{\mu \lambda} V^\lambda##

That isn't a vector, it's a (1, 1) tensor. It has two free indexes, one upper and one lower.
 
  • #51
PeterDonis said:
That isn't a vector, it's a (1, 1) tensor. It has two free indexes, one upper and one lower.

No, it's NOT a tensor, it's a vector. For each possible value of ##\mu##, it's a different vector. In the same way that ##e_\mu## represents four different vectors, one for each value of ##\mu##. Look, if instead of ##\mu##, I used "x", surely you wouldn't think that

##\partial_x V^\nu + \Gamma^\nu_{x \lambda} V^\lambda##

is a tensor. Would you? No, it's the directional derivative of ##V## in the x-direction.

Similarly,
##\partial_\mu V^\nu + \Gamma^\nu_{\mu \lambda} V^\lambda##

means a directional derivative in the ##\mu##-direction.
 
  • #52
PeterDonis said:
If by ##\nabla_\mu V## you mean ##\left( e_\mu \right)^\mu \nabla_\mu V## (which shows the abuse of notation involved in confusing "which vector" indexes with component indexes), then, yes, that's a vector. I just don't think that's what the notation ##\nabla_\mu## usually means.

That might be true. I think that the usual notation is pretty screwed up.

PeterDonis said:
If by ##\nabla_\mu V## you mean ##\nabla_\mu V^\nu## ...

That doesn't make any sense to me. To me, ##\nabla V## is a (1,1) tensor, and ##\nabla_\mu V^\nu## is not a tensor at all, but a COMPONENT of a tensor.

To me, ##V^\nu## is not a vector, it is a component of a vector.
 
  • #53
Just my humble opinion, I think we're really going round in circles here getting bogged down in semantics. Something of the form ##\nabla_{\mu} V^{\nu}## can be completely justifiably viewed as either the components of a (1,1) tensor, or the components of a (1,0) tensor [vector], depending on the context.

It really just amounts to whether you view the subscript ##\mu## as a shorthand for ##e_{\mu}##, in which case ##\nabla_{\mu} V^{\nu} = \nabla(e_{\mu}, V)^{\nu}## are the components of a vector field, or whether you consider it an index in its own right, ##\nabla_{\mu} V^{\nu} = {(\nabla V)_{\mu}}^{\nu}## in which case it's the components of a tensor field.

It's just notation!
 
  • #54
etotheipi said:
Just my humble opinion, I think we're really going round in circles here getting bogged down in semantics. Something of the form ##\nabla_{\mu} V^{\nu}## can be completely justifiably viewed as either a (1,1) tensor, or a (1,0) tensor [vector], depending on the context.

My feeling is that it should never be considered a tensor or a vector, but rather the components of a tensor or a vector. What I find confusing about physics notation is that it doesn't distinguish carefully between a vector and a component of a vector, and it doesn't distinguish between a function and the value of a function at a point.
 
  • #55
PeterDonis said:
I don't understand how the term "argument"
The (p,q+1) tensor is a linear map that takes p+q+1 arguments (p dual vectors and q+1 vectors) to real numbers.
 
  • #56
stevendaryl said:
My feeling is that it should never be considered a tensor or a vector, but rather the components of a tensor or a vector. What I find confusing about physics notation is that it doesn't distinguish carefully between a vector and a component of a vector, and it doesn't distinguish between a function and the value of a function at a point.

Yes sorry, I agree; I was just using the sloppy physicist's parlance that you mentioned right now :wink:

[Only exception is if we're using Penrose's abstract indices, in which case ##\nabla_{a} V^{b}## does refer to the abstract tensor itself.]
 
  • #57
etotheipi said:
Yes sorry, I agree; I was just using the sloppy physicist's parlance that you mentioned right now :wink:

[Only exception is if we're using Penrose's abstract indices, in which case ##\nabla_{a} V^{b}## does refer to the abstract tensor itself.]

Yes, I understand Penrose abstract indices, but sometimes you want to talk about a component of a vector. Then you have to do something like ##(V^b)^\mu##, which is very weird looking.
 
  • #58
stevendaryl said:
Yes, I understand Penrose abstract indices, but sometimes you want to talk about a component of a vector. Then you have to do something like ##(V^b)^\mu##, which is very weird looking.

In fact in Wald's book he just uses ##V^{\mu}## for the components of ##V^a## in some basis, which is pretty clean. The only bit that looks weird to me is stuff like tetrads, where you have to write ##(e_{\mu})^a## where ##\mu## is labelling a particular basis vector in the basis. :smile:
 
  • #59
stevendaryl said:
This is something that drives me crazy about physics notation, which is that the notation doesn't distinguish between a vector and a component of a vector, and doesn't distinguish between a tensor and a component of a tensor.

If ##U## and ##V## are vectors, then ##\nabla_U V## is another vector.

This notation drives me nuts as well. I generally prefer abstract index notation. So to calculate the acceleration vector field ##a## from a velocity vector field ##v## via a directional derivative, I'd write ##a^b = v^a \nabla_a v^b##. Then ##\nabla_a v^b## is notationally a second-rank tensor. Taking a directional derivative is just a contraction of this second-rank tensor with some vector. In this case, we contract the second-rank tensor arising from the covariant derivative of the velocity field with the original velocity field to get the acceleration field.

Of course, authors don't always do this. If there is enough context I can usually figure out the notation with enough effort. If the context is lacking or unclear (as in some of this discussion), I find it hard to follow the intent of an author who chooses some other notational scheme. That doesn't necessarily make it wrong; it's just that I find it confusing.

The only difference between index notation and abstract index notation is that if we require a specific basis (for instance, a coordinate basis), then we use greek letters in the subscripts and superscripts as a warning of this requirement.
 
  • #60
stevendaryl said:
it's NOT a tensor, it's a vector.

We've already been around this merry-go-round once. I'm sorry, but I simply don't see the point of trying to gerrymander the interpretation of individual indexes of the same kind on expressions the way you are doing.

To me, the whole point of having indexes on expressions is to denote what kind of object the thing is. A vector has one upper index. A (1, 1) tensor has one upper and one lower index. If indexes are contracted in the expression, they aren't free indexes so they don't contribute to the "what kind of object" determination. Calling something with one upper and one lower index a "vector" because you are trying to think of one index one way and another index another way makes no sense to me. I can't stop you from doing it, but I don't see the point of it; I think it just causes more confusion instead of solving any.

As I mentioned much earlier in this thread, Wald's abstract index notation can be helpful in this connection since it separates out the "what kind of object" indexes from all the other uses (and abuses) of index notation and makes at least that aspect clear. For example, we could write ##\left( e_\mu \right)^a## to show that this thing we are calling ##e_\mu##, however confusing the ##\mu## index might be (just witness the confusion about that in this thread), at least is known for sure to be a vector--one upper abstract index.

In Wald's abstract index notation, we would write ##\left( \nabla V \right)_a{}^b##, or, if we are using the common shortcut, ##\nabla_a V^b##. The two abstract indexes tell us what kind of object it is: a (1, 1) tensor; and they do that whether we include the parentheses or not. It's also clear that we are not talking about components, so we don't have to get confused about whether components are scalars (because we haven't made it clear whether we mean "components" generally, or "components in a particular fixed basis", another confusion we've had in this thread). If we want to express a particular component of an object, we have to contract it appropriately: for example, we would say that ##\left( V^a \right)^\mu = V^a \left( e^\mu \right)_a##. And now it's clear that this "component" is a scalar, because it's a component in a fixed basis--a contraction of two vectors. (Actually, a contraction of a vector and a covector, since I've used the basis covector in the contraction.)
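The last point, that a component in a fixed basis is a scalar obtained by contracting with a dual basis covector, ##\left( V^a \right)^\mu = V^a \left( e^\mu \right)_a##, can be checked numerically. A minimal NumPy sketch (the particular basis and vector are illustrative choices of mine): the dual basis rows satisfy ##e^\mu(e_\nu) = \delta^\mu_\nu##, and contracting them with ##V## yields exactly its components.

```python
import numpy as np

# A non-orthonormal basis for R^2, stored as columns of E.
E = np.array([[1.0, 1.0],
              [0.0, 2.0]])

# The dual basis covectors (e^mu)_a are the rows of E^{-1}:
# they satisfy e^mu(e_nu) = delta^mu_nu.
E_dual = np.linalg.inv(E)
assert np.allclose(E_dual @ E, np.eye(2))

# A vector V (given here in standard coordinates). Its mu-th component in
# the basis E is the scalar contraction V^a (e^mu)_a:
V = np.array([3.0, 4.0])
components = np.array([E_dual[mu] @ V for mu in range(2)])

# Reconstructing V from components and basis vectors recovers the vector.
assert np.allclose(E @ components, V)
print(components)
```

Each component here is a single number built from a fixed contraction, which is why, as a scalar, it does not "transform" at all; changing basis means performing a different contraction.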

stevendaryl said:
Look, if instead of ##\mu##, I used "x",

Then you would be writing a different expression, which at least would give some indication that you intended the ##x## index to mean something different from the other ones (although even then the "directional derivative" interpretation is not the one that I would intuitively assign, nor, I think, would most physics readers--we would intuitively think you mean the ##x## component of the (1, 1) tensor). But you didn't write that expression; you wrote the one you wrote. You wrote an expression with two free indexes of the same kind (both Greek indexes). To most readers, that means both indexes are indicating the same kind of thing. So for you to then complain that nobody understands that you meant one Greek index to indicate "directional derivative" and the other Greek index to indicate "vector" doesn't seem to me like a good strategy.

In Wald's abstract index notation, the directional derivative of ##V## in the ##x## direction would be ##\left( e_x \right)^a \nabla_a V^b##. And now it's clear (a) that this thing is a vector (one free upper index), and (b) that the ##x## index is not a "what kind of object" index, it's a "which object" index.
 
