Understanding Dual Vectors in Fluid Mechanics: Purpose and Derivation Explained

In summary, a dual vector is a vector in the dual space, defined by the dual basis vectors of a non-orthogonal coordinate system. Dual vectors are closely related to the vector derivative operator, usually denoted [itex]\nabla[/itex]. However, the presentation of this concept is often lacking in traditional textbooks, leading to confusion and misunderstanding. Engineers and students interested in a more rigorous understanding of dual vectors and tensors are advised to consult resources such as "Functional and Structured Tensor Analysis for Engineers" and courses in differential geometry.
  • #1
Aero51
Just like the title says, what is a dual vector? I am reviewing Panton's "Incompressible Flow", Chapter 3, where a brief section is dedicated to calculating the dual vector and its inverse. Unfortunately, along with many other concepts in this book (if you're into fluid mechanics I don't recommend this text), Panton fails to give an in-depth discussion of the motivation, purpose, or derivation of dual vectors. In other words, I would like to know what purpose the dual vector serves, why one would want to use it, and if possible an "intuitive" derivation/proof. My background in abstract mathematics is informal and limited, so go easy!

Side note: After doing some research I found information on dual spaces, which seem to be spaces such that when a basis vector from the original space is multiplied by the corresponding vector in the dual space, the result is the Kronecker delta. Would a dual vector be just one vector from this space? That still doesn't answer why one would want to make use of it.

Thanks
 
  • #2
In any vector space, you have the freedom to choose the coordinate axes. The vectors parallel (or "tangent") to these axes form the basis of the space. These vectors need not be unit length or orthogonal, though often it is convenient if they are.

In the case that they aren't, however, there is a nontrivial "dual basis". In an xyz coordinate system, the dual basis vector of x is perpendicular to the plane formed by the y and z axes, and so on. In cartesian coordinates, this is the same as the tangent basis vector, but in a non-orthogonal system, the dual basis vectors and the ordinary basis vectors are usually not the same.
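To make that concrete, here's a quick numpy sketch (purely illustrative, my own, not from any text): stack the basis vectors as the columns of a matrix, and the rows of that matrix's inverse are the dual basis. That is just the defining condition [itex]e^a \cdot e_b = \delta_{ab}[/itex] written in matrix form.

[code]
import numpy as np

# A non-orthogonal 2D basis: e_1 along x, e_2 at 60 degrees to it.
e1 = np.array([1.0, 0.0])
e2 = np.array([0.5, np.sqrt(3.0) / 2.0])

E = np.column_stack([e1, e2])    # basis vectors as COLUMNS

# Rows of the inverse are the dual basis; this encodes e^a . e_b = delta_ab.
E_dual = np.linalg.inv(E)
print(np.round(E_dual @ E, 12))  # identity matrix (the Kronecker delta)
print(E_dual[0], E_dual[1])      # the dual basis vectors e^1, e^2
[/code]

Notice that [itex]e^1[/itex] comes out tilted away from [itex]e_2[/itex] (perpendicular to it, in fact), exactly as described above.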

Why do we deal with the dual basis? As it turns out, there is a close relationship between the dual basis and the vector derivative operator (usually denoted [itex]\nabla[/itex]). If dual basis vectors are written [itex]e^a[/itex] (and ordinary basis vectors [itex]e_a[/itex]), then we tend to say [itex]\nabla = \sum_a e^{a} \frac{\partial}{\partial x^a}[/itex]. If that is difficult to interpret, it should suffice to say that [itex]\nabla[/itex] is defined in terms of dual vectors, not ordinary vectors, and this has always been the case. It's just usually glossed over in Cartesian coordinates because there is no reason, at that point, to introduce dual vectors at all.
 
  • #3
I understand; that explains why the dual vector takes the form of a cross product in Cartesian coordinates. I really wish I had a better background in higher-level mathematics beyond calculus and linear algebra so these concepts could be understood more rigorously. When you wrote the gradient in index notation, were you using contravariant components instead of covariant components (usually written with a lower subscript in the texts I've seen)? To tell you the truth, I'm not even sure what the difference between the covariant and contravariant components of a tensor is. The bad thing about engineering is that nobody cares about mathematics..
:(
 
  • #4
Yes, contravariant components. Most true (ordinary) vectors are written in terms of contravariant components on the ordinary ("tangent") basis vectors. I.e. a velocity would be [itex]v = v^x e_x + v^y e_y + v^z e_z[/itex], not written in terms of covariant components. This, again, is often glossed over because the components (and basis vectors) are the same in Cartesian coordinates. The covariant and contravariant components just come from the difference between ordinary and dual vectors: contravariant components multiply ordinary basis vectors, and covariant components multiply dual basis vectors. For this reason, it's not uncommon to refer to the ordinary basis as the "contravariant basis" and the dual basis as the "covariant basis". There is an overabundance of words used to describe these concepts.

Really, the whole topic is inadequately treated, even in a physics curriculum, but to me it's intensely fascinating. I find the general presentation of these concepts (and of objects beyond scalars and vectors) really lacking in terms of a connection to ordinary vector analysis and calculus. That's something I'd love to see addressed in the future.
 
  • #5
Well, there is a free PDF, "Functional and Structured Tensor Analysis for Engineers", of which I have a draft copy. It is pretty good at explaining the basics, but because the chapters are ordered "funny" it is sometimes hard to use.
 
  • #6
Also, why do authors stick with using covariant notation when multiplying vector components by their basis vectors (in Cartesian coordinates), even though this is not the "true" mathematics that should be carried out? It is very misleading! For instance, many authors define the gradient using index notation as:

[itex]\nabla = \sum_i e_i \partial_i[/itex]

Panton doesn't even bother to include the basis vector.
 
  • #7
Aero51 said:
I really wish I had a better background in higher-level mathematics beyond calculus and linear algebra so these concepts could be understood more rigorously. When you wrote the gradient in index notation, were you using contravariant components instead of covariant components (usually written with a lower subscript in the texts I've seen)? To tell you the truth, I'm not even sure what the difference between the covariant and contravariant components of a tensor is. The bad thing about engineering is that nobody cares about mathematics..
:(

Really, you didn't have to take more advanced math courses to run into this stuff (in a perfect world; I realize this subject is highly neglected in our curriculum, imo). It's really only the application of things you already know from vector calculus and linear algebra to the case of a generalized curvilinear coordinate system. Often this material gets introduced (by a good professor) when talking about generalized curvilinear coordinates at the beginning of an upper-level undergraduate physics course like mechanics or E&M. However, the formal math course you would find all of this in is differential geometry.

Aero51 said:
Also, why do authors stick with using covariant notation when multiplying vector components by their basis vectors (in Cartesian coordinates), even though this is not the "true" mathematics that should be carried out? It is very misleading! For instance, many authors define the gradient using index notation as:

[itex]\nabla = \sum_i e_i \partial_i[/itex]

Panton doesn't even bother to include the basis vector.

Really this is because using contravariant/covariant notation when students are just doing problems in cartesian coordinates (where there's no difference between contravariant/covariant vectors) in basic vector calculus can raise more questions than it answers. I agree though, it's inconsistent, but it's for pedagogical purposes I'm sure.

I recently wrote a pedagogical guide to tensors for undergraduate physics majors this summer, and so I would like to copy and paste the treatment I gave to basis vectors and dual basis vectors as it's probably the most basic and straightforward way to introduce the subject:

Every vector can be represented either by specifying the contravariant components along with the usual basis vectors, "[itex]\vec{v} = v^{1}\vec{e}_{1} + v^{2}\vec{e}_{2} + v^{3}\vec{e}_{3}[/itex]", or the very same vector can be represented with covariant components and "dual basis vectors", "[itex]\vec{v} = v_{1}\vec{e}^{1} + v_{2}\vec{e}^{2} + v_{3}\vec{e}^{3}[/itex]". Ordinarily, in orthogonal coordinate systems ([itex]\hat{x} \perp \hat{y} \perp \hat{z}[/itex]), such distinctions aren't noticed because the contravariant and covariant components of the vector are equal, and the basis vectors are equal to the dual basis vectors. But let's see how this changes with non-orthogonal coordinate systems and why these contravariant and covariant concepts should be introduced.

Consider vector A in fig. 1, and consider how one would find its components if the y axis were rotated as shown in the figure (the y axis is rotated to form the new axis y', whereas the new x' axis simply overlays the old x axis). This new primed coordinate system is a non-orthogonal coordinate system, and in such systems the same vector can be represented in multiple ways. One way to represent A would be to run lines parallel to the primed axes, see where they intercept the x' and y' axes, and make these our components for A. In doing so we come up with the contravariant vector components for A, [itex]A^{x'}[/itex] and [itex]A^{y'}[/itex]. But what if we instead found the components of A by dropping perpendiculars onto the primed axes? In doing that, we come up with the covariant components [itex]A_{x'}[/itex] and [itex]A_{y'}[/itex]. But how can these both be components of the same vector? Surely [itex]A_{x'} \vec{e}_{x'} + A_{y'} \vec{e}_{y'} \neq A^{x'} \vec{e}_{x'} + A^{y'} \vec{e}_{y'}[/itex]. Well, this would be a correct assessment; they're NOT equal. But which one of the component schemes is the culprit?

<Insert Figure 1 which I've added as a thumbnail attachment at the bottom of this post..>

Notice that only one of these schemes produces the true vector A upon vector addition, and that is "[itex]A^{x'} \vec{e}_{x'} + A^{y'} \vec{e}_{y'}[/itex]". This can clearly be seen by applying the parallelogram rule for vector addition to each pair of component vectors geometrically. So does this make the covariant components worthless? Not really; you just have to choose different basis vectors to maintain a legitimate representation of the vector A. These new basis vectors, often called "dual basis vectors" or "one-forms", will allow you to represent A in the usual way. Notice that when writing the dual basis vectors I simply raise the index, just as with the components: [itex]\vec{A} = A_{x'} \vec{e}^{x'} + A_{y'} \vec{e}^{y'}[/itex]. The dual basis vectors are found from rules for their dot products with the original basis vectors: for an original basis vector with a different index than the dual basis vector in question, the dot product must be zero, and for the original basis vector of the same index, the dot product must be 1.
[tex]\vec{e}^{i} \circ \vec{e}_{j} = {\delta}_{ij}[/tex]
Note that there is no need for the dual basis vectors to have the same magnitude as their original basis vector counterparts. From this equation (or rather, to meet the needs of this equation), the relationship between the dual basis vectors and the original basis vectors can be found in three dimensions, and the equations for the three dual basis vectors are as follows:
[tex]\vec{e}^{1} = \frac{{\vec{e}_{2}} \times {\vec{e}_{3}}}{{\vec{e}_{1}} \circ {\vec{e}_{2}} \times {\vec{e}_{3}}} [/tex]
[tex]\vec{e}^{2} = \frac{{\vec{e}_{3}} \times {\vec{e}_{1}}}{{\vec{e}_{1}} \circ {\vec{e}_{2}} \times {\vec{e}_{3}}} [/tex]
[tex]\vec{e}^{3} = \frac{{\vec{e}_{1}} \times {\vec{e}_{2}}}{{\vec{e}_{1}} \circ {\vec{e}_{2}} \times {\vec{e}_{3}}} [/tex]
These equations allow the dual basis vectors to be calculated when the original basis vectors are known. They were found simply by constructing, with some ingenuity, objects that abide by the dot product rules stated previously. You may have already seen these equations if you've studied solid state physics: they are the equations relating the reciprocal lattice vectors to the primitive translation vectors, but now you know what these vectors are on a much deeper level!
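If it helps, here is a little numpy sketch (my own, with arbitrarily chosen basis vectors, purely illustrative) that checks those three formulas and shows both component schemes reproducing the same vector:

[code]
import numpy as np

# Three arbitrary non-orthogonal, non-unit basis vectors.
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([1.0, 2.0, 0.0])
e3 = np.array([0.0, 1.0, 3.0])

vol = np.dot(e1, np.cross(e2, e3))   # scalar triple product e_1 . (e_2 x e_3)

# The three dual basis formulas above:
d1 = np.cross(e2, e3) / vol
d2 = np.cross(e3, e1) / vol
d3 = np.cross(e1, e2) / vol

E = np.array([e1, e2, e3])           # rows: ordinary basis vectors
D = np.array([d1, d2, d3])           # rows: dual basis vectors
print(np.round(D @ E.T, 12))         # identity matrix: e^i . e_j = delta_ij

# The same vector in both component schemes:
A = np.array([2.0, -1.0, 0.5])
A_contra = D @ A                     # contravariant components A^i = e^i . A
A_co     = E @ A                     # covariant components     A_i = e_i . A
print(A_contra @ E)                  # sum_i A^i e_i  -> recovers A
print(A_co @ D)                      # sum_i A_i e^i  -> recovers A
[/code]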

Thus, we have now seen that any vector in both orthogonal and non-orthogonal coordinate systems can be represented two different ways:
[tex]\vec{A} = A^{1}\vec{e}_{1} + A^{2}\vec{e}_{2} + A^{3}\vec{e}_{3} = A_{1}\vec{e}^{1} + A_{2}\vec{e}^{2} + A_{3}\vec{e}^{3}[/tex]

Now that was a nice little geometric approach to the contravariant vs. covariant distinction, but I would like to stress another major point about contravariant and covariant components: the real definition comes from transformation laws, i.e. the behavior of these components under a change of coordinate system, the very same transformation laws that tensors must obey. What are these transformation laws? There is a different one for contravariant components than for covariant components. Anything that transforms like [tex]A^{i'} = \frac{{\partial}{x^{i'}}}{{\partial} x^{j}} A^{j}[/tex] transforms "contravariantly", and anything that transforms like [tex]A_{i'} = \frac{{\partial}{x^{j}}}{{\partial} x^{i'}} A_{j}[/tex] does so "covariantly".

Why do the coefficients of the transformations look the way they do (partial derivatives of the coordinates)? I make the claim that covariant vector components transform the same way that regular basis vectors do from one coordinate system to another. This is the origin of the term "covariant", "co-" being a prefix that means "with" or "together". Likewise, "contra-" means "against" or "opposite": contravariant vector components transform in the opposite way to basis vectors (the transformation factors are "flipped"), instead transforming the way dual basis vectors do. Thus we need to study how standard basis vectors transform. We'll consider the transformation of a basis vector to a generalized curvilinear coordinate axis in one dimension (the process can then be expanded to three or more dimensions).

The situation can be seen in fig. 2. In curvilinear coordinates, the basis vectors change depending on where in the space you are located, so we define an axis (here called the '[itex]b_{1}[/itex]' axis) that is tangent to the curvilinear axis at a specific point. It is along such axes that the curvilinear basis vectors point. Now, we want to find the projection of the vector '[itex]\vec{b}_{1}[/itex]' onto the 'x' axis ('[itex]\vec{b}_{1}[/itex]' is simply a vector along the '[itex]b_{1}[/itex]' axis; it is not, in general, normalized):
[tex]comp_{x}^{b} = \frac{\vec{b}_{1} \circ \vec{e}_{1}}{|\vec{e}_{1}|} = \frac{|\vec{b}_{1}||\vec{e}_{1}|}{|\vec{e}_{1}|} \cos{\theta}[/tex]
We can simplify this expression by noticing the triangle formed by the vectors [itex]\vec{e}_{1}[/itex] and [itex]\vec{b}_{1}[/itex]. Further, we notice that this same ratio can be described by differential length elements along each axis (you'll see in a second why we want to notice both relations to the "[itex]\cos{\theta}[/itex]" term):
[tex]\cos{\theta} = \frac{|\vec{e}_{1}|}{|\vec{b}_{1}|} = \frac{d x_{1}}{d x '_{1}}[/tex]
Plugging back into the projection equation:
[tex]comp_{x}^{b} = |\vec{b}_{1}| \frac{d x_{1}}{d x '_{1}}[/tex]
This equation can be generalized to three dimensions, projecting the curvilinear axis [itex]b_{1}[/itex] onto each of the 3 cartesian axes to obtain a complete representation of [itex]\vec{b}_{1}[/itex] in terms of the cartesian basis vectors:
\begin{eqnarray*}
\vec{b}_{1} &=& comp_{x}^{b} \vec{e}_{1} + comp_{y}^{b} \vec{e}_{2} + comp_{z}^{b} \vec{e}_{3} \\
&=& |\vec{b}_{1}| \frac{{\partial} {x_{1}}}{{\partial}{x '_{1}}} \vec{e}_{1} + |\vec{b}_{1}| \frac{{\partial}{x_{2}}}{{\partial}{x '_{1}}} \vec{e}_{2} + |\vec{b}_{1}| \frac{{\partial}{x_{3}}}{{\partial}{x '_{1}}} \vec{e}_{3}
\end{eqnarray*}
But we don't just want 'b' vectors that span the space; we want a basis for the curvilinear coordinate space. Thus:
[tex]\frac{\vec{b}_{1}}{|\vec{b}_{1}|} = \vec{e'}_{1} = \frac{{\partial}{x_{i}}}{{\partial}{x '_{1}}} \vec{e}_{i}[/tex]
Or in general for all curvilinear basis vectors the transformation between cartesian basis vectors and curvilinear basis vectors can be written as follows:
[tex]\vec{e'}_{j} = \frac{{\partial}{x_{i}}}{{\partial}{x '_{j}}} \vec{e}_{i}[/tex]
And thus basis vectors transform covariantly.

<Insert Figure 2 from the thumbnails..>

The reader might also recognize the transformation factors as components of the Jacobian transformation matrix:
[tex]J = \frac{\partial (x_{1}, x_{2}, x_{3})}{\partial ({x'}_{1}, {x'}_{2}, {x'}_{3})} = \begin{bmatrix}
\frac{{\partial}{x_{1}}}{{\partial}{{x'}_{1}}} & \frac{{\partial}{x_{1}}}{{\partial}{{x'}_{2}}} & \frac{{\partial}{x_{1}}}{{\partial}{{x'}_{3}}} \\
\frac{{\partial}{x_{2}}}{{\partial}{{x'}_{1}}} & \frac{{\partial}{x_{2}}}{{\partial}{{x'}_{2}}} & \frac{{\partial}{x_{2}}}{{\partial}{{x'}_{3}}} \\
\frac{{\partial}{x_{3}}}{{\partial}{{x'}_{1}}} & \frac{{\partial}{x_{3}}}{{\partial}{{x'}_{2}}} & \frac{{\partial}{x_{3}}}{{\partial}{{x'}_{3}}}
\end{bmatrix}[/tex]
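As a sanity check on the covariant transformation law, here's a short sympy sketch (again my own, just illustrative) for the polar case [itex]x = r\cos\theta[/itex], [itex]y = r\sin\theta[/itex]; the columns of the Jacobian are exactly the curvilinear basis vectors:

[code]
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = r * sp.cos(th)    # cartesian coordinates as functions
y = r * sp.sin(th)    # of the curvilinear (polar) ones

# Jacobian J_ij = d(x_i)/d(x'_j):
J = sp.Matrix([[sp.diff(x, r), sp.diff(x, th)],
               [sp.diff(y, r), sp.diff(y, th)]])
sp.pprint(J)

# Its columns are the curvilinear basis vectors e'_j = (dx_i/dx'_j) e_i:
e_r  = J[:, 0]   # (cos th, sin th): unit length, radial
e_th = J[:, 1]   # (-r sin th, r cos th): length r, tangential
[/code]

Note that [itex]\vec{e'}_{\theta}[/itex] comes out with length [itex]r[/itex], illustrating the earlier remark that curvilinear basis vectors are not, in general, normalized.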

One now sees why vectors are invariant. If a vector is constructed from components and basis vectors that each use these different, but related, transformations, the transformation factors cancel and nothing happens to the actual vector.
[tex]\vec{A} = A^{i} \vec{e}_{i}[/tex]
[tex]\frac{{\partial}{x^{i'}}}{{\partial}{x^{i}}} \frac{{\partial}{x^{i}}}{{\partial}{x^{i'}}} \vec{A} = \frac{{\partial}{x^{i'}}}{{\partial}{x^{i}}} A^{i} \frac{{\partial}{x^{i}}}{{\partial}{x^{i'}}} \vec{e}_{i}[/tex]
[tex]\vec{A} = A^{i} \vec{e}_{i}[/tex]

Finally, notice that the inner product formed from the covariant and contravariant components of a vector is an invariant scalar quantity:
[tex]s = \vec{A} \circ \vec{A} = A_{i} A^{i}[/tex]
This is easily proved by, again, multiplying both sides of the equation by the transformation factors and noticing their cancellation:
[tex]\frac{{\partial}{x^{i}}}{{\partial}{x^{i'}}} \frac{{\partial}{x^{i'}}}{{\partial}{x^{i}}} s = A_{i} \frac{{\partial}{x^{i}}}{{\partial}{x^{i'}}} A^{i} \frac{{\partial}{x^{i'}}}{{\partial}{x^{i}}}[/tex]
[tex]s = A_{i} A^{i}[/tex]
And so the equation (and the quantity 's') is unaffected by a coordinate transformation.
This is NOT true (except in Cartesian coordinate systems, where covariant and contravariant components are indistinguishable from each other) of the inner product formed from two covariant or two contravariant component sets.
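Here's a quick numerical check of that invariance claim (an illustrative sketch of my own, using the same sort of skewed basis as before):

[code]
import numpy as np

# A skewed 3D basis and its dual.
E = np.array([[1.0, 0.0, 0.0],    # rows: ordinary basis vectors e_i
              [1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
D = np.linalg.inv(E.T)            # rows: dual basis vectors e^i

A = np.array([2.0, -1.0, 0.5])
A_contra = D @ A                  # contravariant components A^i
A_co     = E @ A                  # covariant components A_i

print(A_contra @ A_co)            # the invariant s = A_i A^i ...
print(A @ A)                      # ... equals the honest A . A
print(A_contra @ A_contra)        # NOT equal to A . A in a skewed basis
print(A_co @ A_co)                # NOT equal to A . A either
[/code]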

I never had this article reviewed but I'm sure it's all correct..

Also, I agree with/like Muphrid's posts on the subject.
 
  • #8
Hmmm, did this thing not include my attachments?..

"nonorthogonal.jpg" is Figure 1, and "curvilinear.jpg" is Figure 2.
 

  • #9
Note that there are ways to find the dual basis vectors in spaces other than 3D; since the cross product doesn't generally exist in such spaces, you need a different definition, but the basic gist is the same.
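For instance, one general definition uses the metric (Gram) matrix [itex]g_{ij} = e_i \cdot e_j[/itex]: the dual basis is [itex]e^i = \sum_j (g^{-1})_{ij} e_j[/itex]. A small numpy sketch (purely illustrative, with a randomly chosen basis) in four dimensions:

[code]
import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=(4, 4))        # rows: four basis vectors in R^4
g = E @ E.T                        # metric g_ij = e_i . e_j
D = np.linalg.inv(g) @ E           # rows: dual basis e^i = (g^-1)_ij e_j
print(np.round(D @ E.T, 12))       # identity -> e^i . e_j = delta_ij
[/code]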

The Jacobian's importance and relationship to tensor transformation laws cannot be emphasized enough. It really captures the idea behind coordinate system invariance: that though we may change the components, the overall object being described must be the same. The Jacobian puts this into precise mathematical language.

Edit: by the by, you might find \cdot (as in [itex]A \cdot A[/itex]) useful.

Edit edit: Here's another explanation (not even remotely my own).

Let [itex]x' = f(x)[/itex] represent a coordinate system transformation. This could be as simple as, for example, [itex]x' = r e_1 + \theta e_2 = \sqrt{x \cdot x} \, e_1 + e_2 \arctan \left( \frac{x \cdot e^2}{x \cdot e^1} \right)[/itex], which should make it clear how this function can produce a new coordinate system.

Now then, let [itex]\phi (x) = \phi' (x')[/itex] represent a scalar field. As long as [itex]x[/itex] corresponds to [itex]x'[/itex], the values of this field are the same, but [itex]\phi[/itex] represents the field in terms of the original coordinates and [itex]\phi'[/itex] represents the field in terms of the new coordinates. Imposing this condition ensures that the field is coordinate system invariant.

How can we talk about derivatives of this field? Well, it should be clear that, for any vector [itex]a[/itex], [itex]a \cdot \nabla \phi(x) =a \cdot \nabla \phi'(x')[/itex], but that right side isn't very useful since [itex]\phi'[/itex] is in terms of the new coordinates. We'll need to use the chain rule:

[tex]a \cdot \nabla \phi'(x') = [a \cdot \nabla x'] \cdot \nabla' \phi'(x') = [a \cdot \nabla f(x)] \cdot \nabla' \phi'(x')[/tex]

Let's define [itex]a \cdot \nabla f(x) \equiv \underline f(a)[/itex] as a linear operator on [itex]a[/itex] (but with x-dependence as well). This is the Jacobian! It may be more clear when we use basis vectors:

[tex]\underline f(e_a) \cdot e_b = [e_a \cdot \nabla f(x)] \cdot e_b = \frac{\partial {x'}^b}{\partial x^a}[/tex]

But this notation gives us a nice way to talk about the action of the Jacobian without using a basis.

There's actually a nice trick where you can do [itex]a \cdot \underline f(b) = \overline f(a) \cdot b[/itex], where the overline denotes the "transpose" (really the "adjoint" is the better word). That means we can take our chain rule thing and say this:

[tex]a \cdot \nabla \phi(x) = [a \cdot \nabla f(x)] \cdot \nabla' \phi'(x') = \underline f(a) \cdot \nabla' \phi'(x') = a \cdot \overline f(\nabla') \phi'(x')[/tex]

Or, more succinctly,

[tex]\nabla = \overline f(\nabla')[/tex]

You may be looking at this and wondering, "how can this be?" But it works, and you can derive, for example, the expressions for [itex]\nabla[/itex] in terms of polar, spherical, or cylindrical coordinates using this expression--just taking [itex]\nabla' = e^1 \partial_r + e^2 \partial_\theta[/itex] for instance in polar coordinates, finding the polar Jacobian and working back to get [itex]\nabla[/itex].
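If you want to verify that claim symbolically, here's a sympy sketch (my own, with an arbitrarily chosen scalar field [itex]\phi = x^2 y[/itex]) checking [itex]\nabla \phi = \overline f(\nabla' \phi')[/itex] for the cartesian-to-polar map:

[code]
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# x' = f(x): the cartesian-to-polar map.
r  = sp.sqrt(x**2 + y**2)
th = sp.atan2(y, x)

# The same scalar field in both coordinate systems:
phi = x**2 * y
rp, thp = sp.symbols('r theta', positive=True)
phi_p = rp**3 * sp.cos(thp)**2 * sp.sin(thp)

# Jacobian of f, with components (a, b) = d x'^b / d x^a:
F = sp.Matrix([[sp.diff(r, x), sp.diff(th, x)],
               [sp.diff(r, y), sp.diff(th, y)]])

# grad' phi', pulled back to cartesian variables:
grad_p = sp.Matrix([sp.diff(phi_p, rp), sp.diff(phi_p, thp)])
grad_p = grad_p.subs({rp: r, thp: th})

# Claim: grad phi = fbar(grad' phi'), i.e. lhs - F*grad_p = 0.
lhs = sp.Matrix([sp.diff(phi, x), sp.diff(phi, y)])
print(sp.simplify(lhs - F * grad_p))   # the zero vector
[/code]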

And what you should take away from the above math is that the Jacobian arises just from saying our fields and derivatives should be coordinate system invariant and the chain rule. It's cool, cool stuff.

And finally, there's one other equation to consider. If we say [itex]x = x(\tau)[/itex] is a parameterized curve, then we get

[tex]\frac{dx}{d\tau} = \underline f^{-1} \left( \frac{dx'}{d\tau}\right)[/tex]

This is applicable to velocities of objects, and this is what captures the difference between covariant and contravariant basis vectors. The behavior of [itex]\nabla[/itex] and [itex]dx/d\tau[/itex] use different factors of the Jacobian.
 
  • #10
Dual vectors are just linear maps from a vector space to the scalars. All you have to do to believe in their relevance is therefore to believe in the relevance of linear algebra. Some things just happen to be naturally linear, but often it's the old calculus story that differentiable functions are precisely those that look linear when you zoom in on them. For example, gradients can be viewed as dual vectors: a real-valued function on R^2 looks, when you zoom in on it, like a linear map from R^2 to R, and that linear map is a dual vector (the gradient).
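For example, here's a tiny Python sketch (illustrative only, with an arbitrarily chosen field) of a gradient acting as a dual vector: a map that eats a direction vector and spits out a number, linearly.

[code]
import numpy as np

def f(p):                          # a scalar field on R^2
    return p[0]**2 + 3.0 * p[1]

def df(p, a, h=1e-6):              # the dual vector df at p acting on a vector a
    return (f(p + h * a) - f(p - h * a)) / (2.0 * h)

p = np.array([1.0, 2.0])
print(df(p, np.array([1.0, 0.0])))   # ~2.0, i.e. df/dx at p
print(df(p, np.array([0.0, 1.0])))   # ~3.0, i.e. df/dy at p
print(df(p, np.array([2.0, 5.0])))   # ~19.0 = 2*2 + 3*5: linear in a
[/code]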

To translate from this stereotypical mathematicians' viewpoint to the stereotypical physicists' viewpoint, just pick a basis. The components of the dual vector are just the values it takes on those basis elements.
 

1. What is the purpose of dual vectors in fluid mechanics?

Dual vectors, also known as covectors, are linear maps that take a vector and return a scalar. In fluid mechanics they become important whenever a problem is posed in non-orthogonal or curvilinear coordinates: derivative quantities such as the gradient of pressure or of a velocity potential are most naturally expressed in terms of the dual basis.

2. How are dual vectors derived in fluid mechanics?

The dual basis is defined by the condition [itex]\vec{e}^{i} \circ \vec{e}_{j} = \delta_{ij}[/itex] with the ordinary basis. In three dimensions this leads to the explicit cross-product formulas given earlier in the thread; more generally, the dual basis vectors can be computed from the inverse of the metric (Gram) matrix [itex]g_{ij} = \vec{e}_{i} \circ \vec{e}_{j}[/itex].

3. What is the difference between a vector and a dual vector in fluid mechanics?

An ordinary vector represents a physical quantity, such as velocity or force, and is expanded in contravariant components on the tangent basis. A dual vector is a linear functional on vectors, expanded in covariant components on the dual basis; the prototypical example is the gradient of a scalar field, which maps a direction onto the rate of change of the field in that direction. In Cartesian coordinates the two kinds coincide, which is why the distinction usually goes unnoticed.

4. How are dual vectors used in practical applications of fluid mechanics?

In computational fluid dynamics, flows around complex geometries such as aircraft wings or through pipe networks are commonly solved on curvilinear, body-fitted grids. On such grids the governing equations are written in terms of covariant and contravariant components, so the dual basis appears directly in practical solver formulations.

5. Can dual vectors be applied to all types of fluids?

Yes. The distinction between a basis and its dual is a property of the coordinate system, not of the fluid, so it applies equally to gases, liquids, and non-Newtonian fluids: any flow problem posed in non-orthogonal coordinates makes use of it.
