Understanding Bras and Kets as Vectors and Tensors

  • Thread starter: Phrak
  • Tags: Tensors

Summary
Bras and kets can be understood as vectors in a Hilbert space, with kets represented as vectors with upper indices and bras as vectors with lower indices. In finite-dimensional spaces, bras exist in the dual space, which is isomorphic to the primal space, but this distinction becomes significant in infinite dimensions. The Hermitian inner product in finite dimensions can be defined similarly to how it's done with real vectors, using complex conjugation for inner products. While kets can be considered as elements of a vector space, the concept of a coordinate basis is less applicable, as the basis vectors are typically eigenvectors of Hermitian operators. Overall, the discussion emphasizes the relationship between bras, kets, and tensor products within the framework of linear algebra and functional analysis.
  • #121
Hurkyl said:
The problem is that you are using the letter R -- a letter well-established to indicate something akin to "the one-dimensional vector space over the reals".
I used \mathbb{R} to indicate the range of the single continuous index of a one-dimensional vector
with \infty elements, and I use \mathbb{R}^3 to describe the 3 continuous indices of a function in a volume.
I shouldn't have used \mathbb{C} in this context.


So, symbolically, in terms of indices:

A\otimes B\otimes C ~=~ D

If the indices of A, B and C are given by \mathbb{R}, then the indices of D are given by \mathbb{R}^3.
Indices (tensor ranks) add. The direct product of three tensors of rank 1 is a tensor
of rank 3.

\mbox{rank}(A\otimes B\otimes C) ~=~ \mbox{rank}(A)+\mbox{rank}(B)+\mbox{rank}(C) ~=~ \mbox{rank}(D)
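
(A minimal NumPy sketch of this bookkeeping, sampling each rank-1 object as a 1-D array; the array contents are arbitrary and only the shapes matter.)

```python
import numpy as np

# Three rank-1 objects, each sampled as a 1-D array (one index each).
A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0])
C = np.array([6.0, 7.0, 8.0, 9.0])

# Direct (outer) product D = A (x) B (x) C: one index per factor.
D = np.multiply.outer(np.multiply.outer(A, B), C)

print(D.ndim)    # 3  -- ranks (numbers of indices) add: 1 + 1 + 1
print(D.shape)   # (3, 2, 4)
print(D.size)    # 24 -- numbers of elements multiply: 3 * 2 * 4
```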
You are associating \mathbb{R}^n with the number of elements instead of the indices and thus
you get the following in the same case:

If the number of elements of A, B and C is given by \mathbb{R}^\infty, then the number of elements of D
is given by \mathbb{R}^{\infty^3}. The numbers of elements multiply, hence the \infty^3.

As long as we understand each other.
Regards, Hans
 
Last edited:
  • #122
Hans de Vries said:
reilly was talking about a non-relativistic two-particle wave function as the vector direct
product of two single-particle functions, which is correct according to the definition of the
vector direct product given here:

http://mathworld.wolfram.com/VectorDirectProduct.html
Which is the same thing as the tensor product the rest of us are talking about.

(Fine print -- there are a bunch of equivalent ways to define tensor products, so I should really say this is just a particular realization of the tensor product)


You may have an argument in that I implicitly assume that in R\otimes R one is
a row vector and the other is a column vector, so an nx1 vector times a 1xn
vector is an nxn matrix,
That's not what the argument is. The argument is that for elements of R, we have n=1. The argument is that while you might mean to talk about continuously indexed spaces, the thing you are actually saying is "the product of a 1x1 matrix with a 1x1 matrix is a matrix with 2 entries".
 
  • #123
Hurkyl said:
while you might mean to talk about continuously indexed spaces
It's indeed exactly this. Once you replace A\otimes B with \mathbb{R}^n\otimes\mathbb{R}^m, then that's another
level of symbolization, and there is an ambiguity as to what \mathbb{R}^n refers to: the number
of elements or the number of continuous indices.

Regards, Hans
 
Last edited:
  • #124
Am I understanding you right (taking the finite case) that by "dimension" you mean the size, e.g. that from two vectors (n x 1) and (1 x m) you get an
(n x m) matrix with the tensor product (or, as you call it, the vector direct product)? Because that is indeed true.

The problem is that we need another notion of dimension, and it is very much used in physics. Haven't you ever seen something like "assume that we have a two-level system, with two states |0> and |1>"? This means that |0> and |1> are a basis for our problem, which is two-dimensional. If we then want to describe two of these systems, we take the tensor product of the two spaces, and a basis for this new vector space is

|0> \otimes |0>,|0> \otimes |1>,|1> \otimes |0>,|1> \otimes |1>,

this is very standard and used by a lot of physicists (at least everyone doing QM). I'm a bit baffled that you would immediately think of the vector direct product between vectors (and matrices). Do you have some references where this is used (preferably online)? I know that realization of the tensor product, but I have never seen it used anywhere.

I still don't understand how you would use that definition if you don't have finite vectors like (n x 1).

Do you know the general definition of the tensor product? You are talking about the tensor product between elements of a vector space, which in the finite case can be realized as you say, but that new element actually lives in the space that we are talking about. That is, say we have a vector space V spanned by (1,0) and (0,1); then you construct a new element like this

(1,0) \otimes (0,1)^T = ((0,1)^T,(0,0)^T)

this is actually an element of the new vector space denoted

V \otimes V, up to isomorphism,

and if one particle lives in the space V, then this is the natural space in which to describe two such particles, because two particles of this type can be in any linear combination of products like (1,0) \otimes (0,1)^T = ((0,1)^T,(0,0)^T), where you take all the different combinations of the basis vectors. So you are talking about the elements, whereas we are talking about the space these elements live in.

But it is standard QM, and which spaces you get matters, more than just taking the elements, because it is rare that you can simply say that one particle is in the state (a,b) and particle number two is in another state (c,d) and then only be interested in the combination (a(c,d)^T, b(c,d)^T); as time evolves the system can become many other things, and those things it can become live in the space we are talking about. That's why it is important: you then know which space to restrict to.
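
(For concreteness, a minimal NumPy sketch of the two-level example above, using np.kron as one realization of \otimes on the computational basis; the code itself is only illustrative.)

```python
import numpy as np

# One two-level system: basis |0>, |1> of C^2.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Two such systems live in C^2 (x) C^2, a 4-dimensional space spanned by
# |0>(x)|0>, |0>(x)|1>, |1>(x)|0>, |1>(x)|1>.
basis = [np.kron(a, b) for a in (ket0, ket1) for b in (ket0, ket1)]

for v in basis:
    print(v)          # the four standard basis vectors of C^4
print(len(basis))     # 4 = 2 * 2: dimensions multiply under (x)
```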
 
  • #125
Hans de Vries said:
It's indeed exactly this. Once you replace A\otimes B with \mathbb{R}^n\otimes\mathbb{R}^m, then that's another
level of symbolization, and there is an ambiguity as to what \mathbb{R}^n refers to: the number
of elements or the number of continuous indices.


Regards, Hans
You do realize that R, R², R³, ... all have the same number of elements, right? Even the separable Hilbert space (e.g. the space of wavefunctions continuously indexed by \mathbb{R}^n) has the same number of elements as R! So, I should hope nobody ever uses those symbols in this context to indicate number of elements.

\mathbb{R}^n cannot be denoting a number in this context. There is usually no ambiguity here, because \mathbb{R}^n is always meant to indicate the standard n-dimensional vector space over R -- you are the only source I have ever seen who insists on using R in any other way in this context... and it's somewhat bewildering why you would do so, not just because you insist upon confusing an index set with a vector space 'over' those indices, but also because you refuse to use the name of the actual operation you are doing on index sets -- the Cartesian product -- and instead prefer to use the name of the operation performed on the corresponding vector spaces.

This whole thing would be akin to me insisting upon saying 3 \cdot 5 = 8 when I really mean e^3 \cdot e^5 = e^8. (And not even using 3 + 5 = 8 which would be a correct statement)
 
  • #126
Hurkyl said:
\mathbb{R}^n is always meant to indicate the standard n-dimensional vector space over R
\mathbb{R}^n is a continuous n-dimensional vector space. Yes, of course, this is the definition I was using all along.

Hurkyl said:
Hans de Vries said:
I'm using the vector direct product as defined here: http://mathworld.wolfram.com/VectorDirectProduct.html
Which contains an example indicating \mathbb{R}^3 \otimes \mathbb{R}^3 \cong \mathbb{R}^9 -- not \mathbb{R}^6 as you suggest.
But here you use a 2nd, different definition of \mathbb{R}^n. In this case \mathbb{R}^n means n real elements. OK...

Hurkyl said:
\mathbb{R}^n cannot be denoting a number in this context.
Now \mathbb{R}^n can not denote n real indices or n real elements anymore? As in your 2nd definition?

Hurkyl said:
you insist upon confusing an index set with a vector space 'over' those indices
Are you now accusing me of confusing the two different interpretations of \mathbb{R}^n you gave?

Hurkyl said:
You do realize that R, R², R³, ... all have the same number of elements, right?
No, define your R, R², R³ and "elements" properly instead of making a guessing game out of this. Regards, Hans.
 
Last edited:
  • #127
Can't you see that you are using \otimes as the operation between two elements of some vector space? That is what the link you are referring to defines. This is legitimate, but then you write it between vector spaces, not elements, and what you say is wrong; it is as simple as that.

You are right that if you have a 1x3 vector (which can be indexed with one index) and take \otimes between two such vectors, then you can visualise the result as a 3x3 matrix, which can be indexed with two indices (is this what you call dimension = 2?).

I'm pretty sure you are using the terminology wrong; how much math background do you have? And do you have some references that do what you do? Because I simply can't grasp that anyone does it the way you do.
 
  • #128
mrandersdk said:
Can't you see that you are using \otimes as the operation between two elements of some vector space? That is what the link you are referring to defines. This is legitimate, but then you write it between vector spaces, not elements, and what you say is wrong; it is as simple as that.

You are right that if you have a 1x3 vector (which can be indexed with one index) and take \otimes between two such vectors, then you can visualise the result as a 3x3 matrix, which can be indexed with two indices (is this what you call dimension = 2?).

I'm pretty sure you are using the terminology wrong; how much math background do you have? And do you have some references that do what you do? Because I simply can't grasp that anyone does it the way you do.



The link defines the vector direct product (http://mathworld.wolfram.com/VectorDirectProduct.html) as follows:

==================================================

Given vectors u and v, the vector direct product is:

uv = u\otimes v^T

If u and v have three elements then:

uv ~=~ \left[\begin{array}{ccc} u_1 & u_2 & u_3 \end{array}\right] ~\otimes~ \left[\begin{array}{c} v_1 \\ v_2 \\ v_3 \end{array}\right] ~~=~~ \left[\begin{array}{ccc} u_1v_1 & u_1v_2 & u_1v_3 \\ u_2v_1 & u_2v_2 & u_2v_3 \\ u_3v_1 & u_3v_2 & u_3v_3 \end{array}\right]

==================================================


Note first that the Transpose is not used in a 100% strict way. It merely reminds
us that one of the vectors is a row vector and the other is a column vector.

u~\otimes~ v ~~=~~ (1\times 3) \otimes (3\times 1) ~~=~~ (3\times 3).


If you want to extend this to a triple product, then u, v and w must be of the form:

u~\otimes~ v~\otimes ~w ~~=~~ (1\times 1\times 3) \otimes (1\times 3\times 1) \otimes (3\times 1\times 1) ~~=~~ (3\times 3\times 3).



u, v and w are all vectors, one-dimensional, and in the continuous limit they become
one-dimensional spaces represented by \mathbb{R}^1. The result has three indices. It is a rank-3
tensor. In the continuous limit it becomes a volume which is represented by \mathbb{R}^3.
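
(A rough NumPy sketch of the finite-dimensional shapes above, with np.multiply.outer playing the role of the vector direct product; the entries are arbitrary.)

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
w = np.array([7.0, 8.0, 9.0])

uv = np.multiply.outer(u, v)      # (1 x 3) against (3 x 1): a 3 x 3 array
uvw = np.multiply.outer(uv, w)    # extend to the triple product: 3 x 3 x 3

print(uv.shape, uvw.shape)                   # (3, 3) (3, 3, 3)
print(uv[1, 2] == u[1] * v[2])               # True: (uv)_{ij} = u_i v_j
print(uvw[0, 1, 2] == u[0] * v[1] * w[2])    # True: rank-3 components
```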



Regards, Hans.
 
Last edited by a moderator:
  • #129
Hans de Vries said:
[...]

u, v and w are all vectors, one-dimensional, and in the continuous limit they become
one-dimensional spaces represented by \mathbb{R}^1. The result has three indices. It is a rank-3
tensor. In the continuous limit it becomes a volume which is represented by \mathbb{R}^3.


This is not right. We can take a finite vector and represent it as a finite array (a_1, a_2, ..., a_n); we can then extend this to a countable but not finite index set, which gives a sequence (a_1, a_2, ...); and we can then maybe extend to some kind of "sequence" over an uncountable set like the reals (this would just be an ordinary function). But to say this is represented by R is wrong. In that notion of an uncountable "sequence" you could perhaps say that the reals are one such element, namely the function f(x) = x (this you could maybe denote R, even though I don't think anyone does). But what about the function f(x) = 1? That is nothing like R.

Can't you give some references other than that link? I know that construction, but I still think you are using it a bit wrong. Please give me some material where they use it the way you do and say that in the continuous limit it is R.

You say:

"u, v and w are all vectors, one dimensional, and in the continuous limit they become
one dimensional spaces represented by R"

This is complete nonsense; a vector does not turn into some space. A vector space is something where you can add things. As has already been pointed out, you are thinking of which index set is used, as in the example I gave above. I think you are using the terminology completely wrong.
 
Last edited by a moderator:
  • #130
mrandersdk said:
This is complete nonsense; a vector does not turn into some space. A vector space is something where you can add things. As has already been pointed out, you are thinking of which index set is used, as in the example I gave above. I think you are using the terminology completely wrong.

Oh please, mrandersdk. There is nothing wrong in considering a one-dimensional space as a
vector with a single continuous index. This is done all the time.

Regards, Hans
 
Last edited:
  • #131
Then show me where, because I don't believe that; I have never seen the reals considered as a vector.
 
  • #132
mrandersdk said:
Then show me where, because I don't believe that; I have never seen the reals considered as a vector.


It's not that the reals are considered as a vector. It's the index of the vector which
becomes a real. The values of the vector become a function of x where x is the index
and x is a real number.


Regards, Hans
 
  • #133
Okay, that is exactly what I was saying. But the notation where you use R to denote a vector is wrong; R is a vector space. I wouldn't denote a finite (1 x n) vector by n, or a sequence by N; this is wrong. You are right that if we have a continuously indexed vector (that is, a function) and you take the tensor product (which is what you do), then you get a higher-rank tensor that you can index by R^2; this is right.

But this you don't denote by

R \otimes R = R^2

this means something completely different. Do you know the general construction of the tensor product?

How would you use the link you gave for a continuously indexed vector? It clearly works for a finite one; I can maybe imagine how to do it for a countably indexed one, but I don't know how to do it for a continuous one.
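
(For what it's worth, the usual physicist's stand-in is to discretize: sample each function on a grid, so the "continuously indexed vector" becomes an ordinary array and the product becomes a two-index array. A minimal NumPy sketch with illustrative grids and functions; this is only a finite approximation, not the actual L^2 construction.)

```python
import numpy as np

# Discretize the "continuous index": sample f and g on finite grids.
x = np.linspace(-1.0, 1.0, 200)
y = np.linspace(-1.0, 1.0, 300)

f = np.exp(-x**2)     # f(x): a vector indexed (approximately) by x
g = np.cos(y)         # g(y): a vector indexed (approximately) by y

# Their product is a function of two indices, h(x, y) = f(x) * g(y).
h = np.multiply.outer(f, g)
print(h.shape)        # (200, 300): a rank-2 object indexed by the pair (x, y)
```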
 
  • #134
mrandersdk said:
But this you don't denote by R \otimes R = R^2
this means something completely different.
It is unclear what \mathbb{R}^1 \otimes \mathbb{R}^1 = \mathbb{R}^2 means until everything is properly defined. There now seems to be at least a consensus that \mathbb{R}^n should be interpreted as an
n-dimensional space: a tensor of rank n with n different indices which are all real
numbers. That's one.

The other thing which needs to be clear is that one \mathbb{R}^1 should be a row vector
and the other \mathbb{R}^1 a column vector.

The notation \mathbb{R}^1 \otimes \mathbb{R}^1 = \mathbb{R}^2 is correct under the above two conditions. The extension
to triple products was given in post #128 (https://www.physicsforums.com/showpost.php?p=1793772&postcount=128).

Regards, Hans
 
Last edited by a moderator:
  • #135
Hans de Vries said:
It is unclear what \mathbb{R}^1 \otimes \mathbb{R}^1 = \mathbb{R}^2 means until everything is properly defined.


There now seems to be at least a consensus that \mathbb{R}^n should be interpreted as an
n-dimensional function space. A tensor of rank n with n different indices which are
all real numbers. That's one.

The other thing which needs to be clear is that one \mathbb{R}^1 should be a row vector
and the other \mathbb{R}^1 a column vector.

The notation \mathbb{R}^1 \otimes \mathbb{R}^1 = \mathbb{R}^2 is correct under the above two conditions. The extension
to triple products was given in post #128 (https://www.physicsforums.com/showpost.php?p=1793772&postcount=128).


Regards, Hans

I still don't understand why you call R a vector; it is not one in any way. And these equations are wrong, unless you have invented your own notation for something and are using the same symbols that actually mean something else.
 
Last edited by a moderator:
  • #137
Yes, a vector space; that is something completely different from a vector, which is what you have been calling it.
 
  • #138
mrandersdk said:
Yes, a vector space; that is something completely different from a vector, which is what you have been calling it.


\mathbb{R}^1 is defined as a 1-dimensional vector space, which is a tensor of rank 1
(= vector) with a single index, where the index is a real number.


Regards, Hans
 
  • #139
Let me see a reference for that definition of a rank-1 tensor, because I have never seen that.
 
  • #140
mrandersdk said:
Let me see a reference for that definition of a rank-1 tensor, because I have never seen that.

http://en.wikipedia.org/wiki/Tensor#Tensor_rank

Quote:

"In the first definition, the rank of a tensor T is the number of indices required to write down the components of T"


Regards, Hans
 
  • #141
I know that, but show me one that says that R^1 is a rank-1 tensor.
 
  • #142
mrandersdk said:
I know that, but show me one that says that R^1 is a rank-1 tensor.

mrandersdk,

The extension of finite dimensional vectors to infinite dimensional vectors/functions
is one of the pillars of mathematics and physics. I think I've done enough by now.


Regards, Hans
 
  • #143
This is ridiculous; if it is a pillar of math and physics, it must be easy to find a reference. The vector space R^1 is never going to be a tensor.
 
  • #144
mrandersdk said:
This is ridiculous; if it is a pillar of math and physics, it must be easy to find a reference. The vector space R^1 is never going to be a tensor.
You can try it on the math forums. Ask the right question to get the right answer.

A vector (being a tensor of rank 1), which is a one-dimensional array of elements, becomes
a function in the continuous limit.

For the mathematically pure, you should inquire about the "space of functions on the Euclidean
1-space \mathbb{R}^1" rather than \mathbb{R}^1 itself, or maybe even the space of square-integrable functions
on Euclidean 1-space \mathbb{R}^1, written L^2(\mathbb{R}^1), as advised by our good friend Hurkyl, although
this is somewhat QM-specific.

You will also see that good manners are helpful in getting assistance.

Regards, Hans
 
Last edited:
  • #145
Hans de Vries said:
You can try it on the math forums. Ask the right question to get the right answer.
And he will be told that R^n (in this context) denotes the standard n-dimensional real vector space whose elements are n-tuples of real numbers.

He will be told that R^n is neither a vector nor a tensor. (Barring set-theoretic tricks to construct some unusual vector spaces)

He will be told that elements of R^n are vectors. He will be told that in the tensor algebra over R^n, elements of R^n are rank-1 tensors.

He will be told that \mathbb{R} \oplus \mathbb{R} \cong \mathbb{R} \times \mathbb{R} \cong \mathbb{R}^2 and \mathbb{R} \otimes \mathbb{R} \cong \mathbb{R}.

He will be told that L^2(\mathbb{R}) and C^\infty(\mathbb{R}) are infinite-dimensional topological vector spaces (square-integrable and infinitely-differentiable functions, respectively).

He will be told that the number of elements in R^n is |R| (= 2^{|N|}).
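
(A small NumPy sketch of the dimension counting behind the isomorphisms \mathbb{R} \oplus \mathbb{R} \cong \mathbb{R}^2 and \mathbb{R} \otimes \mathbb{R} \cong \mathbb{R} above, treating np.kron as \otimes on coordinate vectors and concatenation as \oplus; the numbers are arbitrary.)

```python
import numpy as np

a = np.array([2.0])   # an element of R, a 1-dimensional space
b = np.array([3.0])

print(np.kron(a, b))             # [6.]    -- R (x) R is again 1-dimensional
print(np.concatenate([a, b]))    # [2. 3.] -- R (+) R is 2-dimensional

# In general dim(R^n (x) R^m) = n * m, while dim(R^n (+) R^m) = n + m.
```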
 
Last edited:
  • #146
(On a Simpler Note)

Finite dimensional quantum mechanical vectors, operators and coefficients may all be represented by real-valued matrices.

c = a + ib \Rightarrow c = \left( \begin{array}{cc} a & b \\ -b & a \end{array} \right)

c^* \Rightarrow c^T

For example, an Nx1 complex column vector becomes a 2Nx2 array of reals.

What makes this interesting is that

\left\langle u \right| X \left| v \right\rangle ^*

becomes

( v^{T} X^T u )^T

The adjoint is applied by transposition only.
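
(A quick NumPy check of this representation, assuming the 2x2 block convention written above; the helper name real_rep and the sample numbers are just illustrative.)

```python
import numpy as np

def real_rep(c: complex) -> np.ndarray:
    """2x2 real matrix standing in for the complex number c = a + ib."""
    a, b = c.real, c.imag
    return np.array([[a,  b],
                     [-b, a]])

z, w = 1.0 + 2.0j, 3.0 - 1.0j

# Multiplication of complex numbers is respected by the representation...
print(np.allclose(real_rep(z) @ real_rep(w), real_rep(z * w)))   # True

# ...and complex conjugation becomes transposition, as claimed above.
print(np.allclose(real_rep(z).T, real_rep(z.conjugate())))       # True
```

Stacking one such block per entry turns an Nx1 complex column into the 2Nx2 real array mentioned above, so the adjoint is indeed applied by transposition alone.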
 
  • #147
You are right that a vector is a tensor of rank 1 (at least the way physicists look at it), but you say that R^1 is a tensor, and that is incorrect. I'm pretty sure I know what a tensor is; I have taken courses in operator analysis, real and complex analysis, measure theory, tensor analysis, Riemannian geometry and Lie groups, and if you look in the math section, you will see that one of the people helping out on this subject would be me.

I have also taken general relativity, so I also know how physicists look at a tensor (as a multidimensional array of numbers).

My problem is that you say that the vector space is a tensor; this is wrong. It is right that R^1 contains rank-1 tensors. From R^1 we can then construct a space by taking the tensor product of the two spaces (note: between the spaces, not elements of them), that is

R^1 \otimes R^1 = R^1

The reason that these two are isomorphic is that, given a basis for R^1, say e_1, a basis for R^1 \otimes R^1 consists of all elements of the form e_i \otimes e_j; but there is only one, namely e_1 \otimes e_1, so it is easy to write an isomorphism between the two spaces. And this is not surprising, because this is the space of 1x1 matrices, which is of course the same as R^1.

If you want to make n x m matrices over R, you need

R^n \otimes R^m = R^{nm}, which again has the basis e_i \otimes e_j, \ i=1,...,n, \ j=1,...,m; you can think of e_i \otimes e_j as referring to the ij-th entry of the matrix.
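
(A quick NumPy check of this basis counting, with np.kron standing in for \otimes on basis vectors; the choice n = 3, m = 4 is arbitrary.)

```python
import numpy as np

n, m = 3, 4
e = np.eye(n)    # e[i] is the basis vector e_i of R^n
f = np.eye(m)    # f[j] is the basis vector f_j of R^m

# The n*m products e_i (x) f_j span R^n (x) R^m ~ R^{nm}.
products = np.array([np.kron(e[i], f[j]) for i in range(n) for j in range(m)])

print(products.shape)                   # (12, 12): n*m vectors, each of length n*m
print(np.linalg.matrix_rank(products))  # 12: they are linearly independent
```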

You can now just say that we take some continuous limit and then we get functions, but if you do you have to be careful, and anyway it is not done at all the way you do it. The problem is that you want to take the tensor product between spaces that are not finite-dimensional (uncountable-dimensional, in fact), which is not always so simple.

But in fact I don't think that is what you want; I just think you want to take tensor products between functions. So if we have a function space H with a finite basis f_1, ..., f_n, you can do the same and take the tensor product of H with itself. Then an element of that new vector space is

g_{ij} f_i \otimes f_j (Einstein summation assumed)

Writing it in the basis, as most physicists do, you would only look at g_{ij}. Now if you want to take a non-discrete basis (or, more precisely, a non-discrete set that spans the space), you could write the same thing, I guess (I'm not even sure it works, but I guess physicists hope it does):

g_{xy} f_x \otimes f_y

Now the Einstein summation must be an integral to make sense of it, but one has to be very careful with something like this. The reason that this works, I guess, is something to do with the spectral theorem for unbounded operators, and maybe physicists just hope it works because it would be nice.

It seems to me that you haven't used tensor products between spaces, and have just used them between elements without really knowing what is going on at the higher mathematical level, and maybe this has led to some confusion. I'm not questioning that you can do calculations in a specific problem correctly, but I'm telling you that many of the identities you wrote here are either wrong, or you are using completely nonstandard notation.

P.S. This was not meant to be rude, but I know a bit about what I'm talking about, and would very much like to see some references on how you use it, because that would help a lot in trying to understand how you are doing it. Am I completely wrong, or is this notation you have come up with yourself? Or do you have some papers or a book that use that notation and present it the way you do?
 
  • #148
mrandersdk,

This is really just a whole lot of confusion about something very trivial. I just tried to convey that a non-relativistic two-particle wave function is a function
of 6 parameters: the xyz coordinates of both particles.

This is the result of the vector direct product of the two (non-interacting) single-particle
wave functions. Yes, instead of writing it symbolically in a shorthand notation like this:

R^3 \otimes R^3 = R^6

It should have been something like:

( U \in L^2(\mathbb{R}^3)) \otimes ( V \in L^2(\mathbb{R}^3))^T = ( W \in L^2(\mathbb{R}^6))

After all, I'm talking about the vector direct product of wave functions, that is
quantum mechanics, and I'm not talking about tensor products between topological
vector spaces. I didn't even know that these animals existed, and it seems pretty
hard to do anything physically useful with them when looking at their definition, but OK.
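
(The same statement on a crude grid, assuming NumPy: each single-particle wave function is sampled on its own (x, y, z) grid, and the non-interacting two-particle function then carries six indices. The grid size and the Gaussian profiles are purely illustrative.)

```python
import numpy as np

# Crude grid for one particle's (x, y, z) coordinates.
x = np.linspace(-3.0, 3.0, 10)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

U = np.exp(-(X**2 + Y**2 + Z**2))          # psi_1(x1, y1, z1): three indices
V = np.exp(-((X - 1.0)**2 + Y**2 + Z**2))  # psi_2(x2, y2, z2): three indices

# Non-interacting two-particle wave function W(r1, r2) = psi_1(r1) * psi_2(r2):
W = np.multiply.outer(U, V)
print(U.ndim, V.ndim, W.ndim)              # 3 3 6 -- a function of six coordinates
print(W.shape)                             # (10, 10, 10, 10, 10, 10)
```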

mrandersdk said:
But in fact I don't think that is what you want; I just think you want to take tensor products between functions.
Indeed, to be exact: the vector direct product (http://mathworld.wolfram.com/VectorDirectProduct.html), which is a tensor product of 2 or more
vectors which are all "orthogonal" to each other in the sense of post #128 (https://www.physicsforums.com/showpost.php?p=1793772&postcount=128).

Regards, Hans.
 
Last edited by a moderator:
  • #149
Oh I see; yes, there has been a lot of confusion about nothing then. Actually, the vector direct product you are referring to is a special case of the tensor product, for finite vectors.

The tensor product is used all the time in QM, also between spaces, because it is natural that if one particle is described in some state Hilbert space, then two of them are described in the tensor product of these spaces. This should be in all advanced QM books, and it is actually what you are saying, I guess; you have just never seen it for spaces. But the new elements you construct by taking the vector direct product (tensor product) actually live in this new vector space.

But often people reading these books don't see it, because authors often keep it a bit in the background, since the full mathematical machinery can be difficult. But it is actually very useful, and I think you use it all the time without knowing it.
 
  • #150
mrandersdk-

If |00>,|01>,|10> and |11> (1=up,0=down)

are linearly independent vectors, then <01|01> = 0,

rather than <01|01> = <0|0><1|1>, as you suggest.
 
Last edited:
