Understanding Bras and Kets as Vectors and Tensors

  • Thread starter: Phrak
  • Tags: Tensors
  • #101
OK, I know we can use the notation for every vector space if we want. Of course we can do that. I'm not sure why you say that multiparticle states are direct products?

If the particles are independent, you can write the state as a tensor product of two vectors; if they are correlated, then you can't necessarily do so.

The reason I said your equation was wrong was that we were talking about QM, so it didn't make sense.

Again, you are right that a vector is often described by an n-tuple, but as I have said several times in this thread, the tuple doesn't make sense without a basis telling us what it means. A bit like your equation not making sense because you didn't say what you meant by |p> and |F>.

The problem with the adjoint is how to write the definition used in math,

<x,A y> = <A^*x,y>

in Dirac's notation. You have to be very careful writing this.
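A minimal sketch of one way to spell this out in bra-ket form, assuming A is a bounded operator so there is no ambiguity about domains:

\langle x | A | y \rangle \;=\; \langle A^\dagger x \,|\, y \rangle \;=\; \overline{\langle y | A^\dagger | x \rangle}

i.e. the mathematicians' <x, A y> = <A^* x, y> becomes the familiar rule that <x|A|y> and <y|A^\dagger|x> are complex conjugates of each other.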

I'm not sure what your point is about Fock space. Is it that if we have a space describing one particle and we take the tensor product of two such states, then we are no longer in that space, but the Fock space formalism incorporates this problem?

I haven't read Dirac's book, but it sounds interesting; I will look at it during my vacation, thanks for the reference. I agree that he made the notation because it made things simpler to write (and maybe to remember some manipulation rules), but I think people often get a bit confused by it: one learns QM with wavefunctions first and then learns bra-ket notation, and then people often think the wavefunction is used just like a ket, which it often isn't (even though you probably could, since after all L^2 is a vector space).
 
  • #102
reilly said:
Note that a multiparticle state, |p1, p2> is not usually taken as a column vector, but rather a direct product of two vectors
...
So, as a direct product is a tensor
That last statement is (very) incorrect! The direct product of two vector spaces is quite different from their tensor product -- in fact, most quantum 'weirdness' stems from the fact that you use direct products classically but tensor products quantum mechanically.
 
  • #103
Phew, so there was another person who found that a bit disturbing.
 
  • #104
What does this mean? I can't see how this can be correct:

"Note that a multiparticle state, |p1, p2> is not usually taken as a column vector, but rather a direct product of two vectors "

Maybe I'm just misreading it, but what does

"So, as a direct product is a tensor"

mean?
 
Last edited:
  • #105
By the way,

\mathbb{R}\otimes\mathbb{R} ~=~ \mathbb{R}^2

and

\mathbb{C}^3\otimes\mathbb{C}^3\otimes\mathbb{C}^3 \otimes ... \otimes \mathbb{C}^3~=~ \mathbb{C}^{3n}

are not correct. It is

\mathbb{R}\otimes\mathbb{R} ~=~ \mathbb{R}

and

\mathbb{C}^3\otimes\mathbb{C}^3\otimes\mathbb{C}^3 \otimes ... \otimes \mathbb{C}^3~=~ \mathbb{C}^{3^n}
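A quick numerical check of why the dimensions multiply, assuming the finite-dimensional case and using numpy's Kronecker product as one concrete realization of the tensor product (the vectors themselves are arbitrary):

import numpy as np
from functools import reduce

# Two vectors in C^3; their tensor (Kronecker) product lives in C^9, not C^6.
v = np.array([1, 0, 0], dtype=complex)
w = np.array([0, 1, 0], dtype=complex)
print(np.kron(v, w).shape)    # (9,)

# n factors of C^3 give dimension 3**n, not 3*n.
n = 4
state = reduce(np.kron, [v] * n)
print(state.shape)            # (81,) which is 3**4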
 
Last edited:
  • #106
Why is your post suddenly below mine?

I don't know what you mean by

"This just defines dimensions, of course, if there is interaction then the combined
probabilities are not given by simply multiplication. That's a whole different story
altogether requiring knowledge of the orthogonal states, the propagators and the
interactions."

If one particle is described in C^3, then n particles are described in

\mathbb{C}^3\otimes\mathbb{C}^3\otimes\mathbb{C}^3 \otimes ... \otimes \mathbb{C}^3~=~ \mathbb{C}^{3^n}

but it can happen that you can't write the state as |0> \otimes |1> \otimes ... \otimes |n>; is that what you are trying to say?
 
  • #107
Hurkyl said:
That last statement is (very) incorrect! The direct product of two vector spaces is quite different than their tensor product

Read again what reilly wrote:

reilly said:
Note that a multiparticle state, |p1, p2> is not usually taken as a column vector, but rather a direct product of two vectors -- there are definitional tricks that allow the multiparticle states to be considered as a single column vector. So, as a direct product is a tensor, we've now got ...

Like in:

\mathbb{R}\otimes\mathbb{R} ~=~ \mathbb{R}^2

Or for a non relativistic QM multiparticle state of n particles:

\mathbb{C}^3\otimes\mathbb{C}^3\otimes\mathbb{C}^3 \otimes ... \otimes \mathbb{C}^3~=~ \mathbb{C}^{3n}

This just defines dimensions. Of course, if there is interaction, then the combined
probabilities are not given by simple multiplication. That's a whole different story
altogether requiring knowledge of the orthogonal states, the propagators and the
interactions.
Regards, Hans
 
Last edited:
  • #108
mrandersdk said:
Why is your post suddenly below mine?

Something went wrong with editing. You just react "too fast". :smile:

mrandersdk said:
I don't know what you mean with

"This just defines dimensions, of course, if there is interaction then the combined
probabilities are not given by simply multiplication. That's a whole different story
altogether requiring knowledge of the orthogonal states, the propagators and the
interactions."

If one particle is described in C^3, then n particles are described in

\mathbb{C}^3\otimes\mathbb{C}^3\otimes\mathbb{C}^3 \otimes ... \otimes \mathbb{C}^3~=~ \mathbb{C}^{3^n}

but it can happen that you can't write the state as |0> \otimes |1> \otimes ... \otimes |n>; is that what you are trying to say?

The "static" wave function is defined as a complex number in a 3 dimensional space.
The non-relativistic wave function of two particles is defined as a 6 dimensional
space spanned by the x,y,z of the first particle plus the x,y,z of the second particle.

The wave function of an n-particle system is defined in an 3n dimensional space, not
a 3^n dimensional space.Regards, Hans
 
  • #109
How can you say that? The dimension of a tensor product is as I say. And one particle can have more degrees of freedom than just 3.

The things you say are equal are simply not equal. If you have two vector spaces V and W with bases v_1,...,v_n and w_1,...,w_d respectively, then a basis for

V \otimes W is

all of the form v_i \otimes w_j \ , \ i=1,...,n \ and \ j=1,...,d

There are clearly n times d of these, not n + d as you say. You are right that one particle can be described by a wavefunction of x,y,z, and two particles by a wavefunction of x_1,y_1,z_1,x_2,y_2,z_2, but we are talking about the state space, and if that is, e.g., 5-dimensional for each particle, then the state of both particles lives in a 25-dimensional space.
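A tiny sketch of that counting, assuming finite-dimensional spaces with n = 2 and d = 3 (the labels are just illustrative):

from itertools import product

n, d = 2, 3
V_basis = [f"v{i}" for i in range(1, n + 1)]
W_basis = [f"w{j}" for j in range(1, d + 1)]

# Basis of V tensor W: all pairs v_i (x) w_j
tensor_basis = [(v, w) for v, w in product(V_basis, W_basis)]
print(len(tensor_basis))   # 6, i.e. n * d, not n + d = 5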

I don't think it is right to say that the wavefunction lives in a space spanned by x,y,z.
 
Last edited:
  • #110
mrandersdk said:
How can you say that? The dimension of a tensor product is as I say. And one particle can have more degrees of freedom than just 3.

The things you say are equal are simply not equal. If you have two vector spaces V and W with bases v_1,...,v_n and w_1,...,w_d respectively, then a basis for

V \otimes W is

all of the form v_i \otimes w_j \ , \ i=1,...,n \ and \ j=1,...,d

There are clearly n times d of these, not n + d as you say. You are right that one particle can be described by a wavefunction of x,y,z, and two particles by a wavefunction of x_1,y_1,z_1,x_2,y_2,z_2, but we are talking about the state space, and if that is, e.g., 5-dimensional for each particle, then the state of both particles lives in a 25-dimensional space.

I don't think it is right to say that the wavefunction lives in a space spanned by x,y,z.
You are confusing the number of dimensions with the number of elements.

\mathbb{R} has 1 dimension with \infty elements while \mathbb{R}^2 has 2 dimensions with \infty^2 elements.

Regards, Hans
 
  • #111
Hans de Vries said:
The "static" wave function is defined as a complex number in a 3 dimensional space.
The non-relativistic wave function of two particles is defined as a 6 dimensional
space spanned by the x,y,z of the first particle plus the x,y,z of the second particle.

The wave function of an n-particle system is defined in an 3n dimensional space, not
a 3^n dimensional space.
Ah, that's where the confusion lies! The rest of us are talking about the state vectors, rather than elements of the underlying topological space of a position-representation of those vectors.

L^2(\mathbb{R}^3) is, of course, the space of square-integrable functions on Euclidean 3-space; i.e. the space of single-particle wavefunctions.

The tensor product of this space with itself is given by[1] L^2(\mathbb{R}^3) \otimes L^2(\mathbb{R}^3) = L^2(\mathbb{R}^3 \times \mathbb{R}^3) -- so a 2-particle wavefunction is a square-integrable function of 6 variables.

However, if you only took the direct product of the state space with itself, you'd get L^2(\mathbb{R}^3) \times L^2(\mathbb{R}^3) \neq L^2(\mathbb{R}^3 \times \mathbb{R}^3). This is merely the space of pairs of square-integrable functions of three variables. This isn't even (naturally) a subspace of L^2(\mathbb{R}^3 \times \mathbb{R}^3); the obvious map between them is bilinear, not linear.


[1]: At least I'm pretty sure I have this right. I haven't actually worked through all the fine print to prove this statement.
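For concreteness, here is a sketch of the distinction in the position representation (an illustration of the statement above, not a proof):

\text{tensor product:}\quad \psi(x_1,x_2) \;=\; f(x_1)\,g(x_2) + h(x_1)\,k(x_2) \;\in\; L^2(\mathbb{R}^3 \times \mathbb{R}^3)

\text{direct product:}\quad (f,\,g) \;\in\; L^2(\mathbb{R}^3) \times L^2(\mathbb{R}^3)

A general element of the tensor product is a sum (or limit of sums) of such products and need not factor, whereas an element of the direct product is literally just a pair of one-particle functions.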
 
  • #112
L^2(\mathbb{R}^3) \otimes L^2(\mathbb{R}^3) = L^2(\mathbb{R}^3 \times \mathbb{R}^3) should intuitively be right, but whether it is mathematically I'm not sure; I guess it must have something to do with Fubini's theorem

http://en.wikipedia.org/wiki/Fubini's_theorem

or at least some variant of it. As Hurkyl and I have both pointed out, it seems we are talking about different things, which is why there is some confusion. But just to make something clear: L^2(R^3) is not spanned by x,y,z (though I guess you mean, as Hurkyl also says, that you can write the wavefunction as a function of x,y,z; one should maybe be a little careful here, because internal degrees of freedom such as spin can play a role, and we need a spin wavefunction to describe that, but perhaps we should ignore internal degrees of freedom in our discussion so we don't confuse each other even more).

And the statement

\mathbb{R}\otimes\mathbb{R} ~=~ \mathbb{R}^2

\mathbb{C}^3\otimes\mathbb{C}^3\otimes\mathbb{C}^3 \otimes ... \otimes \mathbb{C}^3~=~ \mathbb{C}^{3n}

is wrong, even though we are talking about different things.
 
  • #114
Oh, I just realized I know how to compute the direct product:

L^2(\mathbb{R}^3) \times L^2(\mathbb{R}^3) \cong L^2(\mathbb{R}^3 + \mathbb{R}^3)

The + on the right hand side indicates disjoint union -- i.e. that space consists of two separated copies of R³
 
  • #115
mrandersdk said:
And the statement

\mathbb{R}\otimes\mathbb{R} ~=~ \mathbb{R}^2

\mathbb{C}^3\otimes\mathbb{C}^3\otimes\mathbb{C}^3 \otimes ... \otimes \mathbb{C}^3~=~ \mathbb{C}^{3n}
is wrong
There is nothing wrong with this. I'm using the definition of the vector direct product
given here: http://mathworld.wolfram.com/VectorDirectProduct.html

In this example each "dimension" has 3 elements while \mathbb{R} or \mathbb{C} represents 1 continuous
dimension with \infty elements.

If two wavefunctions are non-interacting then the vector direct product describes
the combined probabilities. If they are interacting then one has to go back to the
physics and, in most cases, use an iterative process to numerically determine
the combined two-particle wave function. Regards, Hans.
 
Last edited:
  • #116
As I pointed out, you're not talking about R: you're talking about L²(R). The tensor product of R with itself is clearly R -- in your way of thinking, that's because R is a single-dimensional vector space, and 1*1=1.
 
  • #117
Hurkyl said:
As I pointed out, you're not talking about R: you're talking about L²(R). The tensor product of R with itself is clearly R -- in your way of thinking, that's because R is a single-dimensional vector space, and 1*1=1.
I'm using the vector direct product as defined here:

http://mathworld.wolfram.com/VectorDirectProduct.html

I'm using the tensor rank, i.e. the number of indices (either discrete or continuous), as the
number of dimensions, as most physicists would do.

Hurkyl said:
you're talking about L²(R)
That might be, but this isn't language found in physics textbooks or mathematical books for
physicists, so using an expression like this is quite meaningless for most physicists.

Regards, Hans

Let me guess: Square integrable functions, ok? :smile:
 
Last edited:
  • #118
Hans de Vries said:
I'm using the vector direct product as defined here:

http://mathworld.wolfram.com/VectorDirectProduct.html

Using the tensor rank: the number of indices (either discrete or continuous) as the
number of dimensions, like most physicist would do.




Might be, This isn't language found in physics textbooks or mathematical books for
physicist. So using such an expression like this is quite meaningless for most physicist,
however trival its mathematical meaning may be...


Regards, Hans


Hans de Vries said:
There is nothing wrong with this. I'm using the definition of the vector direct product
given here: http://mathworld.wolfram.com/VectorDirectProduct.html

In this example each "dimension" has 3 elements while \mathbb{R} or \mathbb{C} represents 1 continuous
dimension with \infty elements.

If two wavefunctions are non-interacting then the vector direct product describes
the combined probabilities. If they are interacting then one has to go back to the
physics and, in most cases, use an iterative process to numerically determine
the combined two-particle wave function.


Regards, Hans.


You are referring to a page that tells how to take the tensor product of two vectors, but you are taking the tensor product of vector spaces, so you should refer to something like

http://mathworld.wolfram.com/VectorSpaceTensorProduct.html

and it agrees with me. You are right that if the particles are non-interacting you can write them as product states (maybe you can't always?), but they still live in the tensor product of the two Hilbert spaces, and the dimension of this is the product of the two dimensions.


But you are apparently talking about the indices of the tensor (the dimensions of, e.g., a matrix), and that is completely different from what is being discussed here. It is of course trivial that taking the tensor product of two 1-tensors (vectors) gives a 2-tensor. By the way, the notation where you write \mathbb{R} for a continuously indexed vector can't be standard notation, even in physics? And the vector direct product you are referring to is only defined for finite-dimensional tensors.

But as has been mentioned, we are talking about the state space, and there it is common to take the tensor product of the individual spaces. To give you a simple example of what I'm talking about, let's look at two spin-½ particles, where we don't care about anything other than the spin. Then each particle has 2 degrees of freedom, so we could have

|00>,|01>,|10> and |11> (1=up,0=down)

That is clearly 2*2 = 4, so this is a 4-dimensional space, as I say. And because of how we turn the tensor product into a Hilbert space, it is natural to describe this in the tensor product of the two spaces, because the inner product is given by

<01|01> = <0|0><1|1>

so the probability of being in the down-up state is the probability of being down times the probability of being up, which is very natural. The strange thing is that taking the tensor product of the spaces also gives us entangled states and other strange things.
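A small numpy sketch of this two-spin example (only an illustration of the inner-product rule above; the variable names are my own):

import numpy as np

down = np.array([1, 0], dtype=complex)   # |0> = down
up   = np.array([0, 1], dtype=complex)   # |1> = up

# Two-particle basis states in the 4-dimensional tensor product space
s01 = np.kron(down, up)                  # |01>
s10 = np.kron(up, down)                  # |10>

# Inner product factorizes: <01|01> = <0|0><1|1>
print(np.vdot(s01, s01))                       # (1+0j)
print(np.vdot(down, down) * np.vdot(up, up))   # (1+0j)

# An entangled state like (|01> + |10>)/sqrt(2) lives in the same 4-dim space
bell = (s01 + s10) / np.sqrt(2)
print(np.vdot(bell, bell))                     # (1+0j)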
 
Last edited:
  • #119
Hans de Vries said:
I'm using the vector direct product as defined here:

http://mathworld.wolfram.com/VectorDirectProduct.html
Which contains an example indicating \mathbb{R}^3 \otimes \mathbb{R}^3 \cong \mathbb{R}^9 -- not \mathbb{R}^6 as you suggest.


The problem is that you are using the letter R -- a letter well-established to indicate something akin to "the one-dimensional vector space over the reals". You are using the symbol \otimes -- a symbol well-established to indicate a particular arithmetic operation on vector spaces and on their elements. You are interjecting into a conversation where we are talking about products of vectors and vector spaces.

So, when you change the meaning of both of those symbols (using R to instead denote some continuously indexed space and \otimes to denote some fancy operation on index spaces) and change the context of the conversation (talking about operations on index spaces rather than on vectors) you should expect there to be much confusion. This is greatly magnified because you didn't give any indication that you were using those symbols in a nonstandard way, and continued to interpret others' posts as using those meanings, despite others having very clearly indicated they were using those symbols according to the usual meaning.

Actually, I think it's far more likely that you've accidentally made a 'level' slip, and confused two layers of abstraction. (The relevant layers here being points of Euclidean space, Euclidean space, and functions on Euclidean space.)


That aside, I will admit that this is the first time I've ever heard the phrase 'direct product' used to refer to something that really isn't a direct product but instead a tensor product.
 
Last edited:
  • #120
mrandersdk said:
You are referring to a page that tells how to take the tensor product of two vectors, but you are taking the tensor product of vector spaces, so you should refer to something like

http://mathworld.wolfram.com/VectorSpaceTensorProduct.html


reilly was talking about a non-relativistic two-particle wave function as the vector direct
product of two single-particle functions, which is correct according to the definition of the
vector direct product given here:

http://mathworld.wolfram.com/VectorDirectProduct.html

You may have an argument in that I implicitly assume that in R\otimes R one is
a row vector and the other is a column vector, so an nx1 vector times a 1xn
vector is an nxn matrix, but I wouldn't even know how to express a transpose
operation at higher ranks without people losing track of the otherwise very
simple math.



Regards, Hans
 
  • #121
Hurkyl said:
The problem is that you are using the letter R -- a letter well-established to indicate something akin to "the one-dimensional vector space over the reals".
I used \mathbb{R} to indicate the range of the single continuous index of a one-dimensional vector
with \infty elements, and I use \mathbb{R}^3 to describe the 3 continuous indices of a function on a volume.
I shouldn't have used \mathbb{C} in this context.


So, symbolically, in terms of indices:

A\otimes B\otimes C ~=~ D

If the indices of A, B and C are given by \mathbb{R} then the indices of D are given by \mathbb{R}^3.
Indices (tensor ranks) add. The direct product of three tensors of rank 1 is a tensor
of rank 3.

\mbox{rank}(A\otimes B\otimes C) ~=~ \mbox{rank}(A)+\mbox{rank}(B)+\mbox{rank}(C) ~=~ \mbox{rank}(D)
You are associating \mathbb{R}^n with the number of elements instead of the indices, and thus
you get the following in the same case:

If the number of elements of A, B and C is given by \mathbb{R}^\infty then the number of elements of D
is given by \mathbb{R}^{\infty^3}. The numbers of elements multiply, hence the \infty^3.
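For what it's worth, a numpy sketch of the two kinds of bookkeeping being used in this thread, with np.multiply.outer standing in for the direct product of arrays: the number of indices (ndim) adds, while the number of entries (size) multiplies.

import numpy as np

A = np.arange(3.0)      # a rank-1 array with 3 entries
B = np.arange(4.0)
C = np.arange(5.0)

D = np.multiply.outer(np.multiply.outer(A, B), C)
print(D.ndim)           # 3  -> ranks add: 1 + 1 + 1
print(D.size)           # 60 -> entry counts multiply: 3 * 4 * 5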

As long as we understand each other.
Regards, Hans
 
Last edited:
  • #122
Hans de Vries said:
reilly was talking about a non-relativistic two-particle wave function as the vector direct
product of two single-particle functions, which is correct according to the definition of the
vector direct product given here:

http://mathworld.wolfram.com/VectorDirectProduct.html
Which is the same thing as the tensor product the rest of us are talking about.

(Fine print -- there are a bunch of equivalent ways to define tensor products, so I should really say this is just a particular realization of the tensor product)


You may have an argument in that I implicitly assume that in R\otimes R one is
a row vector and the other is a column vector, so an nx1 vector times a 1xn
vector is an nxn matrix,
That's not what the argument is. The argument is that for elements of R, we have n=1. The argument is that while you might mean to talk about continuously indexed spaces, the thing you are actually saying is "the product of a 1x1 matrix with a 1x1 matrix is a matrix with 2 entries".
 
  • #123
Hurkyl said:
while you might mean to talk about continuously indexed spaces
It's indeed exactly this. Once you replace A\otimes B with \mathbb{R}^n\otimes\mathbb{R}^m, then that's another
level of symbolization, and there is an ambiguity as to what \mathbb{R}^n refers to: the number
of elements or the number of continuous indices.

Regards, Hans
 
Last edited:
  • #124
Am I understanding you right (if we take the finite case) that by the dimension you mean the size, e.g. that from two vectors (n x 1) and (1 x m) you get an
(n x m) matrix with the tensor product (or, as you call it, the vector direct product)? Because that is indeed true.

The problem is that we need another notion of dimension, and it is very much used in physics. Haven't you ever seen something like "assume that we have a two-level system with the two states |0> and |1>"? This means that |0> and |1> are a basis for our problem, which is two-dimensional. If we then want to describe two of these systems, we take the tensor product of the two spaces, and a basis for this new vector space is

|0> \otimes |0>, |0> \otimes |1>, |1> \otimes |0>, |1> \otimes |1>,

This is very standard and used by a lot of physicists (at least all who do QM). I'm a bit baffled that you would immediately think of the vector direct product between vectors (and matrices). Do you have some references where this is used (preferably online)? Because I know that realization of the tensor product, but I have never seen it in use anywhere.

I still don't understand how you would use that definition if you don't have finite vectors like (n x 1).

Do you know the general definition of the tensor product? Because you are talking about the tensor product between elements of a vector space, which in the finite case can be realized as you say, but that new element actually lives in the space we are talking about. That is, say we have a vector space V spanned by (1,0) and (0,1); then you construct a new element like this:

(1,0) \otimes (0,1)^T = ((0,1)^T,(0,0)^T)

This is actually an element of the new vector space denoted

V \otimes V (up to isomorphism),

and if one particle lives in the space V, then this is the natural space in which to describe two such particles, because two particles of this type can be in any linear combination of products like this one, taking all different combinations of the basis vectors. So you are talking about the elements, whereas we are talking about the space those elements live in.
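A quick check of that realization with numpy (np.kron flattens the blocks ((0,1)^T,(0,0)^T) into a single 4-vector; this is only meant to illustrate the construction above):

import numpy as np

e1 = np.array([1, 0])
e2 = np.array([0, 1])

# (1,0) tensor (0,1): the blocks (1*(0,1), 0*(0,1)) flattened into a 4-vector
print(np.kron(e1, e2))    # [0 1 0 0]

# A general element of V tensor V is a linear combination of all four such products
basis = [np.kron(a, b) for a in (e1, e2) for b in (e1, e2)]
print(len(basis))         # 4 = 2 * 2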

But this is standard QM, and it matters which space you end up in, not just how you multiply the elements, because it is rare that you can simply say that one particle is in a state (a,b) and a second particle is in another state (c,d) and then only be interested in the combination (a(c,d)^T, b(c,d)^T). As time evolves the system can become a lot of other things, and those things it can become live in the space we are talking about; that is why the space matters: you then know which space to restrict to.
 
  • #125
Hans de Vries said:
It's indeed exactly this. Once you replace A\otimes B with \mathbb{R}^n\otimes\mathbb{R}^m, then that's another
level of symbolization, and there is an ambiguity as to what \mathbb{R}^n refers to: the number
of elements or the number of continuous indices.


Regards, Hans
You do realize that R, R², R³, ... all have the same number of elements, right? Even the separable Hilbert space (e.g. the space of wavefunctions continuously indexed by \mathbb{R}^n) has the same number of elements as R! So, I should hope nobody ever uses those symbols in this context to indicate number of elements.

\mathbb{R}^n cannot be denoting a number in this context. There is usually no ambiguity here, because \mathbb{R}^n is always meant to indicate the standard n-dimensional vector space over R -- you are the only source I have ever seen who insists on using R in any other way in this context... and it's somewhat bewildering why you would do so, not just because you insist upon confusing an index set with a vector space 'over' those indices, but also because you refuse to use the name of the actual operation you are doing on index sets -- the Cartesian product -- and instead prefer to use the name of the operation performed on the corresponding vector spaces.

This whole thing would be akin to me insisting upon saying 3 \cdot 5 = 8 when I really mean e^3 \cdot e^5 = e^8. (And not even using 3 + 5 = 8 which would be a correct statement)
 
  • #126
Hurkyl said:
\mathbb{R}^n is always meant to indicate the standard n-dimensional vector space over R
\mathbb{R}^n is a continuous n-dimensional vector space. Yes, of course, this is the definition I was using all along.

Hurkyl said:
Hans de Vries said:
I'm using the vector direct product as defined here: http://mathworld.wolfram.com/VectorDirectProduct.html
Which contains an example indicating \mathbb{R}^3 \otimes \mathbb{R}^3 \cong \mathbb{R}^9 -- not \mathbb{R}^6 as you suggest.
But here you use a 2nd, different definition of \mathbb{R}^n. In this case \mathbb{R}^n means n real elements. OK...

Hurkyl said:
\mathbb{R}^n cannot be denoting a number in this context.
Now \mathbb{R}^n can not denote n real indices or n real elements anymore? As in your 2nd definition?

Hurkyl said:
you insist upon confusing an index set with a vector space 'over' those indices
Are you now accusing me of confusing the two different interpretations of \mathbb{R}^n that you gave?

Hurkyl said:
You do realize that R, R², R³, ... all have the same number of elements, right?
No, define your R, R², R³ and "elements" properly instead of making a guessing game out of this. Regards, Hans.
 
Last edited:
  • #127
Can't you see that you use \otimes as the operation between two elements of some vector space? That is what the link you are referring to defines. This is legitimate, but then you write it between vector spaces, not elements, and what you say is wrong; it is as simple as that.

You are right that if you have a 1x3 vector (which can be indexed with one index) and take \otimes between two such vectors, then you can visualise the result as a 3x3 matrix, which can be indexed with two indices (is this what you call dimension = 2?).

I'm pretty sure you are using the terminology wrong. How much math background do you have? And do you have some references that do it the way you do? Because I simply can't grasp that anyone does it like you do.
 
  • #128
mrandersdk said:
Can't you see that you use \otimes as the operation between two elements of some vector space? That is what the link you are referring to defines. This is legitimate, but then you write it between vector spaces, not elements, and what you say is wrong; it is as simple as that.

You are right that if you have a 1x3 vector (which can be indexed with one index) and take \otimes between two such vectors, then you can visualise the result as a 3x3 matrix, which can be indexed with two indices (is this what you call dimension = 2?).

I'm pretty sure you are using the terminology wrong. How much math background do you have? And do you have some references that do it the way you do? Because I simply can't grasp that anyone does it like you do.



The link defines the vector direct product (http://mathworld.wolfram.com/VectorDirectProduct.html) as follows:

==================================================

Given vectors u and v, the vector direct product is:

uv = u\otimes v^T

If u and v have three elements then:

uv ~=~ \left[\begin{array}{ccc} u_1 & u_2 & u_3 \end{array}\right] ~\otimes~ \left[\begin{array}{c} v_1 \\ v_2 \\ v_3 \end{array}\right] ~~=~~ \left[\begin{array}{ccc} u_1v_1 & u_1v_2 & u_1v_3 \\ u_2v_1 & u_2v_2 & u_2v_3 \\ u_3v_1 & u_3v_2 & u_3v_3 \end{array}\right]

==================================================


Note first that the Transpose is not used in a 100% strict way. It merely reminds
us that one of the vectors is a row vector and the other is a column vector.

u~\otimes~ v ~~=~~ (1\times 3) \otimes (3\times 1) ~~=~~ (3\times 3).


If you want to extend this to a triple product, then u, v and w must be of the form:

u~\otimes~ v~\otimes ~w ~~=~~ (1\times 1\times 3) \otimes (1\times 3\times 1) \otimes (3\times 1\times 1) ~~=~~ (3\times 3\times 3).
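A numpy sketch of those shapes, using np.outer and np.multiply.outer as stand-ins for this finite-dimensional direct product (the numbers are arbitrary):

import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
w = np.array([7.0, 8.0, 9.0])

print(np.outer(u, v).shape)                                   # (3, 3): entries u_i v_j
print(np.multiply.outer(np.multiply.outer(u, v), w).shape)    # (3, 3, 3): entries u_i v_j w_k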



u, v and w are all vectors, one dimensional, and in the continuous limit they become
one dimensional spaces represented by \mathbb{R}^1. The result has three indices. It is a rank 3
tensor. In the continuous limit it becomes a volume which is represented by \mathbb{R}^3



Regards, Hans.
 
Last edited by a moderator:
  • #129
Hans de Vries said:
The link defines the vector direct product (http://mathworld.wolfram.com/VectorDirectProduct.html) as follows:

==================================================

Given vectors u and v, the vector direct product is:

uv = u\otimes v^T

If u and v have three elements then:

uv ~=~ \left[\begin{array}{ccc} u_1 & u_2 & u_3 \end{array}\right] ~\otimes~ \left[\begin{array}{c} v_1 \\ v_2 \\ v_3 \end{array}\right] ~~=~~ \left[\begin{array}{ccc} u_1v_1 & u_1v_2 & u_1v_3 \\ u_2v_1 & u_2v_2 & u_2v_3 \\ u_3v_1 & u_3v_2 & u_3v_3 \end{array}\right]

==================================================


Note first that the Transpose is not used in a 100% strict way. It merely reminds
us that one of the vectors is a row vector and the other is a column vector.

u~\otimes~ v ~~=~~ (1\times 3) \otimes (3\times 1) ~~=~~ (3\times 3).


If you want to extend this to a triple product, then u, v and w must be of the form:

u~\otimes~ v~\otimes ~w ~~=~~ (1\times 1\times 3) \otimes (1\times 3\times 1) \otimes (3\times 1\times 1) ~~=~~ (3\times 3\times 3).



u, v and w are all vectors, one dimensional, and in the continuous limit they become
one dimensional spaces represented by \mathbb{R}^1. The result has three indices. It is a rank 3
tensor. In the continuous limit it becomes a volume which is represented by \mathbb{R}^3



Regards, Hans.


This is not right. We can take a finite vector and represent it as a finite array (a_1,a_2,...,a_n); we can then extend this to a countable but not finite index set, which gives a sequence (a_1,a_2,...); we can then maybe say that we extend to some kind of sequence over an uncountable set like the reals (this would just be an ordinary function), but to say this is represented by R is wrong. With that notion of an uncountable sequence you could maybe say that the reals are one element of this kind, namely the function f(x) = x (this you could perhaps denote R, even though I don't think anyone does), but what about the function f(x) = 1? That is not anything like R.

Can't you give some references other than that link? I know that construction, but I still think you are using it a bit wrongly. Please give me some material where they use it for something like what you are doing and say that in the continuous limit it is R.

You say:

"u, v and w are all vectors, one dimensional, and in the continuous limit they become
one dimensional spaces represented by R"

This is complete nonsense; a vector doesn't turn into a space. A vector space is something where you can add things. As has already been pointed out, you are thinking of which index set is used, like in the example I gave above. I think you are using the terminology completely wrong.
 
Last edited by a moderator:
  • #130
mrandersdk said:
This is complete nonsense; a vector doesn't turn into a space. A vector space is something where you can add things. As has already been pointed out, you are thinking of which index set is used, like in the example I gave above. I think you are using the terminology completely wrong.

Oh please, mrandersdk, there is nothing wrong with considering a one-dimensional space as a
vector with a single continuous index. This is done all the time.

Regards, Hans
 
Last edited:
  • #131
Then show me where, because I don't believe that; I have never seen the reals considered as a vector.
 
  • #132
mrandersdk said:
Then show me where, because I don't believe that; I have never seen the reals considered as a vector.


It's not that the reals are considered as a vector. It's the index of the vector which
becomes a real. The values of the vector become a function of x where x is the index
and x is a real number.


Regards, Hans
 
  • #133
Okay, that is exactly what I was saying. But the notation where you use R to denote a vector is wrong; R is a vector space. I wouldn't denote a finite (1 x n) vector by n, or a sequence by N; that is wrong. You are right that if you have a continuously indexed vector (that is, a function) and you take the tensor product (which is what you are doing), then you get a higher-rank tensor that you can index by R^2; that is right.

But this you don't denote by

R \otimes R = R^2

because that means something completely different. Do you know the general construction of the tensor product?

How would you use the link you gave for a continuously indexed vector? It clearly works for a finite one; I can maybe imagine how to do it for a countably indexed one, but I don't know how to do it for a continuous one.
 
  • #134
mrandersdk said:
But this you don't denote by R \otimes R = R^2
because that means something completely different.
It is unclear what \mathbb{R}^1 \otimes \mathbb{R}^1 = \mathbb{R}^2 means until everything is properly defined. There now seems to be at least a consensus that \mathbb{R}^n should be interpreted as an
n-dimensional space: a tensor of rank n with n different indices which are all real
numbers. That's one.

The other thing which needs to be clear is that one \mathbb{R}^1 should be a row vector
and the other \mathbb{R}^1 should be a column vector.

The notation \mathbb{R}^1 \otimes \mathbb{R}^1 = \mathbb{R}^2 is correct under the above two conditions. The extension
to triple products was given in post #128 (https://www.physicsforums.com/showpost.php?p=1793772&postcount=128).

Regards, Hans
 
Last edited by a moderator:
  • #135
Hans de Vries said:
It is unclear what \mathbb{R}^1 \otimes \mathbb{R}^1 = \mathbb{R}^2 means until everything is properly defined.


There now seems to be at least a consensus that \mathbb{R}^n should be interpreted as an
n-dimensional function space: a tensor of rank n with n different indices which are
all real numbers. That's one.

The other thing which needs to be clear is that one \mathbb{R}^1 should be a row vector
and the other \mathbb{R}^1 should be a column vector.

The notation \mathbb{R}^1 \otimes \mathbb{R}^1 = \mathbb{R}^2 is correct under the above two conditions. The extension
to triple products was given in post #128 (https://www.physicsforums.com/showpost.php?p=1793772&postcount=128).


Regards, Hans

I still don't understand why you call R a vector; it is not one in any way. And these equations are wrong, unless you have invented your own notation for something and are using the same symbols, which actually mean something else.
 
Last edited by a moderator:
  • #137
Yes, a vector space, which is something completely different from a vector, as you call it.
 
  • #138
mrandersdk said:
Yes, a vector space, which is something completely different from a vector, as you call it.


\mathbb{R}^1 is defined as a 1-dimensional vector space, which is a tensor of rank 1
(= vector) with a single index, where the index is a real number.


Regards, Hans
 
  • #139
Let me see a reference for that definition of a rank-1 tensor, because I have never seen that.
 
  • #140
mrandersdk said:
Let me see a reference for that definition of a rank-1 tensor, because I have never seen that.

http://en.wikipedia.org/wiki/Tensor#Tensor_rank

Quote:

"In the first definition, the rank of a tensor T is the number of indices required to write down the components of T"


Regards, Hans
 
  • #141
I know that, but show me one that says that R^1 is a rank-1 tensor.
 
  • #142
mrandersdk said:
I know that, but show me one that says that R^1 is a rank-1 tensor.

mrandersdk,

The extension of finite dimensional vectors to infinite dimensional vectors/functions
is one of the pillars of mathematics and physics. I think I've done enough by now.


Regards, Hans
 
  • #143
This is ridiculous; if it is a pillar of math and physics, it must be easy to find a reference. The vector space R^1 is never going to be a tensor.
 
  • #144
mrandersdk said:
This is ridiculous; if it is a pillar of math and physics, it must be easy to find a reference. The vector space R^1 is never going to be a tensor.
You can try it on the math forums. Ask the right question to get the right answer.

A vector (being a tensor of rank 1) which is a one dimensional array of elements becomes
a function in the continuous limit.

For the mathematically pure, you should inquire about the "space of functions on the Euclidean
1-space \mathbb{R}^1" rather than \mathbb{R}^1 itself, or maybe even the space of square-integrable functions
on Euclidean 1-space \mathbb{R}^1, described as L^2(\mathbb{R}^1), as advised by our good friend Hurkyl, although
this is somewhat QM-specific.

You will also see that good manners are helpful in getting assistance.

Regards, Hans
 
Last edited:
  • #145
Hans de Vries said:
You can try it on the math forums, Ask the right question to get the right answer.
And he will be told that R^n (in this context) denotes the standard n-dimensional real vector space whose elements are n-tuples of real numbers.

He will be told that R^n is neither a vector, nor a tensor. (Barring set-theoretic tricks to construct some unusual vector spaces)

He will be told that elements of R^n are vectors. He will be told that in the tensor algebra over R^n, elements of R^n are rank 1 tensors.

He will be told that \mathbb{R} \oplus \mathbb{R} \cong \mathbb{R} \times \mathbb{R} \cong \mathbb{R}^2 and \mathbb{R} \otimes \mathbb{R} \cong \mathbb{R}.

He will be told that L^2(\mathbb{R}) and C^\infty(\mathbb{R}) are infinite-dimensional topological vector spaces (square-integrable and infinitely-differentiable functions, respectively).

He will be told that the number of elements in R^n is |R| (= 2^{|N|}).
 
Last edited:
  • #146
(On a Simpler Note)

Finite dimensional quantum mechanical vectors, operators and coefficients may all be represented by real-valued matrices.

c = a + ib \;\Rightarrow\; c = \left( \begin{array}{cc} a & b \\ -b & a \end{array} \right)

c^* \Rightarrow c^T

For example, an Nx1 complex column vector becomes a 2Nx2 array of reals.

What makes this interesting is that

\left< u \right| X \left| v \right>^*

becomes

( v^{T} X^T u )^T

The adjoint is applied by transposition only.
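A minimal numpy check of the scalar case of this correspondence (only the 2x2 representation of a single complex number, with the [[a, b], [-b, a]] convention written above):

import numpy as np

def to_real(c):
    """Represent the complex number c = a + ib as the 2x2 real matrix [[a, b], [-b, a]]."""
    a, b = c.real, c.imag
    return np.array([[a, b], [-b, a]])

c1, c2 = 2 + 3j, 1 - 4j

# Multiplication of complex numbers matches multiplication of their matrices
print(np.allclose(to_real(c1 * c2), to_real(c1) @ to_real(c2)))   # True

# Complex conjugation corresponds to transposition
print(np.allclose(to_real(c1.conjugate()), to_real(c1).T))        # True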
 
  • #147
You are right that a vector is a tensor of rank 1 (at least the way physicists look at it), but you say that R^1 is a tensor, and that is incorrect. I'm pretty sure I know what a tensor is; I have taken courses in operator analysis, real and complex analysis, measure theory, tensor analysis, Riemannian geometry and Lie groups, and if you look in the math section, you will see that one of the people helping out on this subject would be me.

I have also taken general relativity, so I also know how physicists look at a tensor (as a multidimensional array of numbers).

My problem is that you say that the vector space is a tensor; this is wrong. It is right that R^1 contains rank-1 tensors. From R^1 we can then construct a space by taking the tensor product of the two spaces (note: between the spaces, not between elements of them), that is

R^1 \otimes R^1 = R^1

The reason these two are isomorphic is that, given a basis for R^1, say e_1, a basis for R^1 \otimes R^1 consists of all elements of the form e_i \otimes e_j, but there is only one such element, namely e_1 \otimes e_1, so it is easy to write down an isomorphism between the two spaces. And this is not surprising, because this is the space of 1x1 matrices, which is of course the same as R^1.

If you want to make n x m matrices over R, you need

R^n \otimes R^m = R^{nm}, which again has the basis e_i \otimes e_j \ , \ i=1,...,n \ , \ j=1,...,m; you can think of e_i \otimes e_j as referring to the ij entry of the matrix.

You can of course just say that we now take some continuous limit and then we get functions, but if you do, you have to be careful, and anyway it is not done at all the way you do it. The problem is that you want to take the tensor product between spaces that are not finite-dimensional (uncountably infinite-dimensional, in fact), which is not always so simple.

But in fact I don't think that is what you want; I just think you want to take tensor products between functions. So if we have a function space H with a finite basis f_1,...,f_n, you can do the same and take the tensor product of H with itself. Then an element of that new vector space is

g_{ij} f_i \otimes f_j (Einstein summation assumed)

Writing it in the basis, as most physicists do, you would only look at g_{ij}. Now if you want to use a non-discrete basis (or, more precisely, a non-discrete set that spans the space), you could write the same thing, I guess (I'm not even sure it works, but I guess physicists hope it does):

g_{xy} f_x \otimes f_y

Now the Einstein summation must be an integral for this to make sense, but one has to be very careful with something like this. The reason this works, I guess, has something to do with the spectral theorem for unbounded operators, and maybe physicists just hope it works because it would be nice.
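Written out, the continuous analogue of the discrete sum above would read something like the following (a purely formal expression; making it rigorous is exactly the delicate point flagged here):

\sum_{i,j} g_{ij}\, f_i \otimes f_j \;\longrightarrow\; \iint dx\, dy\; g(x,y)\, f_x \otimes f_y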

It seems to me that you haven't used tensor products between spaces, only between elements, without really knowing what is going on at the higher mathematical level, and maybe this has led to some confusion. I'm not questioning that you can do calculations correctly in a specific problem, but I'm telling you that many of the identities you wrote here are either wrong or use completely nonstandard notation.

PS: I don't mean to be rude, but I do know a little bit about what I'm talking about, and I would very much like to see some references on how you use this, because that would help a lot in trying to understand how you are doing it. Am I completely wrong, or is this notation you have come up with yourself? Or do you have some papers or a book that use that notation and present it the way you do?
 
  • #148
mrandersdk,

This is really just a whole lot of confusion about something very trivial. I just tried to convey that a non-relativistic two-particle wave function is a function
of 6 parameters: the xyz coordinates of both particles.

This is the result of the vector direct product of the two (non-interacting) single-particle
wave functions. Yes, instead of symbolically writing something in a shorthand
notation like

R^3 \otimes R^3 = R^6

it should have been something like:

( U \in L^2(\mathbb{R}^3)) \otimes ( V \in L^2(\mathbb{R}^3))^T = ( W \in L^2(\mathbb{R}^6))
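In the position representation, and only for such a non-interacting (product) state, this just says (spelling out the formula above; a general element of L^2(\mathbb{R}^6) is a sum of such products):

W(x_1,y_1,z_1,x_2,y_2,z_2) \;=\; U(x_1,y_1,z_1)\, V(x_2,y_2,z_2)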

After all, I'm talking about the vector direct product of wave functions, that is,
quantum mechanics, and I'm not talking about tensor products between topological
vector spaces. I didn't even know that these animals existed, and it seems pretty
hard to do anything physically useful with them when looking at their definition, but OK.

mrandersdk said:
But in fact I don't think that is what you want; I just think you want to take tensor products between functions.
Indeed, to be exact: the vector direct product (http://mathworld.wolfram.com/VectorDirectProduct.html), which is a tensor product of 2 or more
vectors that are all "orthogonal" to each other in the sense of post #128 (https://www.physicsforums.com/showpost.php?p=1793772&postcount=128).

Regards, Hans.
 
Last edited by a moderator:
  • #149
Oh, I see. Yes, then there has been a lot of confusion about nothing. Actually, the vector direct product you are referring to is a special case of the tensor product, for the case of finite vectors.

The tensor product is used all the time in QM, also of spaces, because it is natural that if one particle is described in some state Hilbert space, then two of them are described in the tensor product of these spaces. This should be in all advanced QM books, and it is actually what you are saying, I guess; you have just never seen it for spaces. But the new elements you construct by taking the vector direct product (tensor product) actually live in this new vector space.

But often people reading these books don't notice it, because authors tend to put it a bit in the background, since the full mathematical machinery can be difficult. It is actually very useful, though, and I think you use it all the time without knowing it.
 
  • #150
mrandersdk-

If |00>,|01>,|10> and |11> (1=up,0=down)

are linearly independent vectors, then <01|01> = 0,

rather than <01|01> = <0|0><1|1>, as you suggest.
 
Last edited: