Understanding Bras and Kets as Vectors and Tensors

  • Thread starter: Phrak
  • Tags: Tensors
Summary
Bras and kets can be understood as vectors in a Hilbert space, with kets represented as vectors with upper indices and bras as vectors with lower indices. In finite-dimensional spaces, bras exist in the dual space, which is isomorphic to the primal space, but this distinction becomes significant in infinite dimensions. The Hermitian inner product in finite dimensions can be defined similarly to how it's done with real vectors, using complex conjugation for inner products. While kets can be considered as elements of a vector space, the concept of a coordinate basis is less applicable, as the basis vectors are typically eigenvectors of Hermitian operators. Overall, the discussion emphasizes the relationship between bras, kets, and tensor products within the framework of linear algebra and functional analysis.
  • #91
I'm very curious about that too. This is really weird. I did a search for his most recent posts, and #89 is the last one. None of his recent posts offer any clue about what happened.
 
  • #92
Pete-

If you're still lurking--in this thread, at least: everything I know about tensors I learned from Sean Carroll's text; a wonderful and accessible book.

I'd mused over the points you brought up in your post #89, partly due to a previous comment you made here about my reference to coordinate bases.

As you suggest, it's just as well to use a metric g_ij(real vector) --> g_ij(complex vector)* = h_ij rather than to introduce a column vector to represent a complex number. I simply thought the column-vector form of a complex number would be nicer, as it combines the two operations of complexification and index lowering/raising into one. Either acts equally well on Hilbert space vectors.

So the next task is to demonstrate that type (1,1) tensors with complex entries are a valid representation (in finite dimensions) of the quantum mechanical operators that act on bras and kets; that is, that they behave as required when the adjoint is taken. The adjoint would be the application of the metrics h_ab and h^cd to a QM operator A_b^d to obtain A^a_c.

I'm just slow, or I would have done it by now--or failed to do so because it's simply wrong.
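The adjoint-via-metric idea can be sketched numerically. The following is a minimal NumPy check, assuming (as an illustration, not Phrak's actual calculation) a Hermitian positive-definite metric h and the rule A* = h^{-1} A^dagger h; the matrices are random example data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# A Hermitian, positive-definite "metric" h_ab (random illustrative data).
m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
h = m.conj().T @ m + n * np.eye(n)      # m^dagger m + n*I is positive definite

def inner(u, v):
    """Inner product <u, v> = u^dagger h v defined by the metric h."""
    return u.conj() @ h @ v

A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# Adjoint with respect to h: A* = h^{-1} A^dagger h.
A_star = np.linalg.solve(h, A.conj().T @ h)

u = rng.normal(size=n) + 1j * rng.normal(size=n)
v = rng.normal(size=n) + 1j * rng.normal(size=n)

# Defining property of the adjoint: <u, A v> = <A* u, v>.
assert np.isclose(inner(u, A @ v), inner(A_star @ u, v))
```

With the ordinary Euclidean metric h = I this reduces to the familiar conjugate transpose.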
 
  • #93
Within the structure of Newtonian physics, we can write

dP/dt = F, where P and F are the usual momentum and force vectors, in 3D.

Also then according to Dirac's notation
d |P>/dt =|F>. Or does it?

Is it, in the sense of an equivalence relation, really legit to equate P and |P> -- in 3D space ? Why, or why not?
Regards,
Reilly Atkinson
 
  • #94
You can't do that. There are ways to relate, e.g., rotations in 3D space (rotations of the lab frame) to kets describing a state, but these are, at least as I learned it, things you need to postulate.

But this is a rather advanced topic, and to get something on this you need an advanced quantum mechanics book (again, Sakurai is a classic).

The short answer to why you can't do this is that we are dealing with quantum mechanics, and this is a whole lot different from Newtonian physics.

Also, the momentum can characterise a state, but the force on it can't, so that isn't a state. Quantum mechanics builds on Hamiltonian mechanics, and this formalism (roughly speaking) doesn't use forces, but potentials.

It seems like you haven't taken any courses in QM?
 
  • #95
mrandersdk-

I've been thinking a great deal about your objections to casting vectors in Hilbert space as tensors, or even matrices. Is it that a great deal is lost in taking an abstraction to a representation?

I've given up on representing Hilbert space vectors as tensors. My assumptions were wrong. However, in your case, you might find some satisfaction in representing both tensors and vectors in Hilbert space under a single abstraction, if it can be done.
 
  • #96
reilly said:
Within the structure of Newtonian physics, we can write

dP/dt = F, where P and F are the usual momentum and force vectors, in 3D.

Also then according to Dirac's notation
d |P>/dt =|F>. Or does it?

Is it, in the sense of an equivalence relation, really legit to equate P and |P> -- in 3D space ? Why, or why not?
I'm guessing that might have been a rhetorical question (such as lecturers sometimes
ask their students)?

If so, I'll have a go and say that the observables P, F, etc, in classical
physics are best thought of as ordinary C^\infty functions on 6D phase space.
In quantization, one maps classical observables such as P to self-adjoint
operators on a Hilbert space, and classical symmetry transformations are expressed
there as U P U^{-1} where U denotes a unitary operator implementing the
transformation in the Hilbert space. If we can find a complete set of eigenstates of P
in the Hilbert space, then we can find a |p\rangle corresponding to any
orientation of 3-momentum.

But the above says nothing about 3D position space. We haven't yet got a "Q" operator
corresponding to the Q position variable in classical phase space. When we try
to incorporate Q as an operator in our Hilbert space, with canonical commutation relations
corresponding to Poisson brackets in the classical theory, we find that it's quite hard to
construct a Hilbert space (rigorously) in which both the P and Q play nice together,
and one usually retreats to the weaker (regularized) Weyl form of the commutation
relations. So it's really a bit misleading to think of the Hilbert space as somehow being
"in" 3D position space.

Regarding the F classical observable, we'd write (classically!) the following:

F \;=\; \frac{dP}{dt} \;=\; \{H, P\}_{PB}

where the rhs is a Poisson bracket and H is the Hamiltonian. In the quantum theory,
this would become an operator equation with commutators (and with \hbar=1) ,

F \;=\; i \, \frac{dP}{dt} \;=\; [H, P]

(possibly modulo a sign).

But I'm not sure whether any of this really answers the intended question. (?)
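The operator equation above can be checked in a finite truncation. Here is a minimal NumPy sketch with the harmonic oscillator (a standard example, not taken from the thread; the sign convention dP/dt = i[H,P] with hbar = m = omega = 1 is assumed, and as noted above conventions differ by a sign):

```python
import numpy as np

# Truncated harmonic-oscillator matrices; the truncation size N is arbitrary.
N = 20
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
X = (a + a.conj().T) / np.sqrt(2)            # position
P = 1j * (a.conj().T - a) / np.sqrt(2)       # momentum
H = a.conj().T @ a + 0.5 * np.eye(N)         # Hamiltonian, H = a^dag a + 1/2

# Heisenberg equation of motion (one common convention): dP/dt = i [H, P].
# For V = x^2/2 the classical force is F = -x, and indeed i[H, P] = -X,
# exactly, even in this truncation (H is exactly diagonal here).
comm = H @ P - P @ H
assert np.allclose(1j * comm, -X)
```

The force observable thus appears only through the potential, via the commutator with H, in line with mrandersdk's remark that the Hamiltonian formalism uses potentials rather than forces.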
 
  • #97
Hurkyl said:
It's not so much that we want to actually represent bras and kets as row and column vectors -- it's that we want to adapt the (highly convenient!) matrix algebra to our setting.

For example, I was once with a group of mathematicians and we decided for fun to work through the opening section of a book on some sort of representation theory. One of the main features of that section was to describe an algebraic structure on abstract vectors, covectors, and linear transformations. In fact, it was precisely the structure we'd see if we replaced "abstract vector" with "column vector", and so forth. The text did this not because it wanted us to think in terms of coordinates, but because it wanted us to use this very useful arithmetic setting.

Incidentally, during the study, I pointed out the analogy with matrix algebra -- one of the others, after digesting my comment, remarked, "Oh! It's like a whole new world has opened up to me!"


(Well, maybe the OP really did want to think in terms of row and column vectors -- but I'm trying to point out this algebraic setting is a generally useful one)

If I had any sense, I would have, but I am actually more comfortable with tensors than matrices. In any case, I've retreated to understanding the algebra in terms of matrices.

Please correct me in the following if I am wrong. It seems there are really only a small number of rules involved in a matrix representation:

AB = (A^{\dagger} B^{\dagger})^{\dagger}

or even

A^{\dagger}B = (A B^{\dagger})^{\dagger}

With bras and kets represented as 1xN and Nx1 matrices, and with the adjoint of a complex number defined as its complex conjugate, c^{\dagger} = c^* :

\langle u | X | v \rangle = \langle v | X^{\dagger} | u \rangle ^*

can be represented as

\left( u^{\dagger} X v \right) = \left( v^{\dagger} X^{\dagger} u \right) ^{\dagger}

The next is a little more interesting. The operator

| u \rangle \langle v |

is represented as

u \times v^{\dagger}

the outer product of u and v^{\dagger} .

If I am not mistaken, Y = u \times v^{\dagger} is a quantum mechanical operator that acts from the left on kets to return kets, and acts from the right on bras to return bras?
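These identities are easy to verify numerically once bras and kets are matrices. A minimal NumPy sketch with random illustrative vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Kets as n x 1 column vectors; bras are their conjugate transposes.
u = rng.normal(size=(n, 1)) + 1j * rng.normal(size=(n, 1))
v = rng.normal(size=(n, 1)) + 1j * rng.normal(size=(n, 1))
X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

def dag(M):
    """Adjoint: conjugate transpose."""
    return M.conj().T

# <u| X |v> = <v| X^dagger |u>^*
lhs = (dag(u) @ X @ v).item()
rhs = (dag(v) @ dag(X) @ u).item().conjugate()
assert np.isclose(lhs, rhs)

# Y = |u><v| as an outer product: it maps kets to kets from the left
# and bras to bras from the right.
Y = u @ dag(v)
w = rng.normal(size=(n, 1))
assert (Y @ w).shape == (n, 1)        # ket in, ket out
assert (dag(w) @ Y).shape == (1, n)   # bra in, bra out
```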
 
  • #98
shouldn't you have

AB = ( B^{\dagger}A^{\dagger})^{\dagger}

A^{\dagger}B = ( B^{\dagger}A)^{\dagger}
 
  • #99
mrandersdk said:
shouldn't you have

AB = ( B^{\dagger}A^{\dagger})^{\dagger}

A^{\dagger}B = ( B^{\dagger}A)^{\dagger}

Yes, of course you are right. Thank you, mrandersdk!

I forgot to include complex numbers, with

\left( c \, | u \rangle \right)^{\dagger} = \langle u | \, c^\dagger

represented as

\left( c u \right)^\dagger = u^\dagger c^\dagger

along with double daggers, like

\left( X^\dagger \right) ^\dagger = X

I think this nearly completes an axiomatic set for manipulating equations.
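A quick NumPy check of these axioms (random illustrative matrices; scalars are treated as rank-0 objects so that c^dagger = c^*):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
c = 2.0 - 3.0j

def dag(M):
    """Adjoint: conjugate transpose for matrices, conjugate for scalars."""
    return np.conj(M).T if np.ndim(M) == 2 else np.conj(M)

assert np.allclose(dag(A @ B), dag(B) @ dag(A))   # (AB)^dag = B^dag A^dag
assert np.allclose(dag(dag(A)), A)                # (X^dag)^dag = X
assert np.allclose(dag(c * A), dag(A) * dag(c))   # (c A)^dag = A^dag c^*
```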
 
  • #100
mrandersdk said:
You can't do that. There are ways to relate, e.g., rotations in 3D space (rotations of the lab frame) to kets describing a state, but these are, at least as I learned it, things you need to postulate.

But this is a rather advanced topic, and to get something on this you need an advanced quantum mechanics book (again, Sakurai is a classic).

The short answer to why you can't do this is that we are dealing with quantum mechanics, and this is a whole lot different from Newtonian physics.

Also, the momentum can characterise a state, but the force on it can't, so that isn't a state. Quantum mechanics builds on Hamiltonian mechanics, and this formalism (roughly speaking) doesn't use forces, but potentials.

It seems like you haven't taken any courses in QM?

In truth, I've taught the subject several times, both to undergraduates and to graduate students. You are, as are many in this thread, confusing content and notation. That is, sure, in Dirac notation |S> stands for a state vector. However, the operative word here is vector, any vector in fact. There's nothing in the definition of bras and kets that restricts them to QM.

Why not in mechanics, or E&M, or control engineering? A vector is a mathematical object. In physics, or in any quantitative discipline, we assign vectors to objects that we describe, naturally, by ordered pairs, or triplets, and so on; each number in the n-tuple corresponds to a vector component in the appropriate space. All the stuff about transformation properties is contained in the characteristics of the appropriate space.

Dirac notation is nothing more, and nothing less, than one of many equivalent methods for working with linear vector spaces, finite or infinite, real or complex -- in fact, probably over most mathematical fields. All the confusion about transposes and adjoints, operators, direct products and so forth would be problems in any notational scheme. An adjoint is an adjoint, the adjoint of a product of operators flips the order of the individual adjoints, and so forth.

(My suspicion is that Dirac invented his notation to make his writing and publication simpler. Rather than bolding or underlining variable names to indicate a vector, he chose his famous bra-ket notation because it was easier for him to write.)

Note that a multiparticle state, |p1, p2>, is not usually taken as a column vector, but rather a direct product of two vectors -- there are definitional tricks that allow the multiparticle states to be considered as a single column vector. So, as a direct product is a tensor, we've now got both tensors and, naturally, vectors in Dirac-land. A better way to do realistic tensors is to create them by tensor-like combinations of creation operators acting on the Fock-space vacuum. Recall also the extensive apparatus of spherical tensors in angular momentum theory. We can often consider both states and operators as tensors.

The Dirac notation is extensively and clearly discussed in Dirac's Quantum Mechanics -- he goes through virtually every issue raised in this thread at the end of his first chapter and in Chapters II and III. In my opinion, to understand QM as it is practiced today, one must study Dirac's book. For example, the whole apparatus of QM notation and concepts as we know them today is largely defined and developed in Dirac's book. There's no substitute for the original.

Regards,
Reilly Atkinson
 
  • #101
Ok, I know we can use the notation for every vector space if we want. Of course we can do that. I'm not sure why you say that multiparticle states are direct products?

If the particles are independent, you can write them as a tensor product of two vectors; if they are correlated, then you can't necessarily.

The reason I said your equation was wrong was that we were talking about QM, so it didn't make sense.

Again, you are right that a vector is often described by an n-tuple, but as I have said many times in this thread, the tuple doesn't make sense without a basis telling us what it means. A bit like how your equation didn't make sense because you didn't say what you meant by |P> and |F>.

The problem about adjoint, is to write the definition used in math

\langle x, A y \rangle = \langle A^* x, y \rangle

in Dirac's notation. You have to be very careful when writing this.

I'm not sure what your point is about Fock space. Is it that if we have a space describing one particle, and we take a tensor product of two such states, then we are no longer in that space, but the Fock-space formalism handles this problem?

I haven't read Dirac's book, but it sounds interesting; I will look at it on my vacation, thanks for the reference. I agree that he made the notation because it made things simpler to write (maybe also to remember some rules of manipulation), but I just think that people often get a bit confused by it, because one learns QM with wavefunctions first and then learns bra-ket notation; then people often think that the wavefunction is used just like a ket, and it often isn't (even though you probably could, after all L^2 is a vector space).
 
  • #102
reilly said:
Note that a multiparticle state, |p1, p2> is not usually taken as a column vector, but rather a direct product of two vectors
...
So, as a direct product is a tensor
That last statement is (very) incorrect! The direct product of two vector spaces is quite different than their tensor product -- in fact, most quantum 'weirdness' stems from the fact you use direct products classically but tensor products quantum mechanically.
 
  • #103
Phew, there was another one who found that a bit disturbing.
 
  • #104
What does this mean? I can't see how this can be correct:

"Note that a multiparticle state, |p1, p2> is not usually taken as a column vector, but rather a direct product of two vectors"

Maybe I'm just misreading it, but what does

"So, as a direct product is a tensor"

mean?
 
  • #105
by the way

\mathbb{R}\otimes\mathbb{R} ~=~ \mathbb{R}^2

and

\mathbb{C}^3\otimes\mathbb{C}^3\otimes\mathbb{C}^3 \otimes ... \otimes \mathbb{C}^3~=~ \mathbb{C}^{3n}

are not correct. They should be

\mathbb{R}\otimes\mathbb{R} ~=~ \mathbb{R}

and

\mathbb{C}^3\otimes\mathbb{C}^3\otimes\mathbb{C}^3 \otimes ... \otimes \mathbb{C}^3~=~ \mathbb{C}^{3^n}
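The dimension count is easy to verify with np.kron, which implements the coordinate tensor (Kronecker) product; a minimal sketch:

```python
import numpy as np

# The tensor product of n copies of C^3 has dimension 3**n, not 3*n.
v = np.array([1.0, 2.0, 3.0])    # an illustrative vector in C^3
n = 4
state = v
for _ in range(n - 1):
    state = np.kron(state, v)    # tensor with one more copy of C^3
assert state.shape == (3 ** n,)  # 81 components, not 12
```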
 
  • #106
Why is your post suddenly below mine?

I don't know what you mean with

"This just defines dimensions, of course, if there is interaction then the combined
probabilities are not given by simply multiplication. That's a whole different story
altogether requiring knowledge of the orthogonal states, the propagators and the
interactions."

If one particle is described in C^3, then n particles are described in

\mathbb{C}^3\otimes\mathbb{C}^3\otimes\mathbb{C}^3 \otimes ... \otimes \mathbb{C}^3~=~ \mathbb{C}^{3^n}

but it can be that you can't write the state as |0\rangle \otimes |1\rangle \otimes ... \otimes |n\rangle -- is that what you're trying to say?
 
  • #107
Hurkyl said:
That last statement is (very) incorrect! The direct product of two vector spaces is quite different than their tensor product

Read again what reilly wrote:

reilly said:
Note that a multiparticle state, |p1, p2> is not usually taken as a column vector, but rather a direct product of two vectors -- there are definitional tricks that allow the multiparticle states to be considered as a single column vector. So, as a direct product is a tensor, we've now got ...

Like in:

\mathbb{R}\otimes\mathbb{R} ~=~ \mathbb{R}^2

Or for a non relativistic QM multiparticle state of n particles:

\mathbb{C}^3\otimes\mathbb{C}^3\otimes\mathbb{C}^3 \otimes ... \otimes \mathbb{C}^3~=~ \mathbb{C}^{3n}

This just defines dimensions, of course, if there is interaction then the combined
probabilities are not given by simply multiplication. That's a whole different story
altogether requiring knowledge of the orthogonal states, the propagators and the
interactions.
Regards, Hans
 
  • #108
mrandersdk said:
why are your post suddenly below mine?

Something went wrong with editing. You just react "too fast". :smile:

mrandersdk said:
I don't know what you mean with

"This just defines dimensions, of course, if there is interaction then the combined
probabilities are not given by simply multiplication. That's a whole different story
altogether requiring knowledge of the orthogonal states, the propagators and the
interactions."

If one particle is described in C^3, then n particles are described in

\mathbb{C}^3\otimes\mathbb{C}^3\otimes\mathbb{C}^3 \otimes ... \otimes \mathbb{C}^3~=~ \mathbb{C}^{3^n}

but it can be that you can't write the state as |0&gt; \otimes |1&gt; \otimes ... \otimes |n&gt;, is that what you try to say ?

The "static" wave function is defined as a complex number on a 3-dimensional space.
The non-relativistic wave function of two particles is defined on a 6-dimensional
space spanned by the x,y,z of the first particle plus the x,y,z of the second particle.

The wave function of an n-particle system is defined on a 3n-dimensional space, not
a 3^n-dimensional space.

Regards, Hans
 
  • #109
How can you say that? The dimension of a tensor product is as I say, and one particle can have more degrees of freedom than just 3.

The things you say are equal are simply not equal. If you have two vector spaces V and W with bases v_1,...,v_n and w_1,...,w_d respectively, then a basis for

V \otimes W is

all elements of the form v_i \otimes w_j \ , \ i=1,...,n \ and \ j=1,...,d

There are clearly n times d of these, not n + d as you say. You are right that one particle can be described by a wavefunction of x,y,z, and two by a wavefunction of x_1,y_1,z_1,x_2,y_2,z_2, but we are talking about the state space, and if it is, e.g., 5-dimensional for each, then the state of both particles lives in a 25-dimensional space.

I don't think it is right to say that the wavefunction lives in a space spanned by x,y,z.
 
  • #110
mrandersdk said:
How can you say that? The dimension of a tensor product is as I say, and one particle can have more degrees of freedom than just 3.

The things you say are equal are simply not equal. If you have two vector spaces V and W with bases v_1,...,v_n and w_1,...,w_d respectively, then a basis for

V \otimes W is

all elements of the form v_i \otimes w_j \ , \ i=1,...,n \ and \ j=1,...,d

There are clearly n times d of these, not n + d as you say. You are right that one particle can be described by a wavefunction of x,y,z, and two by a wavefunction of x_1,y_1,z_1,x_2,y_2,z_2, but we are talking about the state space, and if it is, e.g., 5-dimensional for each, then the state of both particles lives in a 25-dimensional space.

I don't think it is right to say that the wavefunction lives in a space spanned by x,y,z.
You are confusing the number of dimensions with the number of elements.

\mathbb{R} has 1 dimension with \infty elements while \mathbb{R}^2 has 2 dimensions with \infty^2 elements.

Regards, Hans
 
  • #111
Hans de Vries said:
The "static" wave function is defined as a complex number on a 3-dimensional space.
The non-relativistic wave function of two particles is defined on a 6-dimensional
space spanned by the x,y,z of the first particle plus the x,y,z of the second particle.

The wave function of an n-particle system is defined on a 3n-dimensional space, not
a 3^n-dimensional space.
Ah, that's where the confusion lies! The rest of us are talking about the state vectors, rather than elements of the underlying topological space of a position-representation of those vectors.

L^2(\mathbb{R}^3) is, of course, the space of square-integrable functions on Euclidean 3-space; i.e. the space of single-particle wavefunctions.

The tensor product of this space with itself is given by [1] L^2(\mathbb{R}^3) \otimes L^2(\mathbb{R}^3) = L^2(\mathbb{R}^3 \times \mathbb{R}^3) -- so a 2-particle wavefunction is a square-integrable function of 6 variables.

However, if you only took the direct product of the state space with itself, you'd get L^2(\mathbb{R}^3) \times L^2(\mathbb{R}^3) \neq L^2(\mathbb{R}^3 \times \mathbb{R}^3). This is merely the space of pairs of square-integrable functions of three variables. This isn't even (naturally) a subspace of L^2(\mathbb{R}^3 \times \mathbb{R}^3); the obvious map between them is bilinear, not linear.
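This distinction can be illustrated on a finite grid. The following NumPy sketch only gestures at the L² statement (finite grids and arbitrary Gaussian sample functions, chosen for illustration), but it shows why a tensor product is a function of two variables while a direct product is merely a pair of functions:

```python
import numpy as np

# Discretize one-variable functions on a grid of 101 points.
x = np.linspace(-5, 5, 101)
f = np.exp(-x**2)                # illustrative Gaussian samples
g = np.exp(-(x - 1)**2)
h = np.exp(-(x + 1)**2)

# Tensor product: a genuine function of TWO variables, psi(x1, x2) = f(x1) g(x2).
psi = np.outer(f, g)
assert psi.shape == (101, 101)

# A sum of two such products still lies in the tensor product space but is
# generally not an outer product of any pair of functions -- the discrete
# analogue of an entangled state.  Its matrix rank exceeds 1:
chi = np.outer(f, g) + np.outer(h, f)
assert np.linalg.matrix_rank(chi) > 1

# The direct product, by contrast, is just a PAIR of one-variable
# functions: 2 * 101 numbers rather than 101 * 101.
pair = np.concatenate([f, g])
assert pair.shape == (202,)
```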


[1]: At least I'm pretty sure I have this right. I haven't actually worked through all the fine print to prove this statement.
 
  • #112
L^2(\mathbb{R}^3) \otimes L^2(\mathbb{R}^3) = L^2(\mathbb{R}^3 \times \mathbb{R}^3) should intuitively be right; whether it holds rigorously I'm not sure, but I guess it must have something to do with Fubini's theorem

http://en.wikipedia.org/wiki/Fubini's_theorem

or at least some variant of it. As I also pointed out, Hurkyl, I agree that it seems we are talking about different things, which is why there is some confusion. But just to make something clear: L^2(R^3) is not spanned by x,y,z. (I guess you mean, as Hurkyl also says, that you can write the wavefunction as a function of x,y,z; though maybe one should be a little careful here, because internal degrees of freedom such as spin can play a role, and we need a spin wavefunction to describe those. Perhaps we should ignore internal degrees of freedom in our discussion, so we don't confuse each other even more.)

And the statement

\mathbb{R}\otimes\mathbb{R} ~=~ \mathbb{R}^2

\mathbb{C}^3\otimes\mathbb{C}^3\otimes\mathbb{C}^3 \otimes ... \otimes \mathbb{C}^3~=~ \mathbb{C}^{3n}

is wrong, even though we are talking about different things.
 
  • #114
Oh, I just realized I know how to compute the direct product:

L^2(\mathbb{R}^3) \times L^2(\mathbb{R}^3) \cong L^2(\mathbb{R}^3 + \mathbb{R}^3)

The + on the right hand side indicates disjoint union -- i.e. that space consists of two separated copies of R³
 
  • #115
mrandersdk said:
And the statement

\mathbb{R}\otimes\mathbb{R} ~=~ \mathbb{R}^2

\mathbb{C}^3\otimes\mathbb{C}^3\otimes\mathbb{C}^3 \otimes ... \otimes \mathbb{C}^3~=~ \mathbb{C}^{3n}
is wrong
There is nothing wrong with this. I'm using the definition of the vector direct product
given here: http://mathworld.wolfram.com/VectorDirectProduct.html

In this example each "dimension" has 3 elements while \mathbb{R} or \mathbb{C} represents 1 continuous
dimension with \infty elements.

If two wavefunctions are non-interacting then the vector direct product describes
the combined probabilities. If they are interacting then one has to go back to the
physics and, in most cases, use an iterative process to numerically determine
the combined two-particle wave function. Regards, Hans.
 
  • #116
As I pointed out, you're not talking about R: you're talking about L²(R). The tensor product of R with itself is clearly R -- in your way of thinking, that's because R is a single-dimensional vector space, and 1*1=1.
 
  • #117
Hurkyl said:
As I pointed out, you're not talking about R: you're talking about L²(R). The tensor product of R with itself is clearly R -- in your way of thinking, that's because R is a single-dimensional vector space, and 1*1=1.
I'm using the vector direct product as defined here:

http://mathworld.wolfram.com/VectorDirectProduct.html

Using the tensor rank: the number of indices (either discrete or continuous) as the
number of dimensions, like most physicist would do.

Hurkyl said:
you're talking about L²(R)
Might be. This isn't language found in physics textbooks or mathematical books for physicists, so using an expression like this is quite meaningless for most physicists.

Regards, Hans

Let me guess: Square integrable functions, ok? :smile:
 
  • #118
Hans de Vries said:
I'm using the vector direct product as defined here:

http://mathworld.wolfram.com/VectorDirectProduct.html

Using the tensor rank: the number of indices (either discrete or continuous) as the
number of dimensions, like most physicist would do.




Might be. This isn't language found in physics textbooks or mathematical books for
physicists, so using an expression like this is quite meaningless for most physicists,
however trivial its mathematical meaning may be...


Regards, Hans


Hans de Vries said:
There is nothing wrong with this. I'm using the definition of the vector direct product
given here: http://mathworld.wolfram.com/VectorDirectProduct.html

In this example each "dimension" has 3 elements while \mathbb{R} or \mathbb{C} represents 1 continuous
dimension with \infty elements.

If two wavefunctions are non-interacting then the vector direct product describes
the combined probabilities. If they are interacting then one has to go back to the
physics and, in most cases, use an iterative process to numerically determine
the combined two-particle wave function.


Regards, Hans.


you are referring to a page that tells how to take the tensor product between two vectors, but you are talking about the tensor product between vector spaces, so you should refer to something like

http://mathworld.wolfram.com/VectorSpaceTensorProduct.html

and it agrees with me. You are right that if particles are non-interacting you can write them as product states (though maybe not always?), but they still live in the tensor product of the two Hilbert spaces, and the dimension of this is the product of the two dimensions.


But you are apparently talking about indices in the tensor (the dimension of, e.g., a matrix); that is completely different from what is being discussed here. It is of course trivial that taking two 1-tensors (vectors) and forming the tensor product gives a 2-tensor. By the way, using the notation \mathbb{R} to mean a continuous vector can't be standard notation, even in physics? And the Vector Direct Product you are referring to is only defined for finite-dimensional tensors.

But as has been mentioned, we are talking about the state space, and then it is common to take the tensor product of the individual spaces. To give you a simple example of what I'm talking about, let's look at two spin-1/2 particles, where we don't care about anything other than the spin. Then each particle has 2 degrees of freedom, so we could have

|00>, |01>, |10> and |11> (1 = up, 0 = down)

That is clearly 2*2 = 4, so this is a 4-dimensional space, as I say. Then, because of how we make the tensor product into a Hilbert space, it is natural to describe this in the tensor product of the two spaces, because the inner product is given by

<01|01> = <0|0><1|1>

so the probability of being in the down-up state is the probability of being down times the probability of being up, which is very natural. The strange thing is that taking the tensor product of the spaces gives us states that are entangled, and other strange things.
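This two-spin example is easy to check with np.kron; a minimal sketch using the basis labels from the post:

```python
import numpy as np

down = np.array([1.0, 0.0])     # |0>  (down)
up = np.array([0.0, 1.0])       # |1>  (up)

# Two spins live in C^2 (x) C^2 = C^4: dimensions multiply, 2*2 = 4.
s01 = np.kron(down, up)         # |01>
assert s01.shape == (4,)

# <01|01> = <0|0><1|1>: the inner product factorizes on product states.
assert np.isclose(s01.conj() @ s01, (down @ down) * (up @ up))

# An entangled state such as (|00> + |11>)/sqrt(2) lives in C^4 but is not
# np.kron(a, b) for any single-spin a and b: reshaped to 2x2 it has rank 2.
bell = (np.kron(down, down) + np.kron(up, up)) / np.sqrt(2)
assert np.linalg.matrix_rank(bell.reshape(2, 2)) == 2
```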
 
  • #119
Hans de Vries said:
I'm using the vector direct product as defined here:

http://mathworld.wolfram.com/VectorDirectProduct.html
Which contains an example indicating \mathbb{R}^3 \otimes \mathbb{R}^3 \cong \mathbb{R}^9 -- not \mathbb{R}^6 as you suggest.


The problem is that you are using the letter R -- a letter well-established to indicate something akin to "the one-dimensional vector space over the reals". You are using the symbol \otimes -- a symbol well-established to indicate a particular arithmetic operation on vector spaces and on their elements. You are interjecting into a conversation where we are talking about products on vectors and vector spaces.

So, when you change the meaning of both of those symbols (using R to instead denote some continuously indexed space and \otimes to denote some fancy operation on index spaces) and change the context of the conversation (talking about operations on index spaces rather than on vectors) you should expect there to be much confusion. This is greatly magnified because you didn't give any indication that you were using those symbols in a nonstandard way, and continued to interpret others' posts as using those meanings, despite others having very clearly indicated they were using those symbols according to the usual meaning.

Actually, I think it's far more likely you've accidentally made a 'level' slip, and confused two layers of abstraction. (The relevant layers here being points of Euclidean space, Euclidean space, and functions on Euclidean space.)


That aside, I will admit that this is the first time I've ever heard the phrase 'direct product' used to refer to something that really isn't a direct product but instead a tensor product.
 
  • #120
mrandersdk said:
you are referring to a page that tells how to take the tensor product between to vectors, but you are taking the tensor product between vector spaces, so you should refer to something like

http://mathworld.wolfram.com/VectorSpaceTensorProduct.html


reilly was talking about a non-relativistic two-particle wave function as the vector direct
product of two single-particle functions, which is correct according to the definition of the
vector direct product given here:

http://mathworld.wolfram.com/VectorDirectProduct.html

You may have an argument in that I implicitly assume that in R\otimes R one factor is
a row vector and the other is a column vector, so an nx1 vector times a 1xn
vector is an nxn matrix, but I wouldn't even know how to express a transpose
operation at higher ranks without people losing track of the otherwise very
simple math.



Regards, Hans
 
