Understanding Bras and Kets as Vectors and Tensors

  • Thread starter: Phrak
  • Tags: Tensors
In summary: Kets (states) are just vectors in the Hilbert space. And after all, a Hilbert space is just a vector space equipped with a Hermitian inner product (and some extra arcane stuff about "completion" in the inf-dim case).
  • #141
I know that, but show me one that says that R^1 is a rank-1 tensor.
 
  • #142
mrandersdk said:
I know that, but show me one that says that R^1 is a rank-1 tensor.

mrandersdk,

The extension of finite-dimensional vectors to infinite-dimensional vectors/functions
is one of the pillars of mathematics and physics. I think I've done enough by now.


Regards, Hans
 
  • #143
This is ridiculous. If it is a pillar of math and physics, it must be easy to find a reference. The vector space R^1 is never going to be a tensor.
 
  • #144
mrandersdk said:
This is ridiculous. If it is a pillar of math and physics, it must be easy to find a reference. The vector space R^1 is never going to be a tensor.
You can try it on the math forums. Ask the right question to get the right answer.

A vector (a tensor of rank 1), i.e. a one-dimensional array of elements, becomes
a function in the continuous limit.

For the mathematically pure, you should inquire about the space of functions on the Euclidean
1-space [itex]\mathbb{R}^1 [/itex] rather than [itex]\mathbb{R}^1 [/itex] itself, or maybe even the space of square-integrable functions
on Euclidean 1-space, written [itex] L^2(\mathbb{R}^1) [/itex], as advised by our good friend Hurkyl, although
this is somewhat QM specific.

You will also see that good manners are helpful in getting assistance.

Regards, Hans
 
  • #145
Hans de Vries said:
You can try it on the math forums. Ask the right question to get the right answer.
And he will be told that [itex]\mathbb{R}^n[/itex] (in this context) denotes the standard n-dimensional real vector space whose elements are n-tuples of real numbers.

He will be told that [itex]\mathbb{R}^n[/itex] is neither a vector nor a tensor. (Barring set-theoretic tricks to construct some unusual vector spaces.)

He will be told that elements of [itex]\mathbb{R}^n[/itex] are vectors. He will be told that in the tensor algebra over [itex]\mathbb{R}^n[/itex], elements of [itex]\mathbb{R}^n[/itex] are rank 1 tensors.

He will be told that [itex]\mathbb{R} \oplus \mathbb{R} \cong \mathbb{R} \times \mathbb{R} \cong \mathbb{R}^2[/itex] and [itex]\mathbb{R} \otimes \mathbb{R} \cong \mathbb{R}[/itex].

He will be told that [itex]L^2(\mathbb{R})[/itex] and [itex]C^\infty(\mathbb{R})[/itex] are infinite-dimensional topological vector spaces (square-integrable and infinitely-differentiable functions, respectively).

He will be told that the number of elements in [itex]\mathbb{R}^n[/itex] is [itex]|\mathbb{R}|[/itex] (= [itex]2^{|\mathbb{N}|}[/itex]).
 
  • #146
(On a Simpler Note)

Finite dimensional quantum mechanical vectors, operators and coefficients may all be represented by real-valued matrices.

[tex] c = a + ib \Rightarrow c = \left( \begin{array}{cc} a & b \\ -b & a \end{array} \right)[/tex]

[tex]c^* \Rightarrow c^T[/tex]

For example, an Nx1 complex column vector becomes a 2Nx2 array of reals.

What makes this interesting is that

[tex] \left< u \right| X \left| v \right> ^* [/tex]

becomes

[tex] ( v^{T} X^T u )^T [/tex]

The adjoint is applied by transposition only.
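As a sanity check, this representation can be verified numerically. The sketch below uses made-up random test data and a hypothetical `to_real` helper, with the block convention [[a, b], [-b, a]] for a + ib:

```python
import numpy as np

def to_real(M):
    """Map a complex matrix to its real representation:
    each entry a + ib becomes the 2x2 block [[a, b], [-b, a]]."""
    return np.block([[np.array([[z.real, z.imag], [-z.imag, z.real]]) for z in row]
                     for row in M])

rng = np.random.default_rng(0)
X = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))   # complex operator
u = rng.normal(size=(2, 1)) + 1j * rng.normal(size=(2, 1))   # complex kets
v = rng.normal(size=(2, 1)) + 1j * rng.normal(size=(2, 1))

m = (u.conj().T @ X @ v).item()          # the complex number <u|X|v>

# An Nx1 complex column vector becomes a 2Nx2 array of reals.
Ur, Xr, Vr = to_real(u), to_real(X), to_real(v)   # 4x2, 4x4, 4x2

# The real 2x2 block Ur^T Xr Vr represents the complex number <u|X|v> ...
m_rep = Ur.T @ Xr @ Vr
assert np.allclose(m_rep, to_real([[m]]))

# ... and its conjugate is obtained by transposition alone.
assert np.allclose(Vr.T @ Xr.T @ Ur, to_real([[np.conj(m)]]))
```

The key fact is that the map a + ib -> [[a, b], [-b, a]] is a ring homomorphism, so complex conjugation turns into block transposition throughout.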
 
  • #147
You are right that a vector is a tensor of rank 1 (at least the way physicists look at it), but you say that R^1 is a tensor, and that is incorrect. I'm pretty sure I know what a tensor is: I have taken courses in operator analysis, real and complex analysis, measure theory, tensor analysis, Riemannian geometry and Lie groups, and if you look in the math section, you will see that one of the people helping on this subject would be me.

I have also taken general relativity, so I also know how physicists look at a tensor (as a multiarray of numbers).

My problem is that you say that the vector space is a tensor; this is wrong. It is right that R^1 contains rank-1 tensors. From R^1 we can then construct a space by taking the tensor product of the two spaces (note: between the spaces, not elements of them), that is

[tex] R^1 \otimes R^1 = R^1[/tex]

The reason that these two are isomorphic is that given a basis for R^1, say e_1, a basis for [itex] R^1 \otimes R^1[/itex] is all elements of the form [tex] e_i \otimes e_j[/tex], but there is only one, namely [tex] e_1 \otimes e_1[/tex], so it is easy to write an isomorphism between the two spaces. And this is not surprising, because this is the space of 1x1 matrices, which is of course the same as R^1.

If you want to make n x m matrices over R, you need

[tex] R^n \otimes R^m = R^{nm}[/tex], which again has the basis [tex] e_i \otimes e_j \ , \ i=1,...,n \ \mathrm{and} \ j=1,...,m[/tex]; you can look at [tex] e_i \otimes e_j[/tex] as referring to the ij entry of the matrix.
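This identification of [itex] e_i \otimes e_j[/itex] with matrix entries can be sketched with NumPy's Kronecker product (an illustration with made-up dimensions, not anything from the thread):

```python
import numpy as np

n, m = 3, 2
e = np.eye(n)   # standard basis of R^n
f = np.eye(m)   # standard basis of R^m

# e_i (x) f_j, realized as the Kronecker product, gives nm vectors of R^{nm}.
basis = [np.kron(e[i], f[j]) for i in range(n) for j in range(m)]
assert len(basis) == n * m
assert np.allclose(np.array(basis), np.eye(n * m))  # the standard basis of R^{nm}

# Reshaping identifies e_i (x) f_j with the (i, j) matrix unit E_ij.
v = np.kron(e[1], f[0])
assert np.allclose(v.reshape(n, m), np.outer(e[1], f[0]))
```

The reshape at the end is exactly the dictionary between "rank-2 tensor" and "n x m matrix" that the post describes.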

You could now say that we take some continuous limit, and then we get functions; if you do, you have to be careful, and anyway it is not done at all like you do it. The problem is that you want to take the tensor product between spaces that are not finite dimensional (uncountable, in fact), which is not always so simple.

But in fact I don't think that is what you want; I just think you want to take tensor products between functions. So if we have a function space H with a finite basis f_1,...,f_n, you can do the same to take the tensor product of H with itself. Then an element in that new vector space is

[tex] g_{ij} f_i \otimes f_j [/tex] (Einstein summation assumed)

Writing it in the basis, as most physicists do, you would only look at [itex] g_{ij} [/itex]. Now if you want to take a non-discrete basis (or, more precisely, a non-discrete set that spans the space), you could write the same thing, I guess (not even sure it works, but I guess physicists hope it does):

[tex] g_{xy} f_x \otimes f_y [/tex]

Now the Einstein summation must be an integral to make sense of it, but one has to be very careful with something like this. The reason that this works, I guess, is something to do with the spectral theorem for unbounded operators, and maybe physicists just hope it works because it would be nice.

It seems to me that you haven't used tensor products between spaces, and have just used them between elements without really knowing what's going on on the higher mathematical plane, and maybe this has led to some confusion. I'm not questioning that you can do calculations in a specific problem correctly, but I'm telling you that many of the identities you wrote here are either wrong, or you are using completely nonstandard notation.

Ps. This was not meant to be rude, but I know a little bit of what I'm talking about, and would very much like to see some references on how you use it, because that would help a lot in trying to understand how you are doing it. Am I completely wrong that this is notation you have come up with yourself, or do you have some papers or a book that use that notation and tell it like you do?
 
  • #148
mrandersdk,

This is really just a whole lot of confusion about something very trivial. I just tried to convey that a non-relativistic two-particle wave function is a function
of 6 parameters: the xyz coordinates of both particles.

This is a result of the vector direct product between the two (non-interacting) single-particle
wave functions. Yes, instead of symbolically writing something in a shorthand
notation like this:

[tex] R^3 \otimes R^3 = R^6[/tex]

It should have been something like:

[tex]( U \in L^2(\mathbb{R}^3)) \otimes ( V \in L^2(\mathbb{R}^3))^T = ( W \in L^2(\mathbb{R}^6))[/tex]

After all, I'm talking about the vector direct product of wave functions, that is
quantum mechanics; I'm not talking about tensor products between topological
vector spaces. I didn't even know that these animals existed, and it seems pretty
hard to do anything physically useful with them when looking at their definition, but OK.
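The point about product wave functions can be sketched numerically on a discretized line (made-up Gaussian test functions, and 1D instead of R^3 for brevity; the R^3 case works the same way with more grid axes):

```python
import numpy as np

# Discretize one spatial coordinate on a grid.
x = np.linspace(-5, 5, 200)
dx = x[1] - x[0]

# Two normalized single-particle wave functions (arbitrary Gaussians).
u = np.exp(-(x - 1) ** 2 / 2)
u /= np.sqrt(np.sum(np.abs(u) ** 2) * dx)
v = np.exp(-(x + 1) ** 2 / 2)
v /= np.sqrt(np.sum(np.abs(v) ** 2) * dx)

# The two-particle product state W(x1, x2) = u(x1) v(x2): a function of
# twice as many coordinates, i.e. an element of L^2 of the doubled space.
W = np.outer(u, v)

# It inherits normalization: ||W||^2 = ||u||^2 ||v||^2 = 1.
norm_W = np.sum(np.abs(W) ** 2) * dx * dx
assert np.isclose(norm_W, 1.0)
```

The outer product on the grid is the finite-dimensional shadow of the statement that the product wave function lives in the tensor product of the two single-particle spaces.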

mrandersdk said:
But in fact I don't think that is what you want; I just think you want to take tensor products between functions.
Indeed, to be exact: the vector direct product (http://mathworld.wolfram.com/VectorDirectProduct.html), which is a tensor product of 2 or more vectors which are all "orthogonal" to each other in the sense of post #128 (https://www.physicsforums.com/showpost.php?p=1793772&postcount=128).

Regards, Hans.
 
  • #149
Oh I see, yes, there has been very much confusion about nothing then. Actually the vector direct product you are referring to is a special case of the tensor product, when you are using finite vectors.

The tensor product is used all the time in QM, also of spaces, because it is natural that if you have one particle described in one state Hilbert space, then two of them are described in the tensor product of these. This should be in all advanced QM books, and is actually what you are saying, I guess; you just never saw it for spaces, but the new elements you construct by taking the vector direct product (tensor product) are actually living in this new vector space.

But often people reading these books don't see it, because authors often put it a bit in the background, since the full mathematical machinery can be difficult. But it is actually very useful, and I think you use it all the time without knowing it.
 
  • #150
mrandersdk-

If |00>,|01>,|10> and |11> (1=up,0=down)

are linearly independent vectors, then <01|01> = 0,

rather than <01|01> = <0|0><1|1>, as you suggest.
 
  • #151
Phrak said:
mrandersdk-

If |00>,|01>,|10> and |11> (1=up,0=down)

are linearly independent vectors, then <01|01> = 0,

rather than <01|01> = <0|0><1|1>, as you suggest.
How do you figure?
 
  • #153
Hans de Vries said:
You may have an argument in that I implicitly assume that in [itex]R\otimes R[/itex] one is a row vector and the other is a column vector, so an nx1 vector times a 1xn vector is an nxn matrix, but I wouldn't even know how to express a transpose operation at higher ranks without people losing track of the otherwise very
simple math.

Regards, Hans

Transposition is more of a notational device than anything, to keep track of where the rows and columns are. Which elements combine with which elements between two tensors is unchanged by it.

In higher ranks, you can use labels to keep track of rows, columns, depth, etc., and use a modified Einstein summation to multiply matrices.

[tex] Y = M^{T} \Rightarrow Y_{cr} = M_{rc} [/tex]

[tex] (M_{abc...f} N_{c\: d\: e...z})_{(dp)} \equiv \sum_{d_i , p_i \ i=1...n} (M_{abc...f} N_{c\: d\: e...z})\ , \ \ \ \ d \neq p[/tex]

[tex] L_{abc_{m}e_{m}f_{m}c_{n}e_{n}f_{n}ghi...o,qrs...z} = (M_{abcef} N_{efg...z}) [/tex]

______________________________________________________________________
Any mistakes now, in the past, or ever, I blame on LaTex, whether I'm using it or not.
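For what it's worth, the label-based summation described in this post is close to what NumPy's einsum implements: repeated labels are summed over, free labels index the result. A sketch with made-up array shapes, not the exact convention above:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 4))
M = rng.normal(size=(2, 3, 4))   # rank-3 array with axes labeled a, b, c
N = rng.normal(size=(4, 3, 5))   # axes labeled c, b, d

# Transposition as relabeling: Y_{cr} = M_{rc}.
assert np.allclose(np.einsum('rc->cr', A), A.T)

# Contract the repeated labels b and c; the free labels a, d survive.
L = np.einsum('abc,cbd->ad', M, N)
assert L.shape == (2, 5)

# The same contraction spelled out as explicit sums.
L2 = np.zeros((2, 5))
for a in range(2):
    for d in range(5):
        L2[a, d] = sum(M[a, b, c] * N[c, b, d]
                       for b in range(3) for c in range(4))
assert np.allclose(L, L2)
```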
 
  • #154
Phrak said:
mrandersdk-

If |00>,|01>,|10> and |11> (1=up,0=down)

are linearly independent vectors, then <01|01> = 0,

rather than <01|01> = <0|0><1|1>, as you suggest.

no, [tex]|01>^\dagger = <01|[/tex]
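The disputed identity <01|01> = <0|0><1|1> can be checked directly with Kronecker products (a minimal sketch; |0> and |1> are the standard basis of C^2):

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Product states |01> = |0> (x) |1> and |10> = |1> (x) |0>.
ket01 = np.kron(ket0, ket1)
ket10 = np.kron(ket1, ket0)

# Inner products of product states factor: <ab|cd> = <a|c><b|d>.
assert np.isclose(np.vdot(ket01, ket01),
                  np.vdot(ket0, ket0) * np.vdot(ket1, ket1))  # both equal 1

# The orthogonality Phrak had in mind is <01|10> = 0.
assert np.isclose(np.vdot(ket01, ket10), 0)
```

np.vdot conjugates its first argument, so it is exactly the bra-ket pairing here.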
 
  • #155
mrandersdk, Hurkyl-

I posted:
If |00>,|01>,|10> and |11> (1=up,0=down)

are linearly independent vectors, then <01|01> = 0,

rather than <01|01> = <0|0><1|1>, as you suggest.


Hurkyl said:
How do you figure?

I figure I misread <01|01> as <01|10>.

 
