mrandersdk
okay, it shouldn't make sense to you. I would say it is an abuse of notation; of course this is the only way to understand it, but you shouldn't write it.
mrandersdk said: of course you can make your own notation, and I grant you that it is consistent, but that doesn't make it right.
Phrak said: But this is the interesting part. In transforming a vector ket to a dual vector bra, as if you were lowering an index, the complex conjugate of the vector coefficients is taken. This can be accomplished tensorially if complex numbers are represented as column vectors,
c = \left( \begin{array}{c} a \\ b \end{array} \right)
The conjugate of c is taken with
\rho = \left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right) .
so that
c_{\beta} = \rho_{\beta\alpha} \, c^{\alpha}
In three dimensions, the lowering metric would be
g_{ij} = \left( \begin{array}{ccc} \rho & 0 & 0 \\ 0 & \rho & 0 \\ 0 & 0 & \rho \end{array} \right)
The idea is that whenever an index is lowered or raised, the complex conjugate is applied. But does this scheme hang together for, say, a unitary operator, which is a type (1,1) tensor (one upper and one lower index) that takes a ket and returns a ket? I'm not so sure it hangs together consistently.
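A minimal numerical sketch of the scheme Phrak describes, assuming the usual realification of a complex number z = a + bi as the column vector (a, b); the helper name realify is illustrative, not from any library:

```python
import numpy as np

# rho = diag(1, -1): acting on the realified pair (a, b) it returns (a, -b),
# i.e. the realification of the complex conjugate a - bi.
rho = np.array([[1.0, 0.0],
                [0.0, -1.0]])

def realify(z: complex) -> np.ndarray:
    """Represent z = a + bi as the real column vector (a, b)."""
    return np.array([z.real, z.imag])

z = 3.0 + 4.0j
print(rho @ realify(z))        # [ 3. -4.]
print(realify(z.conjugate()))  # [ 3. -4.], the same vector
```

This only checks that rho reproduces conjugation on a single coefficient; whether the bookkeeping survives contraction with a type (1,1) operator, which is Phrak's actual question, is not settled by it.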
comote said: I must confess I have never seen this process before. Any reference to where I might see this discussed in more depth?
mrandersdk said: I still think there is a big difference. Are you thinking of the functions as something in, e.g., L^2(R^3), which you can turn into a number by taking the inner product with the delta function? Because if you do, then the Hilbert space L^2(R^3) is not general enough. The Hilbert space is bigger, and when you choose a basis you get a wave function in, e.g., L^2(R^3) if you choose the position basis. But choosing different bases, you will get other "wave functions". Maybe it is because you are only used to working in the position basis, and have not yet seen bra-ket notation at its full scale?

Actually L^2 is too big, since it includes functions which are square integrable but not continuous. Wave functions must be continuous.
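A short numerical illustration of the delta pairing mrandersdk mentions, assuming we stand in for the delta with a narrow normalized Gaussian (the delta itself is not square integrable, which is exactly why L^2(R^3) alone is not general enough):

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]

# A smooth, normalized L^2 wave function (a ground-state Gaussian).
psi = np.exp(-x**2 / 2) / np.pi**0.25

# A narrow Gaussian standing in for delta(x - x0).
x0, eps = 1.0, 1e-3
delta_approx = np.exp(-(x - x0)**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

# The "inner product with the delta function" just evaluates psi at x0.
print(np.sum(delta_approx * psi) * dx)   # ~0.4556
print(np.exp(-x0**2 / 2) / np.pi**0.25)  # psi(x0) exactly, same value
```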
comote said: \psi represents a vector in Hilbert space, just like |\psi\rangle...

Caution should be exercised with that kind of notation. The symbol \psi always represents either an operator or a scalar; it's never used to represent a vector. A quantity is denoted a vector when it appears in ket notation. Otherwise it would be like writing A as a vector without placing an arrow over it or making it boldface to denote that it is a vector.
comote said: I admit I have not seen the bra-ket notation in its full glory.

In such a case the Hilbert space would not be too small. It would be too big.
When you say L^2 is not big enough, are you referring to the fact that the eigenvectors of continuous observables don't live in L^2, or are you referring to something else?
pmb_phy said: Actually L^2 is too big, since it includes functions which are square integrable but not continuous. Wave functions must be continuous.

Pete

Unless, of course, they aren't.
pmb_phy said: I guess what I'm saying (rather, what the teachers and texts from which I learned QM say) is that, while it is true that every quantum state representable by a ket is an element of a Hilbert space, it is not true that every element of a Hilbert space corresponds to a quantum state representable by a ket.

Hold on a second. I question whether or not that is true. An element of the Hilbert space must be a ket and therefore must represent some quantum state, right?
pmb_phy said: But specifically I have the following statement in mind, from Quantum Mechanics by Cohen-Tannoudji, Diu and Laloe, pages 94-95.

It sounds like they've defined wavefunctions to be elements of the Hilbert space.
Hurkyl said: It sounds like they've defined wavefunctions to be elements of the Hilbert space.
(and then proceeded to argue a certain class of test functions can serve as adequate approximations)
Thus we are led to studying the set of square-integrable functions. These are the functions for which the integral (A-1) converges. This set is called L^2 by mathematicians, and it has the structure of a Hilbert space.
mrandersdk said: To get back to Phrak's question.
I don't think your approach is very useful. Given a finite-dimensional vector space and a basis for it, say \{ v^1, v^2, \dots, v^n \}, there is a canonical basis for its dual defined by
w_j(v^i) = \delta_j^i
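A small sketch of this canonical dual basis, assuming a real finite-dimensional space with the basis vectors stored as the columns of a matrix V; the dual functionals are then the rows of V^{-1}, and no conjugation enters anywhere:

```python
import numpy as np

# Columns of V are the basis vectors v^1, v^2, v^3 (any invertible choice).
V = np.array([[1.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0]])

# Rows of W are the dual basis functionals w_1, w_2, w_3, since W V = I
# encodes w_j(v^i) = delta_j^i.
W = np.linalg.inv(V)

print(np.round(W @ V, 12))  # identity matrix, confirming w_j(v^i) = delta_j^i
```

This is the contrast with the bra map: the canonical dual pairing needs no inner product and no complex conjugation, whereas the ket-to-bra map is antilinear.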
You write:
\left< \phi \right| c^* = g_{ij} \left( c \left| \phi \right> \right)
but if you represent c as a 2 x 1 column vector and g_{ij} as a 3 x 3 matrix whose entries are themselves 2 x 2 blocks, how is the product defined, and how do they act on the ket? For it to make even a little sense, you need to represent the ket as an n x 1 vector and have g_{ij} transform it into a 1 x n vector with all of its entries complex conjugated (a numerical version of this bookkeeping is sketched below).
I think my problem is also this: why would this technique be smart even if it worked?
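For what it's worth, here is a sketch of the bookkeeping that would be required, assuming every complex component of a ket in C^3 is realified to a pair (a, b), so the ket becomes a 6 x 1 real vector and the "metric" becomes the 6 x 6 block-diagonal matrix diag(rho, rho, rho); all names are illustrative:

```python
import numpy as np

rho = np.array([[1.0, 0.0],
                [0.0, -1.0]])

# The 6 x 6 "lowering metric": three copies of rho on the block diagonal.
G = np.kron(np.eye(3), rho)

def realify(ket: np.ndarray) -> np.ndarray:
    """Map C^n -> R^(2n) by interleaving real and imaginary parts."""
    return np.column_stack([ket.real, ket.imag]).ravel()

ket = np.array([1 + 2j, 3 - 1j, 0 + 4j])

print(G @ realify(ket))     # [ 1. -2.  3.  1.  0. -4.]
print(realify(ket.conj()))  # the same: G implements componentwise conjugation
```

So the scheme can at least be made dimensionally consistent, but all it reproduces is componentwise conjugation; whether it stays consistent once a unitary (type (1,1)) operator is threaded through raised and lowered indices is the part Phrak himself flags as doubtful.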