Understanding Bras and Kets as Vectors and Tensors

  • Thread starter: Phrak
  • Tags: Tensors
Summary:
Bras and kets can be understood as vectors in a Hilbert space, with kets represented as vectors with upper indices and bras as vectors with lower indices. In finite-dimensional spaces, bras exist in the dual space, which is isomorphic to the primal space, but this distinction becomes significant in infinite dimensions. The Hermitian inner product in finite dimensions can be defined similarly to how it's done with real vectors, using complex conjugation for inner products. While kets can be considered as elements of a vector space, the concept of a coordinate basis is less applicable, as the basis vectors are typically eigenvectors of Hermitian operators. Overall, the discussion emphasizes the relationship between bras, kets, and tensor products within the framework of linear algebra and functional analysis.
  • #31
Okay, it shouldn't make sense to you. I would say it is an abuse of notation; of course this is the only way to understand it, but you shouldn't write it.
 
  • #32
I would never write that; I would write
\psi = c_1\psi_1+c_2\psi_2
or
|\psi\rangle = c_1|\psi_1\rangle+c_2|\psi_2\rangle
or
\langle\psi| = c_1\langle\psi_1|+c_2\langle\psi_2|
depending on context.

These are not problems with QM; these are problems that arise BECAUSE of the dependence on this notation and the use of statements like "eigenstates of the momentum operator". If you use spectral theory, this is all well understood.

|\psi\rangle\langle\psi|\psi surprisingly does make sense to me, although it seems rather redundant. I understand it as
\langle\psi,\psi\rangle\psi.
 
  • #33
It is an understanding: once you define the projection operator
P=|\psi\rangle\langle\psi|, which is self-adjoint, it acts the same way on a bra as on a ket, and so the statement makes sense. What would not make sense is to write
|\psi\rangle\langle\phi|x; the operator
|\psi\rangle\langle\phi| is not self-adjoint, so we have to specify its action differently on kets and bras, i.e. on the space and its dual.

It is in discussing operators like this that I find the Bra-Ket notation really useful.
 
  • #34
My point is that

|\psi\rangle\langle\psi|\psi

doesn't make sense; an object like |\psi\rangle\langle\psi| works on kets or bras, not on a bare \psi.
 
  • #35
Of course you can make your own notation, and I grant you that it is consistent, but that doesn't make it right.
 
  • #36
The projection does have meaning in Hilbert space:
Px = |\psi\rangle\langle\psi|x = \langle\psi,x\rangle\psi. The problem with writing
|\psi\rangle\langle\phi|x
is the ambiguity: does it mean
\langle\psi,x\rangle\phi or does it mean
\langle\phi,x\rangle\psi? This ambiguity doesn't come up with the projection operator, so while you may not like the notation, it is consistent.
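
A minimal numpy sketch of this point (the two-dimensional vectors here are arbitrary, hypothetical values): the projector built as an outer product is self-adjoint, and its action on a vector x reproduces \langle\psi,x\rangle\psi.

import numpy as np

# Normalized state |psi> and an arbitrary vector x (hypothetical values)
psi = np.array([1.0, 1.0j]) / np.sqrt(2)
x = np.array([2.0, 3.0 - 1.0j])

# P = |psi><psi| as an outer product; the bra is the conjugate transpose
P = np.outer(psi, psi.conj())

# P is self-adjoint: it equals its own conjugate transpose
assert np.allclose(P, P.conj().T)

# Px = <psi, x> psi, with the inner product conjugate-linear in the first slot
assert np.allclose(P @ x, np.vdot(psi, x) * psi)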
 
  • #37
mrandersdk said:
Of course you can make your own notation, and I grant you that it is consistent, but that doesn't make it right.

But it is not my notation! I agree it can be confusing notation, but the only measure we have in mathematics is consistency. Mathematical correctness == consistency with axioms. There is no ambiguity in the statement, and therefore it is valid.
 
  • #38
There can be rules for using notation; even though something else makes sense, it can be wrong. When you write an operator in Bra-Ket notation you have to make it work on kets. I agree with you that you can make sense of it, but you shouldn't.
 
  • #39
In Bra-Ket notation the expected value of an observable is represented as
\langle\psi|A|\psi\rangle
This only has meaning if the operator is self-adjoint; if the operator is not self-adjoint, then you have nonsense.
The projection is self-adjoint, and so the statement has an unambiguous meaning.

The only rule for using notation should be: does what you write have an unambiguous meaning? If so, then it is valid.
 
  • #40
Phrak said:
But this is the interesting part. In transforming a vector ket to a dual vector bra, as if you were lowering an index, the complex conjugate is taken of the vector coefficients. This can be accomplished tensorially if complex numbers are represented as column vectors,
c = \left( \begin{array}{c} a \\ b \end{array} \right)
The conjugate of c is taken with
\rho = \left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right) ,
so that
c_{\beta} = \rho^{\beta}{}_{\alpha} c^\alpha

In three dimensions, the lowering metric would be
g_{ij} = \left( \begin{array}{ccc} \rho & 0 & 0 \\ 0 & \rho & 0 \\ 0 & 0 & \rho \end{array} \right)

The idea is that whenever an index is lowered or raised, the complex conjugate is applied. But does this scheme hang together for, say, a unitary operator, where a unitary operator acting on a ket is a type (1,1) tensor (one upper and one lower index) that takes a ket and returns a ket? I'm not so sure it hangs together consistently.

I must confess I have never seen this process before. Any reference to where I might see this discussed more in-depth?
 
  • #41
\langle\psi|A|\psi\rangle makes sense even if A is not self-adjoint; you just don't know whether it is real.

But it is still wrong to write

|\psi\rangle\langle\psi|\psi

because, as I said, |\psi\rangle\langle\psi| works to the left on bras and to the right on kets, and not on \psi, unless you mean \psi = |\psi\rangle, which I guess you don't, because you said you would never write that.
 
  • #42
For A not self-adjoint, does \langle\psi|A|\psi\rangle mean
\langle\psi,A\psi\rangle or does it mean
\langle A\psi,\psi\rangle?

As for the projection: given a vector x\in\mathcal{H}, it has a representation both as a ket
|x\rangle and as a bra \langle x|. If we define
P=|\psi\rangle\langle\psi|, then the action on the ket is given by
|\psi\rangle\langle\psi|x\rangle,
which is the conjugate transpose of
\langle x|\psi\rangle\langle\psi|, which is the action of P on the bra. In this sense we can say that the operation Px is well defined.
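
A quick numeric check of this statement, treating kets as column vectors and bras as row vectors (a sketch with hypothetical values):

import numpy as np

psi = np.array([[1.0], [1.0j]]) / np.sqrt(2)   # ket |psi> as a column vector
x = np.array([[2.0], [3.0 - 1.0j]])            # ket |x>

P = psi @ psi.conj().T                         # P = |psi><psi|

on_ket = P @ x            # |psi><psi|x>, a ket (column)
on_bra = x.conj().T @ P   # <x|psi><psi|, a bra (row)

# The action on the ket is the conjugate transpose of the action on the bra
assert np.allclose(on_ket, on_bra.conj().T)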
 
  • #43
Listen to what I'm saying. It is well-defined, as you say, but that doesn't make it right; you can't just define your own notation. This is a notation developed by Dirac, and you are using it wrong.

If A is not self-adjoint you still have

(\langle\psi|A) |\psi\rangle = \langle\psi| (A |\psi\rangle)

If A is self-adjoint you get

\langle\psi|A|\psi\rangle = \int \langle\psi|A|x\rangle\langle x|\psi\rangle dx = \int \langle\psi|A|x\rangle \psi(x) dx = \int \langle x|A|\psi\rangle^* \psi(x) dx =
\int (A_x\psi(x))^* \psi(x) dx

or just

\langle\psi|A|\psi\rangle = \int \psi(x)^* (A_x\psi(x)) dx

where A_x denotes the operator in the position basis; see my example with the momentum operator to see what I mean by that.
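
To make A_x concrete, here is a small numerical sketch of the last formula for the momentum operator, A_x = -i d/dx in units with hbar = 1 (the grid and wave packet are hypothetical choices):

import numpy as np

# Position grid
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

# Gaussian wave packet with mean momentum k0
k0 = 1.5
psi = np.exp(-x**2 / 4) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

# Momentum operator in the position basis: (A_x psi)(x) = -i dpsi/dx
A_psi = -1j * np.gradient(psi, dx)

# <psi|A|psi> = integral of psi(x)^* (A_x psi)(x) dx
expectation = np.sum(psi.conj() * A_psi) * dx
print(expectation)   # close to k0, with negligible imaginary part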
 
  • #44
When one writes
(\langle\psi|A) |\psi\rangle
I understand that to be the inner product of the vectors A^*\psi and \psi, i.e.
\langle A^*\psi,\psi\rangle;
on the other hand, I understand
\langle\psi| (A |\psi\rangle)
to be the inner product
\langle\psi,A\psi\rangle.
Is this correct?
 
  • #45
Yeah, with

\langle A\psi,\psi\rangle = \int (A_x\psi(x))^* \psi(x) dx.

But I think you should read Sakurai or some other advanced QM book, because it sounds like you have read some introductory books that take an easier approach, and it is not so general.
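
A numeric spot check of this exchange (a sketch with a random matrix and vector; comote's A^*, the adjoint, is written here as the conjugate transpose): for any A, \langle A^*\psi,\psi\rangle and \langle\psi,A\psi\rangle agree, which is why (\langle\psi|A)|\psi\rangle = \langle\psi|(A|\psi\rangle).

import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))   # not self-adjoint
psi = rng.normal(size=n) + 1j * rng.normal(size=n)

# np.vdot(a, b) is conjugate-linear in its first argument: a^dagger b
lhs = np.vdot(A.conj().T @ psi, psi)   # <A^* psi, psi>
rhs = np.vdot(psi, A @ psi)            # <psi, A psi>
assert np.isclose(lhs, rhs)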
 
  • #46
OK, after some reading, I saw I was wrong in saying that the Bra-Ket notation has no meaning for operators that are not self-adjoint.

I agree, and at the beginning I actually claimed that I wasn't using the Dirac notation in the standard way; I only use it for purposes that I find useful, and as a way of understanding those purposes better. Thinking about bras and kets as column and row vectors has its usefulness. It is not 100% correct, in the sense that I am not using the notation the way Dirac used it. Thank you for explaining the intended purpose of the Dirac notation to me.
 
  • #47
No problem. You are right that using the notation to its full extent can be a pain, so having a faster and easier way is fine. I just think it is important to know where you are "cheating", because many people do it wrong without knowing it, and that is because introductory books do it; using the notation in its full power can be a bit confusing, so that's probably the best way to teach it.
 
  • #49
comote said:
I must confess I have never seen this process before. Any reference to where I might see this discussed more in-depth?

I'm afraid this is all my own mad invention in order to understand bras and kets, if it hangs together.

The metric with tensor entries isn't necessary, but it is possibly useful for understanding bras and kets in a finite-dimensional Hilbert space in the language made familiar through tensor calculus, as used in relativity.

I should make a correction! I should have written rho with two lower indices.

With complex numbers represented as column vectors,
c = \left( \begin{array}{c} a \\ b \end{array} \right)

the conjugate of c is taken with
\rho = \left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right) ,

so that
c_{\beta} = \rho_{\beta \alpha} c^\alpha

In three dimensions, the lowering metric that turns kets into bras, when the basis is orthonormal, would be
g_{ij} = \left( \begin{array}{ccc} \rho & 0 & 0 \\ 0 & \rho & 0 \\ 0 & 0 & \rho \end{array} \right)

So, for a ket (vector)
c \left| \phi \right>

the bra (dual vector) is
\left< \psi \right| c^* = g_{ij} \left( c \left| \phi \right> \right)

which is a verbose way to say
\hat{V} = gV
in a basis-independent way.
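
If it helps, here is a minimal numpy sketch of this scheme (my own rendering of it, under the stated assumption that each complex coefficient a + ib is stored as the real pair (a, b)):

import numpy as np

rho = np.array([[1.0, 0.0],
                [0.0, -1.0]])   # conjugates the pair (a, b) representing a + ib

def to_pair(z):
    # represent a + ib as the real column vector (a, b)
    return np.array([z.real, z.imag])

z = 3.0 + 2.0j
assert np.allclose(rho @ to_pair(z), to_pair(z.conjugate()))

# g = diag(rho, rho, rho) conjugates all three coefficients of a ket at once
g = np.kron(np.eye(3), rho)
ket = np.concatenate([to_pair(1 + 2j), to_pair(-1j), to_pair(0.5 + 0j)])
assert np.allclose(g @ ket,
                   np.concatenate([to_pair(1 - 2j), to_pair(1j), to_pair(0.5 + 0j)]))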
 
  • #50
mrandersdk said:
I still think there is a big difference. Are you thinking of the functions as something in, e.g., L^2(R^3), which you can turn into a number by taking the inner product with a delta function? Because if you do, then the Hilbert space L^2(R^3) is not general enough. The Hilbert space is bigger, and when you choose a basis you get a wave function in, e.g., L^2(R^3) if you choose the position basis. But choosing different bases, you will get other "wave functions". Maybe it is because you are only used to working in the position basis, and have not yet seen Bra-Ket notation at its full scale?
Actually, L^2 is too big, since it includes functions which are square integrable but not continuous. Wave functions must be continuous.

Pete
 
  • #51
comote said:
\psi represents a vector in Hilbert space, just like |\psi\rangle...
Caution should be exercised with the usage of that kind of notation. The notation \psi always represents either an operator or a scalar. It's never used to represent a vector. A quantity is denoted as a vector when it appears in ket notation. Otherwise it'd be like writing A as a vector without placing an arrow over it or making it boldface to denote that it's a vector.

Pete
 
  • #52
comote said:
I admit I have not seen the Bra-Ket notation in its full glory.

When you say L^2 is not big enough, are you referring to the fact that the eigenvectors of continuous observables don't live in L^2, or are you referring to something else?
In that case the Hilbert space would not be too small; it would be too big.

Pete
 
  • #53
pmb_phy said:
Actually, L^2 is too big, since it includes functions which are square integrable but not continuous. Wave functions must be continuous.

Pete
Unless, of course, they aren't. :-p The usage in this thread -- 'element of a Hilbert space' -- is the usage I am familiar with. It's also the definition seen on Wikipedia.

Now, a (Schwartz) test function is required to be continuous (among other things). Maybe you are thinking of that?
 
  • #54
pmb_phy said:
I guess what I'm saying (rather, the teachers and texts from which I learned QM) is that, while it is true that every quantum state representable by a ket is an element of a Hilbert space, it is not true that every element of a Hilbert space corresponds to a quantum state representable by a ket.
Hold on a second. I question whether or not that is true. An element of the Hilbert space must be a ket and therefore must represent some quantum state, right?

Pete
 
  • #55
pmb_phy said:
But specifically I have the following statement in mind. From Quantum Mechanics by Cohen-Tannoudji, Diu and Laloe, page 94-95
It sounds like they've defined wavefunctions to be elements of the Hilbert space.

(and then proceeded to argue a certain class of test functions can serve as adequate approximations)
 
  • #56
Hurkyl said:
It sounds like they've defined wavefunctions to be elements of the Hilbert space.

(and then proceeded to argue a certain class of test functions can serve as adequate approximations)

Prior to that paragraph the authors wrote
Thus we are led to studying the set of square-integrable functions. These are the functions for which the integral (A-1) converges. This set is called L^2 by mathematicians and it has the structure of a Hilbert space.

Pete
 
  • #57
To get back to Phrak's question.

I don't think that your approach is very useful. Given a finite-dimensional vector space and a basis for it, let's say \{ v^1,v^2,...,v^n \}, there is a canonical basis for its dual defined by

w_j(v^i) = \delta_j^i
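
In matrix terms (a hypothetical numeric sketch): if the v^i are taken as the columns of an invertible matrix, the dual basis functionals w_j are the rows of its inverse.

import numpy as np

# Basis vectors v^i as the columns of V (hypothetical values)
V = np.array([[1.0, 1.0],
              [0.0, 2.0]])

# Dual basis w_j as the rows of the inverse, so that w_j(v^i) = delta_j^i
W = np.linalg.inv(V)
assert np.allclose(W @ V, np.eye(2))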

You write:

\left< \psi \right| c^* = g_{ij} \left( c \left| \phi \right> \right)

But if you represent c as a 2 x 1 column vector, and g_{ij} is a 3 x 3 matrix, how is the product defined, and how do they act on the ket? For it to make sense you need to represent the ket as an n x 1 vector and have g_{ij} transform it into a 1 x n vector with all of its entries complex conjugated.

My problem is also this: why would this technique be smart, even if it worked?
 
  • #58
mrandersdk said:
To get back to Phrak's question.

I don't mind the side discussions at all. In fact they're probably better fodder anyway! :smile:

I don't think that your approach is very useful. Given a finite-dimensional vector space and a basis for it, let's say \{ v^1,v^2,...,v^n \}, there is a canonical basis for its dual defined by

w_j(v^i) = \delta_j^i

You write:

\left< \psi \right| c^* = g_{ij} \left( c \left| \phi \right> \right)

But I should have written :-p
\left< \psi \right| c^* = g_{ij} \left( c \left| \psi \right> \right)

But if you represent c as a 2 x 1 column vector, and g_{ij} is a 3 x 3 matrix, how is the product defined, and how do they act on the ket? For it to make sense you need to represent the ket as an n x 1 vector and have g_{ij} transform it into a 1 x n vector with all of its entries complex conjugated.

c is an N x 1 column vector whose entries are 2 x 1 column vectors with real entries.
g_{ij} is an N x N matrix having 2 x 2 matrix entries. In an orthonormal basis the diagonal elements are \rho, and all the other elements are zero.

The 2 x 1 column vectors serve as complex numbers, with entry (1,1) the real part and entry (2,1) the imaginary part. It's simply a trick to associate each g_{ij} and g^{ij} with an implicit conjugation operation, and each mixed metric with the Kronecker delta.

You can test that this holds in the orthonormal basis:
g_{ij}g^{jk} = \delta_i^k

The equation you've quoted is actually a mess of mixed notation that was intended to serve as a transition to
\hat{V} = gV, where \hat{V} is the adjoint of V.

My problem is also this: why would this technique be smart, even if it worked?

Can't really tell until you get there what the view is like.
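
For what it's worth, that test passes in a small numpy sketch (the dimension N is a hypothetical choice); since \rho squares to the identity, lowering and then raising an index gives the Kronecker delta:

import numpy as np

rho = np.array([[1.0, 0.0],
                [0.0, -1.0]])
N = 4
g = np.kron(np.eye(N), rho)   # N x N metric with 2 x 2 entries, as a 2N x 2N array

# g_{ij} g^{jk} = delta_i^k: both metrics are the same matrix here, and g @ g = I
assert np.allclose(g @ g, np.eye(2 * N))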
 
  • #60
It only becomes a column vector once some basis is chosen. Let me give you a simple example, because this has nothing to do with the ket space; this holds for all vector spaces.

Let's say we are working in R^3; then we have a vector, e.g. v = (1,1,1). WRONG!

This is completely meaningless; what does this array mean? Nothing. Only when I give you some basis can you make sense of it. Many people don't see this because we are taught everything in terms of some canonical basis in the first place.

A vector in a vector space is an element satisfying certain axioms. Choosing a basis makes the space isomorphic to R^n, and then, implicitly choosing the standard basis for R^n, we can do calculations with arrays like (1,1,1). But this element can mean a lot of things; it could be one apple, one orange and one banana (if someone could give this space a proper vector space structure).

So if "column vector" refers to some array, it only makes sense given a basis.

It is like when people say that a matrix is just a linear transformation. This isn't actually the complete truth: the linear transformations between two vector spaces are in one-to-one correspondence with the matrices (of appropriate size) only once bases are chosen. Thus one should be very careful with statements like these. I know a lot of authors use the same words for matrices and linear transformations, and that is fine as long as it is made clear, or the author knows what he means.
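
A small numpy illustration of this point (the basis B is a hypothetical choice): the same abstract vector gets different coordinate arrays in different bases.

import numpy as np

# Columns of B form a non-standard basis of R^3, written in standard coordinates
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

v_std = np.array([1.0, 1.0, 1.0])   # coordinates in the standard basis
v_B = np.linalg.solve(B, v_std)     # coordinates of the SAME vector in basis B

assert np.allclose(B @ v_B, v_std)  # reassembling from basis B recovers the vector
print(v_B)                          # [1. 0. 1.], not (1, 1, 1)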
 
