Understanding Bras and Kets as Vectors and Tensors

  • #1
Phrak
Is there a way bras and kets can be understood in terms of vectors and tensors and coordinate bases?

I'm fairly sure that if a ket is thought of as a vector with an upper index, then its bra is a vector with a lower index, but getting the rest of it all to look like tensors is rather mysterious.
 
  • #2
Phrak said:
Is there a way bras and kets can be understood in terms of vectors and tensors and coordinate bases?

I'm fairly sure that if a ket is thought of as a vector with an upper index, then its bra is a vector with a lower index, but getting the rest of it all to look like tensors is rather mysterious.

Kets (states) are just vectors in the Hilbert space. And after all, a Hilbert space is just a vector space equipped with an Hermitian inner product (and some extra arcane stuff about "completion" in the inf-dim case).

For a finite-dimensional Hilbert space H, it's all rather easy. The bras actually live in the "dual" space [itex]H^*[/itex] (i.e., the space of linear functionals over H, meaning the space of linear mappings from H to the complex numbers). That's really what this upper/lower index business is all about. For finite-dimensional spaces, the dual [itex]H^*[/itex] is actually isomorphic to the primal space H, so people tend to forget about the distinction. But in infinite dimensions, the primal and dual spaces are no longer isomorphic in general, so there is no canonical foundation for raising and lowering indices in general. People also tend to be more pedantic, and talk about "self-adjoint" or "symmetric" operators and such-like. This is discussed in textbooks on "Functional Analysis".
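
A minimal finite-dimensional sketch of this in Python/NumPy (the 2-component kets here are arbitrary examples, not anything from the posts above):

[code]
import numpy as np

# A ket in a 2-dimensional Hilbert space: a column vector.
ket = np.array([[1 + 2j],
                [3 - 1j]])

# Its bra lives in the dual space H*: the conjugate transpose, a row vector.
bra = ket.conj().T

# The bra is a linear functional H -> C: acting on a ket gives a complex number.
other_ket = np.array([[0 + 1j],
                      [2 + 0j]])
print((bra @ other_ket).item())  # <ket|other_ket>, a single complex number
[/code]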
 
  • #3
Phrak said:
Is there a way bras and kets can be understood in terms of vectors and tensors and coordinate bases?
Yes. Absolutely.

Kets can be thought of as elements of a vector space. However, I'm not sure the term "coordinate basis" can be applied here. There is definitely a basis, though. The basis "vectors" are eigenvectors of Hermitian operators. Tensor products can be defined in terms of the tensor products of the "vectors" (kets).
I'm fairly sure that if a ket is thought of as a vector with an upper index, then its bra is a vector with a lower index, ...
That is correct.

Pete
 
  • #4
strangerep said:
Kets (states) are just vectors in the Hilbert space. And after all, a Hilbert space is just a vector space equipped with an Hermitian inner product (and some extra arcane stuff about "completion" in the inf-dim case).

For a finite-dimensional Hilbert space H, it's all rather easy. The bras actually live in the "dual" space [itex]H^*[/itex] (i.e., the space of linear functionals over H, meaning the space of linear mappings from H to the complex numbers). That's really what this upper/lower index business is all about. For finite-dimensional spaces, the dual [itex]H^*[/itex] is actually isomorphic to the primal space H, so people tend to forget about the distinction. But in infinite dimensions, the primal and dual spaces are no longer isomorphic in general, so there is no canonical foundation for raising and lowering indices in general. People also tend to be more pedantic, and talk about "self-adjoint" or "symmetric" operators and such-like. This is discussed in textbooks on "Functional Analysis".

Thank you, strangerep. I don't understand all this language, I'm afraid; I'm just now learning bras and kets. But could you tell me how the Hermitian inner product is accomplished in finite dimensions? With vectors with real entries, which I'm familiar with, the inner product is accomplished with the metric tensor. Is there a similar operator in Hilbert space?
 
  • #5
pmb_phy said:
...Kets can be thought of as elements of a vector space. However, I'm not sure the term "coordinate basis" can be applied here. There is definitely a basis, though. The basis "vectors" are eigenvectors of Hermitian operators. Tensor products can be defined in terms of the tensor products of the "vectors" (kets).
Pete

I take it that kets are always unit vectors. So that if |r> is a ket, b is a complex number and |s> = |b r>, then |s> is not properly a ket?
 
  • #6
Phrak said:
In using vectors with real entries, that I'm familiar with, the inner product is accomplished with the metric tensor. Is there a similar operator in Hilbert space?
Imagine two vectors u, v with complex entries. Complex-conjugate the entries of one vector, and then form the familiar inner product, e.g., [itex]u^*_1v_1 + u^*_2v_2 + ...[/itex]. Mostly, you don't need to worry about the metric here because it's just [itex]\delta_{ij}[/itex].
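
In NumPy this inner product is exactly np.vdot, which conjugates its first argument (a small sketch with arbitrary example vectors):

[code]
import numpy as np

u = np.array([1 + 1j, 2 - 1j])
v = np.array([0 + 2j, 1 + 0j])

# np.vdot conjugates its first argument: u*_1 v_1 + u*_2 v_2
print(np.vdot(u, v))
print(np.sum(u.conj() * v))  # the same sum written out; the metric is just delta_ij
[/code]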
 
  • #7
Phrak said:
I take it that kets are always unit vectors. So that if |r> is a ket, b is a complex number and |s> = |b r>, then |s> is not properly a ket?
You can always normalize them to unity, so a norm different from 1 doesn't imply that it isn't a ket.

By the way, the way to write |s> is |s> = b|r>. Did you see it written some other way? I have a vague recollection of such a notation, but I can't recall where. Thanks.

Pete
 
  • #8
As already mentioned:

I'm fairly sure that if a ket is thought of as a vector with an upper index, then its bra is a vector with a lower index, but getting the rest of it all to look like tensors is rather mysterious.

is correct, but could you elaborate on what you mean by:

but getting the rest of it all to look like tensors is rather mysterious.
 
  • #9
Phrak said:
I'm just now learning bras and kets.
In that case you may find this recent thread about bras and kets useful. Read #17 first, because it explains a silly mistake I made in #2.

Phrak said:
But could you tell me how the Hermitian inner product is accomplished in finite dimensions? With vectors with real entries, which I'm familiar with, the inner product is accomplished with the metric tensor. Is there a similar operator in Hilbert space?
The question is a little bit strange. An inner product on a real vector space V is just a function from [itex]V\times V[/itex] into [itex]\mathbb R[/itex] that satisfies a few conditions. It doesn't need to be "accomplished" by something else. You're probably thinking about how you can use matrix multiplication to define an inner product when the vector space is [itex]\mathbb R^n[/itex]: [itex]\left<x,y\right>=x^TSy[/itex] where S is a symmetric matrix. But even in this case, the inner product isn't defined using the metric tensor. It is the metric tensor. S is just a matrix. The inner product/metric tensor is the map [itex](x,y)\mapsto \left<x,y\right>.[/itex]
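
A small numerical sketch of that map, with S an arbitrary symmetric positive-definite example:

[code]
import numpy as np

# An inner product on R^2: <x, y> = x^T S y, with S symmetric positive definite.
# S is just a matrix; the inner product/metric tensor is the map (x, y) -> x^T S y.
S = np.array([[2.0, 1.0],
              [1.0, 3.0]])

def inner(x, y):
    return x @ S @ y

x = np.array([1.0, 0.0])
y = np.array([0.5, 2.0])
print(inner(x, y), inner(y, x))  # equal, since S is symmetric
[/code]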
 
  • #10
pmb_phy said:
Kets can be thought of as elements of a vector space. However, I'm not sure the term "coordinate basis" can be applied here. There is definitely a basis, though. The basis "vectors" are eigenvectors of Hermitian operators. Tensor products can be defined in terms of the tensor products of the "vectors" (kets).
That is correct.

Pete

That's a point. Without a coordinate system, it makes no sense to talk about a coordinate basis. Knowing whether you're using coordinate bases, or vielbeins, would be important when applying operators like
[tex] \frac{ \partial x^i }{\partial x^{j'}}[/tex]

Can tensors be defined as objects that transform as products of bras and kets?

By the way, the way to write |s> is |s> = b|r>. Did you see it written some other way?

My mistake. Apparently, the proper way to write bras and kets is to place only labels within the delimiters, as I've recently learned.
 
  • #11
In finite-dimensional space I think of it this way.

I have a vector [tex] x [/tex] in Hilbert space. The ket [tex] |x\rangle [/tex] is its representation as a column vector and the bra [tex] \langle x | [/tex] is its representation as a row vector. Now [tex]\langle x|y\rangle[/tex] will give you a scalar and [tex]|x\rangle\langle y|[/tex] will give you a matrix.

I know that it is not exactly the same way it is normally used, but this retains the spirit of the notation while, at least for me, getting rid of some of the confusion.
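
That picture is easy to check numerically (a NumPy sketch; the two vectors are arbitrary examples):

[code]
import numpy as np

x = np.array([1 + 0j, 2 + 1j])
y = np.array([0 - 1j, 1 + 1j])

# <x|y>: (conjugated) row vector times column vector -> a scalar
print(np.vdot(x, y))

# |x><y|: column vector times (conjugated) row vector -> a matrix
print(np.outer(x, y.conj()))
[/code]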
 
  • #12
I don't think that is a completely good way to look at it, because the ket is more general: it doesn't assume a particular basis. If you choose a basis, though, you can use that view in the finite-dimensional case.

But actually the column vector is more like a wavefunction (in a discrete, finite version).
 
  • #13
You have a point; usually, though, if what I am thinking about is independent of basis, I shy away from bra-ket notation.
 
  • #14
OK, but the point of a state ket is precisely to make the state independent of a basis. You can then write it in a basis, e.g. the position basis, like this:

[tex] |\Psi> = \hat{I}|\Psi> = \int |x><x|\Psi> dx = \int <x|\Psi> |x> dx [/tex]

[tex] \Psi(x) = <x|\Psi> [/tex] is what is called the wave function. So when you write a column vector, you are only writing down [tex] \Psi(x) = <x|\Psi> [/tex], because the basis [tex] |x> [/tex] is assumed. So I think that [tex] \Psi(x) = <x|\Psi> [/tex] is actually more like a column vector, though one with a continuous index, namely x.

In the finite discrete case we often use some [tex] c_i [/tex] and write them in a column, but they play exactly the same role as [tex] \Psi(x) [/tex].

So the point is that the ket notation is there to avoid referring to a basis before it's necessary (when you need to do explicit calculations, it is often more convenient to choose some basis, e.g. the position, momentum, or energy basis).
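
The discrete, finite-dimensional analogue of this resolution of the identity is easy to write out (a NumPy sketch; the basis and the state are arbitrary examples):

[code]
import numpy as np

# An orthonormal basis of C^3 (the standard basis, for simplicity)
basis = [np.eye(3)[:, i] for i in range(3)]

psi = np.array([1 + 1j, 0 + 2j, 3 + 0j])

# c_i = <i|psi>: the components of the ket in this basis
c = [np.vdot(e, psi) for e in basis]

# |psi> = sum_i <i|psi> |i>, the discrete version of the integral above
reconstructed = sum(ci * e for ci, e in zip(c, basis))
print(np.allclose(reconstructed, psi))  # True
[/code]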
 
  • #15
Not sure what your point is with that equality. And I guess you mean

[tex]\psi(x) = \int \delta(x-y)\psi(y)dy[/tex]
 
  • #16
Or

[tex]\psi(x) = \int\delta_x(y)\psi(y)dy \\
= \langle\delta_x,\psi\rangle [/tex]

We are both getting at the same point, though. We have to remember what space we are working in. I would rather not even think about using bra-ket notation except when using proper vectors.

Yeah, I messed it up, that's why I deleted it and put this one up.
 
  • #17
[tex]\psi(x) = \int\delta_x(y)\psi(y)dy \\ = \langle\delta_x,\psi\rangle [/tex]

is nothing like

[tex] |\Psi> = \hat{I}|\Psi> = \int |x><x|\Psi> dx = \int <x|\Psi>|x> dx [/tex]

This is using a basis. You use a delta function to write something that doesn't say anything.
 
  • #18
[tex]\psi[/tex] represents a vector in Hilbert space, just like [tex]|\psi\rangle[/tex], and [tex]\psi(x)[/tex] represents a number, just like [tex]\langle x|\psi\rangle[/tex].

I meant to refer to its value at a point in Euclidean space before; sorry for the confusion.
 
  • #19
strangerep said:
Imagine two vectors u, v with complex entries. Complex-conjugate the entries of one vector, and then form the familiar inner product, e.g., [itex]u^*_1v_1 + u^*_2v_2 + ...[/itex]. Mostly, you don't need to worry about the metric here because it's just [itex]\delta_{ij}[/itex].

But this is the interesting part. In transforming a vector ket to a dual vector bra, as if you were lowering an index, the complex conjugate is taken of the vector coefficients. This can be accomplished tensorially if complex numbers are represented as column vectors,
[tex] c = \left( \begin{array}{c} a \\ b \end{array} \right)[/tex]
The conjugate of c is taken with
[tex] \rho = \left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right)[/tex]
so that
[tex]c_{\beta} = \rho_{\beta\alpha}\, c^\alpha .[/tex]

In three dimensions, the lowering metric would be the block-diagonal matrix
[tex] g_{ij} = \left( \begin{array}{ccc} \rho & 0 & 0 \\ 0 & \rho & 0 \\ 0 & 0 & \rho \end{array} \right)[/tex]

The idea is that whenever an index is lowered or raised, the complex conjugate is applied. But does this scheme hang together for, say, a unitary operator, where a unitary operator acting on a ket is a type (1,1) tensor (one upper and one lower index) that takes a ket and returns a ket? I'm not so sure it hangs together consistently.
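
A quick numerical check of the conjugation part of that scheme for a single complex number (a sketch only; whether the scheme extends consistently to unitary operators is exactly the question above):

[code]
import numpy as np

# Represent c = a + ib by the real 2-vector (a, b).
def to_vec(c):
    return np.array([c.real, c.imag])

# The proposed conjugation matrix rho = diag(1, -1)
rho = np.array([[1.0, 0.0],
                [0.0, -1.0]])

c = 2 + 3j
print(rho @ to_vec(c))        # [2, -3]: the components of the conjugate
print(to_vec(c.conjugate()))  # the same
[/code]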
 
  • #20
I still think there is a big difference. Are you thinking of the functions as something in, e.g., [tex]L^2(R^3)[/tex], which you can then turn into a number by taking the inner product with the delta function? Because if you do, then the Hilbert space [tex]L^2(R^3)[/tex] is not general enough. The Hilbert space is bigger, and when you choose a basis you get a wave function in, e.g., [tex]L^2(R^3)[/tex] if you choose the position basis. But choosing different bases, you will get other "wave functions". Maybe it is because you are only used to working in the position basis, and have not yet seen bra-ket notation at its full scale?
 
  • #21
I admit I have not seen the bra-ket notation in its full glory.

When you say [tex]L^2[/tex] is not big enough, are you referring to the fact that the eigenvectors of continuous observables don't live in [tex]L^2[/tex], or to something else?

In this particular example, yes, I am thinking about functions in [tex]L^2[/tex]. I prefer to think separately about inner products and distributions, which is probably why I never really liked the bra-ket notation in standard QM.

However, once you get into the QM of interacting systems, I find the notation much more convenient, and so I am trying to learn as much as I can about it.

btw thanks for clearing up my misunderstandings.
 
  • #22
I'm actually referring to the fact that you could project onto other bases, like momentum. Like this:

[tex] |\Psi> = \int |p><p|\Psi> dp = \int <p|\Psi> |p> dp [/tex]

so now I have a function of the momentum, [tex] \Psi(p) = <p|\Psi> [/tex].

I could also project onto the energy basis:

[tex] |\Psi> = \sum_n |E_n><E_n|\Psi> = \sum_n <E_n|\Psi> |E_n> [/tex]

Now I have a function [tex] \Psi(n) = <E_n|\Psi> [/tex], which is actually a function on the natural numbers.

Maybe you know that the wave function as a function of p and as a function of x are related by a Fourier transform. This is seen by:

[tex] \int <p|\Psi> |p> dp = \int \int <p|\Psi> <x|p>|x> dp dx = \int \int \Psi(p)<x|p> dp |x> dx [/tex]

It can be shown that <x|p> is actually of the form [tex]e^{iAxp}[/tex], where A is a constant, so you get

[tex] \int <p|\Psi> |p> dp = \int \int \Psi(p) e^{iAxp} dp |x> dx [/tex] so you see that

[tex]\Psi(x) = \int \Psi(p) e^{iAxp} dp[/tex] which is exactly the Fourier transform, up to some constants.

So you see, the ket space is some general Hilbert space; projecting onto different bases gives you the wavefunction as a function of position, as you may be most familiar with, but you can project onto all kinds of different bases and change between them. This is what makes ket space so powerful, because it is very general. Of course, to do calculations you want to choose some basis, and here we often choose position, when we are learning QM at least, because it is the easiest to understand.
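
A discrete sketch of that change of basis, using NumPy's FFT as a finite stand-in for the x-to-p projection (normalization constants are glossed over, as above):

[code]
import numpy as np

# Sample a position-space wavefunction (a Gaussian) on a grid
N = 256
x = np.linspace(-10, 10, N)
psi_x = np.exp(-x**2 / 2)

# The momentum-space wavefunction is, up to constants, its Fourier transform
psi_p = np.fft.fftshift(np.fft.fft(psi_x))

# Same information in either basis: transforming back recovers psi_x
psi_back = np.fft.ifft(np.fft.ifftshift(psi_p))
print(np.allclose(psi_back.real, psi_x))  # True (imaginary part ~ 0)
[/code]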
 
  • #23
Just because you have different basis representations doesn't make the Hilbert space any bigger. It's the same space; you are just interested in a different probability space on it.

So what, there are different representations of a vector in a Hilbert space. If that is the big advantage of bra-ket notation, then I am unconvinced of its usefulness. I'll take the spectral theorem instead.

Getting back to the first thing I said: even in basis-independent notation, what I said about column/row vectors has meaning. If we are given a unit vector [tex]\psi[/tex], then we can understand it as being an element of some orthonormal basis, and then saying [tex]|\psi\rangle[/tex] is the representation as a column vector makes sense.
 
  • #24
You are right, it is not bigger; that was a mistake. I'm not sure what you mean when you say you take the spectral theorem instead; the bra-ket formalism exactly uses the spectral theorem.

The point is that the ket space is more general, because the other possibilities are different spaces, but the ket space incorporates them all in one: these spaces are the coefficients from different projections onto different bases, so proving things in this space proves things about all the spaces at once.

So you have an abstract Hilbert space; then by making a projection onto some basis, you get coefficients, and these coefficients constitute different spaces of functions, on R^3 or the natural numbers, and maybe others.

Because we know the bases, the coefficients (the wave functions) give us the same information, but how to change from, e.g., the position to the energy wave function is best described in this picture, and proving things in ket space can be a lot easier, and you get your result for all the other spaces at once.

You can't make sense of a column vector without referring to some basis.
 
  • #25
You refer abstractly to any basis that contains [tex]\frac{\psi}{||\psi||}[/tex] as an element.

I understand that bra-ket notation is using the spectral theorem; by saying I'll take the spectral theorem, I am saying I will take the spectral theorem without the added bra-ket notation.

Ket space is more general than what, exactly?
 
  • #26
But [tex]\psi[/tex] doesn't make any sense unless it refers to some basis. This is exactly my point: you are implicitly referring to the position basis when you write [tex]\psi[/tex], and then [tex]\langle\delta_x,\psi\rangle = \psi(x)[/tex]. That is why [tex]\psi[/tex] is not as general as [tex]|\psi>[/tex]. [tex]\psi[/tex] is a function, whereas [tex]|\psi>[/tex] is a vector in a more general space. A projection in that space then gives you the function [tex]\psi[/tex].

Have you read Sakurai's Modern Quantum Mechanics? There are some excellent chapters describing this.
 
  • #27
Another instructive example is maybe to look at the momentum operator. You probably think of it like this:

[tex] \hat{p} = -i \hbar \nabla [/tex]

but more generally it is defined by its action on its eigenstates:

[tex] \hat{p}|p> = p|p> [/tex]

From this equation one can show

[tex] \hat{p}|\psi> = \int \hat{p}<x|\psi> |x> dx = \int -i\hbar\frac{\partial}{\partial x}<x|\psi> |x> dx [/tex]

Now, projecting onto the position basis (and using [tex]<x'|x> = \delta(x'-x)[/tex]), we get

[tex] <x'|\hat{p}|\psi> = \int -i\hbar \frac{\partial}{\partial x}<x|\psi> <x'|x> dx = -i\hbar \frac{\partial}{\partial x'}<x'|\psi> = -i\hbar \frac{\partial}{\partial x'}\psi(x') [/tex]

So you see, working in the position basis, we get the usual momentum operator, but in another basis it could look completely different. This is why I say it is more general: the usual case is a special case of ket space, where you are only working with the coefficients, and it is implicit that these refer to the position basis.
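
The position-basis formula can be tested numerically with a finite-difference derivative (a sketch, with hbar set to 1 and a plane wave as the test state):

[code]
import numpy as np

hbar = 1.0
N = 1000
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
k = 3.0

# A plane wave: an (approximate, on a grid) eigenstate of the momentum operator
psi = np.exp(1j * k * x)

# p psi(x) = -i hbar d/dx psi(x), via a centered finite difference
dx = x[1] - x[0]
p_psi = -1j * hbar * np.gradient(psi, dx)

# Away from the grid edges this reproduces the eigenvalue hbar*k
print(np.allclose(p_psi[1:-1], hbar * k * psi[1:-1], rtol=1e-3))  # True
[/code]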
 
  • #28
Ahh, OK, I see what you are saying. I just disagree. The ket vectors are not a different space from the standard Hilbert space; there are just different REPRESENTATIONS of the same space. When I make the statement "let [tex]\psi\in\mathcal{H}[/tex]", I have not yet made any statement about what representation I am using; I have just said I am using an element of the Hilbert space. I can define the projection operator [tex]|\psi\rangle\langle\psi |[/tex] using the understanding that I have used before. Now I can pick any self-adjoint operator and express my vector using the spectral decomposition provided by that operator.
 
  • #29
The momentum operator does not have eigenstates per se; the solutions to the eigenvalue equation don't belong to the Hilbert space. Is this the "rigged Hilbert space" that I have heard about? If so, then I have read that rigged Hilbert space does not accomplish anything that can't be done with the spectral theorem.
 
  • #30
You are right, there are some problems in QM; that's why there are some postulates of QM that try to fix these problems.

It sounds to me that you just want to use the notation everybody else uses, or maybe you are somewhere in between. It is not good notation to write, e.g.,

[tex]\psi = c_1|1>+c_2|2>[/tex]

this should be

[tex] |\psi> = c_1|1>+c_2|2>[/tex]

but things like these are often written because people misuse the notation. If you want to take a general element without referring to a basis, the most agreed-on convention is to use the ket notation, so you don't get to write something like

[tex]|\psi><\psi| \psi [/tex]

this doesn't really make sense even though it is sometimes seen even in textbooks.
 
  • #31
OK, but it shouldn't make sense to you. I would say it is abuse of notation; of course this is the only way to understand it, but you shouldn't write it.
 
  • #32
I would never write that; I would write
[tex]\psi = c_1\psi_1+c_2\psi_2 [/tex]
or
[tex]|\psi\rangle = c_1|\psi_1\rangle+c_2|\psi_2\rangle [/tex]
or
[tex]\langle\psi| = c_1\langle\psi_1|+c_2\langle\psi_2| [/tex]
depending on context.

These are not problems with QM; these are problems that arise BECAUSE of the dependency on this notation and the use of statements like "eigenstates of the momentum operator". If you use spectral theory, this is all well understood.

[tex]|\psi\rangle\langle\psi|\psi[/tex] surprisingly does make sense to me, although it seems rather redundant. I understand it as
[tex]\langle\psi,\psi\rangle\psi[/tex].
 
  • #33
It is an understanding: once you have defined the projection operator [tex]P=|\psi\rangle\langle\psi|[/tex], which is self-adjoint, it acts the same way on a bra as on a ket, and so the statement makes sense. What would not make sense is to write [tex]|\psi\rangle\langle\phi|x[/tex]; the operator [tex]|\psi\rangle\langle\phi|[/tex] is not self-adjoint, and so we have to specify its action differently on kets and bras. The space and its dual.

It is in discussing operators like this that I find the bra-ket notation really useful.
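
Numerically (a NumPy sketch with arbitrary normalized example vectors):

[code]
import numpy as np

psi = np.array([1 + 1j, 2 + 0j])
psi = psi / np.linalg.norm(psi)
phi = np.array([0 + 1j, 1 - 1j])
phi = phi / np.linalg.norm(phi)

# P = |psi><psi| is self-adjoint and idempotent
P = np.outer(psi, psi.conj())
print(np.allclose(P, P.conj().T))  # True: self-adjoint
print(np.allclose(P @ P, P))       # True: a projection

# |psi><phi| is in general NOT self-adjoint
Q = np.outer(psi, phi.conj())
print(np.allclose(Q, Q.conj().T))  # False
[/code]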
 
  • #34
My point is that

[tex]|\psi><\psi|\psi[/tex]

doesn't make sense; an object like [tex]|\psi><\psi|[/tex] works on kets or bras, not on a bare [tex]\psi[/tex].
 
  • #35
Of course you can make your own notation, and I grant you that it is consistent, but that doesn't make it right.
 
