- #1

Thread starter: Phrak

Is there a way bras and kets can be understood in terms of vectors and tensors and coordinate bases?

I'm fairly sure that if a ket is thought of as a vector with an upper index, then its bra is a vector with a lower index, but getting the rest of it all to look like tensors is rather mysterious.

- #2

strangerep

Science Advisor

Phrak said:
I'm fairly sure that if a ket is thought of as a vector with an upper index, then its bra is a vector with a lower index, but getting the rest of it all to look like tensors is rather mysterious.

Kets (states) are just vectors in the Hilbert space. And after all, a Hilbert space is just a vector space equipped with an Hermitian inner product (and some extra arcane stuff about "completion" in the inf-dim case).

For a finite-dimensional Hilbert space H, it's all rather easy. The bras actually live in the "dual" space [itex]H^*[/itex] (i.e., the space of linear functionals over H, meaning the space of linear mappings from H to the complex numbers). That's really what this upper/lower index business is all about. For finite-dimensional spaces, the dual [itex]H^*[/itex] is actually isomorphic to the primal space H, so people tend to forget about the distinction. But in infinite dimensions, the primal and dual spaces are no longer isomorphic in general, so there is no canonical foundation for raising and lowering indices in general. People also tend to be more pedantic, and talk about "self-adjoint" or "symmetric" operators and such-like. This is discussed in textbooks on "Functional Analysis".
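The bra-as-linear-functional point above is easy to see concretely in finite dimensions. A minimal numpy sketch, with arbitrary example numbers (not from the thread): the bra of a ket is its conjugate transpose, and it really does act as a linear map from H to the complex numbers.

```python
import numpy as np

# A ket is just a vector in C^n (here n = 2); the entries are arbitrary examples.
ket = np.array([1 + 2j, 3 - 1j])

# Its bra is the corresponding element of the dual space H*: the linear
# functional v -> <ket|v>, concretely the conjugate-transpose (row) vector.
bra = ket.conj().T

# Acting with the bra on a ket returns a complex number, i.e. the bra
# really is a linear map from H to the complex numbers.
v = np.array([1j, 2.0])
result = bra @ v  # <ket|v>

# Linearity check: <ket|(a u + b w)> = a <ket|u> + b <ket|w>
a, b = 2.0, 1j
u, w = np.array([1.0, 0]), np.array([0, 1.0])
lhs = bra @ (a * u + b * w)
rhs = a * (bra @ u) + b * (bra @ w)
```

In finite dimensions this conjugate-transpose map is exactly the isomorphism between H and its dual that the post mentions.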

- #3

Phrak said:
Is there a way bras and kets can be understood in terms of vectors and tensors and coordinate bases?

Yes. Absolutely.

Kets can be thought of as elements of a vector space. However, I'm not sure if the term "coordinate basis" can be applied here. There is definitely a basis, though. The basis "vectors" are eigenvectors of Hermitian operators. Tensor products can be defined in terms of the tensor products of the "vectors" (kets).

Phrak said:
I'm fairly sure that if a ket is thought of as a vector with an upper index, then its bra is a vector with a lower index, ...

That is correct.

Pete

- #4

strangerep said:
Kets (states) are just vectors in the Hilbert space. And after all, a Hilbert space is just a vector space equipped with an Hermitian inner product (and some extra arcane stuff about "completion" in the inf-dim case). ...

Thank you, strangerep. I don't understand all this language, I'm afraid; I'm just now learning bras and kets. But could you tell me how the Hermitian inner product is accomplished in finite dimensions? With vectors with real entries, which I'm familiar with, the inner product is accomplished with the metric tensor. Is there a similar operator in Hilbert space?

- #5

Pete said:
... Kets can be thought of as elements of a vector space. However, I'm not sure if the term "coordinate basis" can be applied here. There is definitely a basis, though. The basis "vectors" are eigenvectors of Hermitian operators. ...

I take it that kets are always unit vectors. So that if |r> is a ket, b is a complex number, and |s> = |b r>, then |s> is not properly a ket?

- #6

strangerep

Science Advisor

Phrak said:
In using vectors with real entries, that I'm familiar with, the inner product is accomplished with the metric tensor. Is there a similar operator in Hilbert space?

Imagine two vectors u, v with complex entries. Complex-conjugate the entries of one vector, and then form the familiar inner product, e.g., [itex]u^*_1v_1 + u^*_2v_2 + \ldots[/itex]. Mostly, you don't need to worry about the metric here, because it's just [itex]\delta_{ij}[/itex].
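This recipe is easy to check numerically. A sketch with arbitrary example vectors: conjugating one argument and summing is exactly what numpy's `vdot` does, and the implicit metric is the identity.

```python
import numpy as np

u = np.array([1 + 1j, 2 - 3j])
v = np.array([4j, 1 + 1j])

# The recipe above: conjugate the entries of one vector, then form
# the familiar sum u*_1 v_1 + u*_2 v_2 + ...
by_hand = np.sum(u.conj() * v)

# numpy's vdot does exactly this (it conjugates its first argument).
builtin = np.vdot(u, v)

# The implicit "metric" is just delta_ij: <u|v> = sum_ij u*_i delta_ij v_j.
delta = np.eye(2)
with_metric = u.conj() @ delta @ v
```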

- #7

Phrak said:
I take it that kets are always unit vectors. So that if |r> is a ket, b is a complex number, and |s> = |b r>, then |s> is not properly a ket?

You can always normalize them to unity, so a norm other than 1 doesn't imply that something isn't a ket.

By the way, the way to write |s> is |s> = b|r>. Did you see it written some other way? I have a vague recollection of such a notation, but I can't recall where. Thanks.

Pete

- #8

Phrak said:
I'm fairly sure that if a ket is thought of as a vector with an upper index, then its bra is a vector with a lower index, but getting the rest of it all to look like tensors is rather mysterious.

That is correct, but could you elaborate on what you mean by "getting the rest of it all to look like tensors is rather mysterious"?

- #9

Fredrik

Staff Emeritus

Science Advisor

Gold Member

- 10,851

- 412

In that case you may find this recent thread about bras and kets useful. Read #17 first, because it explains a silly mistake I made in #2.I'm just now learning bras and kets.

The question is a little bit strange. An inner product on a real vector space V is just a function from [itex]V\times V[/itex] into [itex]\mathbb R[/itex] that satisfies a few conditions. It doesn't need to be "accomplished" by something else. You're probably thinking about how you can use matrix multiplication to define an inner product when the vector space is [itex]\mathbb R^n[/itex]: [itex]\left<x,y\right>=x^TSy[/itex] where S is a symmetric matrix. But even in this case, the inner product isn't defined using the metric tensor. ItBut could you tell me how the Hermitian inner product is accomplished in finite dimensions? In using vectors with real entries, that I'm familiar with, the inner product is accomplished with the metric tensor. Is there a similar operator in Hilbert space?
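The [itex]x^TSy[/itex] construction mentioned above can be sketched like this. The matrix S here is an arbitrary symmetric positive-definite example, not one from the thread:

```python
import numpy as np

# A symmetric positive-definite matrix S defines an inner product on R^n
# via <x, y> = x^T S y.  This particular S is an arbitrary example.
S = np.array([[2.0, 1.0],
              [1.0, 3.0]])

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

ip = x @ S @ y

# Symmetry of S makes the inner product symmetric: <x, y> = <y, x>.
sym_check = y @ S @ x

# With S = identity this reduces to the ordinary dot product.
dot_case = x @ np.eye(2) @ y
```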

- #10

Pete said:
Kets can be thought of as elements of a vector space. However, I'm not sure if the term "coordinate basis" can be applied here. There is definitely a basis, though. The basis "vectors" are eigenvectors of Hermitian operators. Tensor products can be defined in terms of the tensor products of the "vectors" (kets).

That's a point. Without a coordinate system, it makes no sense to talk about a coordinate basis. Knowing whether you're using coordinate bases, or vielbeins, would be important when applying operators like

[tex] \frac{ \partial x^i }{\partial x^{j'}}[/tex]

Can tensors be defined as objects that transform as products of bras and kets?

Pete said:
By the way, the way to write |s> is |s> = b|r>. Did you see it written some other way?

My mistake. Apparently, the proper way to write bras and kets is to place only labels within the delimiters, as I've recently learned.

- #11

In finite-dimensional spaces I think of it this way.

I have a vector [tex] x [/tex] in Hilbert space. The ket [tex] |x\rangle [/tex] is its representation as a column vector, and the bra [tex] \langle x | [/tex] is its representation as a row vector. Now [tex]\langle x|y\rangle[/tex] will give you a scalar, and [tex]|x\rangle\langle y|[/tex] will give you a matrix.

I know that this is not exactly the way the notation is normally used, but it retains its spirit while, at least for me, getting rid of some of the confusion.
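The column/row picture translates directly into numpy (the vectors here are arbitrary examples): row-times-column gives the scalar [tex]\langle x|y\rangle[/tex], column-times-row gives the matrix [tex]|x\rangle\langle y|[/tex].

```python
import numpy as np

# Kets as column vectors, bras as their conjugate-transpose row vectors.
x = np.array([[1.0 + 1j], [2.0]])   # |x>, a 2x1 column
y = np.array([[3.0], [1j]])         # |y>

bra_x = x.conj().T                  # <x|, a 1x2 row

# <x|y>: row times column gives a 1x1 result, i.e. a scalar.
scalar = (bra_x @ y)[0, 0]

# |x><y|: column times row gives a 2x2 matrix (an operator).
matrix = x @ y.conj().T
```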

- #12


But actually the column vector is more like a wavefunction (in a discrete finite version).

- #13


- #14


[tex] |\Psi> = \hat{I}|\Psi> = \int |x><x|\Psi> dx = \int <x|\Psi> |x> dx [/tex]

[tex] \Psi(x) = <x|\Psi> [/tex] is what is called the wave function. So when you write a column vector, you are only writing [tex] \Psi(x) = <x|\Psi> [/tex], because the basis [tex] |x> [/tex] is assumed. So I think that [tex] \Psi(x) = <x|\Psi> [/tex] is actually more like a column vector, though this one has a continuous index, namely x.

In the finite discrete case we often use some [tex] c_i [/tex], and then write them in a column, but they play exactly the same role as [tex] \Psi(x) [/tex].

So the point is that the ket notation is there to avoid referring to a basis before necessary (when you need to do explicit calculations it is often more convenient to choose some basis, e.g. the position, momentum, or energy basis).
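A minimal sketch of the finite discrete case described above, with an arbitrary Hermitian matrix standing in for an observable: the coefficients [tex] c_i = <e_i|\Psi> [/tex] in its eigenbasis are the discrete analogue of the wave function, and the resolution of the identity recovers the state.

```python
import numpy as np

# An arbitrary Hermitian "observable"; its eigenvectors form an orthonormal basis.
H = np.array([[1.0, 1j],
              [-1j, 2.0]])
eigvals, eigvecs = np.linalg.eigh(H)   # columns of eigvecs are the basis kets |e_i>

psi = np.array([1.0, 1j]) / np.sqrt(2)  # some normalized state |psi>

# The "wave function" in this basis is just the list of coefficients
# c_i = <e_i|psi> -- the discrete analogue of Psi(x) = <x|psi>.
c = eigvecs.conj().T @ psi

# Resolution of identity: summing c_i |e_i> recovers |psi>.
reconstructed = eigvecs @ c
```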

- #15


[tex]\psi(x) = \int \delta(x-y)\psi(y)dy[/tex]

- #16


[tex]\psi(x) = \int\delta_x(y)\psi(y)dy = \langle\delta_x,\psi\rangle [/tex]

We are both getting to the same point though. We have to remember what space we are working on. I would rather not even think about using Bra-Ket notation except when using proper vectors.

Yeah, I messed it up, that's why I deleted it and put this one up.

- #17


The identity [tex]\psi(x) = \int \delta(x-y)\psi(y)dy[/tex] is nothing like

[tex] |\Psi> = \hat{I}|\Psi> = \int |x><x|\Psi> dx = \int <x|\Psi>|x> dx [/tex]

This is using a basis. You are using a delta function to write something that doesn't say anything.

- #18


And [tex]\psi(x)[/tex] represents a number, just like [tex]\langle x|\psi\rangle[/tex] does.

I meant its value at a point in Euclidean space before; sorry for the confusion.

- #19

strangerep said:
Imagine two vectors u, v with complex entries. Complex-conjugate the entries of one vector, and then form the familiar inner product, e.g., [itex]u^*_1v_1 + u^*_2v_2 + ...[/itex]. Mostly, you don't need to worry about the metric here, because it's just [itex]\delta_{ij}[/itex].

But this is the interesting part. In transforming a vector (ket) to a dual vector (bra), as if you were lowering an index, the complex conjugate is taken of the vector coefficients. This can be accomplished tensorially if complex numbers are represented as column vectors,

[tex] c = \left( \begin{array}{c} a \\ b \end{array} \right) .[/tex]

The conjugate of c is taken with

[tex] \rho = \left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right) ,[/tex]

so that

[tex]c_{\beta} = \rho_{\beta\alpha} c^\alpha .[/tex]

In three dimensions, the lowering metric would be the block-diagonal matrix

[tex] g = \left( \begin{array}{ccc} \rho & 0 & 0 \\ 0 & \rho & 0 \\ 0 & 0 & \rho \end{array} \right) .[/tex]

The idea is that whenever an index is lowered or raised, the complex conjugate is applied. But does this scheme hang together for, say, a unitary operator, where a unitary operator acting on a ket is a type (1,1) tensor (one upper and one lower index) that takes a ket and returns a ket? I'm not so sure it hangs together consistently.
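The proposed scheme can at least be checked numerically for the lowering step: represent a + bi as the real pair (a, b) and verify that ρ, applied blockwise, implements complex conjugation. This is only a sketch of the poster's proposal; the helper `to_pair` is mine, not from the thread.

```python
import numpy as np

# Represent a complex number a + bi as the real column vector (a, b),
# as proposed above.  (Helper name is hypothetical.)
def to_pair(z: complex) -> np.ndarray:
    return np.array([z.real, z.imag])

# The proposed "metric" rho = diag(1, -1) implements complex conjugation
# on this representation: (a, b) -> (a, -b).
rho = np.array([[1.0, 0.0],
                [0.0, -1.0]])

z = 3.0 + 4.0j
lowered = rho @ to_pair(z)          # should represent the conjugate 3 - 4i
conj_direct = to_pair(z.conjugate())

# In three complex dimensions, the block-diagonal g = diag(rho, rho, rho)
# conjugates every component of a ket at once.
g = np.kron(np.eye(3), rho)
ket = np.array([1 + 2j, 3 - 1j, 1j])
ket_pairs = np.concatenate([to_pair(zz) for zz in ket])
bra_pairs = g @ ket_pairs
```

Whether the same bookkeeping stays consistent once operators (mixed-index tensors) enter is exactly the open question posed above.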

- #20


- #21


When you say [tex]L^2[/tex] is not big enough, are you referring to the fact that the eigenvectors of continuous observables don't live in [tex]L^2[/tex], or are you referring to something else?

In this particular example, yes I am thinking about functions in [tex]L^2[/tex]. I prefer to think separately about inner products and distributions which is probably why I never really liked the Bra-Ket notation in standard QM.

However once you get into QM of interacting systems I find the notation to be much more convenient and so I am trying to learn as much as I can about it.

btw thanks for clearing up my misunderstandings.

- #22


[tex] |\Psi> = \int |p><p|\Psi> dp = \int <p|\Psi> |p> dp [/tex]

so now I have a function of the momentum, [tex] \Psi(p) = <p|\Psi> [/tex].

I could also project onto the energy basis

[tex] |\Psi> = \sum_n |E_n><E_n|\Psi> = \sum_n <E_n|\Psi> |E_n> [/tex]

Now I have a function [tex] \Psi(n) = <E_n|\Psi> [/tex], which is actually a function on the natural numbers.

Maybe you know that the wave function as a function of p and as a function of x are related by a Fourier transform. This is seen by:

[tex] \int <p|\Psi> |p> dp = \int \int <p|\Psi> <x|p> |x> dp dx = \int \int \Psi(p) <x|p> dp |x> dx [/tex]

It can be shown that <x|p> is actually of the form [tex]e^{iAxp}[/tex], where A is a constant, so you get

[tex] \int <p|\Psi> |p> dp = \int \int \Psi(p) e^{iAxp} dp |x> dx [/tex]

so you see that

[tex]\Psi(x) = \int \Psi(p) e^{iAxp} dp[/tex]

which is exactly the Fourier transform, up to some constants.

So you see: the ket space is some general Hilbert space. Projecting onto different bases gives you the wave function as a function of position, which you may be most familiar with, but you can project onto all kinds of different bases and change between them. This is what makes ket space so powerful, because it is very general. Of course, to do calculations you want to choose some basis, and here we often choose position, when we are learning QM at least, because it is the easiest to understand.
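A finite-dimensional sketch of this change of basis, using the unitary DFT matrix as the discrete stand-in for [tex]<x|p> = e^{iAxp}[/tex] (the state is a random example): the position coefficients are the Fourier transform of the momentum coefficients, and unitarity preserves the norm.

```python
import numpy as np

N = 8
# Finite analogue of <x|p> ~ e^{iAxp}: the unitary DFT matrix.
idx = np.arange(N)
U = np.exp(2j * np.pi * np.outer(idx, idx) / N) / np.sqrt(N)  # U[x, p] ~ <x|p>

rng = np.random.default_rng(0)
psi_p = rng.normal(size=N) + 1j * rng.normal(size=N)  # coefficients <p|psi>
psi_p /= np.linalg.norm(psi_p)

# Change of basis: Psi(x) = sum_p <x|p> <p|psi>, i.e. a discrete Fourier transform.
psi_x = U @ psi_p

# Unitarity: the change of basis preserves the norm (total probability).
norm_x = np.linalg.norm(psi_x)
```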

- #23


So what? There are different representations of a vector in a Hilbert space. If that is the big advantage of Bra-Ket notation, then I am unconvinced of its usefulness. I'll take the spectral theorem instead.

Getting back to the first thing I said: even in basis-independent notation, what I said about column/row vectors has meaning. If we are given a unit vector [tex]\psi[/tex], then we can understand it as being an element of some orthonormal basis, and then saying that [tex]|\psi\rangle[/tex] is the representation as a column vector makes sense.

- #24


The point is that the ket space is more general, because all the other possibilities are different spaces, but the ket space incorporates them all in one: these spaces are the coefficients from different projections onto different bases, so proving things in this one space proves things about all the other spaces at once.

So you have an abstract Hilbert space; then by making a projection onto some basis you get coefficients, and these coefficients constitute different spaces of functions, on R^3 or on the natural numbers, and maybe others.

Because we know the bases, the coefficients (the wave functions) give us the same information, but how to change from, e.g., the position wave function to the energy wave function is best described in this picture, and proving things in ket space can be a lot easier, and you get your result for all the other spaces at once.

You can't make sense of a column vector without referring to some basis.

- #25


I understand that Bra-Ket notation is using the spectral theorem. By saying "I'll take the spectral theorem" I am saying I will take the spectral theorem without the added notation of Bra-Ket notation.

Ket space is more general than what, exactly?