# Bras and Kets and Tensors

Is there a way bras and kets can be understood in terms of vectors and tensors and coordinate bases?

I'm fairly sure that if a ket is thought of as a vector with an upper index, then its bra is a vector with a lower index, but getting the rest of it all to look like tensors is rather mysterious.

strangerep
Is there a way bras and kets can be understood in terms of vectors and tensors and coordinate bases?

I'm fairly sure that if a ket is thought of as a vector with an upper index, then its bra is a vector with a lower index, but getting the rest of it all to look like tensors is rather mysterious.
Kets (states) are just vectors in the Hilbert space. And after all, a Hilbert space is just a
vector space equipped with an Hermitian inner product (and some extra arcane stuff
about "completion" in the inf-dim case).

For a finite-dimensional Hilbert space H, it's all rather easy. The bras actually live in the "dual"
space $H^*$ (i.e., the space of linear functionals over H, meaning the space of
linear mappings from H to the complex numbers). That's really what this upper/lower
index business is all about. For finite-dimensional spaces, the dual $H^*$ is actually
isomorphic to the primal space H, so people tend to forget about the distinction. But in infinite
dimensions, the primal and dual spaces are no longer isomorphic in general, so there
is no canonical foundation for raising and lowering indices in general. People also tend to be
more pedantic, and talk about "self-adjoint", or "symmetric" operators and such-like. This is
discussed in textbooks on "Functional Analysis".
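In finite dimensions this duality is easy to see numerically: a ket is a column vector, its bra is the conjugate transpose, and the bra acts as a linear functional on kets. A minimal numpy sketch (the vectors below are made up purely for illustration):

```python
import numpy as np

# A ket in a 2-dimensional Hilbert space: a column vector.
ket = np.array([[1 + 2j], [3 - 1j]])

# Its bra lives in the dual space H*: the conjugate transpose,
# a row vector that maps kets to complex numbers.
bra = ket.conj().T

# Acting with the bra on another ket gives the inner product <ket|other>.
other = np.array([[1j], [2]])
value = (bra @ other).item()

# The bra is a *linear* functional: <ket|(a u + b v)> = a<ket|u> + b<ket|v>.
a, b = 2 + 1j, -3j
u = np.array([[1], [0]])
v = np.array([[0], [1]])
lhs = (bra @ (a * u + b * v)).item()
rhs = a * (bra @ u).item() + b * (bra @ v).item()
```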

Is there a way bras and kets can be understood in terms of vectors and tensors and coordinate bases?
Yes. Absolutely.

Kets can be thought of as elements of a vector space. However, I'm not sure if the term "coordinate basis" can be applied here. There is definitely a basis though. The basis "vectors" are eigenvectors of Hermitian operators. Tensor products can be defined in terms of the tensor products of the "vectors" (kets).
I'm fairly sure that if a ket is thought of as a vector with an upper index, then its bra is a vector with a lower index, ..
That is correct.

Pete

Kets (states) are just vectors in the Hilbert space. And after all, a Hilbert space is just a
vector space equipped with an Hermitian inner product (and some extra arcane stuff
about "completion" in the inf-dim case).

For a finite-dimensional Hilbert space H, it's all rather easy. The bras actually live in the "dual"
space $H^*$ (i.e., the space of linear functionals over H, meaning the space of
linear mappings from H to the complex numbers). That's really what this upper/lower
index business is all about. For finite-dimensional spaces, the dual $H^*$ is actually
isomorphic to the primal space H, so people tend to forget about the distinction. But in infinite
dimensions, the primal and dual spaces are no longer isomorphic in general, so there
is no canonical foundation for raising and lowering indices in general. People also tend to be
more pedantic, and talk about "self-adjoint", or "symmetric" operators and such-like. This is
discussed in textbooks on "Functional Analysis".
Thank you, strangerep. I don't understand all this language, I'm afraid; I'm just now learning bras and kets. But could you tell me how the Hermitian inner product is accomplished in finite dimensions? In using vectors with real entries, which I'm familiar with, the inner product is accomplished with the metric tensor. Is there a similar operator in Hilbert space?

...Kets can be thought of as elements of a vector space. However I'm not sure if the term "coordinate basis" can be applied here. There is definitely a basis though. The basis "vectors" are eigenvectors of Hermitian operators. Tensor products can be defined in terms of the tensor products of the "vectors" (kets).
Pete
I take it that kets are always unit vectors. So that if |r> is a ket, b is a complex number and |s> = |b r>, then |s> is not properly a ket?

strangerep
In using vectors with real entries, that I'm familiar with, the inner product is accomplished with the metric tensor. Is there a similar operator in Hilbert space?
Imagine two vectors u,v with complex entries. Complex-conjugate the entries of one
vector, and then form the familiar inner product, e.g., $u^*_1v_1 + u^*_2v_2 + ...$.
Mostly, you don't need to worry about the metric here because it's just $\delta_{ij}$.
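That recipe (conjugate one vector's entries, then sum the componentwise products) is, for instance, exactly what numpy's `np.vdot` computes; it conjugates its first argument. A small sketch with made-up vectors:

```python
import numpy as np

u = np.array([1 + 1j, 2 - 3j])
v = np.array([4j, 1 + 1j])

# Hermitian inner product: conjugate the entries of u, then sum u*_i v_i.
manual = np.sum(u.conj() * v)

# np.vdot conjugates its first argument, so it computes the same sum.
builtin = np.vdot(u, v)

# Hermiticity: <u, v> equals the conjugate of <v, u>.
hermitian_ok = np.isclose(np.vdot(u, v), np.conj(np.vdot(v, u)))
```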

I take it that kets are always unit vectors. So that if |r> is a ket, b is a complex number and |s> = |b r>, then |s> is not properly a ket?
You can always normalize them to unity, so if the norm is not 1, that doesn't imply it isn't a ket.

By the way, the way to write |s> is |s> = b|r>. Did you see it written some other way? I have a vague recollection of such a notation but I can't recall where. Thanks.

Pete

I'm fairly sure that if a ket is thought of as a vector with an upper index, then its bra is a vector with a lower index, but getting the rest of it all to look like tensors is rather mysterious.

is correct, but could you elaborate on what you mean by:

but getting the rest of it all to look like tensors is rather mysterious.

Fredrik
I'm just now learning bras and kets.
In that case you may find this recent thread about bras and kets useful. Read #17 first, because it explains a silly mistake I made in #2.

But could you tell me how the Hermitian inner product is accomplished in finite dimensions? In using vectors with real entries, that I'm familiar with, the inner product is accomplished with the metric tensor. Is there a similar operator in Hilbert space?
The question is a little bit strange. An inner product on a real vector space V is just a function from $V\times V$ into $\mathbb R$ that satisfies a few conditions. It doesn't need to be "accomplished" by something else. You're probably thinking about how you can use matrix multiplication to define an inner product when the vector space is $\mathbb R^n$: $\left<x,y\right>=x^TSy$ where S is a symmetric matrix. But even in this case, the inner product isn't defined using the metric tensor. It is the metric tensor. S is just a matrix. The inner product/metric tensor is the map $(x,y)\mapsto \left<x,y\right>.$
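To make this point concrete: pick a symmetric positive-definite S, and the map $(x,y)\mapsto x^TSy$ is itself the inner product. A sketch (this particular S is arbitrary, chosen only for illustration):

```python
import numpy as np

# A symmetric positive-definite matrix on R^2.
S = np.array([[2.0, 1.0],
              [1.0, 3.0]])

def inner(x, y):
    """The map (x, y) -> x^T S y; this map *is* the inner product / metric tensor."""
    return x @ S @ y

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

sym = inner(x, y) == inner(y, x)   # symmetry, since S is symmetric
pos = inner(x, x) > 0              # positive-definiteness for x != 0
```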

Kets can be thought of as elements of a vector space. However I'm not sure if the term "coordinate basis" can be applied here. There is definitely a basis though. The basis "vectors" are eigenvectors of Hermitian operators. Tensor products can be defined in terms of the tensor products of the "vectors" (kets).
That is correct.

Pete
That's a point. Without a coordinate system, it makes no sense to talk about a coordinate basis. Knowing that you're using coordinate bases, or vielbeins, would be important when applying operators like
$$\frac{ \partial x^i }{\partial x^{j'}}$$

Can tensors be defined as objects that transform as products of bras and kets?

By the way. The way to write |s> is |s> = b|r>. Did you see it written some other way?
My mistake. Apparently, the proper way to write bras and kets is to place only labels within the delimiters, as I've recently learned.

In finite dimensional space I think of it this way.

I have a vector $$x$$ in Hilbert space. The Ket $$|x\rangle$$ is the representation as a column vector and the bra $$\langle x |$$ is the representation
as a row vector. Now $$\langle x|y\rangle$$ will give you a scalar and $$|x\rangle\langle y|$$ will give you a matrix.

I know that it is not exactly the same way it is normally used but this retains the spirit of the notation while, at least for me, getting rid of some of the confusion.
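The column/row picture can be checked directly: ⟨x|y⟩ is a (1×n)(n×1) matrix product, hence a scalar, and |x⟩⟨y| is the (n×1)(1×n) outer product, hence a matrix, which acts on a ket |z⟩ as (⟨y|z⟩)|x⟩. A sketch with made-up two-component vectors:

```python
import numpy as np

x = np.array([[1 + 1j], [2j]])   # ket |x> as a column vector
y = np.array([[3], [1 - 1j]])    # ket |y> as a column vector

bra_x = x.conj().T               # bra <x| as a row vector

scalar = (bra_x @ y).item()      # <x|y>: (1x2)(2x1) -> a complex number
matrix = x @ y.conj().T          # |x><y|: (2x1)(1x2) -> a 2x2 matrix

# The outer product acts on a ket |z> by |x><y|z> = (<y|z>) |x>.
z = np.array([[1], [1j]])
applied = matrix @ z
```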

I don't think that is a completely good way to look at it, because the ket is more general: it doesn't assume a particular basis. Having chosen a basis, though, you can use that view in the finite case.

But actually the column vector is more like a wavefunction (in a discrete finite version).

You have a point; usually, though, if what I am thinking about is independent of basis, I shy away from bra-ket notation.

OK, but the point of a state ket is precisely to make the state independent of a basis. You can then expand it in a basis, e.g. the position basis, like this:

$$|\Psi\rangle = \hat{I}|\Psi\rangle = \int |x\rangle\langle x|\Psi\rangle \, dx = \int \langle x|\Psi\rangle \, |x\rangle \, dx$$

$$\Psi(x) = \langle x|\Psi\rangle$$ is what is called the wave function. So when you write a column vector, you are only writing $$\Psi(x) = \langle x|\Psi\rangle$$, because the basis $$|x\rangle$$ is assumed. So I think that $$\Psi(x) = \langle x|\Psi\rangle$$ is actually more like a column vector, though this one has a continuous (infinite) index, namely x.

In the finite discrete case we often use some $$c_i$$ and write them in a column, but they play exactly the same role as $$\Psi(x)$$.

So the point is that the ket notation is there to avoid referring to a basis before it is necessary (when you need to do explicit calculations it is often more convenient to choose some basis, e.g. the position, momentum, or energy basis).
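The finite discrete analogue of this resolution of the identity is easy to verify numerically: expand a vector in any orthonormal basis, collect the coefficients $c_i = \langle e_i|\Psi\rangle$, and they reassemble the vector. A sketch with a random 3-dimensional example (the basis comes from a QR decomposition, just to get some orthonormal columns):

```python
import numpy as np

rng = np.random.default_rng(0)

# A state vector in a 3-dimensional complex Hilbert space.
psi = rng.normal(size=3) + 1j * rng.normal(size=3)

# An orthonormal basis: the columns of the unitary Q from a QR decomposition.
basis, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

# The "wave function" in this basis: c_i = <e_i|psi> (vdot conjugates e_i).
c = np.array([np.vdot(basis[:, i], psi) for i in range(3)])

# Resolution of the identity: psi = sum_i c_i |e_i>.
reconstructed = sum(c[i] * basis[:, i] for i in range(3))
```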

I'm not sure what your point is with that equality. And I guess you mean

$$\psi(x) = \int \delta(x-y)\psi(y)dy$$

Or

$$\psi(x) = \int\delta_x(y)\psi(y)\,dy = \langle\delta_x,\psi\rangle$$

We are both getting to the same point though. We have to remember what space we are working on. I would rather not even think about using Bra-Ket notation except when using proper vectors.

Yeah, I messed it up, that's why I deleted it and put this one up.

$$\psi(x) = \int\delta_x(y)\psi(y)\,dy = \langle\delta_x,\psi\rangle$$

is nothing like

$$|\Psi\rangle = \hat{I}|\Psi\rangle = \int |x\rangle\langle x|\Psi\rangle \, dx = \int \langle x|\Psi\rangle \, |x\rangle \, dx$$

This one uses a basis. Yours uses a delta function to write something that doesn't say anything.

$$\psi$$ represents a vector in Hilbert space, just like $$|\psi\rangle$$
and $$\psi(x)$$ represents a number, just like
$$\langle x|\psi\rangle$$

I meant its value at a point in Euclidean space before; sorry for the confusion.

Imagine two vectors u,v with complex entries. Complex-conjugate the entries of one
vector, and then form the familiar inner product, e.g., $u^*_1v_1 + u^*_2v_2 + ...$.
Mostly, you don't need to worry about the metric here because it's just $\delta_{ij}$.
But this is the interesting part. In transforming a ket into a dual-vector bra, as if you were lowering an index, the complex conjugate is taken of the vector coefficients. This can be accomplished tensorially if complex numbers are represented as real column vectors,
$$c = \left( \begin{array}{c} a \\ b \end{array} \right)$$
The conjugate of c is taken with
$$\rho = \left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right),$$
so that
$$c_{\beta} = \rho_{\beta\alpha} \, c^\alpha .$$

In three dimensions, the lowering metric would be the block-diagonal matrix
$$g_{ij} = \left( \begin{array}{ccc} \rho & 0 & 0 \\ 0 & \rho & 0 \\ 0 & 0 & \rho \end{array} \right)$$

The idea is that whenever an index is lowered or raised, the complex conjugate is applied. But does this scheme hang together for, say, a unitary operator, where a unitary operator is a type-(1,1) tensor (one upper and one lower index) that takes a ket and returns a ket? I'm not so sure it hangs together consistently.
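The conjugation part of this scheme can at least be tested numerically: representing $a+bi$ as the real pair $(a,b)$, the matrix $\rho = \mathrm{diag}(1,-1)$ does implement complex conjugation, and it is an involution. The sketch below checks only that much; whether the scheme composes consistently with unitary operators is exactly the open question (the representation here is this thread's construction, not a standard convention):

```python
import numpy as np

def to_pair(c):
    """Represent the complex number a + bi as the real column vector (a, b)."""
    return np.array([c.real, c.imag])

# The proposed "metric" on one complex slot: rho = diag(1, -1).
rho = np.array([[1.0, 0.0],
                [0.0, -1.0]])

c = 3.0 + 4.0j
pair = to_pair(c)
conj_pair = rho @ pair          # should represent the conjugate 3 - 4i
involution = rho @ rho          # applying rho twice should give the identity
```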

I still think there is a big difference. Are you thinking of the functions as something in, e.g., $$L^2(R^3)$$, which you can turn into a number by taking the inner product with a delta function? Because if you are, then the Hilbert space $$L^2(R^3)$$ is not general enough. The Hilbert space is bigger, and when you choose a basis you get a wave function in, e.g., $$L^2(R^3)$$ if you choose the position basis. But choosing different bases, you will get other "wave functions". Maybe it is because you are only used to working in the position basis, and have not yet seen bra-ket notation at its full scale?

I admit I have not seen the bra-ket notation in its full glory.

When you say $$L^2$$ is not big enough are you referring to the fact that the eigenvectors of continuous observables don't live in $$L^2$$ or are you referring
to something else?

In this particular example, yes I am thinking about functions in $$L^2$$. I prefer to think separately about inner products and distributions which is probably why I never really liked the Bra-Ket notation in standard QM.

However once you get into QM of interacting systems I find the notation to be much more convenient and so I am trying to learn as much as I can about it.

btw thanks for clearing up my misunderstandings.

I'm actually referring to the fact that you could project onto other bases, like momentum. Like this:

$$|\Psi\rangle = \int |p\rangle\langle p|\Psi\rangle \, dp = \int \langle p|\Psi\rangle \, |p\rangle \, dp$$

so now I have a function of the momentum, $$\Psi(p) = \langle p|\Psi\rangle$$.

I could also project onto the energy basis

$$|\Psi\rangle = \sum_n |E_n\rangle\langle E_n|\Psi\rangle = \sum_n \langle E_n|\Psi\rangle \, |E_n\rangle$$

Now I have a function $$\Psi(n) = \langle E_n|\Psi\rangle$$, which is actually defined on the natural numbers.

Maybe you know that the wave function as a function of p and as a function of x are related by a Fourier transform. This is seen by:

$$\int \langle p|\Psi\rangle \, |p\rangle \, dp = \int \int \langle p|\Psi\rangle \, \langle x|p\rangle \, |x\rangle \, dp \, dx = \int \int \Psi(p) \langle x|p\rangle \, dp \, |x\rangle \, dx$$

It can be shown that $$\langle x|p\rangle$$ is actually of the form $$e^{iAxp}$$, where A is a constant, so you get

$$\int \langle p|\Psi\rangle \, |p\rangle \, dp = \int \int \Psi(p) \, e^{iAxp} \, dp \, |x\rangle \, dx,$$ so you see that

$$\Psi(x) = \int \Psi(p) \, e^{iAxp} \, dp,$$ so this is exactly the Fourier transform, up to some constants.

So you see, the ket space is a general Hilbert space. Projecting onto different bases gives you the wave function as a function of position, which you may be most familiar with, but you can project onto all kinds of different bases and change between them. This is what makes ket space so powerful: it is very general. Of course, to do calculations you want to choose some basis, and here we often choose position, at least when learning QM, because it is the easiest to understand.
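The basis change above can be checked on a discrete grid, where the matrix of $\langle x|p\rangle$ values is the unitary DFT matrix and plays the role of $e^{iAxp}$ up to normalization. A sketch with a random momentum-space vector:

```python
import numpy as np

rng = np.random.default_rng(1)

# A "momentum-space wave function" on a discrete grid of N points.
N = 8
psi_p = rng.normal(size=N) + 1j * rng.normal(size=N)

# The discrete analogue of <x|p>: F[x, p] = exp(2*pi*i*x*p/N) / sqrt(N),
# a unitary matrix (the inverse DFT matrix, suitably normalized).
x = np.arange(N)[:, None]
p = np.arange(N)[None, :]
F = np.exp(2j * np.pi * x * p / N) / np.sqrt(N)

# Changing basis: psi(x) = sum_p <x|p> psi(p).
psi_x = F @ psi_p
```

Because F is unitary, the norm of the state is the same in either basis, which is the discrete version of Parseval's theorem.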

Just because you have representations in different bases doesn't make the Hilbert space any bigger. It's the same space; you are just interested in a different probability space on it.

So what? There are different representations of a vector in a Hilbert space. If that is the big advantage of bra-ket notation, then I am unconvinced of its usefulness. I'll take the spectral theorem instead.

Getting back to the first thing I said. Even in basis independent notation what I said about column/row vectors has meaning. If we are given a unit vector $$\psi$$ then we can understand it as being an element of some orthonormal basis and then saying
$$|\psi\rangle$$ is the representation as a column vector makes sense.

You are right, it is not bigger; that was a mistake. I'm not sure what you mean when you say you take the spectral theorem instead; bra-ket notation uses exactly the spectral theorem.

The point is that the ket space is more general: the various function spaces are all different, but the ket space incorporates them all in one, because those spaces consist of the coefficients from projections onto different bases, so proving things in this one space proves things about all of those spaces at once.

So you have an abstract Hilbert space; then by projecting onto some basis you get coefficients, and these coefficients constitute different spaces of functions, on R^3 or the natural numbers, and maybe others.

Because we know the bases, the coefficients (the wave functions) give us the same information, but how to change from, e.g., the position to the energy wave function is best described in this picture, and proving things in ket space can be a lot easier, giving you the result for all the other spaces at once.

You can't make sense of a column vector without referring to some basis.

You refer abstractly to any basis that contains $$\frac{\psi}{||\psi||}$$ as an element.

I understand that bra-ket notation uses the spectral theorem; by saying I'll take the spectral theorem, I mean I'll take the spectral theorem without the added bra-ket notation.

Ket space is more general than what, exactly?