Understanding Bras and Kets as Vectors and Tensors

In summary: Kets (states) are just vectors in the Hilbert space. And after all, a Hilbert space is just a vector space equipped with a Hermitian inner product (and some extra arcane stuff about "completion" in the infinite-dimensional case).
  • #36
The projection does have meaning in Hilbert space:
[tex]Px = |\psi\rangle\langle\psi|x = \langle\psi,x\rangle\psi[/tex]
The problem with writing
[tex]|\psi\rangle\langle\phi|x[/tex]
is the ambiguity: does it mean
[tex]\langle\psi,x\rangle\phi[/tex] or does it mean
[tex]\langle\phi,x\rangle\psi[/tex]? This ambiguity doesn't come up with the projection operator, and so while you may not like the notation, it is consistent.
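For concreteness, here is a small numerical sketch of this (Python/numpy, with a toy state of my own choosing); it checks that [tex]Px = \langle\psi,x\rangle\psi[/tex] and that P is idempotent:

[code]
import numpy as np

# A toy 3-dimensional Hilbert space C^3; psi normalized so P is a projection.
psi = np.array([1.0, 1.0j, 0.0])
psi /= np.linalg.norm(psi)
x = np.array([2.0, 0.0, 1.0 - 1.0j])

# P = |psi><psi| as an outer product (conjugate the bra factor by hand).
P = np.outer(psi, psi.conj())

# Px = <psi, x> psi; np.vdot conjugates its first argument.
print(np.allclose(P @ x, np.vdot(psi, x) * psi))  # True
print(np.allclose(P @ P, P))                      # P^2 = P
[/code]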
 
  • #37
mrandersdk said:
Of course you can make your own notation, and I grant that it is consistent, but that doesn't make it right.

But it is not my notation! I agree it can be confusing notation, but the only measure we have in mathematics is consistency. Mathematical correctness == consistency with the axioms. There is no ambiguity in the statement, and so it is valid.
 
  • #38
There can be rules for using notation; even though something else makes sense, it can still be wrong. When you write an operator in bra-ket notation you have to make it work on kets. I agree with you that you can make sense of it, but you shouldn't.
 
  • #39
In Bra-Ket notation the expected value of an observable is represented as
[tex]\langle\psi|A|\psi\rangle[/tex]
This only has meaning if the operator is self-adjoint. If the operator is not self-adjoint, then you have nonsense.
The projection is self-adjoint, and so the statement has an unambiguous meaning.

The only rule for using notation should be: does what you write have an unambiguous meaning? If so, then it is valid.
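As a quick numerical illustration (numpy, with a toy operator of my own), building a self-adjoint A and checking that [tex]\langle\psi|A|\psi\rangle[/tex] comes out real:

[code]
import numpy as np

rng = np.random.default_rng(0)

# Random normalized state in C^3.
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)

# A self-adjoint operator built as A = B + B^dagger.
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = B + B.conj().T

# <psi|A|psi> = psi^dagger A psi
expval = np.vdot(psi, A @ psi)
print(np.isclose(expval.imag, 0.0))  # True: the expectation value is real
[/code]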
 
  • #40
Phrak said:
But this is the interesting part. In transforming a vector ket to a dual vector bra, as if you were lowering an index, the complex conjugate is taken of the vector coefficients. This can be accomplished tensorially if complex numbers are represented as column vectors,
[tex] c = \left( \begin{array}{c} a \\ b \end{array} \right)[/tex]
The conjugate of c is taken with
[tex] \rho = \left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right)[/tex]
so that
[tex]c_{\beta} = \rho^{\beta}{}_{\alpha} c^\alpha[/tex]

In three dimensions, the lowering metric would be
[tex] g_{ij} = \left( \begin{array}{ccc} \rho & 0 & 0 \\ 0 & \rho & 0 \\ 0 & 0 & \rho \end{array} \right)[/tex]

The idea is that whenever an index is lowered or raised, the complex conjugate is applied. But does this scheme hang together for, say, a unitary operator, where a unitary operator acting on a ket is a type (1,1) tensor (one upper and one lower index) that takes a ket and returns a ket? I'm not so sure it hangs together consistently.

I must confess I have never seen this process before. Any reference to where I might see this discussed more in-depth?
 
  • #41
[tex]\langle\psi|A|\psi\rangle[/tex] makes sense even if A is not self-adjoint; you just don't know whether it is real.

But it is still wrong to write

[tex]|\psi\rangle\langle\psi|\psi[/tex]

because, as I say, [tex]|\psi\rangle\langle\psi|[/tex] works to the left on bras and to the right on kets, and not on [tex]\psi[/tex], unless you mean [tex]\psi = |\psi\rangle[/tex], which I guess you don't, because you said you would never write that.
 
  • #42
For A not self-adjoint, does [tex]\langle\psi|A|\psi\rangle[/tex] mean
[tex]\langle\psi,A\psi\rangle[/tex] or does it mean
[tex]\langle A\psi,\psi\rangle[/tex]?

As for the projection: given a vector [tex]x\in\mathcal{H}[/tex], it has a representation both as a ket
[tex]|x\rangle[/tex] and as a bra [tex]\langle x|[/tex]. If we define
[tex]P=|\psi\rangle\langle\psi|[/tex] then the action on the ket is given by
[tex]|\psi\rangle\langle\psi|x\rangle[/tex]
which is the conjugate transpose of
[tex]\langle x|\psi\rangle\langle\psi|[/tex] which is the action of P on the bra. In this sense we can say that the operation [tex]Px[/tex] is well defined.
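A numpy sketch of this last point (toy vectors of my own choosing), checking that the action on the bra is the conjugate transpose of the action on the ket:

[code]
import numpy as np

psi = np.array([1.0, 1.0j]) / np.sqrt(2)
x = np.array([3.0, 1.0 - 2.0j])

P = np.outer(psi, psi.conj())   # P = |psi><psi|

ket_action = P @ x              # |psi><psi|x>, a ket
bra_action = x.conj() @ P       # <x|psi><psi|, a bra

# For 1-D arrays the conjugate transpose is just the conjugate.
print(np.allclose(bra_action, ket_action.conj()))  # True
[/code]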
 
  • #43
Listen to what I say. It is well defined, as you say, but that doesn't make it right. You can't just define your own notation; this is a notation developed by Dirac, and you are using it wrong.

If A is not self-adjoint you have

[tex](\langle\psi|A) |\psi\rangle = \langle\psi| (A |\psi\rangle) [/tex]

if A is self-adjoint you get

[tex] \langle\psi|A|\psi\rangle = \int \langle\psi|A|x\rangle\langle x|\psi\rangle \, dx = \int \langle\psi|A|x\rangle \, \psi(x) \, dx = \int \langle x|A|\psi\rangle^* \, \psi(x) \, dx = \int (A_x\psi(x))^* \, \psi(x) \, dx [/tex]

or just

[tex] \langle\psi|A|\psi\rangle = \int \psi(x)^* (A_x\psi(x)) \, dx [/tex]

where A_x denotes the operator in the position basis; see my example with the momentum operator to see what I mean by that.
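If it helps, here is a discretized sketch of that last formula (numpy; the grid, wave packet and finite-difference derivative are my own choices), with A_x = -i d/dx, the momentum operator in the position basis (hbar = 1):

[code]
import numpy as np

# Grid and a normalized Gaussian wave packet with mean momentum k0.
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
k0 = 1.5
psi = np.exp(-x**2 / 2) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# A_x psi = -i dpsi/dx, via central finite differences.
A_psi = -1j * np.gradient(psi, dx)

# <psi|A|psi> = integral of psi(x)^* (A_x psi(x)) dx
expval = np.sum(psi.conj() * A_psi) * dx
print(expval)   # approximately k0, with a numerically tiny imaginary part
[/code]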
 
  • #44
When one says
[tex](\langle\psi|A) |\psi\rangle[/tex]
I understand that to be the inner product of the vectors
[tex]A^*\psi[/tex] and [tex]\psi[/tex];
[tex]\langle A^*\psi,\psi\rangle[/tex]
on the other hand, I understand
[tex] \langle\psi| (A |\psi\rangle) [/tex]
to be the inner product of the same two vectors in the opposite order,
i.e. [tex]\langle\psi,A\psi\rangle[/tex].
Is this correct?
 
  • #45
Yeah, with

[tex]\langle A\psi,\psi\rangle = \int (A_x\psi(x))^* \, \psi(x) \, dx [/tex]

But I think you should read Sakurai or some other advanced QM book, because it sounds like you have read some introductory books that take an easier approach, which is not so general.
 
  • #46
OK, after some reading, I see I was wrong in saying that the bra-ket notation has no meaning for operators that are not self-adjoint.

I agree, and actually at the beginning I even said that I wasn't using the Dirac notation in the standard way; I only use it for purposes I find useful, and as a way of better understanding them. Thinking of bras and kets as row and column vectors has its usefulness. It is not 100% correct in the sense that I am not using the notation the way Dirac intended; thank you for explaining its intended purpose to me.
 
  • #47
No problem. You are right that using the notation to its full extent can be a pain, so having a faster and easier way is fine. I just think it is important to know where you are "cheating", because many people do it wrong without knowing it, and this is because introductory books do it. Using the notation in its full power can be a bit confusing, so that's probably the best way to teach it.
 
  • #49
comote said:
I must confess I have never seen this process before. Any reference to where I might see this discussed more in-depth?

I'm afraid this is all my own mad invention in order to understand bras and kets, if it hangs together.

The metric with tensor entries isn't necessary, but possibly useful, in order to understand bras and kets in a finite dimensional Hilbert space in the language made familiar through the tensor calculus used in relativity.

I should make a correction! I should have written rho with two lower indices.

With complex numbers represented as column vectors,
[tex] c = \left( \begin{array}{c} a \\ b \end{array} \right)[/tex]

the conjugate of c is taken with
[tex] \rho = \left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right)[/tex]

so that
[tex]c_{\beta} = \rho_{\beta \alpha} c^\alpha[/tex]

In three dimensions, the lowering metric that turns kets into bras, where the basis is orthonormal, would be
[tex] g_{ij} = \left( \begin{array}{ccc} \rho & 0 & 0 \\ 0 & \rho & 0 \\ 0 & 0 & \rho \end{array} \right)[/tex]

So, for a ket (vector)
[tex] c \left| \phi \right> [/tex]

the bra (dual vector) is
[tex] \left< \psi \right| c^* = g_{ij} \left( c \left| \phi \right> \right)[/tex]

which is a verbose way to say
[tex] \hat{V} = gV[/tex]
in a basis-independent way.
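For what it's worth, here is how I would check the two-real-component scheme numerically (numpy; my own encoding of c = a + bi as the pair (a, b)):

[code]
import numpy as np

def to_pair(c):
    """Represent c = a + bi as the real column (a, b)."""
    return np.array([c.real, c.imag])

# rho implements conjugation: (a, b) -> (a, -b).
rho = np.array([[1.0, 0.0],
                [0.0, -1.0]])

c = 2.0 + 3.0j
print(np.allclose(rho @ to_pair(c), to_pair(np.conj(c))))  # True

# rho is its own inverse, so conjugating twice gives the identity.
print(np.allclose(rho @ rho, np.eye(2)))                   # True
[/code]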
 
  • #50
mrandersdk said:
I still think there is a big difference. Are you thinking of the functions as something in, e.g., [tex]L^2(R^3)[/tex], which you can then turn into a number by taking the inner product with a delta function? Because if you do, then the Hilbert space [tex]L^2(R^3)[/tex] is not general enough. The Hilbert space is bigger, and when you choose a basis you get a wave function in, e.g., [tex]L^2(R^3)[/tex] if you choose the position basis. But choosing different bases, you will get other "wave functions". Maybe it is because you are only used to working in the position basis, and have not yet seen bra-ket notation at its full scale?
Actually, L2 is too big, since it includes functions which are square integrable but not continuous. Wave functions must be continuous.

Pete
 
  • #51
comote said:
[tex]\psi[/tex] represents a vector in Hilbert space, just like [tex]|\psi\rangle[/tex]...
Caution should be exercised with that kind of notation. The notation [itex]\psi[/itex] always represents either an operator or a scalar; it's never used to represent a vector. A quantity is denoted a vector when it appears in ket notation. Otherwise it'd be like writing A as a vector without placing an arrow over it or making it boldface.

Pete
 
  • #52
comote said:
I admit I have not seen the Bra-Ket notation in its full glory.

When you say [tex]L^2[/tex] is not big enough are you referring to the fact that the eigenvectors of continuous observables don't live in [tex]L^2[/tex] or are you referring
to something else?
In that case the Hilbert space would not be too small. It would be too big.

Pete
 
  • #53
pmb_phy said:
Actually, L2 is too big, since it includes functions which are square integrable but not continuous. Wave functions must be continuous.

Pete
Unless, of course, they aren't. :tongue: The usage in this thread -- 'element of a Hilbert space' -- is the usage I am familiar with. It's also the definition seen on Wikipedia.

Now, a (Schwartz) test function is required to be continuous (among other things). Maybe you are thinking of that?
 
  • #54
pmb_phy said:
I guess what I'm saying (rather, what the teachers and texts from which I learned QM said) is that, while it is true that every quantum state representable by a ket is an element of a Hilbert space, it is not true that every element of a Hilbert space corresponds to a quantum state representable by a ket.
Hold on a second. I question whether or not that is true. An element of the Hilbert space must be a ket and therefore must represent some quantum state, right?

Pete
 
  • #55
pmb_phy said:
But specifically I have the following statement in mind. From Quantum Mechanics by Cohen-Tannoudji, Diu and Laloe, page 94-95
It sounds like they've defined wavefunctions to be elements of the Hilbert space.

(and then proceeded to argue a certain class of test functions can serve as adequate approximations)
 
  • #56
Hurkyl said:
It sounds like they've defined wavefunctions to be elements of the Hilbert space.

(and then proceeded to argue a certain class of test functions can serve as adequate approximations)

Prior to that paragraph the authors wrote
Thus we are led to studying the set of square-integrable functions. These are the functions for which the integral (A-1) converges. This set is called L2 by mathematicians and it has the structure of a Hilbert space.

Pete
 
  • #57
To get back to Phrak's question:

I don't think your approach is very useful. Given a finite dimensional vector space and a basis for it, say [tex]\{ v^1,v^2,...,v^n \}[/tex], there is a canonical basis for its dual defined by

[tex] w_j(v^i) = \delta_j^i[/tex]

You write:

[tex] \left< \psi \right| c^* = g_{ij} \left( c \left| \phi \right> \right)[/tex]

but if you represent c as a 2 x 1 column vector, and g_ij is a 3 x 3 matrix, how is this defined, and how do they act on the ket? For it to make sense you need to represent the ket as an n x 1 vector and have g_ij transform it into a 1 x n vector with all its entries complex conjugated.

My problem is also: why would this technique be smart even if it worked?
 
  • #58
mrandersdk said:
To get back to Phrak's question:

I don't mind the side discussions at all. In fact they're probably better fodder anyway! :smile:

I don't think your approach is very useful. Given a finite dimensional vector space and a basis for it, say [tex]\{ v^1,v^2,...,v^n \}[/tex], there is a canonical basis for its dual defined by

[tex] w_j(v^i) = \delta_j^i[/tex]

You write:

[tex] \left< \psi \right| c^* = g_{ij} \left( c \left| \phi \right> \right)[/tex]

But I should have written :tongue2:
[tex] \left< \psi \right| c^* = g_{ij} \left( c \left| \psi \right> \right)[/tex]

but if you represent c as a 2 x 1 column vector, and g_ij is a 3 x 3 matrix, how is this defined, and how do they act on the ket? For it to make sense you need to represent the ket as an n x 1 vector and have g_ij transform it into a 1 x n vector with all its entries complex conjugated.

c is an N x 1 column vector whose entries are 2 x 1 column vectors with real entries.
g_ij is an N x N matrix having 2 x 2 matrix entries. In an orthonormal basis the diagonal elements are rho, and all the other elements are zero.

The 2 x 1 column vectors serve as complex numbers, with entry (1,1) the real part and entry (2,1) the imaginary part. It's simply a trick to associate each g_ij and g^ij with an implicit conjugation operation, and each mixed metric with the Kronecker delta.

You can test that this holds in the orthonormal basis:
[tex]g_{ij}g^{jk} = \delta_i^k[/tex]
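For example (numpy, my own block-matrix construction with N = 3), realizing g as a 2N x 2N real matrix and checking that it squares to the identity:

[code]
import numpy as np

rho = np.array([[1.0, 0.0],
                [0.0, -1.0]])

N = 3
# g_ij: N x N block-diagonal with rho in each diagonal block.
g = np.kron(np.eye(N), rho)

# In the orthonormal basis, g_ij g^jk = delta_i^k.
print(np.allclose(g @ g, np.eye(2 * N)))   # True

# Acting on a ket stored as stacked (real, imag) pairs, g conjugates each entry.
ket = np.array([1.0, 2.0,    # 1 + 2i
                0.0, -1.0,   # -i
                3.0, 0.5])   # 3 + 0.5i
print(g @ ket)               # each imaginary part flips sign
[/code]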

The equation you've quoted is actually a mess of mixed notation that was intended to serve as a transition to
[tex] \hat{V} = gV [/tex] where [tex] \hat{V}[/tex] is the adjoint of [tex]V[/tex]

My problem is also: why would this technique be smart even if it worked?

Can't really tell what the view is like until you get there.
 
  • #60
It only becomes a column vector once some basis is chosen. Let me give you a simple example, because this has nothing to do with the ket space; this holds for all vector spaces.

Let's say we are working in R^3; then we have a vector, e.g. v = (1,1,1). WRONG!

This is completely meaningless. What does this array mean? Nothing. Only when I give you some basis can you make sense of it. Many people don't see this, because we are taught everything in terms of some canonical basis in the first place.

A vector in a vector space is an element satisfying certain axioms; choosing a basis makes the space isomorphic to R^n, and then, choosing the standard basis implicitly for R^n, we can do calculations with arrays like (1,1,1). But this element could mean a lot of things: it could be 1 apple, 1 orange and 1 banana (if someone could give this space a proper vector space structure).

So if "column vector" refers to some array, it only makes sense given a basis.

It is like when people say that a matrix is just a linear transformation. This isn't actually the complete truth: the linear transformations between two finite dimensional vector spaces are in one-to-one correspondence with the matrices (of appropriate size), once bases are chosen. Thus one should be very careful when making statements like these. I know a lot of authors use the same words for matrices and linear transformations, and that is fine as long as it is made clear, or the author knows what he means.
 
  • #61
By the way, the reason people say that kets are column vectors and bras are row vectors is that they can write

[tex] |\Psi\rangle = (v_1,v_2,v_3)^T [/tex]

and

[tex] \langle\Psi| = (v_1^*,v_2^*,v_3^*) [/tex]

and then

[tex] \langle\Psi|\Psi\rangle = (v_1^*,v_2^*,v_3^*) \cdot (v_1,v_2,v_3)^T = |v_1|^2+|v_2|^2+|v_3|^2 [/tex]
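In numpy, for instance (components made up by me):

[code]
import numpy as np

v = np.array([1.0 + 1.0j, 2.0, -1.0j])   # components of |Psi> in some chosen basis

ket = v.reshape(3, 1)          # column vector
bra = v.conj().reshape(1, 3)   # conjugated row vector

# <Psi|Psi> = |v_1|^2 + |v_2|^2 + |v_3|^2, a real number
print((bra @ ket)[0, 0])       # (7+0j)
print(np.sum(np.abs(v)**2))    # 7.0
[/code]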

But you could write them both as column vectors if you wished, and then just define the inner product as above; in the finite case a vector space is isomorphic to its dual. Because of matrix multiplication, though, it is easier to remember it this way: placing the vectors beside each other in the right order makes sense and gives the inner product.

But before a basis is given, this column or row vector doesn't make sense, because what do v_1, v_2 and v_3 describe? They say how much we have of something, but of what? That is what the basis tells you.

So to say that a ket is a column vector is false, but it is often said because not all physicists are into math, and it is the easiest way to work with the notation.

So an operator that works on a ket, that is,

[tex]A|\psi\rangle[/tex]

is not a matrix. In the finite case, though, choosing a basis, you can describe it by a matrix, and the state by a column vector (or row if you like). This "matrix" is what I denoted above by A_x, but that was in the infinite case, so it may not be totally clear that it is like an infinite matrix.
 
  • #62
It's not so much that we want to actually represent bras and kets as row and column vectors -- it's that we want to adapt the (highly convenient!) matrix algebra to our setting.

For example, I was once with a group of mathematicians and we decided for fun to work through the opening section of a book on some sort of representation theory. One of the main features of that section was to describe an algebraic structure on abstract vectors, covectors, and linear transformations. In fact, it was precisely the structure we'd see if we replaced "abstract vector" with "column vector", and so forth. The text did this not because it wanted us to think in terms of coordinates, but because it wanted us to use this very useful arithmetic setting.

Incidentally, during the study, I pointed out the analogy with matrix algebra -- one of the others, after digesting my comment, remarked "Oh! It's like a whole new world has opened up to me!"


(Well, maybe the OP really did want to think in terms of row and column vectors -- but I'm trying to point out this algebraic setting is a generally useful one)

Penrose did the same thing with tensors -- formally defining his "abstract index notation" where we think of tensors abstractly, but we can still use indices like dummy variables to indicate how we are combining them.
 
  • #63
Any Hilbert space is self-dual, even an infinite dimensional one.

We assume a canonical basis, and then, when we are interested in the values of some observable quantity, we can represent our vectors in whatever basis is more convenient. By doing a change of basis you are not fundamentally changing the vector in any way; you are just changing the way it is represented.

I agree that saying a ket is a column vector is not technically correct, but read what I write carefully . . . a ket is a representation of a vector as a column vector in a given basis. A bra is a representation of a vector as a row vector in a given basis.

Even if you are not given a basis, one can still think of a ket as a column vector. Granted, my construction is artificial, but it still helps in understanding the concept.

Given a vector [tex]v\in\mathbb{R}^n[/tex], let us call [tex]u_1 = \frac{v}{\|v\|}[/tex]. Let [tex]\{u_k\}_{k=2}^{n}[/tex] be any set of mutually orthogonal unit vectors, all orthogonal to [tex]u_1[/tex]. We have now constructed a basis in which we can represent [tex]|v\rangle[/tex] as a column vector and [tex]\langle v|[/tex] as a row vector. If we need to consider a different basis, we can pass to it by means of a unitary transformation.
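This construction is easy to carry out numerically; here is a sketch (numpy, random vectors of my own), using a QR factorization to supply the orthonormal complement:

[code]
import numpy as np

rng = np.random.default_rng(1)
n = 4
v = rng.normal(size=n)

# Put v in the first column, pad with random columns, and orthonormalize.
# QR yields an orthonormal basis whose first vector is v/||v|| (up to sign).
M = np.column_stack([v, rng.normal(size=(n, n - 1))])
Q, _ = np.linalg.qr(M)

print(np.allclose(np.abs(Q[:, 0] @ v), np.linalg.norm(v)))  # u_1 parallel to v
print(np.allclose(Q.T @ Q, np.eye(n)))                      # orthonormal basis

# In this basis |v> is represented by the column (+-||v||, 0, ..., 0)^T.
print(Q.T @ v)
[/code]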
 
  • #64
The reason I want to stick with thinking of kets as column vectors is that it simply helps me keep better track of the manipulations. When doing mathematical manipulations in a new area, I think it is best to keep in mind a concrete example of something already understood.
 
  • #65
You are right. My point is that it is important to know what is going on; given that, anything that helps you is great. The problem when teaching, I think, is that saying it is just a column vector can confuse, especially when one gets to more advanced topics.

And people seem to forget that there is a difference between a column vector and a vector, even in the finite case.
 
  • #66
Agreed, it is tough and we have to remember that people step into learning QM with backgrounds that are not always equal.
 
  • #67
mrandersdk said:
You are right. My point is that it is important to know what is going on; given that, anything that helps you is great. The problem when teaching, I think, is that saying it is just a column vector can confuse, especially when one gets to more advanced topics.

And people seem to forget that there is a difference between a column vector and a vector, even in the finite case.

I'm trying to understand bras and kets and operators in a finite dimensional Hilbert space within notation that I'm familiar with, rather than trying to sell this idea. It may not even work, but perhaps you can help me see whether it does. The complex coefficients, kets, bras and the inner product seem to work consistently, but I don't know how to deal with operators. Whenever the transpose of an operator is taken, it is also conjugated, right?

If I understand correctly, an operator standing to the left of a ket yields a ket, and standing to the right of a bra yields a bra. But when does one take the adjoint of an operator?
 
  • #68
The Hilbert space adjoint of an operator [tex]A[/tex] is the operator [tex]A^*[/tex] satisfying [tex](\langle x|A^*)\cdot|y\rangle = \langle x|\cdot(A|y\rangle)[/tex] for all [tex]x,y[/tex]. To appease rigor, when we go to infinite dimensions we should say something about the respective domains of the operator and its adjoint.
 
  • #69
You should transpose the matrix and complex conjugate all its entries (assuming now that you are working with the operator as a matrix).

You are right that an operator standing to the left of a ket gives a ket, and standing to the right of a bra gives a bra. The adjoint of an operator is also an operator, so the same holds for it.

Something important about the adjoint: given a ket [tex]|\psi\rangle[/tex], we can form the ket [tex]A|\psi\rangle[/tex], whose corresponding dual is [tex]\langle\psi|A^\dagger[/tex].

Maybe it is that 'corresponding' you are worried about. This is because (as comote pointed out) in a Hilbert space there is a unique one-to-one correspondence between the space and its dual, so given a ket [tex]|\psi\rangle[/tex] there must be an element we can denote by [tex]\langle\psi|[/tex]. That is, we have a function [tex]J: H \rightarrow H^*[/tex] such that [tex]\langle\psi| = J(|\psi\rangle)[/tex], and I guess it can be shown that [tex]\langle\psi|A^\dagger = J(A|\psi\rangle)[/tex], so here you use it.

Maybe it is actually this function J you have been asking about the whole time?

You shouldn't try to understand this function as lowering and raising indices as in general relativity, i.e. in tensor language (at least I don't think so; maybe one could).

The existence of this great correspondence is due to Frigyes Riesz. Maybe look at

http://en.wikipedia.org/wiki/Riesz_isomorphism


comote: I'm not sure you are right.

[tex](\langle x|A)\cdot|y\rangle = \langle x|\cdot(A|y\rangle)[/tex]

This is defined to hold as it stands, without the star.
 
  • #70
If you want to write it that way, it should be defined as the operator satisfying

[tex] \langle x| (A^\dagger|y\rangle) = ((\langle y|A) |x\rangle)^*[/tex]
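A numerical check of this identity (numpy, with a random non-self-adjoint A of my own making):

[code]
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))   # not self-adjoint
x = rng.normal(size=n) + 1j * rng.normal(size=n)
y = rng.normal(size=n) + 1j * rng.normal(size=n)

A_dag = A.conj().T

# <x|(A^dagger|y>)  versus  ((<y|A)|x>)^*
lhs = np.vdot(x, A_dag @ y)
rhs = np.conj(np.vdot(y, A @ x))
print(np.allclose(lhs, rhs))                 # True

# whereas without the adjoint the two sides differ in general:
print(np.allclose(lhs, np.vdot(x, A @ y)))   # False (generically)
[/code]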
 
