Understanding Bras and Kets as Vectors and Tensors

Thread starter: Phrak
Tags: Tensors

Summary
Bras and kets can be understood as vectors in a Hilbert space, with kets represented as vectors with upper indices and bras as vectors with lower indices. In finite-dimensional spaces, bras live in the dual space, which is isomorphic to the primal space; this distinction becomes significant in infinite dimensions. The Hermitian inner product in finite dimensions can be defined much as it is for real vectors, with complex conjugation added. While kets can be considered elements of a vector space, the concept of a coordinate basis is less applicable, as the basis vectors are typically eigenvectors of Hermitian operators. Overall, the discussion emphasizes the relationship between bras, kets, and tensor products within the framework of linear algebra and functional analysis.
  • #61
By the way, the reason people say that kets are column vectors and bras are row vectors is that one can write

|\Psi\rangle = (v_1,v_2,v_3)^T

and

\langle\Psi| = (v_1^*,v_2^*,v_3^*)

and then

\langle\Psi|\Psi\rangle = (v_1^*,v_2^*,v_3^*) \cdot (v_1,v_2,v_3)^T = |v_1|^2+|v_2|^2+|v_3|^2

But you could write them both as column vectors if you wanted, and then just define the inner product as above. In the finite-dimensional case a vector space is isomorphic to its dual, but because of matrix multiplication it is easier to remember it this way: placing the vectors beside each other in the right order makes sense and gives the inner product.

But before a basis is given, this column or row vector doesn't make sense. What do v_1, v_2 and v_3 describe? They tell you how much you have of something, but of what? That is what the basis tells you.

So to say that a ket is a column vector is, strictly speaking, false, but the identification is often used because not all physicists are into math, and it is the easiest way to work with kets.

So an operator that acts on a ket, as in

A|\psi\rangle

is not a matrix. In the finite-dimensional case, though, by choosing a basis you can describe it by a matrix, and the state by a column vector (or a row vector if you like). This "matrix" is what I denoted above by A_x, but that was in the infinite-dimensional case, so it may not be totally clear that it is like an infinite matrix.
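For concreteness, a minimal NumPy sketch of the above (an illustration with arbitrary components, not from the original post):

```python
import numpy as np

# An arbitrary ket, written as a column vector in some chosen basis.
ket = np.array([[1.0 + 2.0j], [0.5 - 1.0j], [3.0 + 0.0j]])

# The corresponding bra is the conjugate transpose: a row vector.
bra = ket.conj().T

# <Psi|Psi> by matrix multiplication; the result is |v_1|^2 + |v_2|^2 + |v_3|^2.
print((bra @ ket).item())        # (15.25+0j)
print(np.sum(np.abs(ket) ** 2))  # 15.25, the same number
```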
 
  • #62
It's not so much that we want to actually represent bras and kets as row and column vectors -- it's that we want to adapt the (highly convenient!) matrix algebra to our setting.

For example, I was once with a group of mathematicians and we decided for fun to work through the opening section of a book on some sort of representation theory. One of the main features of that section was to describe an algebraic structure on abstract vectors, covectors, and linear transformations. In fact, it was precisely the structure we'd see if we replaced "abstract vector" with "column vector", and so forth. The text did this not because it wanted us to think in terms of coordinates, but because it wanted us to use this very useful arithmetic setting.

Incidentally, during the study, I pointed out the analogy with matrix algebra -- one of the others, after digesting my comment, remarked, "Oh! It's like a whole new world has opened up to me!"


(Well, maybe the OP really did want to think in terms of row and column vectors -- but I'm trying to point out that this algebraic setting is a generally useful one.)

Penrose did the same thing with tensors -- formally defining his "abstract index notation" where we think of tensors abstractly, but we can still use indices like dummy variables to indicate how we are combining them.
 
  • #63
Any Hilbert space is self-dual, even infinite-dimensional ones.

We assume a canonical basis, and then, when we are interested in the values of some observable quantity, we can represent our vectors in a basis that is more convenient. By doing a change of basis you are not fundamentally changing the vector in any way; you are just changing the way it is represented.

I agree that saying a ket is a column vector is not technically correct, but read what I write carefully: a ket is a representation of a vector as a column vector in a given basis. A bra is a representation of a vector as a row vector in a given basis.

Even if you are not given a basis, one can still think of a ket as a column vector. Granted, my construction is artificial, but it still helps in understanding the concept.

Given a vector v\in\mathbb{R}^n, let u_1 = \frac{v}{\|v\|}, and let \{u_k\}_{k=2}^{n} be any set of unit vectors that are mutually orthogonal and all orthogonal to u_1. We have now constructed an orthonormal basis in which we can represent |v\rangle as a column vector and \langle v| as a row vector. If we need to consider a different basis, we can move to it by means of a unitary transformation.
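A hedged NumPy sketch of this construction (an illustration, not comote's code; QR completion is just one convenient way to build the remaining u_k):

```python
import numpy as np

v = np.array([3.0, 0.0, 4.0])

# QR-factor a matrix whose first column is v; the columns of Q then form an
# orthonormal basis whose first vector is +-v/||v||.
M = np.column_stack([v, [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])  # columns assumed independent
Q, R = np.linalg.qr(M)
Q = Q * np.sign(np.diag(R))      # fix signs so that Q[:, 0] = v/||v|| exactly

print(np.allclose(Q[:, 0], v / np.linalg.norm(v)))  # True

# The column-vector representation of |v> in this basis is (||v||, 0, 0)^T.
print(Q.conj().T @ v)            # [5. 0. 0.] up to rounding
```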
 
  • #64
The reason I want to stick with the idea of thinking of kets as column vectors is that it simply helped me keep better track of the manipulations. When doing mathematical manipulation in new areas, I think it is best to keep a concrete example of something already understood in mind.
 
  • #65
You are right; my point is that it is important to know what is going on, and then any aid like that is great. The problem is that when teaching, I think saying it is just a column vector can be confusing, especially when one gets to more advanced topics.

And people seem to forget that there is a difference between a column vector and a vector, even in the finite-dimensional case.
 
  • #66
Agreed, it is tough and we have to remember that people step into learning QM with backgrounds that are not always equal.
 
  • #67
mrandersdk said:
You are right; my point is that it is important to know what is going on, and then any aid like that is great. The problem is that when teaching, I think saying it is just a column vector can be confusing, especially when one gets to more advanced topics.

And people seem to forget that there is a difference between a column vector and a vector, even in the finite-dimensional case.

I'm trying to understand bras and kets and operators in a finite-dimensional Hilbert space within notation that I'm familiar with, rather than trying to sell this idea. It may not even work, but perhaps you can help me see if it does. The complex coefficients, kets, bras and the inner product seem to work consistently, but I don't know how to deal with operators. Whenever the transpose of an operator is taken, it is also conjugated, right?

If I understand correctly, an operator acting from the left on a ket yields a ket, and acting from the right on a bra yields a bra. But when does one take the adjoint of an operator?
 
  • #68
The Hilbert space adjoint of an operator A is the operator A^* satisfying (\langle x|A^*)\cdot|y\rangle = \langle x|\cdot(A|y\rangle) for all x, y. To appease rigor, when we go to infinite dimensions we should say something about the respective domains of the operator and its adjoint.
 
  • #69
You should transpose and complex-conjugate all the numbers in the matrix (assuming now that you are working with the operator as a matrix).

You are right that an operator acting on a ket, as in A|\psi\rangle, gives a ket, and acting on a bra, as in \langle\psi|A, gives a bra. The adjoint of an operator is also an operator, so the same holds for it.

Something important about the adjoint: given a ket |\psi\rangle, we can form the ket A|\psi\rangle, whose corresponding dual is \langle\psi|A^\dagger.

Maybe it is that 'corresponding' you are worried about. This is just because (as comote pointed out) in a Hilbert space there is a unique one-to-one correspondence between the space and its dual, so given a ket |\psi\rangle there must be an element we can denote by \langle\psi|. And we have a function J: H \rightarrow H^* such that \langle\psi| = J(|\psi\rangle), and I guess it can be shown that you get \langle\psi|A^\dagger = J(A|\psi\rangle), so here you use it.

Maybe it is actually this function J you have been asking about the whole time?

You shouldn't try to understand this function as lowering and raising indices as in general relativity, i.e. tensor language (at least I don't think so; maybe one could).

The existence of this great correspondence is due to Frigyes Riesz. Maybe look at

http://en.wikipedia.org/wiki/Riesz_isomorphism


comote: I'm not sure you are right.

(\langle x|A)\cdot|y\rangle = \langle x|\cdot(A|y\rangle)

This is how it is defined, without the star.
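In finite dimensions, both J and the rule \langle\psi|A^\dagger = J(A|\psi\rangle) are easy to check numerically. A small sketch (an illustration, with a random matrix standing in for the operator A):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
psi = rng.normal(size=(3, 1)) + 1j * rng.normal(size=(3, 1))

def J(ket):
    # The finite-dimensional Riesz map: the ket |psi> goes to the bra <psi|.
    return ket.conj().T

# <psi| A^dagger  ==  J(A |psi>)
print(np.allclose(J(psi) @ A.conj().T, J(A @ psi)))  # True
```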
 
  • #70
If you want to write it your way, it should be defined as the operator satisfying

\langle x| (A^\dagger|y\rangle) = ((\langle x|A) |y\rangle)^*
 
  • #71
Ok, I see now. I am more comfortable with the notation
\langle A^*x,y\rangle=\langle x,Ay\rangle.

This is precisely the place where I always thought the Dirac notation was clumsy. Thanks.
 
  • #72
Yes, you are right about that; one wants to write it the way you do. I had to look in my book to remember the right way to write it too.

But it is actually smart in the sense that, if you define the J above as the dagger, you get

(|\psi\rangle)^\dagger = \langle\psi|

and even better, you can show (or define, I guess) the adjoint as the operator satisfying

(A|\psi\rangle)^\dagger = \langle\psi|A^*

so by defining the dagger to act that way on kets, and defining A^\dagger = A^* on operators, you get an easy correspondence between the space and its dual.
 
  • #73
mrandersdk said:
comote: I'm not sure you are right.

(\langle x|A)\cdot|y\rangle = \langle x|\cdot(A|y\rangle)

This is how it is defined, without the star.

comote said:
Ok, I see now. I am more comfortable with the notation
\langle A^*x,y\rangle=\langle x,Ay\rangle.

This is precisely the place where I always thought the Dirac notation was clumsy. Thanks.
This confused me too in a recent discussion, but I realized that (\langle x|A)\cdot|y\rangle = \langle x|\cdot(A|y\rangle) should be interpreted as the definition of the right action of an operator on a bra.

\langle A^\dagger x,y\rangle=\langle x,Ay\rangle is of course the definition of the adjoint operator. I agree that the bra-ket notation is clumsy here.
 
  • #74
comote said:
Ok, I see now. I am more comfortable with the notation
\langle A^*x,y\rangle=\langle x,Ay\rangle.

This is precisely the place where I always thought the Dirac notation was clumsy. Thanks.
Bleh; when working with bras and kets, I've always hated that notation, since it breaks the cohesiveness of the syntax. And similarly if I'm actually working with a covector for some reason -- I would have great physical difficulty writing \langle \omega^*, v \rangle instead of \omega(v). No problem with the bra-ket notation, though, since it maintains the form of a product: \langle \omega | v \rangle.
 
  • #75
Thank you all. You've been immensely helpful. Even the easy confusion that results from learning this stuff is notable. By the way, thanks for the link to the other thread, Fredrik.

How does one derive this \langle A^*x,y\rangle=\langle x,Ay\rangle?
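(A standard sketch of why such an A^* exists, via the Riesz representation theorem rather than anything specific to this thread: for a bounded operator A and a fixed x, the map y \mapsto \langle x, Ay\rangle is a bounded linear functional, so Riesz gives a unique vector z with

\langle z, y\rangle = \langle x, Ay\rangle \quad \text{for all } y,

and one defines A^* x := z. Linearity and boundedness of A^* then follow from the uniqueness.)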
 
  • #77
I'm having trouble posting. I get a database error for long posts with a lot of LaTeX. Am I the only one?

I'm going to give it an hour.
 
  • #78
I just saw a mistake in my earlier post:

\langle x| (A^\dagger|y\rangle) = ((\langle x|A) |y\rangle)^*

should be

\langle x| (A^\dagger|y\rangle) = ((\langle y|A) |x\rangle)^*
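In finite dimensions the corrected identity is easy to sanity-check numerically. A sketch with random data (an illustration, not from the original posts):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
x = rng.normal(size=(3, 1)) + 1j * rng.normal(size=(3, 1))
y = rng.normal(size=(3, 1)) + 1j * rng.normal(size=(3, 1))

lhs = (x.conj().T @ (A.conj().T @ y)).item()  # <x| (A^dagger |y>)
rhs = np.conj((y.conj().T @ A @ x).item())    # ((<y| A) |x>)^*
print(np.isclose(lhs, rhs))                   # True
```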
 
  • #79
mrandersdk said:
Something important about the adjoint: given a ket |\psi\rangle, we can form the ket A|\psi\rangle, whose corresponding dual is \langle\psi|A^\dagger.

Maybe it is that 'corresponding' you are worried about. This is just because (as comote pointed out) in a Hilbert space there is a unique one-to-one correspondence between the space and its dual, so given a ket |\psi\rangle there must be an element we can denote by \langle\psi|. And we have a function J: H \rightarrow H^* such that \langle\psi| = J(|\psi\rangle), and I guess it can be shown that you get \langle\psi|A^\dagger = J(A|\psi\rangle), so here you use it.

Maybe it is actually this function J you have been asking about the whole time?

Go easy on me with the abstract algebra, but yes!

As you say, we have a function J: H \rightarrow H^*.

It's a bijective map, so J^{-1}: H^* \rightarrow H.

I've been calling J = g_{ij} and J^{-1} = g^{ij}.

Now, I'd like to think we can include the quantum mechanical operators as the various tensor products of H and H^\ast:

H \otimes H,\ H \otimes H^\ast,\ H^\ast \otimes H, and H^\ast \otimes H^\ast.

For example A \left| x \right\rangle = \left| y \right\rangle, where

x, y \in H
A \in H \otimes H^\ast

This part is guess-work: for an operator \Theta = \psi \otimes \phi, where

\psi \in H
\phi \in H^*
\Theta \in H \otimes H^*,

then \Theta^\dagger = J(\psi)\,J(\phi), where

\Theta^\dagger \in H^* \otimes H

Again, it may not all hang together as desired.
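The H \otimes H^\ast part has a concrete finite-dimensional stand-in: a rank-one operator |\psi\rangle\langle\phi| is just an outer product. A sketch illustrating the types (an illustration of the idea, not necessarily the exact J bookkeeping above):

```python
import numpy as np

rng = np.random.default_rng(2)
psi = rng.normal(size=(3, 1)) + 1j * rng.normal(size=(3, 1))
phi = rng.normal(size=(3, 1)) + 1j * rng.normal(size=(3, 1))

# Theta = |psi><phi| : a ket paired with a bra, i.e. an element of H (x) H*.
Theta = psi @ phi.conj().T

# Its adjoint is |phi><psi| : each factor is dualized and the order swaps.
print(np.allclose(Theta.conj().T, phi @ psi.conj().T))  # True
```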
 
  • #80
mrandersdk said:
I still think there is a big difference. Are you thinking of the functions as something, e.g., in L^2(R^3), ...
I'm a bit confused about your notation. What does the R^3 in L^2(R^3) represent? I just recalled that the notation used in quantum mechanics for the set of all square-integrable functions is not always written as L^2, as a mathematician might write it, but as L_2 or H_2. An example of the former is found in Notes on Hilbert Space, by Prof. C-I Tan, Brown University.

http://jcbmac.chem.brown.edu/baird/quantumpdf/Tan_on_Hilbert_Space.html

An example of the latter is found in Introductory Quantum Mechanics, Third Edition, Richard L. Liboff, page 102.

Note on LaTeX: I see people using normal LaTeX to write inline equations/notation. To do this properly, don't write "tex" in square brackets as you normally would when the expression is to appear inline. To write inline equations, use "itex" inside the square brackets. It's for this reason that letters are being printed inline but without the bottoms of the letters aligned with the other letters.

Pete
 
  • #81
comote said:
Getting back to the first thing I said. Even in basis independent notation what I said about column/row vectors has meaning. If we are given a unit vector \psi ...
I'd like to point out an incorrect usage of notation here. Since this is a thread on bras and kets, I think it's important to point this out. I also think it relates to what some posters are interested in, i.e. the usefulness of ket notation.

comote - Recall your comment in a previous post, i.e.
If we are given a unit vector \psi ...
\psi is not the notation for a unit vector, unless you are using it as shorthand for \psi = \psi(x)? It is the kernel which denotes the quantum state. By kernel I mean a designator. For instance, in tensor notation the components of the stress-energy-momentum tensor are T^{\alpha\beta}. The geometric notation for this tensor looks like T(_,_), where the "_" denote placeholders for two 1-forms. The letter "T" as it is used here is called a "kernel". In quantum mechanics \psi almost always denotes a kernel. The actual quantum state is represented in ket notation as |\psi\rangle.

On to your next statement
...then we can understand it as being an element of some orthonormal basis and then saying
|\psi\rangle is the representation as a column vector makes sense.
If one wishes to represent the state in position space, then one projects it into position space using the position eigenbra \langle x|, which is dual to the position eigenket |x\rangle, i.e. \psi(x) = \langle x|\psi\rangle. This represents an element of a column vector: it is the component of the state on the position basis. There is a continuum of rows here, labeled by the continuous index x.

The ket notation thus allows a very general representation of a quantum state. It is best to keep in mind the difference between the kernel which denotes the state, the mathematical object which represents the state and a component of that state on a given basis.

Pete
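The 'continuous column vector' picture can be caricatured on a finite grid. A sketch (the grid and the Gaussian state are arbitrary choices, not anything from the thread):

```python
import numpy as np

# Discretized position basis: each grid point x_i stands in for an eigenket |x_i>.
x = np.linspace(-5.0, 5.0, 201)
dx = x[1] - x[0]

# The 'column vector' of components psi(x_i) = <x_i|psi>, here a Gaussian state.
psi = np.exp(-x ** 2 / 2.0)
psi = psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)  # normalize on the grid

# <psi|psi> approximated as a Riemann sum over the continuous row index x.
print(np.sum(np.conj(psi) * psi) * dx)  # ~1.0
```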
 
  • #82
comote said:
The momentum operator does not have eigenstates per se,...
They most certainly do. The eigenstates of momentum are well defined. There are eigenstates of the position operator too.

Pete
 
  • #83
Hi comote - I'm going through each post one by one, so please ignore any comments in my previous posts that were already addressed by mrandersdk. You are fortunate to have him here. He seems to have Dirac notation down solid!

mrandersdk and comote - Welcome aboard! Nice to have people here who know their stuff.

Best wishes

Pete
 
  • #84
Oh, the R^3 was just because I assumed square-integrability over the vector space R^3, but it could be lots of other things I guess, depending on what particular thing you are working on.
 
  • #85
pmb_phy said:
They most certainly do. The eigenstates of momentum are well defined. There are also eigenstates of the position operator too.

Pete
The point comote was making is that there do not exist elements of the Hilbert space (i.e. square-integrable functions) that are eigenstates of position and momentum. So those operators do not have eigenstates in the strictest sense.

But that's where the rigged Hilbert space comote mentioned comes into play: it consists of the extra data

  • A subspace of test functions (e.g. the 'Schwartz functions').
  • A superspace of linear functionals applicable to test functions (called 'generalized states').

and then if you take the extension of the position and momentum operators to act on generalized states when possible, you can find eigen-[generalized states] of these extended operators.


Of course, we usually only bother making these distinctions when we have a specific reason to do so -- so in fact both of you are right, you're just talking in different contexts. :) (comote actually caring about the types of objects, while you are using the words in the usual practical sense)
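A concrete instance of the distinction, as a standard textbook aside (not from the thread): the plane wave e^{ipx/\hbar} satisfies

-i\hbar \frac{d}{dx}\, e^{ipx/\hbar} = p\, e^{ipx/\hbar} \qquad \text{but} \qquad \int_{-\infty}^{\infty} \left| e^{ipx/\hbar} \right|^2 dx = \infty

so it is a perfectly good eigenfunction of the extended momentum operator, yet it has infinite norm: a generalized state rather than an element of L^2.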
 
  • #86
Hurkyl said:
The point comote was making is that there do not exist elements of the Hilbert space (i.e. square-integrable functions) that are eigenstates of position and momentum. So those operators do not have eigenstates in the strictest sense.
Yes, upon further inspection I see that is what he was referring to. Thanks. However, I disagree: those operators do have eigenstates in the strictest sense. Just because they don't belong to a Hilbert space and don't represent physical states, it doesn't mean they aren't eigenstates. They are important as intermediates in the math.

Pete
 
  • #87
pmb_phy said:
Just because they don't belong to a Hilbert space, and they don't represent physical states, it doesn't mean that they aren't eigenstates.
Sure it does. The domain of P is (a dense subset of) the Hilbert space. If |v\rangle isn't in the Hilbert space, then it's certainly not in the domain of P, and so the expression P|v\rangle is nonsense!
 
  • #88
Hurkyl said:
Sure it does. The domain of P is (a dense subset of) the Hilbert space.
As I recall, that depends on the precise definition of the operator itself. Mind you, I'm going by what my QM text says. The authors could have been sloppy, but nothing else in that text is sloppy; it's pretty thorough as a matter of fact. Let me get back to you on this.

Phrak - I've thought about your questions some more and have more to add. In tensor analysis the tensors themselves are often defined in terms of how their components transform. It is commonly thought that the transformation is due to a coordinate transformation. However, this is not quite correct. To be precise, tensors defined this way are defined according to how the basis vectors transform. Transforming basis vectors (kets) is easy compared to tensor analysis, so perhaps we should focus on basis transformations rather than coordinate transformations.

More later.

Pete
 
  • #89
Regarding my comment above, i.e.
It is commonly thought that the transformation is due to a coordinate transformation. However, this is not quite correct. To be precise, tensors defined this way are defined according to how the basis vectors transform.
I was reminded of this fact when I was reviewing GR. I had the chance a few weeks ago to take some time and read Sean Carroll's GR lecture notes which are online at
http://xxx.lanl.gov/abs/gr-qc/9712019. On page 44 the author writes:

As usual, we are trying to emphasize a somewhat subtle ontological distinction - tensor components do not change when we change coordinates, they change when we change the basis in the tangent space, but we have decided to use the coordinates to define our basis. Therefore a change of coordinates induces a change of basis.
This is an important fact that is often overlooked.

I looked over your previous posts regarding lowering of indices (e.g. https://www.physicsforums.com/showpost.php?p=1782754&postcount=19) and wanted to point out that you should have tried the identity matrix to represent the metric. If you had done that, and first taken the complex conjugate of the row vector before taking the product, then you would have gotten the result you were looking for, i.e. you'd end up with the dual vector represented as a row vector.

I hope this helps.

Pete
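In NumPy terms, that suggestion amounts to the following sketch (with the identity metric, conjugate-then-transpose is exactly the bra rule from earlier in the thread; the example ket is arbitrary):

```python
import numpy as np

g = np.eye(3)  # the metric represented by the identity matrix
ket = np.array([[2.0 + 0.0j], [0.0 + 1.0j], [1.0 - 1.0j]])

# 'Lower the index' with g, conjugating first: the dual vector comes out as a
# row vector, and pairing it with the original ket gives the real norm squared.
bra = (g @ ket.conj()).T
print(bra @ ket)  # [[7.+0.j]]
```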
 
  • #90
What in the hell happened to Pete? Why is his name lined-out?
 
