Undergrad: Bra-ket vs. wavefunction notation in QM

Summary:
Bracket notation in quantum mechanics (QM) is closely related to wavefunction notation, with the wavefunction being derived from the ket representation of a state. To express a QM system in bracket notation, one typically solves the Schrödinger equation to obtain the wavefunction, then applies a Fourier transformation to express it in terms of momentum states. The discussion highlights the importance of using complex variables in Fourier transforms and emphasizes that probabilities in QM must be non-negative, adhering to the Kolmogorov axioms. The relationship between position and momentum representations is established through the Fourier transform, linking the wavefunction to the eigenstates of position and momentum. Overall, while the mathematical foundations of QM can be complex, they have proven successful in describing physical phenomena.
  • #31
I think it's still a bit confused.

The kets are vectors of an abstract Hilbert space. In non-relativistic QM, where you deal with systems with a finite number of degrees of freedom, it's the (infinite-dimensional) separable Hilbert space, which is unique up to isomorphism. That's why there could be two different versions of QM at first: Born, Jordan, and Heisenberg's matrix mechanics and Schrödinger's wave mechanics. But as Schrödinger showed very quickly, both are the same theory, distinguished just by the choice of different orthonormal systems (Born et al. used discrete sets like the harmonic-oscillator energy eigenstates, Schrödinger the (generalized) position eigenstates), and Dirac brought it into the representation-independent formulation with his bras and kets.

So there are kets ##|\psi \rangle## describing states and the (generalized) eigenvectors of observable operators like ##|\vec{x} \rangle##, which form a generalized set of orthonormal common eigenvectors of the position operators. The wave function à la Schrödinger consists of the components of the state ket with respect to this generalized basis.
$$\psi(\vec{x})=\langle \vec{x}|\psi \rangle.$$
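A minimal numerical sketch of this "components with respect to the position basis" picture, assuming a one-dimensional toy system sampled on a finite grid (so the continuous label ##\vec{x}## becomes a finite index ##i## and the integral a Riemann sum):

```python
import numpy as np

# Finite position grid standing in for the continuous basis |x>
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

# A normalized Gaussian wave packet: psi[i] plays the role of <x_i|psi>,
# the i-th component of the ket in the (discretized) position basis.
psi = np.exp(-x**2 / 2.0) / np.pi**0.25

print(psi.shape)                    # (2001,): the "column" of components
print(np.sum(np.abs(psi)**2) * dx)  # ~1.0: the continuum norm as a Riemann sum
```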
 
  • Like
Likes bhobba
  • #32
olgerm said:
##\langle\Psi_1|\Psi_1\rangle=|\Psi(t,x_1,y_1,z_1,x_2,y_2,z_2)|^2##
Is that True?
 
  • #33
vanhees71 said:
wave functions are (generalized) components of vectors with respect to a complete orthonormal set of generalized common eigenfunctions of a set of self-adjoint operators, representing a complete set of compatible observables.
Did you mean that bras and kets are (generalized) components of vectors?

Is it correct to describe a quantum system that consists of an electron and a proton with the following ket:
##|my\_ket\rangle=|x_{proton},y_{proton},z_{proton},x_{electron},y_{electron},z_{electron}\rangle##
and then ##\hat{H}|my\_ket\rangle=E_{pot}(x_{proton},y_{proton},z_{proton},x_{electron},y_{electron},z_{electron})+E_{kin}(x_{proton},y_{proton},z_{proton},x_{electron},y_{electron},z_{electron})=
\frac{q^2\,k_{Coulomb}}{\sqrt{(x_{proton}-x_{electron})^2+(y_{proton}-y_{electron})^2+(z_{proton}-z_{electron})^2}}+\\
\frac{\hbar}{2m}\left(\frac{\partial^2 ?}{\partial x_{proton}^2}+\frac{\partial^2 ?}{\partial y_{proton}^2}+\frac{\partial^2 ?}{\partial z_{proton}^2}+\frac{\partial^2 ?}{\partial x_{electron}^2}+\frac{\partial^2 ?}{\partial y_{electron}^2}+\frac{\partial^2 ?}{\partial z_{electron}^2}\right)##
 
  • #34
[Edit: Corrected in view of #37]
No, kets are vectors and bras are co-vectors. It's never right to set a vector equal to its components with respect to some basis.
 
Last edited:
  • Like
Likes bhobba
  • #35
olgerm said:
Is that True?
No, because the scalar product of two vectors is a number, not a function.
$$\langle \psi_1|\psi_1 \rangle=\int_{\mathbb{R}^3} \mathrm{d}^3 x |\langle \vec{x}|\psi_1 \rangle|^2.$$
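A small numerical sketch of this distinction, assuming a one-dimensional Gaussian toy wavefunction on a grid instead of the 3D case in the formula: ##|\langle \vec{x}|\psi_1\rangle|^2## is still a function of position, while ##\langle\psi_1|\psi_1\rangle## is the single number obtained by integrating it.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

# psi_1(x) = <x|psi_1>: a normalized Gaussian wave packet with a momentum kick
psi1 = np.exp(-x**2 / 2.0 + 3.0j * x) / np.pi**0.25

density = np.abs(psi1)**2      # |<x|psi_1>|^2: an array, i.e. a sampled function of x
braket = np.sum(density) * dx  # <psi_1|psi_1>: a single number

print(density.shape)           # (4001,)
print(braket)                  # ~1.0
```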
 
  • #36
olgerm said:
Did you mean that bras and kets are (generalized) components of vectors?

Kets are generalized vectors, and wave functions are their components.
 
  • #37
vanhees71 said:
bras are vectors and kets are co-vectors.

Isn't this backwards? Aren't bras covectors and kets vectors?

(I guess since the two are dual you could adopt either convention; but I thought the usual convention was that bras are covectors and kets are vectors.)
 
  • Like
Likes bhobba and weirdoguy
  • #38
PeterDonis said:
I guess since the two are dual you could adopt either convention; but I thought the usual convention was that bras are covectors and kets are vectors.
This.
 
  • Like
Likes bhobba
  • #39
Orodruin said:
This.

I want to point out that while that is indeed the correct way of looking at it, when you study the rigorous mathematics of what's going on (i.e., rigged Hilbert spaces), the above is only generally true for Hilbert spaces; on occasion we use elements from more general spaces where, while it is still true, things are more difficult. I became caught up in this issue when I learned QM, and it diverted me unnecessarily from QM proper. An interesting diversion, especially for those of a mathematical bent, but a diversion nonetheless.

If anyone REALLY wants to delve into this issue - it isn't easy - the attachment contains an overview. Note the conclusion: the RHS fully justifies Dirac's bra-ket formalism. In particular, there is a 1:1 correspondence between bras and kets.

However constructing the exact space is not trivial.

Thanks
Bill
 


Last edited:
  • Like
Likes vanhees71
  • #40
PeterDonis said:
Isn't this backwards? Aren't bras covectors and kets vectors?

(I guess since the two are dual you could adopt either convention; but I thought the usual convention was that bras are covectors and kets are vectors.)
Yes, sure :-(( ; ##\langle \phi|## is a linear form (co-vector) and thus a bra, and ##|\psi \rangle## is a vector and thus a ket. Also note that only for proper, normalizable Hilbert-space vectors are bras and kets dual to each other. In QT you need the more general objects of the "rigged Hilbert space" to make sense of the physicists' sloppy math concerning unbounded self-adjoint operators with continuous spectra. See @bhobba's previous posting and the nice talk linked therein.
 
  • #41
I don't want to annoy you with basic questions, but I did not find the answer in the two books that I read, nor on the internet.

weirdoguy said:
Kets are generalized vectors, and wave functions are their components.
I assume that the values of the wavefunction are the components of the vector. How do I get the i-th component of the ket from the wavefunction (what should its argument be)? An element of a vector can be labelled with just one number (an index), but the wavefunction has more arguments.
vanhees71 said:
$$\psi(\vec{x})=\langle \vec{x}|\psi \rangle.$$
If the vectors are orthonormal, then it should be ##\langle \vec{x}|\psi \rangle=\sum_{i=0}(\langle \vec{x}|(i))\,(|\psi\rangle(i))##, but how can this sum be equal to a function and not a number?
 
  • #42
The space of possible wavefunctions forms a vector space. Thus the wavefunction, when considered as an element of that vector space, is a vector. That vector space is of a special type called a Hilbert space. A ket is just another notation for the wavefunction when considered as an element of a Hilbert space.
 
  • Like
Likes olgerm
  • #43
olgerm said:
If the vectors are orthonormal, then it should be ##\langle \vec{x}|\psi \rangle=\sum_{i=0}(\langle \vec{x}|(i))\,(|\psi\rangle(i))##, but how can this sum be equal to a function and not a number?
It's not a sum, because ##x## is continuous; you should be integrating.

However, the LHS here involves only one basis bra, so the sum/integral on the right is not needed.
 
  • #44
olgerm said:
I don't want to annoy you with basic questions, but I did not find the answer in the two books that I read, nor on the internet. I assume that the values of the wavefunction are the components of the vector. How do I get the i-th component of the ket from the wavefunction (what should its argument be)? An element of a vector can be labelled with just one number (an index), but the wavefunction has more arguments. If the vectors are orthonormal, then it should be ##\langle \vec{x}|\psi \rangle=\sum_{i=0}(\langle \vec{x}|(i))\,(|\psi\rangle(i))##, but how can this sum be equal to a function and not a number?
The scalar product can be evaluated in terms of wave functions, using the completeness relation
$$\int_{\mathbb{R}^3} \mathrm{d}^3 x |\vec{x} \rangle \langle \vec{x}|=\hat{1},$$
i.e.,
$$\langle \psi|\phi \rangle=\int_{\mathbb{R}^3} \mathrm{d}^3 x \langle \psi|\vec{x} \rangle \langle \vec{x}|\phi \rangle = \int_{\mathbb{R}^3} \mathrm{d}^3 x \psi^*(\vec{x}) \phi(\vec{x}).$$
Here you have the case of a representation in terms of generalized "basis vectors", i.e., the position "eigenvectors". They provide a "continuous" label ##\vec{x} \in \mathbb{R}^3##.
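A minimal numerical sketch of this overlap integral, assuming two one-dimensional Gaussian toy wavefunctions on a grid (so the 3D integral becomes a 1D Riemann sum):

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

def gaussian(x, x0):
    """Normalized Gaussian centered at x0, standing in for <x|psi>."""
    return np.exp(-(x - x0)**2 / 2.0) / np.pi**0.25

psi = gaussian(x, 0.0)  # <x|psi>
phi = gaussian(x, 1.0)  # <x|phi>

# <psi|phi> = integral of psi*(x) phi(x) dx, discretized as a sum
overlap = np.sum(np.conj(psi) * phi) * dx
print(overlap)          # exp(-1/4) ~ 0.7788 for these two Gaussians
```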

All this can be made mathematically rigorous in terms of the so-called "rigged Hilbert space" formalism, but that's not necessary to get started with QM. A good "normal" textbook will do. My favorite is

J. J. Sakurai, S. Tuan, Modern Quantum Mechanics, Addison-Wesley (1993).
 
  • #45
DarMM said:
The space of possible wavefunctions forms a vector space. Thus the wavefunction, when considered as an element of that vector space, is a vector. That vector space is of a special type called a Hilbert space. A ket is just another notation for the wavefunction when considered as an element of a Hilbert space.
I think this formulation is the source of the OP's problem. One should clearly distinguish between the abstract ("representation-free") vectors and the wave functions, which are the components of those vectors in the "position representation".

It's like the difference, in finite-dimensional vector spaces, between an abstract vector and its components with respect to a basis.
 
  • Like
Likes DarMM
  • #46
DarMM said:
The space of possible wavefunctions forms a vector space.
The vector-space axioms say that any vector in that space multiplied by a scalar should also belong to that vector space, but if I multiplied the wavefunction by 2, ##\Psi_2(x_{proton},y_{proton},z_{proton},x_{electron},y_{electron},z_{electron})=2\,\Psi(x_{proton},y_{proton},z_{proton},x_{electron},y_{electron},z_{electron})##, then the sum of all probabilities according to ##\Psi_2## would not be 1: ##\int_{-\infty}^\infty dx_{proton} \int_{-\infty}^\infty dy_{proton} \int_{-\infty}^\infty dz_{proton} \int_{-\infty}^\infty dx_{electron} \int_{-\infty}^\infty dy_{electron} \int_{-\infty}^\infty dz_{electron}\, |\Psi_2(x_{proton},y_{proton},z_{proton},x_{electron},y_{electron},z_{electron})|^2=4##.
 
  • #47
True, really it is the space of unnormalized wavefunctions that forms a vector space. The actual space of quantum wavefunctions is, roughly speaking, the unit sphere in this space.

Although note that wavefunctions differing by a phase are equivalent, so even this unit sphere overdescribes the space of quantum (pure) states.

The vector space structure is just very useful in calculations.
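A quick numerical sketch of these points, assuming a one-dimensional Gaussian toy wavefunction on a grid: scaling by 2 keeps you in the vector space but off the "unit sphere", renormalizing brings you back, and a global phase changes nothing in ##|\psi|^2##.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

def norm2(f):
    """Discretized <f|f> = integral of |f(x)|^2 dx."""
    return np.sum(np.abs(f)**2) * dx

psi = np.exp(-x**2 / 2.0) / np.pi**0.25    # normalized: <psi|psi> ~ 1

psi2 = 2.0 * psi                           # still a vector, but <psi2|psi2> ~ 4
psi2_renorm = psi2 / np.sqrt(norm2(psi2))  # back on the unit sphere

psi_phase = np.exp(0.7j) * psi             # same physical state, different global phase

print(norm2(psi), norm2(psi2), norm2(psi2_renorm))        # ~1, ~4, ~1
print(np.allclose(np.abs(psi_phase)**2, np.abs(psi)**2))  # True
```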
 
  • Like
Likes vanhees71
  • #48
PeterDonis said:
Isn't this backwards? Aren't bras covectors and kets vectors?

(I guess since the two are dual you could adopt either convention; but I thought the usual convention was that bras are covectors and kets are vectors.)
bhobba said:
when you study the rigorous mathematics of what's going on (ie Rigged Hilbert Spaces) [...] The RHS fully justifies Dirac’s bra-ket formalism. In particular, there is a 1:1 correspondence between bras and kets.
Only in finite-dimensional Hilbert spaces. There kets may be viewed as column vectors (matrices with one column) = vectors of ##C^N##, bras as row vectors (matrices with one row) = covectors = linear forms, and the inner product is just the matrix product of these.
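In the finite-dimensional case this is literally matrix algebra; a minimal sketch, assuming two arbitrary kets in ##\mathbb{C}^3##:

```python
import numpy as np

# Kets in C^3 as column vectors (3x1 matrices)
ket_a = np.array([[1.0 + 1.0j], [0.0], [2.0]])
ket_b = np.array([[0.5], [1.0j], [0.0]])

# The corresponding bra is the conjugate transpose: a row vector (1x3 matrix)
bra_a = ket_a.conj().T

# The inner product <a|b> is just the matrix product bra times ket (a 1x1 matrix)
inner = bra_a @ ket_b
print(inner)         # [[0.5-0.5j]]
print(inner.item())  # (0.5-0.5j) as a plain complex number
```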

But in order for the Schrödinger equation in the usual ket notation to be allowed to have unnormalizable solutions (which Dirac's formalism allows), one must think of the bras as being smooth test functions, i.e., the elements of the nuclear space (the bottom of the triple of vector spaces of an RHS, a space of Schwartz functions) and the kets as the linear functionals on it (the top of the triple, a much bigger space of distributions). In this setting, bras are vectors and kets are covectors, and there is a big difference between these - there are many more kets than bras.

However, Dirac's formalism is fully symmetric. Hence it is not quite matched by the RHS formalism. The latter does not have formulas such as ##\langle x|y\rangle=\delta(x,y)##.
 
Last edited:
  • Like
Likes DarMM and PeterDonis
  • #49
Again, one should really stress in introductory lectures the difference between a vector, which is a basis-independent object (in physics it describes real-world quantities like velocities, accelerations, forces, fields such as the electric and magnetic field, or current densities), and the components of the vector with respect to some basis. Given a basis, there is a one-to-one mapping between the vectors and their components.

In quantum theory the kets are vectors in an abstract Hilbert space (with the complex numbers as scalars). In non-relativistic QM with finitely many fundamental degrees of freedom (e.g., for a free particle: position, momentum, and spin) the Hilbert space is the separable Hilbert space (there's only one infinite-dimensional separable Hilbert space modulo isomorphism).

Then there are linear forms on a vector space, i.e., linear maps from the vector space to the field of scalars. These linear forms form a vector space themselves, the dual space to the given vector space. In finite-dimensional vector spaces, given a basis, there's a one-to-one mapping between the vector space and its dual space, but not a basis-independent one. This changes if you introduce a non-degenerate fundamental form, i.e., a bilinear (or, for complex vector spaces, sesquilinear) form, with which you get a basis-independent one-to-one mapping between vectors and linear forms.

For the Hilbert space, where a scalar product (sesquilinear form) is defined, you have to distinguish between the continuous linear forms (continuous with respect to the metric on the Hilbert space induced in the usual way by the scalar product) and general linear forms. For the former there's a one-to-one correspondence between the Hilbert space and its ("topological") dual, and in this way these two spaces are identified.

In QM you need the more general linear forms, since you want to use "generalized eigenvectors" to describe a spectral representation of unbounded essentially self-adjoint operators. This always happens when there are operators with continuous spectra like position and momentum. One modern, mathematically rigorous formulation is the rigged Hilbert space. There you have a domain of the self-adjoint operators like position and momentum, which is a dense sub-vector-space of the Hilbert space. The dual of this dense subspace is larger than the Hilbert space, i.e., it contains more linear forms than the bounded linear forms on the Hilbert space.

Using a complete orthonormal set ##|u_n \rangle## you can map the abstract vectors to square-summable sequences ##(\psi_n)## with ##\psi_n =\langle u_n|\psi \rangle##. These sequences form the Hilbert space ##\ell^2##, and you can write the ##(\psi_n)## as infinite columns. The operators are then represented by the corresponding matrix elements ##A_{mn}=\langle u_m|\hat{A}|u_n \rangle##, which you can arrange as an infinite##\times##infinite matrix. This is a matrix representation of QM, and it was the first form in which modern QM was discovered, by Born, Jordan, and Heisenberg in 1925. The heuristics, provided by Heisenberg in his Helgoland paper, was to deal only with transition rates between states. Heisenberg had the discrete energy levels of atoms in mind but could at first demonstrate the principle only using the harmonic oscillator as a model. Born immediately recognized that Heisenberg's "quantum math" was nothing else than matrix calculus in an infinite-dimensional vector space, and then in a quick sequence of papers by Born and Jordan, as well as Born, Jordan, and Heisenberg, the complete theory was worked out (including the quantization of the electromagnetic field!). Many physicists were quite sceptical about the proper meaning of the infinite-dimensional vectors and matrices.
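A minimal sketch of this matrix picture, assuming the harmonic-oscillator energy eigenbasis with ##\hbar=m=\omega=1## and a truncation to ##N## basis states (so the commutation relation is only approximately reproduced):

```python
import numpy as np

N = 30  # truncation: keep only the first N oscillator eigenstates |u_n>

# Annihilation operator in the energy eigenbasis: <u_m|a|u_n> = sqrt(n) delta_{m,n-1}
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T

# Position and momentum as (truncated) infinite matrices A_{mn} = <u_m|A|u_n>
X = (a + adag) / np.sqrt(2.0)
P = -1.0j * (a - adag) / np.sqrt(2.0)

# [X, P] should be i*hbar times the identity; truncation only spoils the last diagonal entry
comm = X @ P - P @ X
print(np.round(np.diag(comm)[:5], 6))  # ~ [1j, 1j, 1j, 1j, 1j]
```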

Then you can also use the generalized position eigenvectors ##|\vec{x} \rangle##, which leads to the mapping of the Hilbert-space vectors to square-integrable (in the Lebesgue sense) functions, ##\psi(\vec{x})=\langle \vec{x}|\psi \rangle##. This is the Hilbert space ##\mathrm{L}^2## of square-integrable functions, and the corresponding representation is the second form in which modern quantum theory was discovered, in 1926 by Schrödinger; it is usually called "wave mechanics". Schrödinger showed very early that "wave mechanics" and "matrix mechanics" are the same theory, just written in different representations. Schrödinger's formulation was heuristically derived from the analogy between wave optics and geometrical optics in electromagnetism, the latter being the eikonal approximation of the former. Schrödinger used the argument backwards, considering the Hamilton-Jacobi partial differential equation as the eikonal approximation of a yet unknown wave equation for particles, which was just the mathematical consequence of the ideas brought forward in de Broglie's PhD thesis, on which Einstein had commented favorably as a further step towards understanding "wave-particle duality".

Almost at the same time Dirac came up with the now favored abstract formulation, introducing q-numbers with a commutator algebra heuristically linked to the Poisson-bracket formulation of classical mechanics. This was dubbed "transformation theory", because the bra-ket formalism enables a simple calculus for transformations between different representations, like the Fourier transformation between the position and the momentum representation of Schrödinger's wave mechanics.
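A minimal numerical sketch of that change of representation, assuming ##\hbar=1##, a one-dimensional Gaussian, and a discrete FFT as a stand-in for the continuum Fourier transform (so the normalization conventions are only approximate):

```python
import numpy as np

N = 2048
x = np.linspace(-20.0, 20.0, N, endpoint=False)
dx = x[1] - x[0]

# Position representation <x|psi>: Gaussian with mean momentum ~3
psi_x = np.exp(-x**2 / 2.0 + 3.0j * x) / np.pi**0.25

# Momentum representation <p|psi> via FFT (up to discretization conventions)
p = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))
psi_p = np.fft.fftshift(np.fft.fft(psi_x)) * dx / np.sqrt(2.0 * np.pi)
dp = p[1] - p[0]

# The norm is the same in either representation, and the momentum-space
# density peaks near the mean momentum of the wave packet.
print(np.sum(np.abs(psi_x)**2) * dx)  # ~1.0
print(np.sum(np.abs(psi_p)**2) * dp)  # ~1.0
print(p[np.argmax(np.abs(psi_p))])    # ~3.0
```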

Finally the entire edifice was made rigorous by von Neumann, who recognized the vector space as a Hilbert space and formulated a rigorous treatment of unbounded operators. The physicists' sloppy ideas could then be made rigorous by Gelfand et al. in terms of the "rigged Hilbert space" formalism. For the practicing theoretical physicist you can get along quite well without this formalism, though it's always good to know about its limitations, and it's good to know at least some elements of the formalism. A good compromise between mathematical rigor and physicists' sloppiness is Ballentine's textbook.
 
  • Like
Likes bhobba
  • #50
vanhees71 said:
In quantum theory the kets are vectors in an abstract Hilbert space
No. Many typical Dirac kets - such as ##|x\rangle## - are not vectors in a Hilbert space!
 
  • Like
Likes dextercioby
  • #51
True, but you introduce them first as dual vectors of the nuclear space in the rigged-Hilbert-space formulation. I'd have to look up the precise mathematical construction of the corresponding kets and how to justify equations like ##\langle x|x' \rangle=\delta(x-x')##, where ##\delta## is the Dirac ##\delta## distribution. I guess you can find it easily in the standard literature like Galindo & Pascual or de la Madrid.
 
  • #52
So kets are points (a.k.a. elements, a.k.a. vectors) of a vector space whose basis vectors correspond to the degrees of freedom of the quantity the space is named for?
For example, the ket corresponding to ##\psi(x_{proton},y_{proton},z_{proton},x_{electron},y_{electron},z_{electron})## may have ##|\mathbb{R}|^6## components (one describing the value of ##\psi## for each set of arguments of ##\psi##)?
 
  • #53
olgerm said:
So kets are points (a.k.a. elements, a.k.a. vectors) of a vector space whose basis vectors correspond to the degrees of freedom of the quantity the space is named for?
For example, the ket corresponding to ##\psi(x_{proton},y_{proton},z_{proton},x_{electron},y_{electron},z_{electron})## may have ##|\mathbb{R}|^6## components (one describing the value of ##\psi## for each set of arguments of ##\psi##)?
We have some basis set of functions ##\phi_n##, and the components of ##\psi## are the coefficients ##c_n## in the sum:
$$
\psi = \sum_{n}c_{n}\phi_{n}
$$
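A small sketch of this expansion, assuming the interval ##[0,L]## with the particle-in-a-box sine functions as the orthonormal basis ##\phi_n## (one common choice among many): the components are recovered as ##c_n=\langle\phi_n|\psi\rangle##, and summing them back up reproduces ##\psi##.

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]

def phi(n, x):
    """Orthonormal sine basis on [0, L] (particle-in-a-box eigenfunctions)."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

# Some wave function on [0, L] vanishing at the endpoints, normalized by hand
psi = x * (L - x)
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Components c_n = <phi_n|psi> = integral of phi_n(x)* psi(x) dx
n_max = 20
c = np.array([np.sum(np.conj(phi(n, x)) * psi) * dx for n in range(1, n_max + 1)])

# Reconstruct psi from its components: psi ~ sum_n c_n phi_n
psi_rebuilt = sum(c[n - 1] * phi(n, x) for n in range(1, n_max + 1))
print(np.max(np.abs(psi - psi_rebuilt)))  # small; the truncated expansion converges
```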
 
  • Like
Likes olgerm
  • #54
DarMM said:
We have some basis set of functions ##\phi_n##, and the components of ##\psi## are the coefficients ##c_n## in the sum:
$$\psi = \sum_{n}c_{n}\phi_{n}$$
What functions are in this basis set?
 
  • #55
olgerm said:
What functions are in this basis set?
You can choose any complete orthonormal set of functions. It's a choice of basis, just as you can choose different sets of basis vectors in linear algebra.
 
  • #56
A. Neumaier said:
However, Dirac's formalism is fully symmetric. Hence it is not quite matched by the RHS formalism. The latter does not have formulas such as ##\langle x|y\rangle=\delta(x,y)##.

Yes of course.

What I think de la Madrid is saying is that given any ket, say ##|a\rangle##, a corresponding bra ##\langle a|## exists, not that ##\langle a|b\rangle## is always defined for an arbitrary ##|b\rangle## - because obviously it isn't. It's right at the beginning of the Gelfand triple used to define them. In the middle is the Hilbert space, in which everything is fine. If we have a subspace of a Hilbert space, you can define all the linear functionals on that space, which I will write as the bras ##\langle a|##. Then we can define the corresponding ket ##|a\rangle## via ##\langle a|b \rangle## = the complex conjugate of ##\langle b|a \rangle##. So by definition there is a 1-1 correspondence. It's not quite what Dirac says in his book, where the impression is given that you can always define ##\langle a|b\rangle## given any bra and ket. The eigenbras and eigenkets of momentum and position, giving the Dirac delta function, are the obvious counterexample.

Thanks
Bill
 
Last edited:
  • Like
Likes vanhees71
  • #57
DarMM said:
You can choose any complete orthonormal set of functions. It's a choice of basis, just as you can choose different sets of basis vectors in linear algebra.

The issue comes when the basis vectors form a continuum. That's where you need Rigged Hilbert Spaces and the Nuclear Spectral Theorem, sometimes called the Gelfand-Maurin Theorem or Generalised eigenfunction Theorem.

Thanks
Bill
 
  • Like
Likes vanhees71
  • #58
bhobba said:
The issue comes when the basis vectors form a continuum. That's where you need Rigged Hilbert Spaces and the Nuclear Spectral Theorem, sometimes called the Gelfand-Maurin Theorem or Generalised eigenfunction Theorem.
I think we should keep in mind that such states like ##|x\rangle## aren't part of the actual Hilbert space of states. In terms of actual physical states the basis is always discrete.
 
  • #59
DarMM said:
I think we should keep in mind that such states like ##|x\rangle## aren't part of the actual Hilbert space of states. In terms of actual physical states the basis is always discrete.
Resonances are described by unnormalizable solutions of the SE called Gamow or Siegert states. They are not in the Hilbert space but are quite physical!
 
  • Like
Likes dextercioby
  • #60
A. Neumaier said:
Resonances are described by unnormalizable solutions of the SE called Gamow or Siegert states. They are not in the Hilbert space but are quite physical!
The literal resonance pole I assume. During the scattering the state is always an element of the Hilbert space.

I'm not saying that such things aren't useful for extracting physics. Similarly, the analytic properties of the Wightman functions extended into complex tubes tell one much. However, the actual Wightman functions are not, physically, functions on complex tubes.

I view this in the same way as using complex analysis to extract information more easily even though the actual situation is described by a real-valued function. Or like instantons in QFT: strictly speaking they're not states, but they do carry physical tunneling information.

I find students can get into all sorts of confusion by thinking that things like ##|x\rangle## are actual states.
 
Last edited:
  • Like
Likes vanhees71
