Bra-ket vs. wavefunction notation in QM

In summary, the conversation revolves around the use of bra-ket notation and wavefunction notation in explaining quantum mechanics. In the bra-ket picture one solves the Schrödinger equation for an abstract state vector, obtains the wavefunction by expanding the ket in position eigenstates, and uses the Fourier transform to pass between the position and momentum representations. The use of complex variables and Fourier transforms is standard practice, and a working knowledge of linear algebra and distribution theory is crucial for understanding these concepts. The conversation also touches on probability in quantum mechanics and the importance of avoiding negative probabilities.
  • #36
olgerm said:
You meant bras and kets are (generalized) components of vectors?

Kets are generalized vectors, and wave functions are their components.
 
  • #37
vanhees71 said:
bras are vectors and kets are co-vectors.

Isn't this backwards? Aren't bras covectors and kets vectors?

(I guess since the two are dual you could adopt either convention; but I thought the usual convention was that bras are covectors and kets are vectors.)
 
  • Like
Likes bhobba and weirdoguy
  • #38
PeterDonis said:
I guess since the two are dual you could adopt either convention; but I thought the usual convention was that bras are covectors and kets are vectors.
This.
 
  • Like
Likes bhobba
  • #39
Orodruin said:
This.

I want to point out that while that is indeed the correct way of looking at it, when you study the rigorous mathematics of what's going on (i.e. Rigged Hilbert Spaces) the above is only generally true for Hilbert spaces; on occasion we use elements from more general spaces where, although it is still true, things are more difficult. I became caught up in this issue when I learned QM and it diverted me unnecessarily from QM proper. An interesting diversion, especially for those of a mathematical bent, but a diversion nonetheless.

If anyone REALLY wants to delve into this issue - it isn't easy - the attachment contains an overview. Note the conclusion: the RHS fully justifies Dirac's bra-ket formalism. In particular, there is a 1:1 correspondence between bras and kets.

However constructing the exact space is not trivial.

Thanks
Bill
 

Attachments

  • RHS.pdf
  • Like
Likes vanhees71
  • #40
PeterDonis said:
Isn't this backwards? Aren't bras covectors and kets vectors?

(I guess since the two are dual you could adopt either convention; but I thought the usual convention was that bras are covectors and kets are vectors.)
Yes, sure :-((. ##\langle \phi|## is a linear form (co-vector) and hence a bra, and ##|\psi \rangle## is a vector and hence a ket. Also note that bras and kets are dual to each other only for proper, normalizable Hilbert-space vectors. In QT you need the more general objects of the "rigged Hilbert space" to make sense of the physicists' sloppy math concerning unbounded self-adjoint operators with continuous spectra. See @bhobba's previous posting and the nice talk linked therein.
 
  • #41
I don't want to annoy you with basic questions, but I did not find the answer in the two books that I read, nor on the internet.

weirdoguy said:
Kets are generalized vectors, and wave functions are their components.
I assume that the values of the wavefunction are the components of the vector. How do I get the i'th component of the ket from the wavefunction (what should its argument be)? An element of a vector can be described with just one number (its index), but the wavefunction has more arguments.
vanhees71 said:
$$\psi(\vec{x})=\langle \vec{x}|\psi \rangle.$$
If the vectors are orthonormal then it should be ##\langle \vec{x}|\psi \rangle=\sum_{i=0}(\langle \vec{x}|(i))*(|\psi\rangle(i))##, but how can this sum be equal to a function and not a number?
 
  • #42
The space of possible wavefunctions forms a vector space. Thus the wavefunction, when considered as an element of that vector space, is a vector. That vector space is of a special type called a Hilbert space. A ket is just another notation for the wavefunction when considered as an element of a Hilbert space.
 
  • Like
Likes olgerm
  • #43
olgerm said:
If the vectors are orthonormal then it should be ##\langle \vec{x}|\psi \rangle=\sum_{i=0}(\langle \vec{x}|(i))*(|\psi\rangle(i))##, but how can this sum be equal to a function and not a number?
It's not a sum because ##x## is continuous; you should be integrating.

However, the LHS here involves only one basis bra, so the sum/integral on the right is not needed.
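To spell that out (assuming the usual continuum normalization ##\langle \vec{x}|\vec{x}' \rangle=\delta^{(3)}(\vec{x}-\vec{x}')##), the continuous analogue of that sum would read
$$\langle \vec{x}|\psi \rangle=\int_{\mathbb{R}^3} \mathrm{d}^3 x' \, \langle \vec{x}|\vec{x}' \rangle \langle \vec{x}'|\psi \rangle = \int_{\mathbb{R}^3} \mathrm{d}^3 x' \, \delta^{(3)}(\vec{x}-\vec{x}') \, \psi(\vec{x}') = \psi(\vec{x}).$$
For each fixed ##\vec{x}## this is a single number; only when you let ##\vec{x}## range over all positions do you get the function ##\psi##.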
 
  • #44
olgerm said:
I don't want to annoy you with basic questions, but I did not find the answer in the two books that I read, nor on the internet. I assume that the values of the wavefunction are the components of the vector. How do I get the i'th component of the ket from the wavefunction (what should its argument be)? An element of a vector can be described with just one number (its index), but the wavefunction has more arguments. If the vectors are orthonormal then it should be ##\langle \vec{x}|\psi \rangle=\sum_{i=0}(\langle \vec{x}|(i))*(|\psi\rangle(i))##, but how can this sum be equal to a function and not a number?
The vector product can be evaluated in terms of wave functions, using the completeness relation
$$\int_{\mathbb{R}^3} \mathrm{d}^3 x |\vec{x} \rangle \langle \vec{x}|=\hat{1},$$
i.e.,
$$\langle \psi|\phi \rangle=\int_{\mathbb{R}^3} \mathrm{d}^3 x \langle \psi|\vec{x} \rangle \langle \vec{x}|\phi \rangle = \int_{\mathbb{R}^3} \mathrm{d}^3 x \psi^*(\vec{x}) \phi(\vec{x}).$$
Here you have the case of a representation in terms of generalized "basis vectors", i.e., the position "eigen vectors". They provide a "continuous" label ##\vec{x} \in \mathbb{R}^3##.
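If a concrete numerical picture helps, here is a minimal 1D sketch (my own toy example with hypothetical Gaussian wave packets) of how the abstract product reduces to the overlap integral of the wavefunctions once the completeness relation is inserted:

```python
import numpy as np

# Minimal 1D sketch: after inserting the position completeness relation, the
# abstract inner product <psi|phi> becomes the overlap integral
#   int dx psi*(x) phi(x),
# approximated here by a Riemann sum on a grid.  The wavefunctions are
# hypothetical Gaussian packets chosen purely for illustration.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

def gaussian(x, x0, k0, sigma=1.0):
    """Normalized Gaussian wave packet centred at x0 with mean momentum k0."""
    norm = (np.pi * sigma**2) ** (-0.25)
    return norm * np.exp(-(x - x0)**2 / (2.0 * sigma**2) + 1j * k0 * x)

psi = gaussian(x, x0=0.0, k0=1.0)
phi = gaussian(x, x0=1.0, k0=1.0)

overlap = np.sum(np.conj(psi) * phi) * dx   # <psi|phi> ~ sum_i psi*(x_i) phi(x_i) dx
print(np.sum(np.abs(psi)**2) * dx)          # <psi|psi> ~ 1 (normalization check)
print(overlap)                              # a single complex number
```

Refining the grid only improves the Riemann-sum approximation to the integral; nothing about the abstract ##\langle \psi|\phi \rangle## depends on the grid.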

All this can be made mathematically rigorous in terms of the so-called "rigged Hilbert space" formalism, but that's not necessary to begin with QM. A good "normal" textbook will do. My favorite is

J. J. Sakurai, S. Tuan, Modern Quantum Mechanics, Addison-Wesley (1993).
 
  • #45
DarMM said:
The space of possible wavefunctions forms a vector space. Thus the wavefunction, when considered as an element of that vector space, is a vector. That vector space is of a special type called a Hilbert space. A ket is just another notation for the wavefunction when considered as an element of a Hilbert space.
I think this formulation is precisely the OP's problem. One should clearly distinguish between the abstract ("representation-free") vectors and the wave functions, which are the vectors in the "position representation".

It's like the difference, in finite-dimensional vector spaces, between an abstract vector and its components with respect to a basis.
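A throwaway finite-dimensional illustration of that last point (made-up numbers): the same vector has different component lists with respect to two orthonormal bases, while basis-independent quantities such as its norm agree.

```python
import numpy as np

# Finite-dimensional analogy (made-up numbers): one and the same vector,
# expressed in two different orthonormal bases, has two different component
# lists, but basis-independent quantities such as the norm agree.
v = np.array([1.0, 2.0])                         # components in the standard basis

theta = np.pi / 6                                # rotate the basis by 30 degrees
R = np.array([[np.cos(theta),  np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])  # orthogonal change-of-basis matrix
v_new = R @ v                                    # components of the SAME vector in the new basis

print(v, v_new)                                  # different component lists
print(np.linalg.norm(v), np.linalg.norm(v_new))  # identical norms
```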
 
  • Like
Likes DarMM
  • #46
DarMM said:
The space of possible wavefunctions forms a vector space.
The vector space axioms say that any vector in the space multiplied by a scalar should also belong to that vector space, but if I multiply the wavefunction by 2, ##\Psi_2(x_{proton},y_{proton},z_{proton},x_{electron},y_{electron},z_{electron})=2\,\Psi(x_{proton},y_{proton},z_{proton},x_{electron},y_{electron},z_{electron})##, then the sum of all probabilities according to ##\Psi_2## is not 1 but 2:
$$\int_{-\infty}^\infty \mathrm{d}x_{proton} \int_{-\infty}^\infty \mathrm{d}y_{proton} \int_{-\infty}^\infty \mathrm{d}z_{proton} \int_{-\infty}^\infty \mathrm{d}x_{electron} \int_{-\infty}^\infty \mathrm{d}y_{electron} \int_{-\infty}^\infty \mathrm{d}z_{electron}\, \Psi_2(x_{proton},y_{proton},z_{proton},x_{electron},y_{electron},z_{electron})=2.$$
 
  • #47
True; really it is the space of unnormalized wavefunctions that forms a vector space. The actual space of quantum wavefunctions is sort of a unit sphere in this space.

Although note that wavefunctions differing by a phase are equivalent, so even this unit sphere over-describes the space of quantum (pure) states.

The vector space structure is just very useful in calculations.
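A small numerical sketch of both points (with a made-up 1D wavefunction): rescaling puts an unnormalized element of the vector space onto the "unit sphere", and a global phase changes the vector but not any probability density.

```python
import numpy as np

# Sketch of the two points above with a made-up 1D wavefunction:
# (1) rescale an unnormalized element of the vector space onto the
#     "unit sphere"  <psi|psi> = 1;
# (2) a global phase changes the vector but not any probability density.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

psi = (1.0 + 2.0j) * np.exp(-x**2)                       # unnormalized wavefunction
psi_normed = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)  # now <psi|psi> ~ 1

psi_phase = np.exp(0.7j) * psi_normed                    # same physical state
print(np.sum(np.abs(psi_normed)**2) * dx)                # ~ 1.0
print(np.allclose(np.abs(psi_normed)**2,
                  np.abs(psi_phase)**2))                 # True: same probability density
```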
 
  • Like
Likes vanhees71
  • #48
PeterDonis said:
Isn't this backwards? Aren't bras covectors and kets vectors?

(I guess since the two are dual you could adopt either convention; but I thought the usual convention was that bras are covectors and kets are vectors.)
bhobba said:
when you study the rigorous mathematics of what's going on (ie Rigged Hilbert Spaces) [...] The RHS fully justifies Dirac’s bra-ket formalism. In particular, there is a 1:1 correspondence between bras and kets.
Only in finite-dimensional Hilbert spaces. There kets may be viewed as column vectors (matrices with one column) = vectors of ##\mathbb{C}^N##, bras as row vectors (matrices with one row) = covectors = linear forms, and the inner product is just the matrix product of these.
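In code, this finite-dimensional picture looks as follows (a minimal sketch in ##\mathbb{C}^2## with made-up entries):

```python
import numpy as np

# Kets as columns, bras as rows, the inner product as a matrix product
# (made-up entries in C^2, purely for illustration).
psi = np.array([[1.0 + 1.0j],
                [2.0 - 0.5j]])            # ket |psi>: a 2x1 column
phi = np.array([[0.5j],
                [1.0 + 0.0j]])            # ket |phi>

bra_phi = phi.conj().T                    # bra <phi|: the conjugate transpose, a 1x2 row
print((bra_phi @ psi)[0, 0])              # <phi|psi>: the 1x1 matrix product
print((psi.conj().T @ psi)[0, 0].real)    # <psi|psi>: the squared norm, here 6.25
```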

But in order that the Schrödinger equation in the usual ket notation is allowed to have unnormalizable solutions (which Dirac's formalism allows), one must think of the bras as being smooth test functions, i.e. the elements of the nuclear space (the bottom of the triple of vector spaces of an RHS, a space of Schwartz functions) and the kets as the linear functionals on it (the top of the triple, a much bigger space of distributions). In this setting, bras are vectors and kets are covectors, and there is a big difference between these - there are many more kets than bras.

However, Dirac's formalism is fully symmetric. Hence it is not quite matched by the RHS formalism. The latter does not have formulas such as ##\langle x|y\rangle=\delta(x,y)##.
 
  • Like
Likes DarMM and PeterDonis
  • #49
Again, one should really stress in introductory lectures the difference between a vector, which is a basis-independent object (in physics it describes real-world quantities like velocities, accelerations, forces, fields such as the electric and magnetic field, current densities, etc.), and the components of that vector with respect to some basis. Given a basis, there is a one-to-one mapping between a vector and its components.

In quantum theory the kets are vectors in an abstract Hilbert space (with the complex numbers as scalars). In non-relativistic QM with finitely many fundamental degrees of freedom (e.g., position, momentum, and spin for a free particle) the Hilbert space is the separable Hilbert space (there is only one infinite-dimensional separable Hilbert space modulo isomorphism).

Then there are linear forms on a vector space, i.e., linear maps from the vector space to the field of scalars. These linear forms build a vector space themselves, the dual space of the given vector space. In finite-dimensional vector spaces, given a basis, there's a one-to-one mapping between the vector space and its dual space, but not a basis-independent one. This changes if you introduce a non-degenerate fundamental form, i.e., a bilinear (or, for complex vector spaces, sesquilinear) form, which gives you a basis-independent one-to-one mapping between vectors and linear forms.

For the Hilbert space, where a scalar product (sesquilinear form) is defined, you have to distinguish between the continuous linear forms (continuous with respect to the metric of the Hilbert space induced in the usual way by the scalar product) and general linear forms. For the former there is a one-to-one correspondence between the Hilbert space and its ("topological") dual, and in this way these two spaces are identified.

In QM you need the more general linear forms since you want to use "generalized eigenvectors" to describe a spectral representation of unbounded essentially self-adjoint operators. This always happens when there are such operators with continuous spectra, like position and momentum. One modern mathematically rigorous formulation is the rigged Hilbert space. There you have a domain of the self-adjoint operators like position and momentum, which is a dense sub-vector-space of the Hilbert space. The dual of this dense subspace is larger than the Hilbert space, i.e., it contains more linear forms than the bounded linear forms on the Hilbert space.

Using a complete orthonormal set ##|u_n \rangle## you can map the abstract vectors to square-summable sequences ##(\psi_n)## with ##\psi_n =\langle u_n|\psi \rangle##. These sequences build the Hilbert space ##\ell^2##, and you can write the ##(\psi_n)## as infinite columns. The operators are then represented by the corresponding matrix elements ##A_{mn}=\langle u_m|\hat{A}|u_n \rangle##, which you can arrange as an infinite##\times##infinite matrix. This is the matrix representation of QM, and it was the first form in which modern QM was discovered, by Born, Jordan, and Heisenberg in 1925. The heuristics, provided by Heisenberg in his Helgoland paper, was to deal only with transition rates between states. Heisenberg had the discrete energy levels of atoms in mind but could at first demonstrate the principle only using the harmonic oscillator as a model. Born immediately recognized that Heisenberg's "quantum math" was nothing else than matrix calculus in an infinite-dimensional vector space, and then in a quick sequence of papers by Born and Jordan, as well as Born, Jordan, and Heisenberg, the complete theory was worked out (including the quantization of the electromagnetic field!). Many physicists were quite sceptical about the proper meaning of these infinite-dimensional vectors and matrices.
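As a toy illustration of such a matrix representation (my own example, truncated to a finite size and using units ##\hbar=m=\omega=1##), here is the position operator of the harmonic oscillator in the number basis ##|u_n\rangle = |n\rangle##:

```python
import numpy as np

# Toy version of the "infinite matrix" representation: the harmonic-oscillator
# position operator in the number basis |u_n> = |n>, truncated to N states and
# written as x = (a + a^dagger)/sqrt(2) in units hbar = m = omega = 1
# (both the truncation and the units are my own choices here).
N = 20
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator: <n-1|a|n> = sqrt(n)
x = (a + a.T) / np.sqrt(2.0)                 # matrix elements X_{mn} = <u_m| x |u_n>

print(np.allclose(x, x.conj().T))            # True: the matrix is Hermitian
print((x @ x)[0, 0])                         # <0| x^2 |0> = 0.5 in these units
```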

Then you can also use the generalized position eigenvectors ##|\vec{x} \rangle##, which leads to the mapping of the Hilbert-space vectors to square-integrable (integrable in the Lebesgue sense) functions, ##\psi(\vec{x})=\langle \vec{x}|\psi \rangle##. This is the Hilbert space ##\mathrm{L}^2## of square-integrable functions, and the corresponding representation is the second form in which modern quantum theory was discovered, by Schrödinger in 1926; it is usually called "wave mechanics". Schrödinger showed very early that "wave mechanics" and "matrix mechanics" are the same theory, just written in different representations. Schrödinger's formulation was heuristically derived from the analogy between wave and geometrical optics in electromagnetism, the latter being the eikonal approximation of the former. Schrödinger used the argument backwards, considering the Hamilton-Jacobi partial differential equation as the eikonal approximation of a yet unknown wave equation for particles, which was just the mathematical consequence of the ideas brought forward in de Broglie's PhD thesis, on which Einstein had commented favorably as a further step towards understanding "wave-particle duality".

Almost at the same time Dirac came up with the now favored abstract formulation, introducing q-numbers with a commutator algebra heuristically linked to the Poisson-bracket formulation of classical mechanics. This was dubbed "transformation theory", because the bra-ket formalism enables a simple calculus for transformations between different representations, like the Fourier transformation between the position and the momentum representation of Schrödinger's wave mechanics.

Finally the entire edifice was made rigorous by von Neumann, who recognized the vector space as a Hilbert space and formulated a rigorous treatment of unbounded operators. The physicists' sloppy ideas could then be made rigorous by Gelfand et al. in terms of the "rigged Hilbert space" formalism. For practicing theoretical physics you can get along quite well without this formalism, though it's always good to know about its limitations, and it's good to know at least some elements of it. A good compromise between mathematical rigor and physicists' sloppiness is Ballentine's textbook.
 
  • Like
Likes bhobba
  • #50
vanhees71 said:
In quantum theory the kets are vectors in an abstract Hilbert space
No. Many typical Dirac kets - such as ##|x\rangle## - are not vectors in a Hilbert space!
 
  • Like
Likes dextercioby
  • #51
True, but you introduce them first as dual vectors of the nuclear space in the rigged-Hilbert-space formulation. I'd have to look up the precise mathematical construction of the corresponding kets and how to justify equations like ##\langle x|x' \rangle=\delta(x-x')##, where ##\delta## is the Dirac ##\delta## distribution. I guess you find it easily in the standard literature like Galindo & Pascual or de la Madrid.
 
  • #52
So kets are points (aka elements, aka vectors) of a vector space whose basis vectors correspond to the degrees of freedom of the quantity the space is named after?
For example, the ket corresponding to ##\psi(x_{proton},y_{proton},z_{proton},x_{electron},y_{electron},z_{electron})## may have ##|\mathbb{R}|^6## components (one describing the value of ##\psi## for every choice of arguments of ##\psi##)?
 
  • #53
olgerm said:
So kets are points (aka elements, aka vectors) of a vector space whose basis vectors correspond to the degrees of freedom of the quantity the space is named after?
For example, the ket corresponding to ##\psi(x_{proton},y_{proton},z_{proton},x_{electron},y_{electron},z_{electron})## may have ##|\mathbb{R}|^6## components (one describing the value of ##\psi## for every choice of arguments of ##\psi##)?
We have some basis set of functions ##\phi_n## and the components of ##\psi## are the coefficients ##c_n## in the sum:
$$
\psi = \sum_{n}c_{n}\phi_{n}
$$
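For instance (a hypothetical example, not the only possible choice), with the orthonormal "particle in a box" basis ##\phi_n(x)=\sqrt{2/L}\,\sin(n\pi x/L)## on ##[0,L]## the components are the overlap integrals ##c_n=\langle \phi_n|\psi\rangle##:

```python
import numpy as np

# Hypothetical example of psi = sum_n c_n phi_n with the orthonormal
# "particle in a box" basis phi_n(x) = sqrt(2/L) sin(n pi x / L) on [0, L];
# the components c_n are computed as the overlap integrals c_n = <phi_n|psi>.
L = 1.0
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]

def phi(n, x):
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

psi = x * (L - x)                              # some wavefunction vanishing at the walls
psi = psi / np.sqrt(np.sum(psi**2) * dx)       # normalize it

n_max = 10
c = [np.sum(phi(n, x) * psi) * dx for n in range(1, n_max + 1)]
psi_rebuilt = sum(c[n - 1] * phi(n, x) for n in range(1, n_max + 1))

print(c[:4])                                    # the first few components of |psi>
print(np.max(np.abs(psi - psi_rebuilt)))        # small: the expansion converges
```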
 
  • Like
Likes olgerm
  • #54
DarMM said:
We have some basis set of functions ##\phi_n## and the components of ##\psi## are the terms ##c_n## in the sum:
$$\psi = \sum_{n}c_{n}\phi_{n}$$
What functions are in this basis set?
 
  • #55
olgerm said:
What functions are in this basis set?
You can choose any set of orthogonal functions. It's a choice of basis. Just like you can choose any vectors to be your basis in linear algebra.
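One commonly used choice (just an example, by no means the only one) is the set of harmonic-oscillator eigenfunctions, i.e. the normalized Hermite functions ##\phi_n(x)=(2^n n! \sqrt{\pi})^{-1/2} e^{-x^2/2} H_n(x)##, which form an orthonormal basis of ##L^2(\mathbb{R})##. A quick numerical check of their orthonormality:

```python
import numpy as np
from math import factorial
from scipy.special import eval_hermite

# Orthonormality check for the first few normalized Hermite functions
#   phi_n(x) = (2^n n! sqrt(pi))^(-1/2) exp(-x^2/2) H_n(x),
# one commonly used orthonormal basis of L^2(R) (harmonic-oscillator eigenstates).
x = np.linspace(-12.0, 12.0, 4001)
dx = x[1] - x[0]

def phi(n, x):
    norm = 1.0 / np.sqrt(2.0**n * factorial(n) * np.sqrt(np.pi))
    return norm * np.exp(-x**2 / 2.0) * eval_hermite(n, x)

overlaps = np.array([[np.sum(phi(m, x) * phi(n, x)) * dx for n in range(5)]
                     for m in range(5)])
print(np.round(overlaps, 6))                    # ~ the 5x5 identity matrix
```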
 
  • #56
A. Neumaier said:
However, Dirac's formalism is fully symmetric. Hence it is not quite matched by the RHS formalism. The latter does not have formulas such as ##\langle x|y\rangle=\delta(x,y)##.

Yes of course.

What I think Madrid is saying is that given any ket, say |a>, a corresponding bra <a| exists, not that <a|b> is always defined for an arbitrary |b> - because obviously it isn't. It's right at the beginning of the Gelfand triple used to define them. In the middle is the Hilbert space, in which everything is fine. If we have a subspace of a Hilbert space, you can define all the linear functionals on that space, which I will write as bras <a|. Then we can define the corresponding ket |a> via <a|b> = complex conjugate of <b|a>. So by definition there is a 1-1 correspondence. It's not quite what Dirac says in his book, where the impression is given that you can always define <a|b> given any bra and ket. The eigenbras and eigenkets of momentum and position, giving the Dirac delta function, are the obvious counterexample.

Thanks
Bill
 
  • Like
Likes vanhees71
  • #57
DarMM said:
You can choose any set of orthogonal functions. It's a choice of basis. Just like you can choose any vectors to be your basis in linear algebra.

The issue comes when the basis vectors form a continuum. That's where you need Rigged Hilbert Spaces and the Nuclear Spectral Theorem, sometimes called the Gelfand-Maurin Theorem or Generalised eigenfunction Theorem.

Thanks
Bill
 
  • Like
Likes vanhees71
  • #58
bhobba said:
The issue comes when the basis vectors form a continuum. That's where you need Rigged Hilbert Spaces and the Nuclear Spectral Theorem, sometimes called the Gelfand-Maurin Theorem or Generalised eigenfunction Theorem.
I think we should keep in mind that states like ##|x\rangle## aren't part of the actual Hilbert space of states. In terms of actual physical states, the basis is always discrete.
 
  • #59
DarMM said:
I think we should keep in mind that states like ##|x\rangle## aren't part of the actual Hilbert space of states. In terms of actual physical states, the basis is always discrete.
Resonances are described by unnormalizable solutions of the SE called Gamow or Siegert states. They are not in the Hilbert space but are quite physical!
 
  • Like
Likes dextercioby
  • #60
A. Neumaier said:
Resonances are described by unnormalizable solutions of the SE called Gamow or Siegert states. They are not in the Hilbert space but are quite physical!
The literal resonance pole I assume. During the scattering the state is always an element of the Hilbert space.

I'm not saying that such things aren't useful for extracting physics. Similarly, the analytic properties of the Wightman functions extended into complex tubes tell one much. However, the actual Wightman functions physically are not functions on complex tubes.

I view this in the same way as using complex analysis to extract information more easily even though the actual situation is described by a real valued function. Or like instantons in QFT, strictly speaking they're not states but they do carry physical tunneling information.

I find students can get into all sorts of confusion by thinking things like ##|x\rangle## are actual states.
 
  • Like
Likes vanhees71
  • #61
Very interesting that any function ##\mathbb{R}^A \to \mathbb{C}## is equal to a sum of a countable number of functions. I already found one option for basis vectors so that it can be done.

##<base|[i_1](a_{arguments\ of\ wavefunction})=\delta(v(i_1)-a_{arguments\ of\ wavefunction})\\
v(i_1)[i_2]=\sum_{i_3=-\infty}^{\infty}(2^{i_3}*((\lfloor i_1*2^{-2*i_3*A+i_2} \rfloor\mod_2)+\sqrt{-1}*(\lfloor i_1*2^{-2*i_3*A+i_2+1} \rfloor\ mod_2)))##

##\delta## is a function that returns 1 if its argument is the zero vector and 0 otherwise.
 
  • #62
A. Neumaier said:
Resonances are described by unnormalizable solutions of the SE called Gamow or Siegert states. They are not in the Hilbert space but are quite physical!

True. That's a point Madrid is always making:
https://arxiv.org/pdf/quant-ph/0607168.pdf
I think he got it from Bohm (Arno Bohm - not the famous one).

While true, it's not what sparked my interest in RHSs, which was simply to resolve von Neumann's scathing attack on Dirac. Obviously neither he nor Hilbert could solve rigorously what Dirac did - and considering both of those great mathematicians' reputations, that's saying something. I discovered it took the combined effort of a number of great 20th century mathematicians like Grothendieck to do it. But to discover they were actually real physical states, not just abstractions like a state with definite momentum, was an actual shock.

I, however, do not think beginning students need to come to grips with this bit of exotica straight away, which is the point DarMM is making - it can confuse. Step by easy step is always best, especially with something like this.

I think DarMM, like me, has or once had an interest in stochastic modelling. It plays an important role in White Noise Theory, where the elements of the RHS are called Hida distributions.

Thanks
Bill
 
  • Informative
Likes DarMM
  • #63
That position or momentum eigenstates are not Hilbert-space states has been known since Heisenberg published his uncertainty principle (though it was at first wrongly interpreted, and soon corrected by Bohr).

AFAIK Dirac's formalism, including the ##\delta## distribution (the earliest appearance of which is not due to Dirac but due to Sommerfeld around 1910 in some work on electrodynamics), was made rigorous by Schwartz et al. long before the rigged-Hilbert-space formulation was discovered. Also, I think von Neumann's treatment of the spectral theorem was already rigorous without using the rigged-Hilbert-space formalism. All this triggered the development of functional analysis in the first half of the 20th century.
 
  • #64
Thanks for the info. I learned here some basic things that I did not find in the books. Maybe they thought they were obvious. I now have a clearer feeling about kets, but I still do not understand everything.

Are these
olgerm said:
##<base|[i_1](a_{arguments\ of\ wavefunction})=\delta(v(i_1)-a_{arguments\ of\ wavefunction})\\
v(i_1)[i_2]=\sum_{i_3=-\infty}^{\infty}(2^{i_3}*((\lfloor i_1*2^{-2*i_3*A+i_2} \rfloor\mod_2)+\sqrt{-1}*(\lfloor i_1*2^{-2*i_3*A+i_2+1} \rfloor\ mod_2)))##
basis vectors suitable to be basis vectors of kets?
 
  • #65
It's hard to tell what your equation is supposed to mean; why don't you use conventional symbols?
 
  • #66
##<base|[i_1](a_{arguments\ of\ wavefunction})=\delta(v(i_1)-a_{arguments\ of\ wavefunction})\\
v(i_1)[i_2]=\sum_{i_3=-\infty}^{\infty}(2^{i_3}*((\lfloor i_1*2^{-2*i_3*A+i_2} \rfloor\mod_2)+\sqrt{-1}*(\lfloor i_1*2^{-2*i_3*A+i_2+1} \rfloor\ mod_2)))##

##\delta## is a function that returns 1 if its argument is the zero vector and 0 otherwise.

##\lfloor \rfloor## is floor function.

##<base|[i_1]## is the i'th basis vector. Since ##\delta## is nonzero only if ##v=a_{arguments\ of\ wavefunction}##, the value of the basis function is 1 for only one choice of arguments and otherwise its value is 0. How the ##i_2##'th component of ##v## for the ##i_1##'th basis vector is calculated is shown in the 2nd equation.
I tried to write it as simply as I could. I think I now used only conventional symbols.
 
  • #67
Or can you post an example of some commonly used basis functions?
 
  • #68
HomogenousCow said:
It's hard to tell what your equation is supposed to mean; why don't you use conventional symbols?
I did use conventional symbols.
 
  • #69
I don't think so; besides, what you wrote is completely unreadable and most of us don't even know what you're trying to do.
 
  • #70
weirdoguy said:
I don't think so, besides what you wrote is completely unreadable and most of us don't even know what you're trying to do.
Which symbol do you not understand?
 
