Bracket vs. wavefunction notation in QM

olgerm
In some sources QM is explained using bracket notation. I understand the algebra of bracket notation fairly well, but I do not understand how this notation is related to physically meaningful things. How is bracket notation related to wavefunction notation?
Could you tell me whether the following is true:
to express a QM system in bracket notation you:
  1. solve the Schrödinger equation
    ##U_{\text{system potential energy}}(r_1,r_2,...,r_N,t) \cdot \Psi-\sum_{n=1}^N \frac{\hbar^2}{2 m_n}\left(\frac{\partial^2 \Psi}{\partial x_n^2}+\frac{\partial^2 \Psi}{\partial y_n^2}+\frac{\partial^2 \Psi}{\partial z_n^2}\right)=i \hbar \frac{\partial \Psi}{\partial t}, \qquad \Psi=\Psi(r_1,r_2,...,r_N,t)##
    and get the wavefunction ##\Psi##.
  2. apply a Fourier transformation to the wavefunction: ##\Psi(x)=\alpha(1) \cdot \sin(1 \cdot x)+\alpha(2) \cdot \sin(2 \cdot x)+\alpha(3) \cdot \sin(3 \cdot x)+\dots=\sum_{k=0}^\infty\alpha(k) \cdot \sin(k \cdot x)##
    or
    ##\Psi(x)=\alpha(1) \cdot e^{-i \cdot 1 \cdot x}+\alpha(2) \cdot e^{-i \cdot 2 \cdot x}+\alpha(3) \cdot e^{-i \cdot 3 \cdot x}+\dots=\sum_{k=0}^\infty\alpha(k) \cdot e^{-i \cdot k \cdot x}##
    (Which one? How do I Fourier transform ##\Psi(t;x;y;z)##?)
  3. form the bra from the Fourier coefficients, ##<\psi|=(\alpha(0);\alpha(1);\alpha(2);\alpha(3);...)##,
    and the ket as ##|\psi>=(\alpha^*(0);\alpha^*(1);\alpha^*(2);\alpha^*(3);...)##.
  4. to find the momentum of a particle, solve the equation ##\hat{p}|\psi>=p_x \cdot |\psi>##, i.e. ##-i\hbar\frac{\partial}{\partial x}|\psi>=p_x \cdot |\psi>##, for ##p_x##.
    (In which cases must the operator be Hermitian?)
 
Last edited:
One comment on your calculations above: use complex exponentials to do the Fourier transform, and also do it in 3 dimensions. Apart from factors of ## 2 \pi ##, ## \hat{\Psi}(\vec{k})=\iiint \Psi(\vec{r})\, e^{-i \vec{k} \cdot \vec{r}} \, d^3 \vec{r} ##. (Note: this complex form of the F.T. uses Euler's formula ## e^{ix}=\cos(x)+i \sin(x) ##.)

I don't have any simple explanation for the comparison of the bracket vs. wavefunction formalisms, but some of the quantum mechanics textbooks describe it fairly well.
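As a concrete one-dimensional illustration of that transform, here is a minimal Python sketch (my own, not from the post above) that evaluates ##\hat{\Psi}(k)=\int \Psi(x)\, e^{-ikx}\,dx## by direct quadrature for an assumed Gaussian wave packet; the grid and the packet width are just illustrative choices, and the ##2\pi## conventions are ignored as above:

```python
import numpy as np

# Sketch of psi_hat(k) = integral psi(x) exp(-i k x) dx, evaluated by direct
# quadrature on a grid (2*pi normalization factors ignored).
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
sigma = 1.0
psi = np.exp(-x**2 / (2 * sigma**2)) / (np.pi * sigma**2) ** 0.25  # assumed normalized Gaussian

k = np.linspace(-5.0, 5.0, 201)
# Riemann sum approximating the Fourier integral at each k
psi_hat = np.array([np.sum(psi * np.exp(-1j * kk * x)) * dx for kk in k])

# For a Gaussian, the transform is itself a Gaussian in k (width ~ 1/sigma).
analytic = np.sqrt(2 * np.pi * sigma**2) * np.exp(-(k * sigma) ** 2 / 2) / (np.pi * sigma**2) ** 0.25
print(np.max(np.abs(np.abs(psi_hat) - analytic)))  # tiny: quadrature matches the analytic result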
 
So in 3-dimensional space, do I get one bra and one ket from the F.T. for every dimension? Or are the elements of bras and kets 3-dimensional vectors?
 
olgerm said:
So in 3-dimensional space, do I get one bra and one ket from the F.T. for every dimension? Or are the elements of bras and kets 3-dimensional vectors?
The bras and kets are generally 3-dimensional if they represent a state that has 3 dimensions. To get the wavefunction out of them, you do the following: ## \Psi(\vec{x})=<\vec{x}|\Psi> ##. The operator ## |\vec{x}><\vec{x}| ## integrated over all ## d^3 \vec{x} ## is the identity operator.

I can show you a couple of bits and pieces, but there are others on the Physics Forums website who have more quantum mechanics expertise.

Just one additional item that might be helpful: when you have ## <\Psi|\Psi> ##, you can insert the identity operator in between, which gives ## <\Psi|\Psi>= \iiint <\Psi|\vec{x}><\vec{x}|\Psi> \, d^3 \vec{x}=\iiint \Psi^{*}(\vec{x}) \Psi(\vec{x}) \, d^3 \vec{x} ##. You can similarly evaluate a form such as ## <\Phi|\Psi> ## with the use of the identity operator to get from the bra-ket notation to the wavefunction notation.
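A minimal numerical sketch of that last point, in one dimension: ##<\Phi|\Psi>## evaluated as ##\int \Phi^{*}(x)\Psi(x)\,dx## on a grid. The two Gaussian packets below are assumed examples, nothing from the thread:

```python
import numpy as np

# 1D analogue of <Phi|Psi> = integral Phi*(x) Psi(x) dx, on a grid.
x = np.linspace(-15.0, 15.0, 3001)
dx = x[1] - x[0]

def gaussian(x, x0, sigma, k0):
    """Normalized Gaussian wave packet centered at x0 with mean wavenumber k0."""
    return (np.exp(-(x - x0) ** 2 / (2 * sigma**2)) * np.exp(1j * k0 * x)
            / (np.pi * sigma**2) ** 0.25)

psi = gaussian(x, 0.0, 1.0, 2.0)
phi = gaussian(x, 1.0, 2.0, 0.0)

norm_psi = np.sum(np.abs(psi) ** 2) * dx     # <Psi|Psi>, should be ~1
overlap  = np.sum(np.conj(phi) * psi) * dx   # <Phi|Psi>, a complex number
print(norm_psi, overlap)
```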
 
Last edited:
  • Like
Likes vanhees71, olgerm and bhobba
Can an electron position probability produce a negative value?
 
ivy said:
Can an electron position probability produce a negative value?

Of course not - the Kolmogorov axioms forbid it. If you are unfamiliar with them look them up.

In the early days of QM they got negative probabilities, so they knew something was wrong; these turned out to be positive probabilities of antiparticles. If you are interested in that story, start a new thread.

Thanks
Bill
 
Charles Link said:
I don't have any simple explanation for the comparison of the bracket vs. wavefunction formalisms, but some of the quantum mechanics textbooks describe it fairly well.

There is no simple explanation.

It's this. Suppose you have a general ket |u>; then we can expand it in eigenkets of position: |u> = ∫ f(x) |x> dx. By definition f(x) is called the wave function. From the so-called Born rule it's easy to show that |f(x)|^2 is the probability density for getting position x when you observe it.
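A small numerical sketch of that Born-rule statement (my own illustration, with an assumed Gaussian f): |f(x)|^2 is a non-negative density, and integrating it over an interval gives the probability of finding the particle there.

```python
import numpy as np

# Born rule sketch: |f(x)|^2 is a probability density, so the probability of
# finding the particle in [a, b] is integral_a^b |f(x)|^2 dx.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2) / np.pi**0.25        # assumed example wave function (Gaussian)

density = np.abs(f) ** 2                   # never negative, integrates to 1
a, b = -1.0, 1.0
mask = (x >= a) & (x <= b)
prob = np.sum(density[mask]) * dx
print(np.sum(density) * dx, prob)          # total ~ 1.0, interval probability ~ 0.84
```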

But the math behind it is very advanced and deep. I gave an overview in this thread - but it's probably gibberish unless you have studied linear algebra and preferably Hilbert spaces:
https://www.physicsforums.com/threads/rigged-hilbert-spaces-in-quantum-mechanics.917768/

I do recommend you study linear algebra and distribution theory. They are a must for any applied mathematician, including physicists. For linear algebra there is tons of stuff on the internet, e.g.:
http://linear.ups.edu/
http://quantum.phys.cmu.edu/CQT/chaps/cqt03.pdf

Read them in the above order.

For distribution theory get a copy of:
https://www.amazon.com/dp/0521558905/?tag=pfamazon01-20

It's worth it for the section on Fourier transforms alone. Without distribution theory you get bogged down in issues of convergence. If I were a teacher of Fourier transforms, which I am not, I would teach them via distribution theory.

Thanks
Bill
 
Last edited:
  • Like
Likes vanhees71, olgerm and Charles Link
Charles Link said:
One comment on your calculations above is to use complex variables to do the Fourier transform, and also do it in 3 dimensions. Apart from multiple ## 2 \pi ## factors ## \hat{\Psi}(\vec{k})=\iiint \Psi(\vec{r}) e^{-i \vec{k} \cdot \vec{r}} \, d^3 \vec{r} ##.

Where are the multipliers that relate each frequency to its amplitude? These are denoted ##\alpha## in my post. I also put these multipliers into the bra.
Is it just standard practice to use the exponential form (##\Psi(x)=\sum_{k=0}^\infty\alpha(k) \cdot e^{-i \cdot k \cdot x}##) in the Fourier transformation, or would using the other form (##\Psi(x)=\sum_{k=0}^\infty\alpha(k) \cdot \sin(k \cdot x)##) give me a bra for which some QM formulas do not hold?
 
Last edited:
olgerm said:
Is it just standard practice to use the exponential form (##\Psi(x)=\sum_{k=0}^\infty\alpha(k) \cdot e^{-i \cdot k \cdot x}##) in the Fourier transformation, or would using the other form (##\Psi(x)=\sum_{k=0}^\infty\alpha(k) \cdot \sin(k \cdot x)##) give me a bra for which some QM formulas do not hold?

The Fourier transform is defined in many sources - even good old Wikipedia:
https://en.wikipedia.org/wiki/Fourier_transform

The Fourier transform F of a function f(x), denoted F(f(x)), is a map from one function to another. There is a similar transform called the inverse Fourier transform F', and interestingly, provided some assumptions are made about f(x), F'F = 1. Let F(f(x)) = f'(x), so you have f(x) = F'(f'(x)). Intuitively, when you look at the equation it means, physically, that you can decompose a function into a sum of a large number of wavelike functions. That is its intuitive meaning.
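A quick numerical check of F'F = 1 (a sketch using numpy's FFT conventions and an assumed example function, nothing more):

```python
import numpy as np

# Sketch of F'F = 1: Fourier transform a function and transform back,
# recovering the original up to numerical error.
x = np.linspace(-10.0, 10.0, 1024, endpoint=False)
f = np.exp(-x**2) * np.cos(3 * x)          # assumed example function

F_f = np.fft.fft(f)                        # forward transform F
f_back = np.fft.ifft(F_f)                  # inverse transform F'
print(np.max(np.abs(f - f_back)))          # ~1e-15: F'F acts as the identity here
```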

In QM, as I mentioned, any state can be decomposed into eigenkets of position: |u> = ∫f(x) |x> dx. It can also be decomposed into eigenkets of momentum p: |u> = ∫f'(p) |p> dp. Interestingly, f(x) and f'(p) are related by the Fourier transform. There is an intuitive reason for it, but I will leave you, using the references I gave, to figure that one out. You understand better what you nut out for yourself. Hint: the wave-function of a state of definite momentum is e^ipx - now look at the Fourier transform and remember the intuitive interpretation I gave.
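Here is a rough numerical sketch of that relation (units with ħ = 1, an assumed Gaussian packet with mean wavenumber k0 = 2, and numpy's FFT standing in for the Fourier transform): the mean momentum computed with -i d/dx in position space agrees with the one computed from |f'(p)|² in momentum space.

```python
import numpy as np

# Sketch (hbar = 1): f'(p) is the Fourier transform of f(x), so <p> can be
# computed either with -i d/dx in position space or from |f'(p)|^2 in p-space.
N = 2048
L = 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k0 = 2.0
f = np.exp(-x**2 / 2) * np.exp(1j * k0 * x)            # assumed example packet
f /= np.sqrt(np.sum(np.abs(f) ** 2) * dx)              # normalize

# <p> in position space: integral f*(x) (-i d/dx) f(x) dx
dfdx = np.gradient(f, dx)
p_position = np.real(np.sum(np.conj(f) * (-1j) * dfdx) * dx)

# <p> in momentum space: integral p |f'(p)|^2 dp, with f'(p) from the FFT
p_grid = 2 * np.pi * np.fft.fftfreq(N, d=dx)
f_p = np.fft.fft(f) * dx / np.sqrt(2 * np.pi)          # 1/sqrt(2*pi) FT convention
dp = p_grid[1] - p_grid[0]
p_momentum = np.sum(p_grid * np.abs(f_p) ** 2) * dp
print(p_position, p_momentum)                          # both close to k0 = 2.0
```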

Thanks
Bill
 
Last edited:
  • Like
Likes olgerm and Charles Link
  • #10
Schrödinger transforms the electron matter wave into a probability wave (Greene, p. 105) and then uses a spherical coordinate representation (r,Θ,φ) in a wave equation,
##-\frac{\hbar^2}{2\mu}\nabla^2\Psi(r,\Theta,\varphi) + V(r,\Theta,\varphi)\Psi(r,\Theta,\varphi) = E\Psi(r,\Theta,\varphi)##
but the original atomic electron matter wave could not be represented in a spherical coordinate system. Using a spherical coordinate system to represent an atomic electron of the particle-in-a-box transformation is a deception. Furthermore, Schrödinger's wave equation is used to derive the equations of the atomic orbitals, which are based on Schrödinger's interfering probability waves; but an electron position probability can only be positive or zero and cannot take the negative values required to represent the destructive wave interference used in deriving the atomic orbitals, which proves that the derivation of the atomic orbital equations using Schrödinger's wave equation is physically invalid. Does this not diametrically prove that quantum mechanics is invalid?
 
  • #11
@ivy The Q.M. commutator algebra with ## [\hat{x},\hat{p}_x]=i \hbar ## is kind of a strange bird, and I believe Max Born was awarded the Nobel Prize for this and other contributions. In any case, as much as some of the results and some of the Q.M. fundamentals are certainly not intuitive, Q.M. has seen tremendous success at many levels and is not likely to be unseated.
 
Last edited:
  • #12
Charles Link said:
@ivy The Q.M. commutator algebra with ## [\hat{x},\hat{p}_x]=i \hbar ## is kind of a strange bird, and I believe Max Born was awarded the Nobel Prize for this and other contributions. In any case, as much as some of the results and some of the Q.M. fundamentals are certainly not intuitive, Q.M. has seen tremendous success at many levels and is not likely to be unseated.

Heisenberg came up with the commutator stuff, but it was later extended by Dirac - it was part of his not-so-well-known q-number approach, which was later incorporated into his transformation theory that united all the various versions under one formalism. Max Born came up with the appropriately named Born rule that enshrined the probability part of QM. Mathematically, the transformation theory using Dirac delta functions and the like was on shaky ground according to the math known at the time, and von Neumann came up with an approach that was mathematically rigorous. However, through the work of a number of great 20th-century mathematicians, Dirac's approach was put on firm ground using what are called Rigged Hilbert Spaces rather than the Hilbert space approach of von Neumann. Here is a summary of its history around that time:
http://www.lajpe.org/may08/09_Carlos_Madrid.pdf

Thanks
Bill
 
  • Like
Likes Charles Link
  • #13
bhobba said:
Heisenberg came up with the commutator stuff, but it was later extended by Dirac - it was part of his not-so-well-known q-number approach, which was later incorporated into his transformation theory that united all the various versions under one formalism. Max Born came up with the appropriately named Born rule that enshrined the probability part of QM. Mathematically, the transformation theory using Dirac delta functions and the like was on shaky ground according to the math known at the time, and von Neumann came up with an approach that was mathematically rigorous. However, through the work of a number of great 20th-century mathematicians, Dirac's approach was put on firm ground using what are called Rigged Hilbert Spaces rather than the Hilbert space approach of von Neumann. Here is a summary of its history around that time:
http://www.lajpe.org/may08/09_Carlos_Madrid.pdf

Thanks
Bill
@bhobba Thank you. The paper is extremely good reading. Some of the mathematical details are beyond my present level, but it is interesting to read how the top physicists and mathematicians of that period resolved some very difficult mathematical puzzles in order to put Q.M. on solid footing.
 
  • Like
Likes bhobba
  • #14
Charles Link said:
@bhobba Some of the mathematical details are beyond my present level,

That's easily fixed.

It will be a long road, but to start with get a hold of the following:
https://www.amazon.com/dp/0521558905/?tag=pfamazon01-20

What it contains should be known to all applied mathematicians and physicists - it's that important. It will enrich your reading of texts that use this sort of stuff freely without saying exactly what it is. Griffiths, for example, doesn't say exactly what is going on with the Dirac delta function - he simply gives his personal guarantee that coming to grips with it (again without saying exactly what is going on) will enrich the reader's physics a lot.

Thanks
Bill
 
Last edited:
  • Like
Likes Charles Link
  • #15
bhobba said:
That's easily fixed.

It will be a long road, but to start with get a hold of the following:
https://www.amazon.com/dp/0521558905/?tag=pfamazon01-20

What it contains should be known to all applied mathematicians and physicists - it's that important. It will enrich your reading of texts that use this sort of stuff freely without saying exactly what it is. Griffiths, for example, doesn't say exactly what is going on with the Dirac delta function - he simply gives his personal guarantee that coming to grips with it (again without saying exactly what is going on) will enrich the reader's physics a lot.

Thanks
Bill
My calculus and mathematics background is quite good, but when they start referring to things like isomorphisms, it is hard to follow all of the details. I expect that these were quite difficult mathematical puzzles though, or it wouldn't have taken these extremely brilliant people a couple of years to resolve them.
 
  • #16
Charles Link said:
My calculus and mathematics background is quite good, but when they start referring to things like isomorphisms, it is hard to follow all of the details.

Isomorphism is easy - I am surprised you don't know it.

It's from set theory - it simply means the elements of one set can be put into 1-1 correspondence with another.

It's used a lot in the fascinating mathematical area of infinity, which also ties in with your calculus:
http://www.math.helsinki.fi/logic/sellc-2010/ws/guangzhou-boban.pdf

Put simply, a set is infinite if it can be put into 1-1 correspondence with a proper subset. The integers, reals and rationals are all infinite. But now for something really interesting - the rationals are a different type of infinity than the reals - this is Cantor's famous diagonal argument. There are all sorts of infinities.

Enough said - it's over to you now - as a homework helper I am sure you can nut it all out for yourself. Here is a good textbook - and cheap too:
https://www.amazon.com/dp/0070381593/?tag=pfamazon01-20

Thanks
Bill
 
  • #17
ivy said:
Furthermore, Schrödinger's wave equation is used to derive the equations of the atomic orbitals, which are based on Schrödinger's interfering probability waves; but an electron position probability can only be positive or zero and cannot take the negative values required to represent the destructive wave interference used in deriving the atomic orbitals, which proves that the derivation of the atomic orbital equations using Schrödinger's wave equation is physically invalid. Does this not diametrically prove that quantum mechanics is invalid?
It does not. You may be missing the distinction between probability amplitudes, which are complex numbers and so are neither positive nor negative, and probabilities, which of course are restricted to zero and positive values.

Please start a new thread if you want to further explore your question - we're coming close to an attempted hijack of this thread.
 
  • #18
bhobba said:
The Fourier transform is defined in many sources - even good old Wikipedia:
https://en.wikipedia.org/wiki/Fourier_transform

The Fourier transform F of a function f(x), denoted F(f(x)), is a map from one function to another.
The Fourier transform of a wavefunction is a function (which takes uncountably many values), but a bra is a vector (not a function, and with countably many elements). Isn't that a contradiction?
On Wikipedia the Fourier transform is defined only for functions of one argument. How do I get a bra and a ket from a 4-argument wavefunction ##\Psi(t;x;y;z)##? Should I use the Fourier-Stieltjes transformation?
Which convention should be used to get the bra from the wavefunction?

bhobba said:
In QM, as I mentioned, any state can be decomposed into eigenkets of position: |u> = ∫f(x) |x> dx. It can also be decomposed into eigenkets of momentum p: |u> = ∫f'(p) |p> dp. Interestingly, f(x) and f'(p) are related by the Fourier transform. There is an intuitive reason for it, but I will leave you, using the references I gave, to figure that one out. You understand better what you nut out for yourself. Hint: the wave-function of a state of definite momentum is e^ipx - now look at the Fourier transform and remember the intuitive interpretation I gave.
Is it that the position ket and momentum ket are related by the Fourier transform because of the de Broglie formula ##\lambda=\frac{h}{p}##?
##F(f_x)(\lambda)## is the amplitude of the wavelike function with wavelength ##\lambda##,
and ##\frac{h}{F(f_x)(\lambda)}## is the probability amplitude of momentum p.
 
  • #19
olgerm said:
Fourier transformation of wavefunction is a function(that has uncountably possible values), but bra is vector(not function. and has countable elements). Isn't it Contradiction?

No - because in Rigged Hilbert Spaces you can have basis that are indexed by real numbers - not just countable indices.

Thanks
Bill
 
  • Like
Likes olgerm
  • #20
olgerm said:
Is it that the position ket and momentum ket are related by the Fourier transform because of the de Broglie formula ##\lambda=\frac{h}{p}##?

You are on the right track. Further hint - what are wave-packets in de Broglie's (outdated) ideas made of?

Like I said with this one I am not going to spell it out.

Thanks
Bill
 
  • #21
I have learned something, and I made some changes to my guess of how to describe a QM system in Dirac bracket notation.
To express a QM system in bracket notation:
  1. solve the Schrödinger equation
    ##U_{\text{system potential energy}}(r_1,r_2,...,r_N,t) \cdot \Psi-\sum_{n=1}^N \frac{\hbar^2}{2 m_n}\left(\frac{\partial^2 \Psi}{\partial x_n^2}+\frac{\partial^2 \Psi}{\partial y_n^2}+\frac{\partial^2 \Psi}{\partial z_n^2}\right)=i \hbar \frac{\partial \Psi}{\partial t}, \qquad \Psi=\Psi(r_1,r_2,...,r_N,t)##
    and get the wavefunction ##\Psi##.
  2. get the bra by applying a Fourier-Stieltjes transformation to the wavefunction: ##<\psi|=\int_{-\infty}^\infty dt \int_{-\infty}^\infty dx \int_{-\infty}^\infty dy \int_{-\infty}^\infty dz\, \Psi(t;x;y;z)\, e^{-i (\omega_0 t+\omega_1 x+\omega_2 y+\omega_3 z)}##
    and the ket by complex conjugating the bra.
    (How do I convert a wavefunction with more than one particle to a bra and a ket? Do I get one bra and one ket of higher dimension (N particles × D dimensions of space), or one bra and one ket per particle?)
  3. to find a parameter of the system, solve the equation ##\hat{O}|\psi>=O_{eigen} \cdot |\psi>##, where ##\hat{O}## is the operator that corresponds to the parameter.
    (In which cases must the operator be Hermitian? Does this find the quantity for a single particle or for the whole system? What if the parameter is not accurately determined?)
 
  • #22
I made a mistake in my last post. Instead
olgerm said:
##<\psi|=\int_{-\infty}^\infty dt \int_{-\infty}^\infty dx \int_{-\infty}^\infty dy \int_{-\infty}^\infty dz\, \Psi(t;x;y;z)\, e^{-i (\omega_0 t+\omega_1 x+\omega_2 y+\omega_3 z)}##
should be
##<\Psi|=\int_{-\infty}^\infty dt \int_{-\infty}^\infty dx \int_{-\infty}^\infty dy \int_{-\infty}^\infty dz\, \Psi(t;x;y;z)\, e^{i (\omega t-k_x x-k_y y-k_z z)}##
and (the inverse Fourier transformation)
##\Psi(t;x;y;z)=\int_{-\infty}^\infty d\omega \int_{-\infty}^\infty dk_x \int_{-\infty}^\infty dk_y \int_{-\infty}^\infty dk_z\, <\Psi|(\omega;k_x;k_y;k_z)\, e^{-i (\omega t-k_x x-k_y y-k_z z)}##.
Charles Link said:
##\hat{\Psi}(\vec{k})=\iiint \Psi(\vec{r}) e^{-i \vec{k} \cdot \vec{r}} \, d^3 \vec{r}##
Is my first equation the same as Charles Link's in the quote? I would be really thankful if anybody answered.
 
Last edited:
  • #23
I would simplify the problem by starting with one particle in one dimension instead of n particles in three dimensions.

I find it kind of hard to comment on what you write because you are coming from the end and I can't judge from your comments how familiar you are with linear algebra. The bra-ket formalism is mostly basic linear algebra with a few caveats because the vector spaces may be of infinite dimension. In the bra-ket formalism, the starting point is the Schrödinger equation in the form
##i \hbar \frac{d}{dt}|\psi\rangle = H|\psi\rangle.##
Using the commutator ##[X,P]=i\hbar##, we can deduce the wavefunction formalism as a special case. You can find this derivation in most textbooks on QM.
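A minimal finite-dimensional sketch of that abstract form (my own illustration, assuming ħ = m = 1, a harmonic potential, and a simple finite-difference grid): the ket becomes a column vector, H a matrix, and the time evolution is just exp(-iHt) acting on the vector.

```python
import numpy as np

# Finite-dimensional sketch of i d/dt |psi> = H |psi> (hbar = m = 1, assumed
# harmonic potential): on a grid the ket becomes a column vector, H a matrix.
N = 400
x = np.linspace(-10.0, 10.0, N)
dx = x[1] - x[0]

# H = -1/2 d^2/dx^2 + V(x): finite-difference Laplacian plus a diagonal potential.
lap = (np.diag(np.ones(N - 1), -1) - 2 * np.eye(N) + np.diag(np.ones(N - 1), 1)) / dx**2
H = -0.5 * lap + np.diag(0.5 * x**2)

# Initial ket: a displaced Gaussian packet, normalized on the grid.
psi0 = np.exp(-(x - 2.0) ** 2 / 2).astype(complex)
psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)

# |psi(t)> = exp(-i H t) |psi(0)>, built from the eigen-decomposition of H.
E, V = np.linalg.eigh(H)
t = 1.0
psi_t = V @ (np.exp(-1j * E * t) * (V.conj().T @ psi0))
print(np.sum(np.abs(psi_t) ** 2) * dx)   # norm stays ~1: the evolution is unitary
```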
 
  • Like
Likes bhobba and vanhees71
  • #24
olgerm said:
##<\Psi|=\int_{-\infty}^\infty dt \int_{-\infty}^\infty dx \int_{-\infty}^\infty dy \int_{-\infty}^\infty dz\, \Psi(t;x;y;z)\, e^{i (\omega t-k_x x-k_y y-k_z z)}##
and (the inverse Fourier transformation)
##\Psi(t;x;y;z)=\int_{-\infty}^\infty d\omega \int_{-\infty}^\infty dk_x \int_{-\infty}^\infty dk_y \int_{-\infty}^\infty dk_z\, <\Psi|(\omega;k_x;k_y;k_z)\, e^{-i (\omega t-k_x x-k_y y-k_z z)}##.
These expressions are wrong. If you have a bra (resp. a ket) vector on one side of an equation, you need to have one on the other side too. The connection between ##|\psi\rangle## and ##\psi(x)## is the inner product: ##\psi(x) = \langle x | \psi \rangle##.

You are probably not getting many responses because these things are really basic and it is difficult to answer your question before you have learned the basics.
 
Last edited:
  • Like
Likes bhobba
  • #25
kith said:
I would simplify the problem by starting with one particle in one dimension instead of n particles in three dimensions.
My post was about 1 particle in 3-dimensional space. Since those equations were wrong, I will not rewrite them for 1 particle in 1 dimension.
As I said in my first post, I want to learn the relation between the wavefunction and Dirac's bra-ket notation. Is it that (in one-dimensional space with 1 particle) the bra is the Fourier transform of the wavefunction?

kith said:
The connection between ##|\psi\rangle## and ##\psi(x)## is the inner product: ##\psi(x) = \langle x | \psi \rangle##.
It is probably a stupid question, but I hope it is easy to answer. If x is the (position) argument (a real number) of the wavefunction, then how can the inner product of x and a ket be calculated? An inner product should be between two functions or two vectors.

kith said:
these things are really basic and it is difficult to answer your question before you have learned the basics.
I understand the linear algebra of bra-kets fairly well. I am trying to learn the basics here.
 
  • #26
Well, a web forum is not a good place to learn the very basics of a subject. Take a good modern textbook and learn it. My favorite introduction to non-relativistic QM is Sakurai, Modern Quantum Mechanics, Addison Wesley.

As has been stressed by several other posters here, the wave function is the set of (generalized) components of a vector with respect to a complete orthonormal set of generalized common eigenfunctions of a set of self-adjoint operators representing a complete set of compatible observables. The observables are compatible if the representing self-adjoint operators all commute with each other, and the set is complete if for each possible set of common eigenvalues the eigenvector is uniquely determined (up to a non-zero factor).

E.g., for a spinless particle in three dimensions, a complete set of observables are the components of the position vector ##\vec{x} \in \mathbb{R}^3##. The eigenvectors are generalized ones, i.e., normalized to a ##\delta## distribution,
$$\langle \vec{x} | \vec{x}' \rangle=\delta^{(3)}(\vec{x}-\vec{x}').$$
They are complete in the sense that
$$\int_{\mathbb{R}^3} \mathrm{d}^3 \vec{x} |\vec{x} \rangle \langle \vec{x}|=\hat{1}.$$
Each Hilbert-space vector ##|\psi \rangle## can thus be reconstructed from the corresponding wave function
$$\psi(\vec{x})=\langle \vec{x}|\psi \rangle$$
via "inserting a unity operator":
$$|\psi \rangle=\int_{\mathbb{R}^3} \mathrm{d}^3 \vec{x} |\vec{x} \rangle \langle \vec{x}|\psi \rangle=\int_{\mathbb{R}^3} \mathrm{d}^3 \vec{x} |\vec{x} \rangle \psi(\vec{x}).$$
One says that the wave function is the Hilbert-space vector in position representation.
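A finite-dimensional analogue of this "inserting a unity operator", as a small Python sketch (a random orthonormal basis of ##\mathbb{C}^5## stands in for the ##|\vec{x}\rangle##; everything here is an assumed toy example):

```python
import numpy as np

# Finite-dimensional analogue: with an orthonormal basis {u_n}, the components
# psi_n = <u_n|psi> reconstruct the abstract vector via |psi> = sum_n |u_n> psi_n.
rng = np.random.default_rng(0)
dim = 5

# Random orthonormal basis of C^5 (columns of Q), obtained from a QR decomposition.
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))

psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)   # an arbitrary ket

components = Q.conj().T @ psi         # psi_n = <u_n|psi>  (the "wave function" in this basis)
psi_rebuilt = Q @ components          # sum_n |u_n> psi_n
print(np.allclose(psi, psi_rebuilt))  # True: the components determine the vector
```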

If all this sounds unfamiliar to you, please read a good modern textbook. It's not possible to work out the details in an internet forum like this. Of course, we can help with specific questions, which might come up in your studies.
 
  • Like
Likes Physics Footnotes, bhobba and Charles Link
  • #27
olgerm said:
Is it that (in one-dimensional space with 1 particle) the bra is the Fourier transform of the wavefunction?
No. Like a ket, a bra is an abstract vector. Vectors can be expressed in arbitrary bases. The collection of components of a vector with respect to a certain basis is not the vector itself.

olgerm said:
It is probably a stupid question, but I hope it is easy to answer. If x is the (position) argument (a real number) of the wavefunction, then how can the inner product of x and a ket be calculated?
##|x\rangle## is a vector, namely a (generalized) eigenvector of the position operator ##X##. ##x## is the corresponding eigenvalue.

olgerm said:
I understand the linear algebra of bra-kets fairly well.
The two questions which I answered above don't confirm this.

olgerm said:
I am trying to learn the basics here.
The basics of the bra-ket formalism don't start with the Schrödinger wave equation. Starting from the basics looks like this: ket vectors, operators, eigenvalues&eigenvectors, bases, inner products/bra vectors, position operator, momentum operator, Schrödinger equation, wavefunction, Schrödinger wave equation, Fourier transformation. Trying to learn the formalism by starting at the end doesn't seem like a good idea to me.
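For what it's worth, here is a two-level toy example (a sketch of my own, not from any post above) that runs through the first few items in that list -- kets, bras, an operator, its eigenvectors, components, and Born-rule probabilities -- in plain linear algebra:

```python
import numpy as np

# Two-level ("spin-1/2") sketch: kets as column vectors, bras as conjugate
# transposes, operators as matrices, eigenvectors as a basis.
ket = np.array([1.0, 1.0j]) / np.sqrt(2)        # a ket |psi>
bra = ket.conj()                                 # the corresponding bra <psi|

sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])    # an observable (operator)
eigvals, eigvecs = np.linalg.eigh(sigma_z)       # its eigenvalues and eigenvector basis

# Components of |psi> in the eigenbasis, and Born-rule probabilities of each outcome.
components = eigvecs.conj().T @ ket
probs = np.abs(components) ** 2
expectation = bra @ sigma_z @ ket                # <psi|sigma_z|psi>
print(probs, expectation.real)                   # [0.5, 0.5] and 0.0
```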
 
Last edited:
  • Like
Likes bhobba
  • #29
ftr said:
A somewhat clearer answer than vanhees71's, but his advice to get a QM textbook with chapters that start by explaining bra-kets still stands.

Indeed it is a good book.

The real answer is in Rigged Hilbert Spaces, but you need a good background not only in QM (I would advise Ballentine after Sakurai) but in functional analysis. Then you can attempt it.

Sakurai will however give you the basics, and Ballentine will take it a bit further by giving an introduction to RHS's.

And as I am wont to say, Distribution Theory is a must regardless:
https://www.amazon.com/dp/0521558905/?tag=pfamazon01-20

Thanks
Bill
 
  • Like
Likes vanhees71
  • #30
I think I have got some idea of what bras and kets are now. They are more similar to the wavefunction than I thought: just vectors that hold the generalized coordinates that would be the arguments of the wavefunction in wavefunction notation, so that ##<\Psi_1|\Psi_1>=|\Psi(t,x_1,y_1,z_1,x_2,y_2,z_2)|^2##, where ##\Psi_1=[t,x_1,y_1,z_1,x_2,y_2,z_2]##.

And operators are like functions of the bra vector's elements that equal the value they are named for. For example the Hamiltonian equals the total energy (as a function of the bra elements) and the momentum operator equals the total momentum of the system (as a function of the bra elements) (momentum is conserved).

You can correct me if I understood something wrong.

Should not time be one of the generalized coordinates?
Is multiplying a bra with a scalar the same as multiplying the elements of the bra vector with the scalar?
 
Last edited:
  • #31
I think it's still a bit confused.

The bras are vectors of an abstract Hilbert space. In non-relativistic QM, where you deal with systems with a finite number of degrees of freedom, it's the separable Hilbert space, which is unique up to isomorphism. That's why there were two different versions of QM at first: Born, Jordan, and Heisenberg's matrix mechanics and Schrödinger's wave mechanics. But as Schrödinger showed very quickly, both are the same theory, distinguished just by choosing different orthonormal systems (Born et al. discrete sets like the harmonic-oscillator energy eigenstates; Schrödinger the (generalized) position eigenstates), and Dirac brought it into the representation-independent formulation with his bras and kets.

So there are kets ##|\psi \rangle## describing states, and the generalized eigenvectors of observable operators, like ##|\vec{x} \rangle##, which form a generalized set of orthonormal common eigenvectors of the position operators. The wave function à la Schrödinger consists of the components of the state ket with respect to this generalized basis,
$$\psi(\vec{x})=\langle \vec{x}|\psi \rangle.$$
 
  • Like
Likes bhobba
  • #32
olgerm said:
##<\Psi_1|\Psi_1>=|\Psi(t,x_1,y_1,z_1,x_2,y_2,z_2)|^2##
Is that True?
 
  • #33
vanhees71 said:
the wave function is the set of (generalized) components of a vector with respect to a complete orthonormal set of generalized common eigenfunctions of a set of self-adjoint operators representing a complete set of compatible observables.
Did you mean that bras and kets are (generalized) components of vectors?

Is it correct to describe a quantum system that consists of an electron and a proton with the following ket:
##|my\_ket>=|x_{proton},y_{proton},z_{proton},x_{electron},y_{electron},z_{electron}>##
and then ##\hat{H}|my\_ket>=E_{pot}(x_{proton},y_{proton},z_{proton},x_{electron},y_{electron},z_{electron})+E_{kin}(x_{proton},y_{proton},z_{proton},x_{electron},y_{electron},z_{electron})=
\frac{q^2 \cdot k_{Coulomb}}{\sqrt{(x_{proton}-x_{electron})^2+(y_{proton}-y_{electron})^2+(z_{proton}-z_{electron})^2}}+\\
\frac{\hbar^2}{2 m} \cdot (\frac{\partial^2 ?}{\partial x_{proton}^2}+\frac{\partial^2 ?}{\partial y_{proton}^2}+\frac{\partial^2 ?}{\partial z_{proton}^2}+\frac{\partial^2 ?}{\partial x_{electron}^2}+\frac{\partial^2 ?}{\partial y_{electron}^2}+\frac{\partial^2 ?}{\partial z_{electron}^2})##
 
  • #34
[Edit: Corrected in view of #37]
No, kets are vectors and bras are co-vectors. It's never right to set a vector equal to its components with respect to some basis.
 
Last edited:
  • Like
Likes bhobba
  • #35
olgerm said:
Is that True?
No, because a scalar product is a number, not a function.
$$\langle \psi_1|\psi_1 \rangle=\int_{\mathbb{R}^3} \mathrm{d}^3 x |\langle \vec{x}|\psi_1 \rangle|^2.$$
 
  • #36
olgerm said:
Did you mean that bras and kets are (generalized) components of vectors?

Kets are generalized vectors, and wave functions are their components.
 
  • #37
vanhees71 said:
bras are vectors and kets are co-vectors.

Isn't this backwards? Aren't bras covectors and kets vectors?

(I guess since the two are dual you could adopt either convention; but I thought the usual convention was that bras are covectors and kets are vectors.)
 
  • Like
Likes bhobba and weirdoguy
  • #38
PeterDonis said:
I guess since the two are dual you could adopt either convention; but I thought the usual convention was that bras are covectors and kets are vectors.
This.
 
  • Like
Likes bhobba
  • #39
Orodruin said:
This.

I want to point out that, while that is indeed the correct way of looking at it, when you study the rigorous mathematics of what's going on (i.e. Rigged Hilbert Spaces), the above is only generally true for Hilbert spaces; on occasion we use elements from more general spaces where, while it is still true, things are more difficult. I became caught up in this issue when I learned QM and it diverted me unnecessarily from QM proper. An interesting diversion, especially for those of a mathematical bent, but a diversion nonetheless.

If anyone REALLY wants to delve into this issue - it isn't easy - the attachment contains an overview. Note the conclusion: the RHS fully justifies Dirac's bra-ket formalism. In particular, there is a 1:1 correspondence between bras and kets.

However constructing the exact space is not trivial.

Thanks
Bill
 


Last edited:
  • Like
Likes vanhees71
  • #40
PeterDonis said:
Isn't this backwards? Aren't bras covectors and kets vectors?

(I guess since the two are dual you could adopt either convention; but I thought the usual convention was that bras are covectors and kets are vectors.)
Yes, sure :-(( ; ##\langle \phi|## is a linear form (a co-vector), i.e. a bra, and ##|\psi \rangle## is a vector, i.e. a ket. Also note that bras and kets are dual only for proper normalizable Hilbert-space vectors. In QT you need the more general objects of the "rigged Hilbert space" to make sense of the physicists' sloppy math concerning unbounded self-adjoint operators with continuous spectra. See @bhobba's previous posting and the nice talk linked therein.
 
  • #41
I don't want to annoy you with basic questions, but I did not find the answer in the 2 books that I read, nor on the internet.

weirdoguy said:
Kets are generalized vectors, and wave functions are their components.
I assume that the values of the wavefunction are the components of the vector. How do I get the i'th component of the ket from the wavefunction (what should its argument be)? An element of a vector can be identified by just one number (its index), but the wavefunction has more arguments.
vanhees71 said:
$$\psi(\vec{x})=\langle \vec{x}|\psi \rangle.$$
If the vectors are orthonormal, then it should be ##\langle \vec{x}|\psi \rangle=\sum_{i=0}(\langle \vec{x}|(i)) \cdot (|\psi\rangle(i))##, but how can this sum equal a function and not a number?
 
  • #42
The space of possible wavefunctions forms a vector space. Thus the wavefunction, when considered as an element of that vector space, is a vector. That vector space is of a special type called a Hilbert space. A ket is just another notation for the wavefunction when considered as an element of a Hilbert space.
 
  • Like
Likes olgerm
  • #43
olgerm said:
If the vectors are orthonormal, then it should be ##\langle \vec{x}|\psi \rangle=\sum_{i=0}(\langle \vec{x}|(i)) \cdot (|\psi\rangle(i))##, but how can this sum equal a function and not a number?
It's not a sum because ##x## is continuous; you should be integrating.

However the LHS here involves only one basis bra so the sum/integral on the right is not needed.
 
  • #44
olgerm said:
I don't want to annoy you with basic questions, but I did not find the answer in the 2 books that I read, nor on the internet. I assume that the values of the wavefunction are the components of the vector. How do I get the i'th component of the ket from the wavefunction (what should its argument be)? An element of a vector can be identified by just one number (its index), but the wavefunction has more arguments. If the vectors are orthonormal, then it should be ##\langle \vec{x}|\psi \rangle=\sum_{i=0}(\langle \vec{x}|(i)) \cdot (|\psi\rangle(i))##, but how can this sum equal a function and not a number?
The scalar product can be evaluated in terms of wave functions, using the completeness relation
$$\int_{\mathbb{R}^3} \mathrm{d}^3 x |\vec{x} \rangle \langle \vec{x}|=\hat{1},$$
i.e.,
$$\langle \psi|\phi \rangle=\int_{\mathbb{R}^3} \mathrm{d}^3 x \langle \psi|\vec{x} \rangle \langle \vec{x}|\phi \rangle = \int_{\mathbb{R}^3} \mathrm{d}^3 x \psi^*(\vec{x}) \phi(\vec{x}).$$
Here you have the case of a representation in terms of generalized "basis vectors", i.e., the position "eigenvectors". They provide a "continuous" label ##\vec{x} \in \mathbb{R}^3##.
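A numerical sketch of that "insert the completeness relation" step, with a discrete orthonormal basis (sine modes on an interval) instead of the position eigenvectors; the wave functions and the interval are assumed examples:

```python
import numpy as np

# Sketch: <psi|phi> computed directly as integral psi*(x) phi(x) dx, and again by
# inserting a truncation of the identity sum_n |u_n><u_n| for the orthonormal
# basis u_n(x) = sqrt(2/L) sin(n pi x / L) on [0, L].
L = 10.0
x = np.linspace(0.0, L, 4001)
dx = x[1] - x[0]

psi = np.sin(np.pi * x / L) * np.exp(-(x - 4) ** 2)               # assumed examples,
phi = np.sin(np.pi * x / L) * np.exp(-(x - 6) ** 2) * np.exp(2j * x)  # vanishing at 0 and L

direct = np.sum(np.conj(psi) * phi) * dx

N = 60
n = np.arange(1, N + 1)
u = np.sqrt(2 / L) * np.sin(np.outer(n, x) * np.pi / L)   # u_n(x), shape (N, len(x))
c_psi = (u @ psi) * dx        # <u_n|psi>  (the u_n are real)
c_phi = (u @ phi) * dx        # <u_n|phi>
via_basis = np.sum(np.conj(c_psi) * c_phi)
print(direct, via_basis)      # the two agree to good accuracy once enough modes are kept
```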

All this can be made mathematically rigorous in terms of the so-called "rigged Hilbert space" formalism, but that's not necessary to begin with QM. A good "normal" textbook will do. My favorite is

J. J. Sakurai, S. Tuan, Modern Quantum Mechanics, Addison
Wesley (1993).
 
  • #45
DarMM said:
The space of possible wavefunctions forms a vector space. Thus the wavefunction, when considered as an element of that vector space, is a vector. That vector space is of a special type called a Hilbert space. A ket is just another notation for the wavefunction when considered as an element of a Hilbert space.
I think this formulation is the source of the OP's problem. One should clearly distinguish between the abstract ("representation free") vectors and the wave functions, which are the vectors in "position representation".

It's like in finite-dimensional vector spaces the difference between an abstract vector and its components with respect to a basis.
 
  • Like
Likes DarMM
  • #46
DarMM said:
The space of possible wavefunctions forms a vector space.
The vector-space axioms say that any vector in the space multiplied by a scalar should also belong to that vector space, but if I multiplied the wavefunction by 2, ##\Psi_2(x_{proton},y_{proton},z_{proton},x_{electron},y_{electron},z_{electron})=2 \cdot \Psi(x_{proton},y_{proton},z_{proton},x_{electron},y_{electron},z_{electron})##, the sum of all probabilities according to ##\Psi_2## would no longer be 1: ##\int_{-\infty}^\infty dx_{proton} \int_{-\infty}^\infty dy_{proton} \int_{-\infty}^\infty dz_{proton} \int_{-\infty}^\infty dx_{electron} \int_{-\infty}^\infty dy_{electron} \int_{-\infty}^\infty dz_{electron}\, |\Psi_2(x_{proton},y_{proton},z_{proton},x_{electron},y_{electron},z_{electron})|^2=4##.
 
  • #47
True, really it is the space of unnormalized wavefunctions that form a vector space. The actual space of quantum wavefunctions is sort of a unit sphere in this space.

Although note that wavefunctions differing by a phase are equivalent, so even this unit sphere over-describes the space of quantum (pure) states.

The vector space structure is just very useful in calculations.
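A small numerical illustration of those last two points (assumed Gaussian wave function; everything else is just a toy choice): scaling spoils the normalization, while a global phase changes none of the probabilities.

```python
import numpy as np

# Scaling a wave function changes its norm, so normalized states are not closed
# under scalar multiplication; but after renormalization a scalar factor -- in
# particular a pure phase -- leaves every probability |psi(x)|^2 unchanged.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2) / np.pi**0.25           # assumed normalized example

psi2 = 2.0 * psi
print(np.sum(np.abs(psi2) ** 2) * dx)           # 4.0, not 1: psi2 is not normalized

psi2 /= np.sqrt(np.sum(np.abs(psi2) ** 2) * dx) # renormalize
psi_phase = np.exp(1j * 0.7) * psi              # same state, different global phase
print(np.allclose(np.abs(psi2) ** 2, np.abs(psi) ** 2),
      np.allclose(np.abs(psi_phase) ** 2, np.abs(psi) ** 2))   # True True
```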
 
  • Like
Likes vanhees71
  • #48
PeterDonis said:
Isn't this backwards? Aren't bras covectors and kets vectors?

(I guess since the two are dual you could adopt either convention; but I thought the usual convention was that bras are covectors and kets are vectors.)
bhobba said:
when you study the rigorous mathematics of what's going on (ie Rigged Hilbert Spaces) [...] The RHS fully justifies Dirac’s bra-ket formalism. In particular, there is a 1:1 correspondence between bras and kets.
Only in finite-dimensional Hilbert spaces. There kets may be viewed as column vectors (matrices with one column) = vectors of ##C^N##, bras as row vectors (matrices with one row) = covectors = linear forms, and the inner product is just the matrix product of these.

But in order that the Schrödinger equation in the usual ket notation is allowed to have unnormalizable solutions (which Dirac's formalism allows), one must think of the bras as being smooth test functions, i.e. the elements of the nuclear space (the bottom of the triple of vector spaces of a RHS, a space of Schwartz functions), and the kets as the linear functionals on it (the top of the triple, a much bigger space of distributions). In this setting, bras are vectors and kets are covectors, and there is a big difference between these - there are many more kets than bras.

However, Dirac's formalism is fully symmetric. Hence it is not quite matched by the RHS formalism. The latter does not have formulas such as ##\langle x|y\rangle=\delta(x,y)##.
 
Last edited:
  • Like
Likes DarMM and PeterDonis
  • #49
Again, one should really stress in introductory lectures the difference between a vector, which is a basis-independent object (in physics it describes real-world quantities like velocities, accelerations, forces, fields like the electric and magnetic field, or current densities, etc.), and the components of the vector with respect to some basis. Given a basis, there is a one-to-one mapping between vectors and their components.

In quantum theory the kets are vectors in an abstract Hilbert space (with the complex numbers as scalars). In non-relativistic QM with finitely many fundamental degrees of freedom (e.g., for a free particle position, momentum, and spin) the Hilbert space is the separable Hilbert space (there's only one separable Hilbert space modulo isomorphism).

Then there are linear forms on a vector space, i.e., linear maps from the vector space to the field of scalars. These linear forms build a vector space themselves, the dual space to the given vector space. In finite-dimensional vector spaces, given a basis, there's a one-to-one mapping between the vector space and its dual space, but not a basis-independent one. This changes if you introduce a non-degenerate fundamental form, i.e., a bilinear (or for complex vector spaces sesquilinear) form, where you get a basis-independent one-to-one-mapping between vectors and linear forms.

For the Hilbert space, where a scalar product (a sesquilinear form) is defined, you have to distinguish between the continuous linear forms (continuous with respect to the metric of the Hilbert space induced in the usual way from the scalar product) and general linear forms. For the former there's a one-to-one correspondence between the Hilbert space and its ("topological") dual, and in this way these two spaces are identified.

In QM you need the more general linear forms, since you want to use "generalized eigenvectors" to describe a spectral representation of unbounded essentially self-adjoint operators. This always happens when there are such operators with continuous spectra, like position and momentum. One modern mathematically rigorous formulation is the rigged Hilbert space. There you have a domain of the self-adjoint operators like position and momentum, which is a dense sub-vector-space of the Hilbert space. The dual of this dense subspace is larger than the Hilbert space, i.e., it contains more linear forms than the bounded linear forms on the Hilbert space.

Using a complete orthonormal set ##|u_n \rangle## you can map the abstract vectors to square-summable sequences ##(\psi_n)## with ##\psi_n =\langle u_n|\psi \rangle##. These sequences build the Hilbert space ##\ell^2##, and you can write the ##(\psi_n)## as infinite columns. The operators are then represented by the corresponding matrix elements ##A_{mn}=\langle u_m|\hat{A}|u_n \rangle##, which you can arrange as an infinite##\times##infinite matrix. This is a matrix representation of QM, and was the first form in which modern QM was discovered, by Born, Jordan, and Heisenberg in 1925. The heuristics, provided by Heisenberg in his Helgoland paper, was to deal only with transition rates between states. Heisenberg had the discrete energy levels of atoms in mind but could at first demonstrate the principle only using the harmonic oscillator as a model. Born immediately recognized that Heisenberg's "quantum math" was nothing other than matrix calculus in an infinite-dimensional vector space, and then in a quick sequence of papers by Born and Jordan, as well as Born, Jordan, and Heisenberg, the complete theory was worked out (including the quantization of the electromagnetic field!). Many physicists were quite sceptical about the proper meaning of the infinite-dimensional vectors and matrices.
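As a sketch of this matrix representation (my own toy example, with ħ = m = ω = 1 and the harmonic-oscillator basis truncated at N states): the matrices of x and p built from ladder operators reproduce the familiar energies n + 1/2 and satisfy [X, P] = i everywhere except at the truncation edge.

```python
import numpy as np

# Matrix-mechanics sketch (hbar = m = omega = 1): in the truncated harmonic-
# oscillator basis |u_n>, x and p are built from ladder operators.
N = 30                                          # truncated basis size
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), 1)                  # annihilation operator: a|n> = sqrt(n)|n-1>
adag = a.conj().T

X = (a + adag) / np.sqrt(2)
P = (a - adag) / (1j * np.sqrt(2))
H = P @ P / 2 + X @ X / 2                       # Hamiltonian matrix <u_m|H|u_n>

print(np.round(np.diag(H).real[:4], 10))        # energies n + 1/2: [0.5 1.5 2.5 3.5]
comm = X @ P - P @ X
print(np.allclose(comm[:-1, :-1], 1j * np.eye(N - 1)))  # [X,P] = i except at the last row/column
```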

Then you can also use the generalized position eigenvectors ##|\vec{x} \rangle##, which leads to the mapping of the Hilbert-space vectors to square-integrable (in the Lebesgue sense) functions, ##\psi(\vec{x})=\langle \vec{x}|\psi \rangle##. This is the Hilbert space ##\mathrm{L}^2## of square-integrable functions, and the corresponding representation is the second form in which modern quantum theory was discovered, in 1926 by Schrödinger; it is usually called "wave mechanics". Schrödinger showed very early that "wave mechanics" and "matrix mechanics" are the same theory, just written in different representations. Schrödinger's formulation was heuristically derived from the analogy between wave and geometrical optics in electromagnetism, the latter being the eikonal approximation of the former. Schrödinger used the argument backwards, considering the Hamilton-Jacobi partial differential equation as the eikonal approximation of a yet unknown wave equation for particles; this was just the mathematical consequence of the ideas brought forward in de Broglie's PhD thesis, on which Einstein had commented favorably as a further step toward understanding "wave-particle duality".

Almost at the same time Dirac came up with the now-favored abstract formulation, introducing q-numbers with a commutator algebra heuristically linked to the Poisson-bracket formulation of classical mechanics. This was dubbed "transformation theory", because the bra-ket formalism enables a simple calculus for transformations between different representations, like the Fourier transformation between the position and the momentum representation of Schrödinger's wave mechanics.

Finally, the entire edifice was made rigorous by von Neumann, who recognized the vector space as a Hilbert space and formulated a rigorous treatment of unbounded operators. The physicists' sloppy ideas could then be made rigorous by Gelfand et al. in terms of the "rigged Hilbert space" formalism. For practicing theoretical physics you can get along quite well without this formalism, though it's always good to know about its limitations and at least some of its elements. A good compromise between mathematical rigor and physicists' sloppiness is Ballentine's textbook.
 
  • Like
Likes bhobba
  • #50
vanhees71 said:
In quantum theory the kets are vectors in an abstract Hilbert space
No. Many typical Dirac kets - such as ##|x\rangle## - are not vectors in a Hilbert space!
 
  • Like
Likes dextercioby