# Quantum Mechanics without Hilbert Space

#### Varon

Von Neumann introduced the concept of Hilbert space into quantum mechanics. Suppose he hadn't, and we didn't use Hilbert space today. What are its counterparts in the pure Schrödinger equation, in a one-to-one mapping?

In detail: I know that "the states of a quantum mechanical system are vectors in a certain Hilbert space, the observables are hermitian operators on that space, the symmetries of the system are unitary operators, and measurements are orthogonal projections." But this concept was developed by von Neumann. Before he developed the Hilbert-space formalism, what were their counterparts in the pure Schrödinger equation, up to the Born interpretation of the amplitude squared as the probability that the electron can be found there?

Please answer more in words, or conceptually, and not with dense mathematical equations. Thanks.


#### Varon

For example, in the Wikipedia entry on the Schrödinger equation, not a single word is said about Hilbert space.

http://en.wikipedia.org/wiki/Schrödinger_equation

In the book Deep Down Things, the mathematical formulation of the Schrödinger equation is given in nice detail, but Hilbert space is not mentioned anywhere in the book.

This is also true of other books, like Introducing Quantum Theory.

So it seems there is a disconnect between Hilbert space and the original Schrödinger equation, as if the SE could do away with Hilbert space entirely, since Hilbert space is an add-on. What would QM be like without Hilbert space?

#### Chopin

> For example, in the Wikipedia entry on the Schrödinger equation, not a single word is said about Hilbert space.
>
> So it seems there is a disconnect between Hilbert space and the original Schrödinger equation, as if the SE could do away with Hilbert space entirely, since Hilbert space is an add-on. What would QM be like without Hilbert space?
QM without a Hilbert space would be very much like QM in its earliest days, before the Hilbert space formalism was developed. :)

To answer this question, one small correction needs to be made. The Schrodinger Equation describes the dynamics of a system (how things change over time), while the Hilbert space describes the state of a system (how it is at any given moment.) So the two aren't really equivalent. The real comparison to be made is between the Hilbert space and the wavefunction, which is a three-dimensional field of complex numbers that we denote by $$\Psi(x,y,z)$$.

The wavefunction can actually be considered shorthand for states out of a Hilbert space. When we talk about a wavefunction with $$\Psi(0,0,0) = 1$$, that's the same as talking about the Hilbert state where position = $$(0,0,0)$$. So the two are equivalent in a sense, but Hilbert spaces are more general than that. In addition to using a Hilbert state to describe a particle at a position, you can use it to describe a particle with a certain charge, or a certain momentum, or even something more abstract like which slit a particle goes through in the double-slit experiment. The wavefunction is just a convenient notation for labeling the Hilbert states that describe positions.

You can get through a lot of single-particle quantum mechanics without using Hilbert spaces. For instance, you can calculate the energy levels of the hydrogen atom using just the Schrodinger Equation (that's how Schrodinger did it!) But if you want to describe particle interactions, or perturbation theory, or just talk in general about quantum superpositions, then wavefunctions don't cut it--you have to understand Hilbert spaces, and all of the linear algebra that goes with them.
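As an editorial aside: Chopin's point can be made concrete numerically. The sketch below (my own illustration, not from the thread) solves the time-independent Schrödinger equation for the simpler infinite square well by finite differences, in units where hbar = m = L = 1, and compares the lowest energies with the exact values E_n = n^2 pi^2 / 2. Nothing in the setup mentions Hilbert space, yet the diagonalization at the end is, of course, linear algebra.

```python
import numpy as np

# Infinite square well, hbar = m = L = 1: discretize H = -(1/2) d^2/dx^2 on a
# grid with psi = 0 at the walls, then diagonalize. (Illustration only.)
N = 500
x = np.linspace(0.0, 1.0, N + 2)[1:-1]           # interior grid points
dx = x[1] - x[0]

H = (np.diag(np.full(N, 1.0)) -
     0.5 * np.diag(np.ones(N - 1), 1) -
     0.5 * np.diag(np.ones(N - 1), -1)) / dx**2

energies = np.linalg.eigvalsh(H)[:3]             # three lowest levels
exact = np.array([(n * np.pi) ** 2 / 2 for n in (1, 2, 3)])
print(np.round(energies, 4))
print(np.round(exact, 4))
```

The agreement is to a few parts in 10^5; refining the grid improves it further.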

#### Hurkyl

Staff Emeritus
Gold Member
If you're doing calculus in a vector space with an inner product and assuming things are "well-behaved", then you're working in a Hilbert space whether or not you explicitly acknowledge that fact.

#### Varon

> QM without a Hilbert space would be very much like QM in its earliest days, before the Hilbert space formalism was developed. :)
>
> To answer this question, one small correction needs to be made. The Schrodinger Equation describes the dynamics of a system (how things change over time), while the Hilbert space describes the state of a system (how it is at any given moment.) So the two aren't really equivalent. The real comparison to be made is between the Hilbert space and the wavefunction, which is a three-dimensional field of complex numbers that we denote by $$\Psi(x,y,z)$$.
>
> The wavefunction can actually be considered shorthand for states out of a Hilbert space. When we talk about a wavefunction with $$\Psi(0,0,0) = 1$$, that's the same as talking about the Hilbert state where position = $$(0,0,0)$$. So the two are equivalent in a sense, but Hilbert spaces are more general than that. In addition to using a Hilbert state to describe a particle at a position, you can use it to describe a particle with a certain charge, or a certain momentum, or even something more abstract like which slit a particle goes through in the double-slit experiment. The wavefunction is just a convenient notation for labeling the Hilbert states that describe positions.
>
> You can get through a lot of single-particle quantum mechanics without using Hilbert spaces. For instance, you can calculate the energy levels of the hydrogen atom using just the Schrodinger Equation (that's how Schrodinger did it!) But if you want to describe particle interactions, or perturbation theory, or just talk in general about quantum superpositions, then wavefunctions don't cut it--you have to understand Hilbert spaces, and all of the linear algebra that goes with them.
Thanks, this is the clearest answer I've gotten after days of agonizing about it. :)

I was reading this book Introducing Quantum Theory by McEvoy. It says:

"Thus, the solution of Schroedinger's equation - the wave function for the system - was replaced by an infinite series - the wave functions of the individual states - which are natural harmonics of each other. That is to say, their frequencies are related in the ratio of whole numbers, or integers.

The method is shown by the graphs below. The bold curve indicates the initial function which is then replaced by the sum of the infinite series of the harmonic periodic waves.

Schroedinger's remarkable discovery was that the replacement waves described the individual states of the quantum system and their amplitudes gave the relative importance of that particular state to the whole system."

Question: Are the replacement waves (Fourier components) describing the individual states of the quantum system equivalent to the basis vectors in a Hilbert space? If not, what is the equivalent of the replacement waves (Fourier components) in Hilbert space?

#### Chopin

> Are the replacement waves (Fourier components) describing the individual states of the quantum system equivalent to the basis vectors in a Hilbert space?
Yes. Or more accurately, they are one possible basis for the Hilbert space. Just like any vector space, there are infinitely many sets of vectors which can serve as a basis for the space, each as good as any other. We may choose one which is especially convenient for us at the time, but we may later decide to span the space with a new set of vectors. This ability to look at things from multiple directions is the most important concept to understand about QM (in some ways, it's really the only thing to understand.) It is also exactly the same as any other vector space, so if you have experience with linear algebra, chances are you've already puzzled through this concept.

There's really nothing special about the SE that causes its solutions to work like this. It is a theorem that the solutions of any linear differential equation form a vector space--if you add two of them together or multiply by a scalar, the result is again a solution of the equation. The most natural set of basis vectors to use are those corresponding to the harmonics, in exactly the same way as the solutions to a classical standing wave equation are most easily described by a set of Fourier modes. The fact that a Hilbert space can be constructed out of a quantum mechanical system is just a consequence of the SE being linear (or alternatively, the other way around.)
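Editorial sketch of Chopin's linearity point (my own illustration, assuming the infinite-well eigenfunctions sqrt(2) sin(n pi x) on [0, 1]): the modes are orthonormal under the L2 inner product, and projecting a superposition back onto them recovers its coefficients, which is exactly Fourier sine analysis.

```python
import numpy as np

# Box eigenfunctions phi_n(x) = sqrt(2) sin(n pi x) on [0, 1]. (Illustration.)
x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]

def phi(n):
    return np.sqrt(2.0) * np.sin(n * np.pi * x)

def ip(f, g):
    # L2 inner product <f, g> = integral of f*g over [0, 1] (Riemann sum;
    # the integrands vanish at the endpoints, so this is very accurate here)
    return np.sum(f * g) * dx

# The modes are orthonormal, like any decent set of basis vectors:
print(ip(phi(1), phi(1)), ip(phi(1), phi(2)))

# A superposition is itself a valid state, and projecting it back onto the
# basis recovers the coefficients -- this is exactly Fourier sine analysis:
psi = 0.6 * phi(1) + 0.8 * phi(3)
coeffs = np.array([ip(psi, phi(n)) for n in (1, 2, 3)])
print(np.round(coeffs, 3))
```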

#### Matterwave

Gold Member
Some theories exist best in certain mathematical languages, but that doesn't mean the theory cannot take some other form. Another example of this, besides QM, is Einstein's Special Theory of Relativity. Einstein formulated it, and it was only later that Minkowski reformulated the theory in his four-dimensional language (with the Minkowski metric, 4-vectors, etc.).

#### dextercioby

Homework Helper
In my mind, out of all topological vector spaces, it's easiest to work in a Hilbert space. The scalar product offers the natural environment in which to set up a probabilistic interpretation. I cannot conceive of QM without probability (densities), hence without a pre-Hilbert structure on the state space. Completion of this space with respect to the strong topology is a mathematical *must*, as some useful theorems require it.

#### Varon

> Yes. Or more accurately, they are one possible basis for the Hilbert space. Just like any vector space, there are infinitely many sets of vectors which can serve as a basis for the space, each as good as any other. We may choose one which is especially convenient for us at the time, but we may later decide to span the space with a new set of vectors. This ability to look at things from multiple directions is the most important concept to understand about QM (in some ways, it's really the only thing to understand.) It is also exactly the same as any other vector space, so if you have experience with linear algebra, chances are you've already puzzled through this concept.
I was asking whether the replacement waves (Fourier components) describing the individual states of the quantum system are equivalent to the basis vectors in Hilbert space, and you said they are only one possible basis. Why not the basis, since the Fourier series is infinite and can map onto the Hilbert space's infinite dimensions? Both are infinite. Or are you saying that the Fourier components chosen are the basis chosen by the measurement? But before measurement, can't we map the infinite basis vectors onto all the infinite Fourier components? They can fit, since both are infinite.

> There's really nothing special about the SE that causes its solutions to work like this. It is a theorem that the solutions of any linear differential equation form a vector space--if you add two of them together or multiply by a scalar, the result is again a solution of the equation. The most natural set of basis vectors to use are those corresponding to the harmonics, in exactly the same way as the solutions to a classical standing wave equation are most easily described by a set of Fourier modes. The fact that a Hilbert space can be constructed out of a quantum mechanical system is just a consequence of the SE being linear (or alternatively, the other way around.)
http://www4.ncsu.edu/unity/lockers/users/f/felder/public/kenny/papers/psi.html

The paper above is a detailed review for those who have finished a quantum mechanics class. Nowhere is Hilbert space mentioned. So is the Hilbert-space formalism only taken up in more advanced classes?

About Hilbert space: I'm quite familiar with this aspect, having spent time reading on the preferred-basis problem. I just want to understand its connection to Fourier series.
So when the ray collapses to one of the basis vectors, is the collapse counterpart in Fourier terms the choosing of one or a few of the component waves (akin to the choosing of a basis in wave function collapse)? Also, you said earlier that quantum superpositions need Hilbert space and Fourier series won't do. But isn't the main wave in a Fourier series the sum of all its component waves? So why can't we say the component waves are like the basis vectors and the ray is like the main Fourier wave (so quantum superposition should also be possible with Fourier waves)?


#### Chopin

> I cannot conceive of QM without probability (densities), hence without a pre-Hilbert structure on the state space.
Very true, although it's important to note that the wavefunction was not always viewed as a probability measure. When Schrodinger first developed the SE, he viewed $$\Psi$$ as a classical charge density, and you can get a ways into QM (like atomic energy levels) with that assumption. It's only when you get into things like scattering experiments that you have to start viewing it as a probability density.

> I was asking whether the replacement waves (Fourier components) describing the individual states of the quantum system are equivalent to the basis vectors in Hilbert space, and you said they are only one possible basis. Why not the basis, since the Fourier series is infinite and can map onto the Hilbert space's infinite dimensions? Both are infinite. Or are you saying that the Fourier components chosen are the basis chosen by the measurement? But before measurement, can't we map the infinite basis vectors onto all the infinite Fourier components? They can fit, since both are infinite.
Any basis for the Hilbert space must have infinite cardinality, since it's infinite-dimensional. But you can always form a new basis for a space out of linear combinations of the old one. For instance, say we have vectors $$V_0, V_1, V_2, ...$$ that form the basis of the space. We can construct new vectors $$W_0 = V_0 + V_1, W_1 = V_1 + V_2, W_2 = V_2 + V_3, ...$$, and the result will also be a basis for the space. This works exactly like finite-dimensional vector spaces that are studied in linear algebra.

The reason that that specific basis is used when talking about the SE is that each of the components has a definite energy value (technically, because the basis is comprised of the eigenvectors of the Hamiltonian.) This means you can determine the energy level of any wave just by taking a weighted average of the energies of each component wave.
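A tiny worked example of that weighted average (my own numbers, not from the thread, using the infinite-well energies E_n = n^2 pi^2 / 2 in units hbar = m = L = 1): if psi is an equal-weight superposition of modes 1 and 3, its mean energy is the |c_n|^2-weighted average of the eigenvalues.

```python
import numpy as np

# Mean energy of psi = c_1 phi_1 + c_3 phi_3 in the infinite well, where
# phi_n has definite energy E_n = n^2 pi^2 / 2. (Illustration only.)
def E(n):
    return (n * np.pi) ** 2 / 2

c = np.array([np.sqrt(0.5), 0.0, np.sqrt(0.5)])   # amplitudes for n = 1, 2, 3
n = np.array([1, 2, 3])

mean_E = np.sum(np.abs(c) ** 2 * E(n))            # |c_n|^2-weighted average
print(mean_E)                                     # halfway between E_1 and E_3
```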

> http://www4.ncsu.edu/unity/lockers/users/f/felder/public/kenny/papers/psi.html
>
> The paper above is a detailed review for those who have finished a quantum mechanics class. Nowhere is Hilbert space mentioned. So is the Hilbert-space formalism only taken up in more advanced classes?
My understanding is that the Hilbert space concept was largely developed by Dirac, as a way of formalizing the work that had thus far gone on in the development of QM. As you can see by the paper you have linked to, you can do quite a bit without referring to it. But you're using it implicitly all along, by virtue of the fact that the solutions of the SE combine linearly. You're just not calling it out using vector space terminology. Dirac's innovation was developing a notation that made it easy to do so, and reinterpreting the complex magnitude of the wavefunction as an inner product between rays in the Hilbert space.

> About Hilbert space: I'm quite familiar with this aspect, having spent time reading on the preferred-basis problem. I just want to understand its connection to Fourier series.
> So when the ray collapses to one of the basis vectors, is the collapse counterpart in Fourier terms the choosing of one or a few of the component waves (akin to the choosing of a basis in wave function collapse)? Also, you said earlier that quantum superpositions need Hilbert space and Fourier series won't do. But isn't the main wave in a Fourier series the sum of all its component waves? So why can't we say the component waves are like the basis vectors and the ray is like the main Fourier wave (so quantum superposition should also be possible with Fourier waves)?
We can and do say that. The Fourier series is just forming a linear combination of basis states, which is exactly what the Hilbert space is. The two are isomorphic to each other--the Fourier decomposition of any wave (classical or quantum) can be viewed as expanding the wave into a basis defined by the harmonics. The Hilbert space formalism is just an extension of this concept that lets you do more advanced things with it.

#### unusualname

Even the complex numbers, with the norm |psi|^2 (i.e. psi psi*, or the inner product psi1 psi2*), form a Hilbert space.

So you cannot do QM without hilbert space, because nature has complex probability amplitudes, 'nuff said.

#### JesseM

Varon, are you familiar with the notion that if you have a position wavefunction, you can find the amplitudes of different momentum eigenstates by expressing the position wavefunction as a Fourier series, where each term in the series is a momentum eigenstate? In terms of the position representation, each momentum eigenstate corresponds to a uniform probability of finding the particle anywhere in all of space, but with the phase of the wavefunction varying like a sine wave (the "phase" corresponds to the direction in the complex plane that the complex amplitude is pointing, if you consider the amplitude of finding the particle at each possible position in space for a given momentum eigenstate). So, only in the momentum eigenstate does the particle have a single definite wavelength. When you think of it this way, you can show that something like the "uncertainty principle" would apply even to ordinary classical waves (the more localized a wavepacket is in space, the greater the spread in wavelengths in the Fourier series), although obviously classical waves aren't waves of probability, so the interpretation of the quantum uncertainty principle is a bit different. Anyway, this gives you a possibly more intuitive way of understanding why position and momentum don't "commute" that just involves picturing waves in space, not matrices.
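JesseM's localization-versus-wavelength trade-off holds for any wave, quantum or classical, and is easy to demonstrate. The sketch below (my own illustration, not from the thread) Fourier-transforms Gaussian packets of different widths and shows that the product of the position and wavenumber spreads stays pinned near 1/2.

```python
import numpy as np

# The narrower a packet is in position, the wider its Fourier (momentum)
# spectrum: an "uncertainty principle" for any wave. (Illustration only.)
x = np.linspace(-50.0, 50.0, 4096)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)     # wavenumbers, FFT ordering

def spreads(sigma):
    psi = np.exp(-x**2 / (2 * sigma**2))         # Gaussian packet, width sigma
    phi = np.fft.fft(psi)                        # its wavenumber-space amplitude
    p_x = np.abs(psi)**2 / np.sum(np.abs(psi)**2)
    p_k = np.abs(phi)**2 / np.sum(np.abs(phi)**2)
    w_x = np.sqrt(np.sum(p_x * x**2))            # rms widths (means are zero
    w_k = np.sqrt(np.sum(p_k * k**2))            # by symmetry of the packet)
    return w_x, w_k

for s in (0.5, 1.0, 2.0):
    w_x, w_k = spreads(s)
    print(s, round(w_x, 3), round(w_k, 3), round(w_x * w_k, 3))
```

For a Gaussian the product w_x * w_k equals exactly 1/2, the minimum-uncertainty case; other packet shapes give a larger product.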

Here's a good basic intro with some pictures:

http://webs.morningside.edu/slaven/physics/uncertainty/uncertainty5.html


#### Chopin

> Even the complex numbers, with the norm |psi|^2 (i.e. psi psi*, or the inner product psi1 psi2*), form a Hilbert space.
>
> So you cannot do QM without hilbert space, because nature has complex probability amplitudes, 'nuff said.
Ha. Well said.

#### Varon

> Varon, are you familiar with the notion that if you have a position wavefunction, you can find the amplitudes of different momentum eigenstates by expressing the position wavefunction as a Fourier series, where each term in the series is a momentum eigenstate? In terms of the position representation, each momentum eigenstate corresponds to a uniform probability of finding the particle anywhere in all of space, but with the phase of the wavefunction varying like a sine wave (the "phase" corresponds to the direction in the complex plane that the complex amplitude is pointing, if you consider the amplitude of finding the particle at each possible position in space for a given momentum eigenstate). So, only in the momentum eigenstate does the particle have a single definite wavelength. When you think of it this way, you can show that something like the "uncertainty principle" would apply even to ordinary classical waves (the more localized a wavepacket is in space, the greater the spread in wavelengths in the Fourier series), although obviously classical waves aren't waves of probability, so the interpretation of the quantum uncertainty principle is a bit different. Anyway, this gives you a possibly more intuitive way of understanding why position and momentum don't "commute" that just involves picturing waves in space, not matrices.
>
> Here's a good basic intro with some pictures:
>
> http://webs.morningside.edu/slaven/physics/uncertainty/uncertainty5.html
Yes, I'm familiar with how the HUP can be modeled by pure waves. In fact this is shown in detail in the book Deep Down Things, but it never mentions Hilbert space. So, for this position wave with many momentum component waves making it up: in the Hilbert version, is each momentum wave equivalent to one component vector, or axis, in Hilbert space?

I was reading another book, on the history of QM. Schrödinger actually thought the wave was physical. These days we are told the wave function is not in 3D space because the Hilbert space is multi-dimensional, but if we model it by pure Fourier analysis, each component, even infinitely many of them, is still in 3D space. So is there still a possibility that Schrödinger was right and the wave is in 3D? Why not?


#### Varon

> Very true, although it's important to note that the wavefunction was not always viewed as a probability measure. When Schrodinger first developed the SE, he viewed $$\Psi$$ as a classical charge density, and you can get a ways into QM (like atomic energy levels) with that assumption. It's only when you get into things like scattering experiments that you have to start viewing it as a probability density.
>
> Any basis for the Hilbert space must have infinite cardinality, since it's infinite-dimensional. But you can always form a new basis for a space out of linear combinations of the old one. For instance, say we have vectors $$V_0, V_1, V_2, ...$$ that form the basis of the space. We can construct new vectors $$W_0 = V_0 + V_1, W_1 = V_1 + V_2, W_2 = V_2 + V_3, ...$$, and the result will also be a basis for the space. This works exactly like finite-dimensional vector spaces that are studied in linear algebra.
>
> The reason that that specific basis is used when talking about the SE is that each of the components has a definite energy value (technically, because the basis is comprised of the eigenvectors of the Hamiltonian.) This means you can determine the energy level of any wave just by taking a weighted average of the energies of each component wave.
Are you referring to Fourier components when you mentioned that each component has a definite energy value? Or can you model this in Hilbert space too, with each axis having a definite energy value? If so, is it possible to do away with Fourier entirely and use pure Hilbert space, even when dealing with eigenvectors of the Hamiltonian? Meaning, you only focus on certain basis vectors and ignore the rest?

> My understanding is that the Hilbert space concept was largely developed by Dirac, as a way of formalizing the work that had thus far gone on in the development of QM. As you can see by the paper you have linked to, you can do quite a bit without referring to it. But you're using it implicitly all along, by virtue of the fact that the solutions of the SE combine linearly. You're just not calling it out using vector space terminology. Dirac's innovation was developing a notation that made it easy to do so, and reinterpreting the complex magnitude of the wavefunction as an inner product between rays in the Hilbert space.
>
> We can and do say that. The Fourier series is just forming a linear combination of basis states, which is exactly what the Hilbert space is. The two are isomorphic to each other--the Fourier decomposition of any wave (classical or quantum) can be viewed as expanding the wave into a basis defined by the harmonics. The Hilbert space formalism is just an extension of this concept that lets you do more advanced things with it.
What advanced things can Hilbert space do that Fourier can't, mathematically? Why can't you use pure Fourier analysis to describe the double-slit experiment?

#### Chopin

> Are you referring to Fourier components when you mentioned that each component has a definite energy value? Or can you model this in Hilbert space too, with each axis having a definite energy value? If so, is it possible to do away with Fourier entirely and use pure Hilbert space, even when dealing with eigenvectors of the Hamiltonian? Meaning, you only focus on certain basis vectors and ignore the rest?
You're always going to be taking the eigenvectors of the Hamiltonian and considering them the set of base states. In an infinite square well, those happen to form a Fourier series, but they don't always. For instance, if you analyze something like a Coulomb potential, you'll again find a set of constant-energy solutions, which look sort of like sine waves, but their amplitudes and frequencies will vary as you move out from the center of the potential. These waves can't be considered a Fourier decomposition anymore, because they're not just sine waves, but you can still form all valid solutions to the equation by combining them linearly.

So it's the same idea as a Fourier series, where you have a set of base states that you combine together, but the math doesn't work out like a Fourier series anymore (you aren't adding sine waves, you're adding crazy Bessel functions or something.) This is what I mean when I say that a Hilbert space is a generalization of a Fourier series--the elements of a Fourier series are actually also a Hilbert space, even though you might not think about them in those terms.
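A quick numerical illustration of this generalization (my own, using a harmonic potential as a simpler stand-in for Chopin's Coulomb example, with hbar = m = omega = 1): the energy eigenfunctions are no longer sine waves, yet diagonalizing the discretized Hamiltonian still hands back an orthonormal basis in which any state can be expanded.

```python
import numpy as np

# Harmonic oscillator H = -(1/2) d^2/dx^2 + (1/2) x^2, discretized on a grid.
# Exact levels are E_n = n + 1/2. (Illustration only.)
N = 800
x = np.linspace(-10.0, 10.0, N)
dx = x[1] - x[0]

T = (np.diag(np.full(N, 1.0)) -
     0.5 * np.diag(np.ones(N - 1), 1) -
     0.5 * np.diag(np.ones(N - 1), -1)) / dx**2
H = T + np.diag(0.5 * x**2)

E, V = np.linalg.eigh(H)
print(np.round(E[:4], 4))                 # close to [0.5, 1.5, 2.5, 3.5]

# The eigenfunctions (columns of V) are Gaussian-damped Hermite shapes, not
# sines, yet they are orthonormal: a perfectly good basis for expansions.
print(np.allclose(V.T @ V, np.eye(N)))
```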

> What advanced things can Hilbert space do that Fourier can't, mathematically? Why can't you use pure Fourier analysis to describe the double-slit experiment?
The double slit thing was probably a bad example. A better example of something that a Hilbert space can do that's beyond the Schrodinger wavefunction's capability is handling particles with spin. Say you have a single electron which can be either spin up or spin down. According to the laws of quantum mechanics, this means it can also be in a superposition of these two states--for instance, it could be 75% spin up and 25% spin down. We can describe this by naming two states, perhaps $$U$$ for Up and $$D$$ for Down. Then our new state would be $$X = \sqrt{0.75}\,U + \sqrt{0.25}\,D$$ (by the Born rule, the probabilities are the squared magnitudes of the amplitudes). You can see that we're making linear combinations of states here too, just like we would in a Fourier series if we said that a wavefunction was 75% fundamental and 25% 1st harmonic. It's just applying the same sort of concept to a different domain of states.
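The spin example fits in a two-dimensional Hilbert space and takes only a couple of lines of linear algebra. In this sketch (my own illustration, not from the thread), the amplitudes are the square roots of the stated probabilities, per the Born rule.

```python
import numpy as np

# A spin-1/2 state needs no wavefunction over space at all: just a vector in
# a 2-dimensional Hilbert space. (Illustration only.)
U = np.array([1.0, 0.0])                 # spin up basis state
D = np.array([0.0, 1.0])                 # spin down basis state

X = np.sqrt(0.75) * U + np.sqrt(0.25) * D   # 75% up / 25% down superposition

probs = np.abs(X) ** 2                   # Born rule: probabilities
print(probs, np.sum(probs))              # [0.75 0.25] and total 1
```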


#### JesseM

> Yes, I'm familiar with how the HUP can be modeled by pure waves. In fact this is shown in detail in the book Deep Down Things, but it never mentions Hilbert space. So, for this position wave with many momentum component waves making it up: in the Hilbert version, is each momentum wave equivalent to one component vector, or axis, in Hilbert space?
Yes, but only in the momentum basis. A momentum wave (momentum eigenstate) would be a sum of an infinite number of different vectors in the position basis (where each basis vector corresponds to a wavefunction whose amplitude is perfectly localized at a single point, so you just have a spike at that point and zero amplitude everywhere else). Similarly, each position basis vector (position eigenstate) is the sum of an infinite number of different momentum eigenstates. I talked a bit more about the idea of decomposing a state vector into a sum of basis vectors in [post=3250764]this post[/post].
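On a finite grid this basis change becomes concrete: it is just the discrete Fourier transform. The sketch below (my own illustration, with an 8-point grid) shows a momentum eigenstate spread uniformly over every position, and a position spike spread uniformly over every momentum, exactly as described above.

```python
import numpy as np

# Unitary DFT matrix: the change of basis between the position grid and the
# discrete momentum (plane-wave) states. (Illustration only.)
N = 8
n = np.arange(N)
F = np.exp(2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

# One momentum eigenstate (column k = 2): its position-space probabilities
# are uniform, 1/N everywhere -- totally delocalized.
k_state = F[:, 2]
print(np.round(np.abs(k_state) ** 2, 4))

# Conversely, a position eigenstate (a spike at one grid point) spreads
# uniformly over every momentum eigenstate.
spike = np.zeros(N)
spike[3] = 1.0
print(np.round(np.abs(F.conj().T @ spike) ** 2, 4))
```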
> I was reading another book, on the history of QM. Schrödinger actually thought the wave was physical. These days we are told the wave function is not in 3D space because the Hilbert space is multi-dimensional, but if we model it by pure Fourier analysis, each component, even infinitely many of them, is still in 3D space. So is there still a possibility that Schrödinger was right and the wave is in 3D? Why not?
Schrödinger originally thought this way, but while the idea might seem to work for single-particle wavefunctions, he soon realized it doesn't work for multiparticle wavefunctions. There's a good history of these ideas in a book called The Infamous Boundary by David Wick, in the chapter "Revolution, Part II: Schrödinger's Waves". On p. 34 Wick writes:

"Schrödinger still hopes his wave might represent a spread-out electron. But there is a second difficulty: for two or more particles, more than three numbers are needed to describe their locations, so his wave exists in a fictitious space of more than three dimensions. Schrödinger has not forgotten this point; he even emphasizes it a few sentences later:
It has been stressed repeatedly that the [wave]-function itself cannot and must not in general be interpreted directly in terms of three-dimensional space however the one-electron problem leads towards this...
"Unfortunately for Schrödinger, this last fact is fatal to his view."

#### Varon

> Yes, but only in the momentum basis. A momentum wave (momentum eigenstate) would be a sum of an infinite number of different vectors in the position basis (where each basis vector corresponds to a wavefunction whose amplitude is perfectly localized at a single point, so you just have a spike at that point and zero amplitude everywhere else). Similarly, each position basis vector (position eigenstate) is the sum of an infinite number of different momentum eigenstates. I talked a bit more about the idea of decomposing a state vector into a sum of basis vectors in [post=3250764]this post[/post].
>
> Schrödinger originally thought this way, but while the idea might seem to work for single-particle wavefunctions, he soon realized it doesn't work for multiparticle wavefunctions. There's a good history of these ideas in a book called The Infamous Boundary by David Wick, in the chapter "Revolution, Part II: Schrödinger's Waves". On p. 34 Wick writes:
>
> "Schrödinger still hopes his wave might represent a spread-out electron. But there is a second difficulty: for two or more particles, more than three numbers are needed to describe their locations, so his wave exists in a fictitious space of more than three dimensions. Schrödinger has not forgotten this point; he even emphasizes it a few sentences later:
>
> "Unfortunately for Schrödinger, this last fact is fatal to his view."
What exactly are the three numbers needed to describe a position? Axes? Hence with 2 particles, 6 axes. But this assumption is based on modeling it in Hilbert space. If you use plain Fourier analysis, the 2 particles are still located in 3D space. You must include more waves, and the waves are all in 3D!

By the way: the integers called quantum numbers by Bohr, Sommerfeld and Heisenberg, which are said "to be related in a natural way to the numbers of nodes in a vibrating system": what is their equivalent in Hilbert space, in terms of basis vectors?

#### Varon

> You're always going to be taking the eigenvectors of the Hamiltonian and considering them the set of base states. In an infinite square well, those happen to form a Fourier series, but they don't always. For instance, if you analyze something like a Coulomb potential, you'll again find a set of constant-energy solutions, which look sort of like sine waves, but their amplitudes and frequencies will vary as you move out from the center of the potential. These waves can't be considered a Fourier decomposition anymore, because they're not just sine waves, but you can still form all valid solutions to the equation by combining them linearly.
>
> So it's the same idea as a Fourier series, where you have a set of base states that you combine together, but the math doesn't work out like a Fourier series anymore (you aren't adding sine waves, you're adding crazy Bessel functions or something.) This is what I mean when I say that a Hilbert space is a generalization of a Fourier series--the elements of a Fourier series are actually also a Hilbert space, even though you might not think about them in those terms.
Why did you mention the Coulomb potential? Are you giving it as an example of something that can only be modeled by Hilbert space and not by Fourier analysis?

A separate question: I know that Hilbert space is like phase space, in that a single ray or point describes the whole quantum state. So do you mean that in the Schrödinger equation not every basis is used, and they only put in the base states? Does it depend on your application what kind of data to put in the basis? For example, when calculating the kinetic energy of an electron you supply a certain base state; when calculating position, you put in a certain other state; etc.? I thought one simply put in the complete state. Why? What would happen if you put in the complete state, or used the complete basis, instead of just one basis?

> The double slit thing was probably a bad example. A better example of something that a Hilbert space can do that's beyond the Schrodinger wavefunction's capability is handling particles with spin. Say you have a single electron which can be either spin up or spin down. According to the laws of quantum mechanics, this means it can also be in a superposition of these two states--for instance, it could be 75% spin up and 25% spin down. We can describe this by naming two states, perhaps $$U$$ for Up and $$D$$ for Down. Then our new state would be $$X = \sqrt{0.75}\,U + \sqrt{0.25}\,D$$ (by the Born rule, the probabilities are the squared magnitudes of the amplitudes). You can see that we're making linear combinations of states here too, just like we would in a Fourier series if we said that a wavefunction was 75% fundamental and 25% 1st harmonic. It's just applying the same sort of concept to a different domain of states.

#### JesseM

What exactly are the 3 numbers that describe the position? Axes? Hence if there are 2 particles, then 6 axes. But this assumption is based on modeling it in Hilbert space. If you use plain Fourier, the 2 particles are still located in 3D space. You must include more waves, and the waves are all in 3D!
But the wavefunction itself assigns a single amplitude to each combination of positions for both particles, it doesn't give a separate set of amplitudes for each particle individually to be found at various positions.
Btw, the integers called quantum numbers by Bohr, Sommerfeld and Heisenberg, which are said "to be related in a natural way to the number of nodes in a vibrating system"--what are their equivalents in Hilbert space via basis vectors?
Are you talking about the quantum numbers for electrons in an atom? If so, I think the idea is that the numbers characterize different stationary states for the electron, each of which is an eigenstate of energy--one where the probabilities of finding the electron at different positions don't change with time (though the complex amplitude can change, meaning the phase of the amplitude vector can change while the magnitude doesn't; the wiki article has some graphic illustrations, and some more mathematical details are at http://itl.chem.ufl.edu/3417_s98/lectures/super_1.html [Broken]). Some details on what the numbers represent individually are here. And for each set of quantum numbers you can plot the distinctive shape of the probability density function in space (often they just plot a surface of constant probability density); see here or here.
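The "phase changes but the magnitude doesn't" point can be checked in a few lines (my own toy sketch: the ground state of a 1D box with made-up units for E and hbar):

```python
import numpy as np

# An energy eigenstate only picks up a rotating complex phase exp(-iEt/hbar)
# over time, so the probability density |psi|^2 never changes.
x = np.linspace(0.0, 1.0, 1001)
psi0 = np.sqrt(2.0) * np.sin(np.pi * x)   # ground state of a box of length 1

E, hbar = 1.3, 1.0                        # arbitrary illustrative units
for t in (0.0, 0.7, 2.9):
    psi_t = psi0 * np.exp(-1j * E * t / hbar)
    # the phase of the amplitude rotates with t, but its magnitude doesn't
    assert np.allclose(np.abs(psi_t) ** 2, psi0 ** 2)
print("probability density is time-independent for a stationary state")
```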

#### Chopin

Why did you mention the Coulomb potential? Are you giving it as an example of something that can only be modeled in a Hilbert space and not by Fourier?
I think part of your confusion may come from the fact that you're using "Fourier" to talk about two different things. First, you're using it to describe a series of sinusoidal waves, with frequencies that are integral multiples of each other. Second, you're using it to describe the act of modeling a system with a wavefunction $$\Psi$$, solving the Schrodinger Equation to find various solutions $$\Psi_0, \Psi_1, \Psi_2, ...$$, and then finally observing that you can build any solution out of a linear combination of these solutions.

For the infinite square well, those solutions happen to be a bunch of sine waves, with frequencies that are integral multiples of each other. Hooray, it's a Fourier series! That means that you can model any solution to the equation as a combination of these waves, which is basically just the spectral decomposition of the function.

What I was trying to illustrate with my Coulomb potential, though, is that the solutions to the equation need not be a bunch of sinusoids. For the Coulomb potential, you can still find a series of solutions $$\Psi_0, \Psi_1, \Psi_2, ...$$, but they're not going to be sinusoids--they turn out to involve associated Laguerre polynomials multiplied by decaying exponentials (with spherical harmonics for the angular part); some god-awful set of non-sinusoidal functions in any case, so my point is the same. Since the set of solutions no longer forms a Fourier series, you can't just take the spectral decomposition of the solution anymore. Therefore, in Sense 1 from above, we can no longer talk about this being a Fourier series.

However, your Sense 2 from above still applies--we can form any solution to the equation by linearly combining these basis solutions. This comes from the fact that the SE is linear, so those solutions $$\Psi_0, \Psi_1, \Psi_2, ...$$ form a vector space, which comprises all of the solutions to the equation. The neat thing about this vector space is that you can pick any basis you want, and it will be possible to expand out a function in terms of it. If you're interested in finding energy levels, it will be convenient to find a basis for which each vector has a definite energy. If you're interested in finding position, it will be convenient to find one for which each vector has a definite position, etc. This vector space is the Hilbert space--there's nothing scary or mystical about it, it's just a way of enumerating all of the different solutions to the equation, and makes explicit the fact that any linear combination of them is again a solution.
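The "pick any basis you want" idea can be shown with a toy 3-dimensional example (my own sketch, not a real physical system; the "energy basis" label is just illustrative):

```python
import numpy as np

# The same state vector can be expanded in different orthonormal bases,
# and either expansion reconstructs it exactly; the total probability
# (squared length) is the same in every basis.
rng = np.random.default_rng(0)
state = rng.normal(size=3) + 1j * rng.normal(size=3)

basis_a = np.eye(3, dtype=complex)                   # say, an "energy" basis
# build a second orthonormal basis from a random unitary via QR decomposition
q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
basis_b = q.T                                        # rows are basis vectors

for basis in (basis_a, basis_b):
    coeffs = np.array([np.vdot(b, state) for b in basis])   # <b_i|state>
    rebuilt = sum(c * b for c, b in zip(coeffs, basis))
    assert np.allclose(rebuilt, state)               # the expansion is faithful
    # the squared length is basis-independent (unitarity)
    assert np.isclose(np.sum(np.abs(coeffs) ** 2), np.vdot(state, state).real)
print("same vector, two bases, same physics")
```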

#### Varon

But the wavefunction itself assigns a single amplitude to each combination of positions for both particles, it doesn't give a separate set of amplitudes for each particle individually to be found at various positions.
Why can't you use Fourier, which lives only in 3D, to model the amplitude of positions for both particles? Unless you mean Fourier can be in more than 3D too?

Schroedinger was a genius. He could easily have realized that two electrons would require more than 3 dimensions. But maybe he thought Fourier was enough to describe two particles.

Are you talking about the quantum numbers for electrons in an atom? If so, I think the idea is that the numbers characterize different stationary states for the electron, each of which is an eigenstate of energy--one where the probabilities of finding the electron at different positions don't change with time (though the complex amplitude can change, meaning the phase of the amplitude vector can change while the magnitude doesn't; the wiki article has some graphic illustrations, and some more mathematical details are at http://itl.chem.ufl.edu/3417_s98/lectures/super_1.html [Broken]). Some details on what the numbers represent individually are here. And for each set of quantum numbers you can plot the distinctive shape of the probability density function in space (often they just plot a surface of constant probability density); see here or here.
I'm talking of n as the size of the orbital, k as the shape of the orbit, and m as the direction in which the orbit is pointing. My book Introducing Quantum Theory mentions them, but the book never talks about Hilbert space, so I wonder how they are arranged in Hilbert space.

#### Chopin

Why can't you use Fourier, which lives only in 3D, to model the amplitude of positions for both particles? Unless you mean Fourier can be in more than 3D too?
Because it doesn't work that way. If you want to write a wavefunction for two particles, you write it as $$\Psi(x_0,y_0,z_0,x_1,y_1,z_1)$$, whose squared magnitude gives the probability of finding particle 0 at $$(x_0,y_0,z_0)$$ AND particle 1 at $$(x_1,y_1,z_1)$$. Therefore, you're now performing calculations in a 6-dimensional space. This is a concept that took me forever to wrap my head around, and it led to a lot of confusion until I did.
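Here is a sketch of the same idea in reduced dimensions (my own toy example: two particles that each live on a 1D line, so the configuration space is 2D instead of 6D, and I've picked box modes on [0, 1] as the ingredients):

```python
import numpy as np

# For two 1D particles the wavefunction is a function of TWO coordinates,
# Psi(x0, x1): one amplitude per PAIR of positions, not two separate waves
# in ordinary space.  (For two 3D particles it would be six coordinates.)
x = np.linspace(0.0, 1.0, 401)
dx = x[1] - x[0]
X0, X1 = np.meshgrid(x, x, indexing="ij")   # grid over all (x0, x1) pairs

def mode(n, X):
    # normalized box eigenfunction evaluated on the grid
    return np.sqrt(2.0) * np.sin(n * np.pi * X)

# a simple entangled combination: "(mode1, mode2) + (mode2, mode1)"
psi = (mode(1, X0) * mode(2, X1) + mode(2, X0) * mode(1, X1)) / np.sqrt(2.0)

# |Psi(x0, x1)|^2 integrates to 1 over the WHOLE configuration space
total_prob = np.sum(np.abs(psi) ** 2) * dx * dx
print(total_prob)   # ~1.0
```

The state above can't be written as one wave for particle 0 times one wave for particle 1, which is exactly why a single wave in ordinary 3D space isn't enough.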

I'm talking of n as the size of the orbital, k as the shape of the orbit, and m as the direction in which the orbit is pointing. My book Introducing Quantum Theory mentions them, but the book never talks about Hilbert space, so I wonder how they are arranged in Hilbert space.
I'm going to go ahead and bust out the bra-ket notation here, to give some real examples of how this works. If we want to describe the orbitals as vectors in a Hilbert space, we might label them as $$|\psi_{nkm}\rangle$$, so the vector $$|\psi_{032}\rangle$$ would represent the orbital with n=0, k=3, m=2.

Using this notation, we can now describe a particle which is in a superposition of these orbitals by saying something like $$|\phi\rangle = \sqrt{.75}\,|\psi_{012}\rangle + \sqrt{.25}\,|\psi_{321}\rangle$$ (again, the coefficients are amplitudes whose squared magnitudes give the probabilities).

So far, this is just a slightly different way of writing our states, and doesn't really add anything useful. Where it really gets powerful, though, is if you want to test an arbitrary vector to see what states it's in. Suppose I give you a vector $$|\phi\rangle$$, and you want to know what the probability is that it's in state $$|\psi_{021}\rangle$$. You can take the inner product of $$|\phi\rangle$$ and $$|\psi_{021}\rangle$$, which we denote as $$\langle\psi_{021}|\phi\rangle$$, and the squared magnitude of that (complex) value will give you the probability you're looking for.

You can do the same thing with wavefunctions, but it's much more cumbersome. The equivalent calculation using a wave function is $$\int{\psi_{021}(x)^*\phi(x) dx}$$, which is a lot harder to carry out. The Hilbert space just gives you a way to talk about these things without worrying about what their shape in space looks like. That makes it much easier to write out the calculations.
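A quick numerical check that the two notations give the same number (my own sketch; I've stood in for the orbitals with box modes on [0, 1], since the comparison works the same way for any orthonormal basis):

```python
import numpy as np

# The abstract <psi|phi> and the integral of psi*(x) phi(x) dx are the SAME
# number -- the Hilbert-space form just hides the x-dependence.
x = np.linspace(0.0, 1.0, 20001)
dx = x[1] - x[0]

def mode(n):
    # normalized box eigenfunction, standing in for an orbital
    return np.sqrt(2.0) * np.sin(n * np.pi * x)

# phi = sqrt(.75)*mode1 + sqrt(.25)*mode2, written both ways
phi_vec = np.array([np.sqrt(0.75), np.sqrt(0.25)])    # coefficients only
phi_x = phi_vec[0] * mode(1) + phi_vec[1] * mode(2)   # explicit wavefunction

# probability of finding phi in mode 1:
p_braket = abs(phi_vec[0]) ** 2                       # |<psi_1|phi>|^2, trivial
p_integral = abs(np.sum(mode(1) * phi_x) * dx) ** 2   # |integral psi_1* phi dx|^2
print(p_braket, p_integral)   # both ~0.75
```

(The modes here are real, so the complex conjugation in $$\psi_{021}(x)^*$$ is invisible; for complex wavefunctions it matters.)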

#### Varon

I think part of your confusion may come from the fact that you're using "Fourier" to talk about two different things. First, you're using it to describe a series of sinusoidal waves, with frequencies that are integral multiples of each other. Second, you're using it to describe the act of modeling a system with a wavefunction $$\Psi$$, solving the Schrodinger Equation to find various solutions $$\Psi_0, \Psi_1, \Psi_2, ...$$, and then finally observing that you can build any solution out of a linear combination of these solutions.

For the infinite square well, those solutions happen to be a bunch of sine waves, with frequencies that are integral multiples of each other. Hooray, it's a Fourier series! That means that you can model any solution to the equation as a combination of these waves, which is basically just the spectral decomposition of the function.

What I was trying to illustrate with my Coulomb potential, though, is that the solutions to the equation need not be a bunch of sinusoids. For the Coulomb potential, you can still find a series of solutions $$\Psi_0, \Psi_1, \Psi_2, ...$$, but they're not going to be sinusoids--they turn out to involve associated Laguerre polynomials multiplied by decaying exponentials (with spherical harmonics for the angular part); some god-awful set of non-sinusoidal functions in any case, so my point is the same. Since the set of solutions no longer forms a Fourier series, you can't just take the spectral decomposition of the solution anymore. Therefore, in Sense 1 from above, we can no longer talk about this being a Fourier series.

However, your Sense 2 from above still applies--we can form any solution to the equation by linearly combining these basis solutions. This comes from the fact that the SE is linear, so those solutions $$\Psi_0, \Psi_1, \Psi_2, ...$$ form a vector space, which comprises all of the solutions to the equation. The neat thing about this vector space is that you can pick any basis you want, and it will be possible to expand out a function in terms of it. If you're interested in finding energy levels, it will be convenient to find a basis for which each vector has a definite energy. If you're interested in finding position, it will be convenient to find one for which each vector has a definite position, etc. This vector space is the Hilbert space--there's nothing scary or mystical about it, it's just a way of enumerating all of the different solutions to the equation, and makes explicit the fact that any linear combination of them is again a solution.
Thanks. I was only thinking of Sense 1; my background in Fourier came from music. So you are saying there is a Sense 2 in which you add the solutions even if they don't involve sine waves? What is that concept called? I thought Fourier only involved sine waves and superpositions of them.

About Hilbert space being mystical, lol. Hilbert space is the home of Schrodinger's cat in a ghostly superposition of being dead or alive. Before measurement, we don't even know what occurs... whether branches are split into Many Worlds, or Bohmian mechanics with an instantaneous wave function, or observers having the power to collapse the wave function, etc. So yes, Virginia, Hilbert space is a mystical place. :)

#### Varon

Because it doesn't work that way. If you want to write a wavefunction for two particles, you write it as $$\Psi(x_0,y_0,z_0,x_1,y_1,z_1)$$, whose squared magnitude gives the probability of finding particle 0 at $$(x_0,y_0,z_0)$$ AND particle 1 at $$(x_1,y_1,z_1)$$. Therefore, you're now performing calculations in a 6-dimensional space. This is a concept that took me forever to wrap my head around, and it led to a lot of confusion until I did.

I'm going to go ahead and bust out the bra-ket notation here, to give some real examples of how this works. If we want to describe the orbitals as vectors in a Hilbert space, we might label them as $$|\psi_{nkm}\rangle$$, so the vector $$|\psi_{032}\rangle$$ would represent the orbital with n=0, k=3, m=2.

Using this notation, we can now describe a particle which is in a superposition of these orbitals by saying something like $$|\phi\rangle = \sqrt{.75}\,|\psi_{012}\rangle + \sqrt{.25}\,|\psi_{321}\rangle$$ (again, the coefficients are amplitudes whose squared magnitudes give the probabilities).

So far, this is just a slightly different way of writing our states, and doesn't really add anything useful. Where it really gets powerful, though, is if you want to test an arbitrary vector to see what states it's in. Suppose I give you a vector $$|\phi\rangle$$, and you want to know what the probability is that it's in state $$|\psi_{021}\rangle$$. You can take the inner product of $$|\phi\rangle$$ and $$|\psi_{021}\rangle$$, which we denote as $$\langle\psi_{021}|\phi\rangle$$, and the squared magnitude of that (complex) value will give you the probability you're looking for.

You can do the same thing with wavefunctions, but it's much more cumbersome. The equivalent calculation using a wave function is $$\int{\psi_{021}(x)^*\phi(x) dx}$$, which is a lot harder to carry out. The Hilbert space just gives you a way to talk about these things without worrying about what their shape in space looks like. That makes it much easier to write out the calculations.
Thanks for the info. Note that Wikipedia says:

"The modern usage of the term wave function refers to a complex vector or function, i.e. an element in a complex Hilbert space"

Therefore I think you would have to say "Hilbertless wave function" to refer to this cumbersome method. Or maybe there is a more correct term for it?
