# ODE derived from Schrödinger's Equation (Harmonic Oscillator)

Many quantum physics/chemistry books use Schrödinger's equation to derive a differential equation which describes the possible wavefunctions of the system. One form of it is this:

$$\frac{d^{2}\psi}{dx^{2}} + (\lambda - a^{2}x^{2})\psi = 0$$

"a" and lambda are constants. Most books solve this by "assuming" that the solution is a product of a power series (polynomial) and a gaussian type function. Is there a more "rigorous" way to approach this problem without making such assumptions? Does this ODE have a name? I'd like to look more into it.

Thanks!

HallsofIvy
Homework Helper
What's not rigorous about that? It's obvious that any solution to that equation (it is a linear equation with variable coefficients) is analytic and so there does exist a power series for the solution.
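To make this concrete (my own numeric sketch, not part of the original reply): substituting $\psi = \sum_n c_n x^n$ directly into the ODE gives the three-term recurrence $(m+2)(m+1)c_{m+2} = a^{2}c_{m-2} - \lambda c_m$, which can be iterated without any Gaussian ansatz. With $a = \lambda = 1$ and even initial data, the exact solution is $e^{-x^{2}/2}$:

```python
import math

def series_coeffs(lam, a, n_terms, c0=1.0, c1=0.0):
    """Coefficients of psi = sum c[n] x^n for psi'' + (lam - a^2 x^2) psi = 0,
    via the recurrence (m+2)(m+1) c[m+2] = a^2 c[m-2] - lam c[m]."""
    c = [c0, c1] + [0.0] * (n_terms - 2)
    for m in range(n_terms - 2):
        prev = c[m - 2] if m >= 2 else 0.0
        c[m + 2] = (a**2 * prev - lam * c[m]) / ((m + 2) * (m + 1))
    return c

# With a = lam = 1 the even solution is exactly exp(-x^2/2)
c = series_coeffs(1.0, 1.0, n_terms=30)
xval = 0.5
approx = sum(cn * xval**n for n, cn in enumerate(c))
print(approx, math.exp(-xval**2 / 2))  # the two agree to high accuracy
```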

Most people don't apply the method of power series to that equation directly. What many books do is assume that the solutions to that equation have the form:

f(x) = g(x)h(x), where g(x) is unknown and h(x) = e^(-ax^2/2).

They then plug f(x) into the original ODE, which then yields Hermite's differential equation. I was wondering if there is another way of solving this ODE without making such assumptions.
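For what it's worth, that reduction can be checked mechanically with a computer algebra system (a sketch of my own, taking $a = 1$ so the Gaussian factor is $e^{-x^{2}/2}$): substituting $\psi = g(x)e^{-x^{2}/2}$ and cancelling the exponential leaves exactly Hermite's equation $g'' - 2xg' + (\lambda - 1)g = 0$:

```python
import sympy as sp

x, lam = sp.symbols('x lambda')
g = sp.Function('g')

# Substitute psi = g(x) exp(-x**2/2) into psi'' + (lam - x**2) psi = 0  (a = 1)
psi = g(x) * sp.exp(-x**2 / 2)
ode = sp.diff(psi, x, 2) + (lam - x**2) * psi

# Cancel the Gaussian factor; Hermite's equation should remain
reduced = sp.simplify(ode * sp.exp(x**2 / 2))
hermite = sp.diff(g(x), x, 2) - 2 * x * sp.diff(g(x), x) + (lam - 1) * g(x)
print(sp.simplify(reduced - hermite))  # 0: what remains is Hermite's equation
```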

Mute
Homework Helper
Since one expects the ground state to share the same symmetry as the potential, a reasonable guess for the ground state would be a Gaussian; from there you could try f(x)exp[-x^2/2]. That's certainly a valid method of solving the equation. However, since you're looking for something more mechanical, here is another approach (the following comes almost verbatim from Mike Stone and Paul Goldbart's notes for the Math Methods course at UIUC):

Consider the operator $\hat{\mathcal{H}} = -\partial_x^2 + x^2$. The Harmonic Oscillator ODE is then $\hat{\mathcal{H}}\psi = E\psi$.

Consider also the operators $Q = \partial_x + x$ and $Q^\dagger = -\partial_x + x$ and notice that $\hat{\mathcal{H}} = Q^\dagger Q + 1$ and $\hat{\mathcal{H}} = QQ^\dagger - 1$.
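These operator identities are easy to verify symbolically (a quick sympy sketch of my own, applying each side to an arbitrary function $f(x)$):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')
u = f(x)

Q  = lambda w: sp.diff(w, x) + x * w          # Q = d/dx + x
Qd = lambda w: -sp.diff(w, x) + x * w         # Q-dagger = -d/dx + x
H  = lambda w: -sp.diff(w, x, 2) + x**2 * w   # H = -d^2/dx^2 + x^2

# H = Q†Q + 1 and H = QQ† - 1, checked on an arbitrary f(x)
print(sp.simplify(H(u) - (Qd(Q(u)) + u)))  # 0
print(sp.simplify(H(u) - (Q(Qd(u)) - u)))  # 0
```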

Suppose $\phi$ is an eigenfunction of $Q^\dagger Q$ with eigenvalue $\lambda$. It follows that $Q\phi$ is an eigenfunction of $QQ^\dagger$ with the same eigenvalue:

$$Q^\dagger Q\phi = \lambda \phi$$

Now apply the operator Q to both sides:

$$QQ^\dagger (Q\phi) = \lambda (Q\phi)$$

Now, there are two ways $Q\phi$ could fail to be an eigenfunction. One is that it is the zero function, but then the LHS is zero and hence the eigenvalue was zero too. Conversely, the eigenvalue could have been zero to start with; but then the inner product $\left<\phi,Q^\dagger Q\phi\right> = \left<Q\phi,Q\phi\right> = 0$, and hence $Q\phi$ was zero. Accordingly, $QQ^\dagger$ and $Q^\dagger Q$ have the same spectrum except for any possible zero eigenvalues. There is, of course, a zero eigenvalue here: solving the zero-eigenvalue problem, you find that $Q^\dagger Q \phi = 0$ is solved by $\phi_0 = e^{-x^2/2}$, which is normalizable. The other ordering, $QQ^\dagger$, also has a zero-eigenvalue solution, but it is not normalizable and so you throw it out.
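Both zero-mode equations are first order and can be solved directly (a sympy sketch of my own): $Q\phi = 0$ gives the normalizable Gaussian, while $Q^\dagger\chi = 0$ gives the non-normalizable $e^{+x^{2}/2}$:

```python
import sympy as sp

x = sp.symbols('x')
phi = sp.Function('phi')

# Q phi = 0, i.e. phi' + x phi = 0: the normalizable zero mode
sol_q = sp.dsolve(sp.Eq(phi(x).diff(x) + x * phi(x), 0), phi(x))
print(sol_q)   # phi(x) = C1*exp(-x**2/2), a normalizable Gaussian

# Q-dagger chi = 0, i.e. -chi' + x chi = 0: grows like exp(+x**2/2), discarded
sol_qd = sp.dsolve(sp.Eq(-phi(x).diff(x) + x * phi(x), 0), phi(x))
print(sol_qd)
```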

Now, using the relation between $H$ and the $Q$'s, you see that $\phi_0$ is an eigenfunction of $H$ with eigenvalue 1, and an eigenfunction of $QQ^\dagger$ with eigenvalue 2. Accordingly, $Q^\dagger \phi_0$ is an eigenfunction of $Q^\dagger Q$ with eigenvalue 2 and of $H$ with eigenvalue 3. Iterating this process, $\phi_n = (Q^\dagger)^n \phi_0$ is an eigenfunction of $H$ with eigenvalue $2n + 1$. It turns out you can write $Q^\dagger = -e^{x^2/2}\partial_x e^{-x^2/2}$, and in that way you generate the Hermite polynomials.
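The ladder construction can be run explicitly (my own sympy sketch): starting from $\phi_0 = e^{-x^{2}/2}$ and applying $Q^\dagger$ repeatedly, each $\phi_n$ satisfies $H\phi_n = (2n+1)\phi_n$, and stripping off the Gaussian leaves the Hermite polynomials $H_n(x)$:

```python
import sympy as sp

x = sp.symbols('x')

Qd = lambda w: -sp.diff(w, x) + x * w         # raising operator Q-dagger
H  = lambda w: -sp.diff(w, x, 2) + x**2 * w

phi = sp.exp(-x**2 / 2)                       # phi_0
for n in range(4):
    # each rung satisfies H phi_n = (2n + 1) phi_n
    assert sp.simplify(H(phi) - (2 * n + 1) * phi) == 0
    # stripping the Gaussian leaves the Hermite polynomial H_n(x)
    print(n, sp.expand(sp.simplify(phi * sp.exp(x**2 / 2))))
    phi = Qd(phi)                             # phi_{n+1} = Q-dagger phi_n
```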

So that's one way to determine the solution 'mechanically'.

You know what I don't understand: why, whenever we assume a Taylor series solution to the diff eq, is it a Taylor series centered at 0? And why are we justified in assuming the radius of convergence of the series is infinite?

HallsofIvy
Homework Helper
> You know what I don't understand: why, whenever we assume a Taylor series solution to the diff eq, is it a Taylor series centered at 0? And why are we justified in assuming the radius of convergence of the series is infinite?
??? Then you haven't done enough such problems! Neither of those "assumptions" is generally made.

If you have a linear equation with variable coefficients AND initial conditions at $x_0$, then you would assume a Taylor series centered at $x_0$. Perhaps you have done too many problems where the initial conditions are at x = 0. Of course, if you are looking for the "general solution", it doesn't really matter and x = 0 may be simplest.

In addition, we are NOT "justified in assuming the radius of convergence of the series is infinite". We CAN prove that a solution to a linear differential equation with variable coefficients is analytic in any interval in which the leading coefficient is not 0 and the other coefficients are continuous. Thus we ARE justified in assuming that the radius of convergence extends at least from $x_0$ to the nearest point where either the leading coefficient is 0 or one of the coefficients is discontinuous.
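To illustrate with a standard example of my own choosing (not from the thread): for Legendre's equation $(1-x^{2})y'' - 2xy' + \ell(\ell+1)y = 0$, the leading coefficient vanishes at $x = \pm 1$, so a series about $x_0 = 0$ is only guaranteed to converge for $|x| < 1$. The even-coefficient recurrence $c_{n+2} = \frac{n(n+1) - \ell(\ell+1)}{(n+1)(n+2)}c_n$ shows this via the ratio test:

```python
def legendre_ratios(ell, n_max):
    """Even-coefficient ratios c[n+2]/c[n] for Legendre's equation
    (1 - x^2) y'' - 2x y' + ell(ell+1) y = 0, series about x0 = 0."""
    return [(n * (n + 1) - ell * (ell + 1)) / ((n + 1) * (n + 2))
            for n in range(0, n_max, 2)]

# For non-integer ell the series never terminates, the ratio c[n+2]/c[n] -> 1,
# and the radius of convergence is exactly 1 -- the distance to x = +/-1,
# where the leading coefficient (1 - x^2) vanishes.
print(legendre_ratios(0.5, 10))
print(legendre_ratios(0.5, 2000)[-1])  # approaches 1
```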

> ??? Then you haven't done enough such problems! Neither of those "assumptions" is generally made.
>
> If you have a linear equation with variable coefficients AND initial conditions at $x_0$, then you would assume a Taylor series centered at $x_0$. Perhaps you have done too many problems where the initial conditions are at x = 0. Of course, if you are looking for the "general solution", it doesn't really matter and x = 0 may be simplest.
>
> In addition, we are NOT "justified in assuming the radius of convergence of the series is infinite". We CAN prove that a solution to a linear differential equation with variable coefficients is analytic in any interval in which the leading coefficient is not 0 and the other coefficients are continuous. Thus we ARE justified in assuming that the radius of convergence extends at least from $x_0$ to the nearest point where either the leading coefficient is 0 or one of the coefficients is discontinuous.
So, supposing all these problems I've seen solved yield the "general solution" and use initial conditions at x = 0, how do I now shift the solution to $x_0$?
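One mechanical answer (a sketch of my own, not from the thread): substitute $t = x - x_0$, expand the coefficient $\lambda - a^{2}(x_0+t)^{2}$ in powers of $t$, and run the resulting recurrence with $c_0 = \psi(x_0)$, $c_1 = \psi'(x_0)$:

```python
import math

def shifted_series(lam, a, x0, psi0, dpsi0, n_terms):
    """Series psi = sum c[n] (x - x0)^n for psi'' + (lam - a^2 x^2) psi = 0.

    With t = x - x0 the coefficient lam - a^2 (x0 + t)^2 expands to
    b0 + b1 t + b2 t^2, giving the recurrence
    (m+2)(m+1) c[m+2] = -(b0 c[m] + b1 c[m-1] + b2 c[m-2])."""
    b0 = lam - a**2 * x0**2
    b1 = -2 * a**2 * x0
    b2 = -a**2
    c = [psi0, dpsi0] + [0.0] * (n_terms - 2)
    for m in range(n_terms - 2):
        s = b0 * c[m]
        if m >= 1:
            s += b1 * c[m - 1]
        if m >= 2:
            s += b2 * c[m - 2]
        c[m + 2] = -s / ((m + 2) * (m + 1))
    return c

# Center at x0 = 1, with initial data read off the exact solution exp(-x^2/2)
# (a = lam = 1): psi(1) = e^(-1/2), psi'(1) = -e^(-1/2)
x0 = 1.0
c = shifted_series(1.0, 1.0, x0, math.exp(-0.5), -math.exp(-0.5), 40)
xval = 1.5
approx = sum(cn * (xval - x0)**n for n, cn in enumerate(c))
print(approx, math.exp(-xval**2 / 2))  # the series centered at x0 matches
```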