Time development of a wavefunction

In summary, the time evolution of a state is either a pure phase or a superposition of pure phases, depending on the state itself.
  • #1
Demon117
I took QM last year and I was reading an article by T.W. Marshall entitled Random Electrodynamics in which he describes ensembles of uncharged particles which satisfy the Liouville equation. Anyway, he introduces a wave function given by

[tex]\psi (x,0)= \left(\frac{1}{a^{2}\pi}\right)^{1/2}\exp\!\left(-\frac{x^{2}}{2a^{2}}+\frac{ip_{0}x}{\hbar}\right)[/tex]

To find the time development of this wave function one must use the Schrödinger equation, but after repeated attempts I keep hitting a brick wall. I've looked for worked examples of how this is done and have come up short. Does anyone know of a good reference that shows the process step by step?
 
  • #2
You won't get anywhere unless you specify the Hamiltonian operator for the system. After all, the Schrödinger equation reads [itex]\hat{H}\psi(x,t)=i\hbar \frac{\partial}{\partial t}\psi(x,t)[/itex].
 
  • #3
Thank you. I realize this of course, but I am not looking for a solution here; I am looking for articles, texts, anything that will help me. To answer your question, the Hamiltonian is

[itex]\hat{H}=-\frac{\hbar^{2}}{2m}\frac{d^{2}}{dx^{2}} + V(x)[/itex]

where [itex]V(x)=a + bx + cx^{2}[/itex]

The minimum uncertainty packet, which at t = 0 has mean position x = 0 and mean momentum [itex]p=p_{0}[/itex] is given by the wave function in my previous post.
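
A small algebra sketch (assuming c > 0): completing the square shows this potential is just a displaced harmonic oscillator, which is why the harmonic-oscillator treatment suggested in the next reply applies directly,

[tex]V(x) = a + bx + cx^{2} = c\left(x + \frac{b}{2c}\right)^{2} + \left(a - \frac{b^{2}}{4c}\right),[/tex]

i.e. a harmonic potential with [itex]\frac{1}{2}m\omega^{2} = c[/itex], centred at [itex]x_{0} = -\frac{b}{2c}[/itex] and shifted by a constant energy offset.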
 
  • #4
Chapter 2 of Griffiths' "Introduction to Quantum Mechanics" should help you. The harmonic oscillator has a similar Hamiltonian; its analytic solution should be similar to the solution for this Hamiltonian.
 
  • #5
I thought the Hamiltonian is all you need.

Now your time evolution is

[tex]\psi(x,t) = \psi(x,0)e^\frac{iHt}{\hbar}[/tex]
 
  • #6
@LostConjugate: What? No! The Hamiltonian is all you need, but you have to solve the time-independent Schrödinger equation for the energy eigenstates and then write the wavefunction as a linear combination of these states.
 
  • #7
saim_ said:
@LostConjugate: What? No! The Hamiltonian is all you need, but you have to solve the time-independent Schrödinger equation for the energy eigenstates and then write the wavefunction as a linear combination of these states.

I thought the time dependence was just an overall phase though. You would only need the eigenstates if you want to know the possible energy levels right?
 
  • #8
@LostConjugate: Yes, but you don't get the eigenstates just by putting the Hamiltonian in the exponent; you have to solve the time-independent equation. The time dependence is introduced by the exponential you wrote, but it multiplies each energy eigenstate separately, with the corresponding probability amplitude and that state's energy eigenvalue in the exponent... This is tough and lengthy to explain; read this page instead:

http://www.google.com.pk/search?q="...s=org.mozilla:en-US:official&client=firefox-a
 
  • #9
Oh, I just re-referenced my textbook. You're right: the entire wave function at some point in time is an energy eigenstate with a phase. Interesting.
 
  • #10
LostConjugate said:
I thought the time dependence was just an overall phase though. You would only need the eigenstates if you want to know the possible energy levels right?

The time dependence is only a phase *if* you are dealing with an eigenstate of the Hamiltonian ... the amplitude will also be time-dependent if your wavefunction is a superposition state. However, the equation you gave in your first post is (almost) correct in either case. The issues are that the sign in the exponent should be negative and that the propagator (the exponential of the Hamiltonian) should go to the left of the t=0 wavefunction ... the propagator is an *operator*, which acts on quantum states from the left.

So you have [itex]\psi(x,t)=exp[-\frac{i\hat{H}t}{\hbar}]\psi(x,0)[/itex].

If [itex]\psi[/itex] is an eigenstate, then the result of applying the propagator is just [itex]exp[-\frac{iEt}{\hbar}]\psi(x)[/itex]

to give the time-dependent phase factor you expected.
 
  • #11
saim_ said:
@LostConjugate: Yes, but you don't get the eigenstates just by putting the Hamiltonian in the exponent; you have to solve the time-independent equation. The time dependence is introduced by the exponential you wrote, but it multiplies each energy eigenstate separately, with the corresponding probability amplitude and that state's energy eigenvalue in the exponent... This is tough and lengthy to explain; read this page instead:

http://www.google.com.pk/search?q="...s=org.mozilla:en-US:official&client=firefox-a
If you know the wavefunction at any point in time (which we will call t=0), and you know the Hamiltonian, then the expression (similar to what LostConjugate posted), [itex]\psi(x,t)=exp[-\frac{i\hat{H}t}{\hbar}]\psi(x,0)[/itex], will give you the time-evolution of the wavefunction. The operator [itex]exp[-\frac{i\hat{H}t}{\hbar}][/itex] is called the propagator of the wavefunction. (A caveat ... if the Hamiltonian is time-dependent, then the Hamiltonians at different times need to commute for the simple propagator expression above to hold.)

So, it depends on what you mean by "You have to solve the time-dependent Schrodinger equation." The treatment I describe effectively does the same thing, but is much more useful than trying to solve the differential equation explicitly in many situations. Many quantum dynamics techniques use precisely this approach.
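
To make the propagator concrete, here is a minimal numerical sketch (units with [itex]\hbar = m = 1[/itex]; the grid size and the sample coefficients for [itex]V(x)=a+bx+cx^{2}[/itex] are assumptions chosen only for illustration): discretize [itex]\hat{H}[/itex] on a grid, build [itex]e^{-i\hat{H}t/\hbar}[/itex] as a matrix exponential, and apply it to the t = 0 Gaussian packet.

[code]
import numpy as np
from scipy.linalg import expm

# Units hbar = m = 1; grid parameters are illustrative choices.
hbar, m = 1.0, 1.0
N, L = 200, 40.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Discretized Hamiltonian: finite-difference kinetic term plus V(x) = a + b*x + c*x**2
# on the diagonal (the coefficients below are assumptions, not from the thread).
a_pot, b_pot, c_pot = 0.0, 0.0, 0.5
lap = (np.diag(np.ones(N - 1), -1) - 2.0 * np.eye(N) + np.diag(np.ones(N - 1), 1)) / dx**2
H = -hbar**2 / (2.0 * m) * lap + np.diag(a_pot + b_pot * x + c_pot * x**2)

# Minimum-uncertainty packet at t = 0 (width a_w, mean momentum p0), normalized on the grid
a_w, p0 = 1.0, 2.0
psi0 = np.exp(-x**2 / (2.0 * a_w**2) + 1j * p0 * x / hbar)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)

# Propagator U(t) = exp(-i H t / hbar), applied as a matrix to psi(x, 0)
t = 1.0
U = expm(-1j * H * t / hbar)
psi_t = U @ psi0

print("norm at time t:", np.sum(np.abs(psi_t)**2) * dx)  # stays ~1: the evolution is unitary
[/code]

Diagonalizing H instead of calling expm gives the same result and scales better; expm just keeps the sketch closest to the propagator expression above.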
 
  • #12
SpectraCat said:
The operator [itex]exp[-\frac{i\hat{H}t}{\hbar}][/itex] is called the propagator of the wavefunction.
Oh, thanks; didn't know this was some other method of solution. Hope this doesn't create confusion for the OP.
 
  • #13
Now I am a bit confused. How can the time evolution of the state be either an overall phase of the entire state OR just one of the eigenstates?
 
  • #14
LostConjugate said:
Now I am a bit confused. How can the time evolution of the state be either an overall phase of the entire state OR just one of the eigenstates?

If the state is a pure eigenstate of the Hamiltonian, the time dependence will be purely a phase factor. However, if your state is a superposition of at least two energy eigenstates, each of these has its own independent time evolution, which is a pure phase factor. But since the time dependence differs between energy eigenstates that correspond to different eigenvalues, the time evolution of their sum will not be purely a phase factor.
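
A two-term example makes the beating explicit (an illustration, using the sign convention of post #10): take

[tex]\psi(x,t) = a_{1}\phi_{1}(x)e^{-iE_{1}t/\hbar} + a_{2}\phi_{2}(x)e^{-iE_{2}t/\hbar}.[/tex]

Each term only picks up a phase, but the probability density contains a cross term,

[tex]|\psi(x,t)|^{2} = |a_{1}\phi_{1}(x)|^{2} + |a_{2}\phi_{2}(x)|^{2} + 2\,\mathrm{Re}\!\left[a_{1}a_{2}^{*}\phi_{1}(x)\phi_{2}^{*}(x)\,e^{-i(E_{1}-E_{2})t/\hbar}\right],[/tex]

which oscillates at the beat frequency [itex](E_{1}-E_{2})/\hbar[/itex]. So the overall time dependence is a single phase factor only if [itex]E_{1}=E_{2}[/itex] or one of the amplitudes vanishes.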
 
  • #15
espen180 said:
If the state is a pure eigenstate of the Hamiltonian, the time dependence will be purely a phase factor. However, if your state is a superposition of at least two energy eigenstates, each of these has its own independent time evolution, which is a pure phase factor. But since the time dependence differs between energy eigenstates that correspond to different eigenvalues, the time evolution of their sum will not be purely a phase factor.

In other words

[tex] \psi(x,t) = e^\frac{iHt}{\hbar} \psi(x,0) = e^\frac{iHt}{\hbar} \sum_n a_n \phi_n(x) [/tex]

Is not always true since the time dependence may be out of phase for some eigenstates?
 
  • #16
So assuming the potential is V= 0, my differential equation would be

[itex]i\hbar\frac{d}{dt}\psi(x,t)=\frac{-\hbar^{2}}{2m}\frac{d^{2}}{dx^{2}}\psi(x,0)[/itex]

At which point I would expand this and solve the differential equation. This seems more difficult than what I remember.

Upon expansion I get

[itex]i\hbar\frac{d}{dt}\psi(x,t)=\frac{-\hbar^{2}}{2m}\psi(x,0)((\frac{ip_{0}}{\hbar}-\frac{x}{a^{2}})^{2}-\frac{1}{a^{2}})[/itex]

Tell me if I am going in the right direction.
 
  • #17
LostConjugate said:
In other words

[tex] \psi(x,t) = e^\frac{iHt}{\hbar} \psi(x,0) = e^\frac{iHt}{\hbar} \sum_n a_n \phi_n(x) [/tex]

Is not always true since the time dependence may be out of phase for some eigenstates?

No, that mathematical expression is always correct (up to the sign of the exponent I mentioned earlier). Remember that the propagator is an operator, so in order to apply it, you need to pull it into the sum and apply it to each of the eigenstates. In such a case, you will get a *different* time-dependent phase factor for each term in the sum. This means that over time, the different frequency components will beat against each other, resulting in a time-dependent amplitude modulation of the quantum superposition state represented by the sum.
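
Written out (a short sketch, again with the sign convention of post #10), pulling the propagator into the sum gives

[tex]e^{-i\hat{H}t/\hbar}\sum_{n} a_{n}\phi_{n}(x) = \sum_{n} a_{n}\,e^{-iE_{n}t/\hbar}\phi_{n}(x),[/tex]

because [itex]\hat{H}\phi_{n} = E_{n}\phi_{n}[/itex] implies [itex]e^{-i\hat{H}t/\hbar}\phi_{n} = e^{-iE_{n}t/\hbar}\phi_{n}[/itex]. Each eigenstate carries its own phase factor, and only when a single [itex]a_{n}[/itex] is non-zero does the whole state share one phase.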
 
  • #18
SpectraCat said:
No, that mathematical expression is always correct (up to the sign of the exponent I mentioned earlier). Remember that the propagator is an operator, so in order to apply it, you need to pull it into the sum and apply it to each of the eigenstates. In such a case, you will get a *different* time-dependent phase factor for each term in the sum. This means that over time, the different frequency components will beat against each other, resulting in a time-dependent amplitude modulation of the quantum superposition state represented by the sum.

Ah, ok. So what does it mean to say that the entire state as a function of time can be equal to only one of the eigenstates with the propagator?

[tex] \psi(x,t) = e^\frac{iHt}{\hbar} \phi(x) [/tex]

?
 
  • #19
LostConjugate said:
Ah, ok. So what does it mean to say that the entire state as a function of time can be equal to only one of the eigenstates with the propagator?

[tex] \psi(x,t) = e^\frac{iHt}{\hbar} \phi(x) [/tex]

?

I don't understand your confusion ... how is that any different from the sum expression you posted earlier where only one coefficient is non-zero? In such a case there is only one time-dependent frequency, so there is no beating and no modulation of the amplitude. There is just the normal time-dependent phase factor that you would expect for a stationary-state solution of the Schrödinger equation.
 
  • #20
LostConjugate said:
In other words

[tex] \psi(x,t) = e^\frac{iHt}{\hbar} \psi(x,0) = e^\frac{iHt}{\hbar} \sum_n a_n \phi_n(x) [/tex]

Is not always true since the time dependence may be out of phase for some eigenstates?

What I mean is [tex] \partial_t\psi(x,t) = -\frac{i}{\hbar}\hat{H}\psi(x,t)=-\frac{i}{\hbar}\hat{H}\sum_{n} a_n\phi_n(x,t)=-\frac{i}{\hbar} \sum_n a_n E_n \phi_n(x,t) \not\propto \psi(x,t)[/tex] unless all but one of the [itex]a_n[/itex]'s are zero, in which case we have a pure energy eigenstate, or unless all the [itex]E_n[/itex]'s are equal, in which case we have energy level degeneracy.

Then, the fact that [itex] \partial_t\psi(x,t) \not\propto \psi(x,t)[/itex] rules out the possibility of the time dependence being simply a phase factor.
 
  • #21
SpectraCat said:
I don't understand your confusion ... how is that any different from the sum expression you posted earlier where only one coefficient is non-zero? In such a case there is only one time-dependent frequency, so there is no beating and no modulation of the amplitude. There is just the normal time-dependent phase factor that you would expect for a stationary-state solution of the Schrödinger equation.

Oh. I just never think of using ψ for a state where all the probability amplitudes are zero except one. I always expect that if ψ is used, the state is a genuine superposition with several non-zero amplitudes.
 
  • #22
espen180 said:
What I mean is [tex] \partial_t\psi(x,t) = -\frac{i}{\hbar}\hat{H}\psi(x,t)=-\frac{i}{\hbar}\hat{H}\sum_{n} a_n\phi_n(x,t)=-\frac{i}{\hbar} \sum_n a_n E_n \phi_n(x,t) \not\propto \psi(x,t)[/tex] unless all but one of the [itex]a_n[/itex]'s are zero, in which case we have a pure energy eigenstate, or unless all the [itex]E_n[/itex]'s are equal, in which case we have energy level degeneracy.

Then, the fact that [itex] \partial_t\psi(x,t) \not\propto \psi(x,t)[/itex] rules out the possibility of the time dependence being simply a phase factor.

But that is the rate of change of the wave function. I thought we were just talking about the wave function's values evolved over time.

I would never expect the velocity of my car to be identically equal to its position.
 
  • #23
The time evolution is found by integrating its rate of change. Information about the rate of change translates to information about time dependence.
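
Concretely (a one-line integration sketch): if [itex]\partial_t\psi = -\frac{i}{\hbar}E\,\psi[/itex] with a single constant [itex]E[/itex], integrating gives [itex]\psi(x,t) = e^{-iEt/\hbar}\psi(x,0)[/itex], a pure phase. If instead [itex]\partial_t\psi[/itex] is not proportional to [itex]\psi[/itex], no single exponential of that form can solve the equation, which is exactly the superposition case above.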
 
  • #24
matumich26 said:
So assuming the potential is V= 0, my differential equation would be

[itex]i\hbar\frac{d}{dt}\psi(x,t)=\frac{-\hbar^{2}}{2m}\frac{d^{2}}{dx^{2}}\psi(x,0)[/itex]

At which point I would expand this and solve the differential equation. This seems more difficult than what I remember.

Upon expansion I get

[itex]i\hbar\frac{d}{dt}\psi(x,t)=\frac{-\hbar^{2}}{2m}\psi(x,0)((\frac{ip_{0}}{\hbar}-\frac{x}{a^{2}})^{2}-\frac{1}{a^{2}})[/itex]

Tell me if I am going in the right direction.

No .. you can't use the form of the wavefunction at a particular time on one side of the equation, and the general time-dependent form on the other. Solving this type of problem is not always possible to do analytically. One way to go about it is to expand the wavefunction in the eigenstates of the given Hamiltonian, including the time-dependent phase factors in the expansion:

[itex]\psi(x,t)=\sum_n{a_n\phi_n(x)}e^{-i\omega_n t}[/itex] where [itex]\omega_n=\frac{E_n}{\hbar}[/itex], and [itex]E_n[/itex] is the energy of the n-th eigenstate.

Since your initial wavefunction is at t=0, all of the phase factors will be equal to 1 at that time. You can solve for the expansion coefficients using the integrals:

[itex]a_n=\langle\phi_n(x)|\psi(x,0)\rangle[/itex]

If you are lucky, or your Hamiltonian is particularly simple, then the integrals will be solvable analytically.

In the special case of a free particle (i.e., when V(x) = 0), you can just take the Fourier transform of the t=0 wavefunction, which is equivalent to expanding in the continuous basis of momentum eigenstates.

All of this should be described in some detail in most quantum textbooks. You might also want to look here: http://galileo.phys.virginia.edu/classes/751.mf1i.fall02/CoherentStates.htm, for an elaboration of how minimum uncertainty wavefunctions behave with time.
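
For the free-particle case, here is a minimal sketch of that Fourier-transform route (units with [itex]\hbar = m = 1[/itex]; the grid, the packet parameters, and the helper name evolve_free are assumptions for illustration): FFT into the momentum basis, attach the phase [itex]e^{-iE(k)t/\hbar}[/itex] with [itex]E(k)=\hbar^{2}k^{2}/2m[/itex], and transform back.

[code]
import numpy as np

# Units hbar = m = 1; grid and packet parameters are illustrative choices.
hbar, m = 1.0, 1.0
N, L = 1024, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)   # wavenumber grid, p = hbar * k

# Minimum-uncertainty packet at t = 0, normalized so that sum |psi|^2 dx = 1
a_w, p0 = 1.0, 2.0
psi0 = (1.0 / (np.pi * a_w**2))**0.25 * np.exp(-x**2 / (2.0 * a_w**2) + 1j * p0 * x / hbar)

def evolve_free(psi, t):
    """Free-particle evolution: expand in momentum eigenstates (FFT), attach the
    phase exp(-i E(k) t / hbar) with E(k) = (hbar k)^2 / 2m, and transform back."""
    phase = np.exp(-1j * hbar * k**2 * t / (2.0 * m))
    return np.fft.ifft(np.fft.fft(psi) * phase)

psi_t = evolve_free(psi0, t=5.0)
print("norm:", np.sum(np.abs(psi_t)**2) * dx)       # ~1, the evolution is unitary
print("<x> :", np.sum(x * np.abs(psi_t)**2) * dx)   # drifts like p0*t/m while the packet spreads
[/code]

Each FFT component is a momentum eigenstate, so this is the same expansion-plus-phase recipe as above, just with a (discretized) continuous basis.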
 
  • #25
SpectraCat said:
No .. you can't use the form of the wavefunction at a particular time on one side of the equation, and the general time-dependent form on the other. Solving this type of problem is not always possible to do analytically. One way to go about it is to expand the wavefunction in the eigenstates of the given Hamiltonian, including the time-dependent phase factors in the expansion:

[itex]\psi(x,t)=\sum_n{a_n\phi_n(x)}e^{-i\omega_n t}[/itex] where [itex]\omega_n=\frac{E_n}{\hbar}[/itex], and [itex]E_n[/itex] is the energy of the n-th eigenstate.

Since your initial wavefunction is at t=0, all of the phase factors will be equal to 1 at that time. You can solve for the expansion coefficients using the integrals:

[itex]a_n=\langle\phi_n(x)|\psi(x,0)\rangle[/itex]

If you are lucky, or your Hamiltonian is particularly simple, then the integrals will be solvable analytically.

In the special case of a free particle (i.e., when V(x) = 0), you can just take the Fourier transform of the t=0 wavefunction, which is equivalent to expanding in the continuous basis of momentum eigenstates.

All of this should be described in some detail in most quantum textbooks. You might also want to look here: http://galileo.phys.virginia.edu/classes/751.mf1i.fall02/CoherentStates.htm, for an elaboration of how minimum uncertainty wavefunctions behave with time.

That actually clears up a lot of misconceptions that I once had. I will give this a shot and see what happens. I appreciate it!
 
  • #26
Ha ha ha ... I actually ran into a problem: what do the [itex]\phi_{n}(x)[/itex] represent in these calculations? I guess I could just look it up, but so many books have different notation. I guess it's better to ask questions than remain ignorant.
 
  • #27
matumich26 said:
Ha ha ha ... I actually ran into a problem: what do the [itex]\phi_{n}(x)[/itex] represent in these calculations? I guess I could just look it up, but so many books have different notation. I guess it's better to ask questions than remain ignorant.

Well, it was kind of implied in my post, since I suggested you expand in the basis of eigenstates of the Hamiltonian. So the [itex]\phi_{n}(x)[/itex] are the eigenstates of whatever Hamiltonian you are using.
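
As an illustration of that expansion (a sketch assuming, for definiteness, a pure harmonic potential with [itex]\hbar = m = \omega = 1[/itex]; the helper phi_n and all grid and packet parameters are choices made here, not taken from the thread), the coefficients [itex]a_n=\langle\phi_n|\psi(x,0)\rangle[/itex] can be evaluated numerically and then recombined with their phase factors:

[code]
import math
import numpy as np
from scipy.special import eval_hermite

# Units hbar = m = omega = 1; a pure harmonic potential is assumed for definiteness.
hbar, m, omega = 1.0, 1.0, 1.0
x = np.linspace(-15.0, 15.0, 3000)
dx = x[1] - x[0]

def phi_n(n, x):
    """Harmonic-oscillator eigenstate phi_n(x) (helper defined here, not from the thread)."""
    xi = np.sqrt(m * omega / hbar) * x
    norm = (m * omega / (np.pi * hbar))**0.25 / np.sqrt(2.0**n * math.factorial(n))
    return norm * eval_hermite(n, xi) * np.exp(-xi**2 / 2.0)

# Initial minimum-uncertainty packet (width a_w, mean momentum p0), normalized
a_w, p0 = 1.0, 1.0
psi0 = (1.0 / (np.pi * a_w**2))**0.25 * np.exp(-x**2 / (2.0 * a_w**2) + 1j * p0 * x / hbar)

# a_n = <phi_n | psi(., 0)>, approximated by a grid sum
n_max = 30
a = np.array([np.sum(np.conj(phi_n(n, x)) * psi0) * dx for n in range(n_max)])
print("sum |a_n|^2 =", np.sum(np.abs(a)**2))   # close to 1 if n_max is large enough

# Reconstruct psi(x, t) = sum_n a_n phi_n(x) exp(-i E_n t / hbar), E_n = (n + 1/2) hbar omega
def psi_t(t):
    E = (np.arange(n_max) + 0.5) * hbar * omega
    return sum(a[n] * phi_n(n, x) * np.exp(-1j * E[n] * t / hbar) for n in range(n_max))
[/code]

With these coefficients, psi_t(t) is exactly the sum [itex]\sum_n a_n \phi_n(x) e^{-i\omega_n t}[/itex] written out in post #24.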
 

What is the time development of a wavefunction?

The time development of a wavefunction refers to the change in a wavefunction over time. In quantum mechanics, a wavefunction encodes the probability amplitudes for a particle's position, momentum, and other physical properties.

How is the time development of a wavefunction calculated?

The time development of a wavefunction is calculated using the Schrödinger equation, which is a fundamental equation in quantum mechanics. It describes how a wavefunction changes over time and is used to predict the behavior of quantum systems.

What factors affect the time development of a wavefunction?

The time development of a wavefunction is affected by various factors, such as the initial conditions of the system, the potential energy of the system, and any external forces acting on the system. These factors can change the shape and behavior of the wavefunction over time.

Why is the time development of a wavefunction important in quantum mechanics?

The time development of a wavefunction is important in quantum mechanics because it allows us to make predictions about the behavior of quantum systems. By understanding how a wavefunction changes over time, we can better understand the behavior of particles at the quantum level and make accurate predictions about their properties.

Can the time development of a wavefunction be observed?

No, the time development of a wavefunction cannot be directly observed. In quantum mechanics, the wavefunction is a mathematical description of a particle's behavior and does not have a physical representation. However, the effects of the wavefunction can be observed through experiments and measurements.
