Physical interpretation of phase in solutions to Schrodinger's Eqn?

In summary: the overall phase of a single energy eigenstate has no observable consequence; every expectation value is unchanged by it. What does carry physical significance is the relative phase between energy eigenstates in a superposition, which shows up as an interference term oscillating at a frequency set by the energy difference.
  • #1
Dfault
TL;DR Summary
Can any observable experiment reveal the exact phase for a wavefunction in an energy eigenstate, or is the only thing that carries physical significance the *relative* phase *differences* between energy eigenstates?
Hello all,

So I've been working through the solutions to some simple introductory problems for the Schrodinger Equation like the infinite square well, and I'm trying to make sense of how to think about the phase component. For simplicity's sake, let's start off by assuming we've measured an electron in the infinite square well to have the ground-state energy ## E_1 = \frac {\pi^2\hbar^2} {2ma^2}##. The ground-state solution to the Time-Independent Schrodinger Equation is:

## \psi_1(x) = c_1 \sin(\frac {\pi} {a}x) ##

and to add in the time dependence to find ## \Psi_1(x,t)##, all we have to do is multiply by a factor of ##e^{-i\frac{E_1 t}{\hbar}}##.

To find the probability distribution, we'd multiply ## \Psi_1^* (x,t) \Psi_1 (x,t) ##, and we'd find that the complex exponential cancels out: the probability distribution does not evolve in time, which is what we'd expect for a "stationary state" since we started off by demanding that the electron we measure (or an ensemble of electrons, for that matter) should be prepared with the ground-state energy - an energy eigenstate of the Hamiltonian. Furthermore, when we try to sandwich the position, momentum, or kinetic energy operators between ## \Psi_1^* (x,t)## and ## \Psi_1 (x,t) ## and integrate over ##x##, we find that the complex exponential part of the wavefunction still cancels out in the end: evidently, for a lone energy eigenstate of the Hamiltonian, that phase component ##e^{-i\frac{E_1 t}{\hbar}}## cannot be observed directly. Is this correct?
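Here's a minimal numerical sketch of that cancellation (my own check, assuming units ##\hbar = m = a = 1## and the standard normalization ##c_1 = \sqrt{2/a}##): the phase factor drops out of ##|\Psi_1|^2##, so ##\langle x \rangle## comes out the same no matter when we sample it.

```python
import numpy as np

# Sketch, units hbar = m = a = 1, ground state of the infinite square well:
# the phase factor exp(-i*E1*t/hbar) drops out of |Psi_1|^2, so <x> is the
# same no matter when we evaluate it.
hbar = m = a = 1.0
E1 = np.pi**2 * hbar**2 / (2 * m * a**2)
c1 = np.sqrt(2 / a)                       # standard normalization (assumed)
x = np.linspace(0, a, 2001)

def Psi1(x, t):
    return c1 * np.sin(np.pi * x / a) * np.exp(-1j * E1 * t / hbar)

for t in (0.0, 0.1, 0.37, 1.0):
    prob = np.abs(Psi1(x, t))**2
    print(f"t = {t:4.2f}   <x> = {np.trapz(x * prob, x):.6f}")   # always a/2
```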

I like to imagine that the ##e^{-i\frac{E_1 t}{\hbar}}## part of the wavefunction is a rotation of the unit vector through the complex plane so I can think of it kind of like this: for a given spot along the x-axis, there's a certain probability amplitude for our wavefunction - for example, for our ground-state solution right in the middle of the square well, we have an amplitude equal to ## c_1 ##. Over the course of time, that probability amplitude "sloshes back and forth" between a real component and an imaginary component, like water going back and forth between two buckets: at time ##t = 0##, 100% of the probability amplitude for that point in space is in the "real bucket," then at time ## t = \frac {\pi\hbar} {4E_1}##, 50% of it exists in the "real bucket" and 50% in the "imaginary bucket," then at time ## t = \frac {\pi\hbar} {2E_1}##, 100% of it exists in the "imaginary bucket," and so on. When we go to make any physical observation of an electron in the ground energy eigenstate, we'll catch the wavefunction at some random time t in its rotation through phase space - but where exactly won't matter, because all of our physical observables for an energy eigenstate will just ask us for the magnitude of the unit vector, which is always 1: no matter what angle the unit vector has in the complex plane at a given moment in time, the square of its projection onto the real axis plus the square of its projection onto the imaginary axis will always just produce an answer of 1.
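And a tiny sketch of that "two buckets" picture at the centre of the well, where ##\psi_1 = c_1## (same assumed units as above): the real and imaginary parts of the amplitude trade places as the phasor rotates, but the squared magnitude stays pinned at ##|c_1|^2##.

```python
import numpy as np

# Sketch of the "two buckets" picture at x = a/2, where psi_1 = c1
# (units hbar = m = a = 1): the amplitude rotates between its real and
# imaginary parts, but its squared magnitude stays fixed at |c1|^2.
hbar = m = a = 1.0
E1 = np.pi**2 * hbar**2 / (2 * m * a**2)
c1 = np.sqrt(2 / a)

for t in (0.0, np.pi * hbar / (4 * E1), np.pi * hbar / (2 * E1)):
    amp = c1 * np.exp(-1j * E1 * t / hbar)       # amplitude at the centre
    print(f"t = {t:5.3f}  Re^2 = {amp.real**2:.3f}  "
          f"Im^2 = {amp.imag**2:.3f}  |amp|^2 = {abs(amp)**2:.3f}")
```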

If we now look at the first excited state for the infinite square well, we'd get a full sine wave as our solution to the time-independent Schrodinger equation:

##\psi_2(x) = c_2 \sin(\frac {2\pi} {a}x) ##

and to add in the time dependence to find ## \Psi_2(x,t)##, all we have to do is multiply by a factor of ##e^{-i\frac{4E_1 t}{\hbar}}## since the energy eigenvalues of the infinite square well scale as ##n^2## (so ##E_2 = 4E_1##). This time, our phasor is rotating at four times the rate of the ground state's phasor: the probability amplitude's rotating through the complex plane four times as fast as before. Still though, as with our ground state, since we're again looking at an energy eigenstate, we'll again have no physical observable that can tell us anything about where in the phase cycle we were when we took our measurement.

So far so good, except that a person who has looked at no other examples other than energy eigenstates would be tempted to ask "why bother writing the phase component at all? It seems like it has no bearing on any physical measurement we make, anyway." The answer seems to present itself in cases where we're looking at something that's not in a single energy eigenstate: imagine for a second that we're looking at some ##\psi(x)## which is some linear combination of ##\psi_1(x)## and ##\psi_2(x)##. For the sake of simplicity, I'll absorb the relative strengths of each of these two components into the coefficients of ##\psi_1(x)## and ##\psi_2(x)## themselves so we can just write ##\psi(x) = \psi_1(x) + \psi_2(x)##. That gives us a full solution to the Time Dependent Schrodinger Equation of

##\Psi(x,t) = \psi_1(x)e^{-i\frac{E_1 t}{\hbar}} + \psi_2(x)e^{-i\frac{4E_1 t}{\hbar}}##

Now when we try to find the probability distribution ##\Psi^*(x,t) \Psi(x,t)##, we get:

##\Psi^*(x,t) \Psi(x,t) = \left\{ \psi_1(x)e^{i\frac{E_1 t}{\hbar}} + \psi_2(x)e^{i\frac{4E_1 t}{\hbar}} \right\} \left\{ \psi_1(x)e^{-i\frac{E_1 t}{\hbar}} + \psi_2(x)e^{-i\frac{4E_1 t}{\hbar}} \right\}##

##= \psi_1(x)^2 + \psi_2(x)^2 + \psi_1(x)\psi_2(x)\left\{ e^{i\frac{(4E_1 - E_1)t}{\hbar}} + e^{-i\frac{(4E_1 - E_1)t}{\hbar}} \right\}##

##= \psi_1(x)^2 + \psi_2(x)^2 + \psi_1(x)\psi_2(x)\left\{ 2\cos\left(\frac{(4E_1 - E_1)t}{\hbar}\right) \right\}##

Now the phase carries physical significance - or at least, the phase difference between two energy eigenstates does: it acts as the "driving agent" behind the time-varying portion of our solution. It seems that this time-varying portion has a frequency proportional to the difference between the energies of our two eigenstates. (That kind of makes sense: if the two energy levels were the same, we would expect the time-varying portion to disappear.)
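A quick numerical check of that (a sketch only, taking equal coefficients ##1/\sqrt{2}## for the two eigenstates and units ##\hbar = m = a = 1##): the density at a fixed point oscillates at exactly ##(E_2 - E_1)/\hbar = 3E_1/\hbar##, and matches the closed form above.

```python
import numpy as np

# Sketch (units hbar = m = a = 1, equal coefficients 1/sqrt(2) assumed):
# the probability density of psi_1 + psi_2 at a fixed point oscillates at
# the frequency (E_2 - E_1)/hbar = 3*E_1/hbar.
hbar = m = a = 1.0
E1 = np.pi**2 / 2
E2 = 4 * E1
x0 = 0.3 * a                                   # any fixed point in the well
psi1 = np.sqrt(2 / a) * np.sin(np.pi * x0 / a)
psi2 = np.sqrt(2 / a) * np.sin(2 * np.pi * x0 / a)

t = np.linspace(0, 4 * np.pi / (E2 - E1), 2000)
Psi = (psi1 * np.exp(-1j * E1 * t / hbar)
       + psi2 * np.exp(-1j * E2 * t / hbar)) / np.sqrt(2)
rho = np.abs(Psi)**2

# Compare against the closed form derived above (with the 1/2 from the
# equal coefficients):
rho_formula = 0.5 * (psi1**2 + psi2**2
                     + 2 * psi1 * psi2 * np.cos((E2 - E1) * t / hbar))
print(np.allclose(rho, rho_formula))           # True
```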

So if we imagine some arbitrary solution to the time-dependent Schrodinger equation as being "composed of" different proportions of energy eigenstates, we can imagine that each of those energy eigenstates that "makes up" the arbitrary solution carries with it its own phase, and that the differences in these phases are what's responsible for the component wavefunctions interfering with each other constructively or destructively at a particular point ##x## on our axis at some particular time ##t## to produce the overall time-dependence of the wavefunction. Is that a good way to think about phase? Is it a quantity which, for an energy eigenstate, carries no physical significance, but whose existence can be inferred indirectly by looking at the interference pattern produced by the relative differences in phase between two or more energy eigenstates?
 
  • #2
Dfault said:
##\Psi(x,t) = \psi_1(x)e^{-i\frac{E_1 t}{\hbar}} + \psi_2(x)e^{-i\frac{4E_1 t}{\hbar}}##
The general solution involving the first two eigenstates is: $$\Psi(x,t) = c_1\psi_1(x)e^{-i\frac {E_1t} {\hbar}} + c_2\psi_2(x)e^{-i\frac {4E_1t} {\hbar}}$$ The magnitudes of the coefficients ##c_1, c_2## determine how much of each eigenstate is in the superposition. And this also affects the amount of interference: if either ##c_1## or ##c_2## is small, then the interference term is likewise small.

The frequency of the interference is determined by ##E_1 - E_2##. Note that for the infinite square well, the interference term in ##\langle x \rangle## vanishes if the difference in quantum numbers is even, but not if it is odd.
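Reading "the interference term" as the cross matrix element ##\langle \psi_n|x|\psi_m\rangle## that drives the oscillation of ##\langle x\rangle##, here is a quick numerical check of that parity rule (my own sketch, with ##a = 1##):

```python
import numpy as np

# Quick numerical check (sketch) of the parity rule above, reading "the
# interference term" as the cross matrix element <psi_n|x|psi_m> that
# drives the oscillation of <x>: it vanishes when n - m is even, but not
# when n - m is odd.
a = 1.0
x = np.linspace(0, a, 20001)

def phi(n):
    return np.sqrt(2 / a) * np.sin(n * np.pi * x / a)

for n, m in [(1, 2), (1, 3), (2, 4), (2, 3), (1, 4)]:
    elem = np.trapz(phi(n) * x * phi(m), x)
    print(f"<{n}|x|{m}> = {elem:+.6f}")   # ~0 exactly when n - m is even
```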

Dfault said:
So if we imagine some arbitrary solution to the time-dependent Schrodinger equation as being "composed of" different proportions of energy eigenstates, we can imagine that each of those energy eigenstates that "makes up" the arbitrary solution carries with it its own phase, and that the differences in these phases are what's responsible for the component wavefunctions interfering with each other constructively or destructively at a particular point ##x## on our axis at some particular time ##t## to produce the overall time-dependence of the wavefunction. Is that a good way to think about phase? Is it a quantity which, for an energy eigenstate, carries no physical significance, but whose existence can be inferred indirectly by looking at the interference pattern produced by the relative differences in phase between two or more energy eigenstates?
That sounds about right.
 
  • #3
Okay, I'm with you so far, but one thing is bothering me: it seems, then, that the direction we chose our phasors to rotate in was somewhat arbitrary. If all you can measure is the relative difference in phase between different energy eigenstates, then it seems like all that matters is that you stay consistent with the direction of your phase rotation. So instead of treating the time-dependent portion of the ground state as ##e^{-i\frac{E_1 t}{\hbar}}## and the time-dependent portion of the first excited state as ##e^{-i\frac{4E_1 t}{\hbar}}## with both phasors rotating clockwise in the complex plane, couldn't we have just as easily declared both to be rotating counter-clockwise in the complex plane with the ground state's time-dependent portion equal to ##e^{i\frac{E_1 t}{\hbar}}## and the first excited state's to be ##e^{i\frac{4E_1 t}{\hbar}}##? Does the sign of the ##i## in the complex exponential matter as long as you're consistent with your sign convention?
 
  • #4
I was working through some examples and it seems like, if you were trying to solve Schrodinger's equation in some "alternate universe" where the phasors rotated counter-clockwise instead of clockwise, you could do it: you'd just have to set up the left side of Schrodinger's equation to read ##-i\hbar \frac {d\Psi} {dt} ## instead of ##i\hbar \frac {d\Psi} {dt} ##. Your operators would all have to be the complex conjugate of what they are normally, so the momentum operator would switch from ##-i\hbar \frac {d\Psi} {dx} ## to become ##i\hbar \frac {d\Psi} {dx} ##. The solution you'd get from Schrodinger's equation would be ## \Psi^*## instead of ## \Psi##, but it doesn't seem like it would produce any observably different results from our "normal" universe: for some given operator ##O##, the "alternate universe" would calculate the expectation value as ## \int \Psi O^* \Psi^* \, dx## instead of ## \int \Psi^* O \Psi \, dx##, but they both produce the same answer in the end. Is that right?
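A quick numerical sanity check of that claim (my own sketch, ##\hbar = 1##, using a Gaussian wavepacket on a grid as an arbitrary test state): the "mirror convention" expectation value ##\int \Psi\, O^* \Psi^*\, dx## with the conjugated momentum operator agrees with the usual ##\int \Psi^* O \Psi\, dx##.

```python
import numpy as np

# Sketch (hbar = 1): for a wavepacket on a grid, the "mirror convention"
# expectation value  integral Psi (O*) Psi* dx  with the conjugated momentum
# operator  +i hbar d/dx  matches the usual  integral Psi* O Psi dx  with
# -i hbar d/dx.
hbar = 1.0
x = np.linspace(-20, 20, 4001)
psi = np.exp(-x**2 / 4) * np.exp(1j * 1.5 * x)      # Gaussian with momentum
psi /= np.sqrt(np.trapz(np.abs(psi)**2, x))

p_usual  = np.trapz(np.conj(psi) * (-1j * hbar) * np.gradient(psi, x), x)
p_mirror = np.trapz(psi * (+1j * hbar) * np.gradient(np.conj(psi), x), x)

print(p_usual.real, p_mirror.real)    # both ~1.5, and the two agree
```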
 
  • #5
Dfault said:
couldn't we have just as easily declared both to be rotating counter-clockwise in the complex plane with the ground state's time-dependent portion equal to ##e^{i\frac{E_1 t}{\hbar}}## and the first excited state's to be ##e^{i\frac{4E_1 t}{\hbar}}##? Does the sign of the ##i## in the complex exponential matter as long as you're consistent with your sign convention?
That's not a solution to the Schroedinger equation for energy ##E_1##. That's a solution for energy ##-E_1##. You can't just drop the minus sign.
 
  • #6
Dfault said:
I was working through some examples and it seems like, if you were trying to solve Schrodinger's equation in some "alternate universe" where the phasors rotated counter-clockwise instead of clockwise, you could do it: you'd just have to set up the left side of Schrodinger's equation to read ##-i\hbar \frac {d\Psi} {dt} ## instead of ##i\hbar \frac {d\Psi} {dt} ##. Your operators would all have to be the complex conjugate of what they are normally, so the momentum operator would switch from ##-i\hbar \frac {d\Psi} {dx} ## to become ##i\hbar \frac {d\Psi} {dx} ##. The solution you'd get from Schrodinger's equation would be ## \Psi^*## instead of ## \Psi##, but it doesn't seem like it would produce any observably different results from our "normal" universe: for some given operator ##O##, the "alternate universe" would calculate the expectation value as ## \int \Psi O^* \Psi^* \, dx## instead of ## \int \Psi^* O \Psi \, dx##, but they both produce the same answer in the end. Is that right?
What you've done there is equivalent to time reversal ##t \rightarrow -t##. That's our universe but modelling the behaviour of the system backwards in time.
 
  • #7
Dfault said:
Okay, I'm with you so far, but one thing is bothering me: it seems, then, that the direction we chose our phasors to rotate in was somewhat arbitrary. If all you can measure is the relative difference in phase between different energy eigenstates, then it seems like all that matters is that you stay consistent with the direction of your phase rotation. So instead of treating the time-dependent portion of the ground state as ##e^{-i\frac{E_1 t}{\hbar}}## and the time-dependent portion of the first excited state as ##e^{-i\frac{4E_1 t}{\hbar}}## with both phasors rotating clockwise in the complex plane, couldn't we have just as easily declared both to be rotating counter-clockwise in the complex plane with the ground state's time-dependent portion equal to ##e^{i\frac{E_1 t}{\hbar}}## and the first excited state's to be ##e^{i\frac{4E_1 t}{\hbar}}##? Does the sign of the ##i## in the complex exponential matter as long as you're consistent with your sign convention?
No! You want to solve the time-dependent Schrödinger equation,
$$\mathrm{i} \hbar \partial_t \psi=\hat{H} \psi.$$
For an energy eigenstate as initial state the solution obviously reads
$$\psi(t,\vec{x})=\psi_E(\vec{x}) \exp(-\mathrm{i} E t/\hbar),$$
i.e., the time-dependent phase factor is unique concerning the sign.

To know what's "physics" in the wave function, it's a good idea to remember that it is not the vectors of the Hilbert space ##|\psi \rangle## that represent pure states but the corresponding statistical operator ##\hat{\rho}=|\psi \rangle \langle \psi|##. In the position representation the matrix elements of the statistical operator are
$$\rho(t,\vec{x},\vec{x}')=\langle \vec{x}|\hat{\rho}|\vec{x}' \rangle=\psi_E(\vec{x}) \mathrm{e}^{-\mathrm{i} E t/\hbar}\, \psi_E^*(\vec{x}')\, \mathrm{e}^{+\mathrm{i} E t/\hbar}=\psi_E(\vec{x}) \psi_E^*(\vec{x}').$$
As you see, the overall time-dependent phase factor cancels, and this tells you that the energy eigenstates are in fact the stationary states of the system. Also any overall position-independent phase factor implicit in ##\psi_E(\vec{x})## cancels, i.e., it's irrelevant for the physical information about the system described by the state.
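A tiny numerical illustration of that cancellation (my own sketch, using an arbitrary finite-dimensional state vector): multiplying ##|\psi\rangle## by a global phase ##e^{i\alpha}## leaves ##\hat{\rho} = |\psi\rangle\langle\psi|## unchanged.

```python
import numpy as np

# Sketch: multiplying the state by a global phase exp(i*alpha) leaves the
# statistical operator rho = |psi><psi| unchanged.
rng = np.random.default_rng(0)
psi = rng.normal(size=5) + 1j * rng.normal(size=5)   # arbitrary state vector
psi /= np.linalg.norm(psi)

rho = np.outer(psi, np.conj(psi))
alpha = 0.73                                         # arbitrary global phase
rho_phased = np.outer(np.exp(1j * alpha) * psi,
                      np.conj(np.exp(1j * alpha) * psi))

print(np.allclose(rho, rho_phased))                  # True
```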
 
  • #8
Dfault said:
Summary:: Can any observable experiment reveal the exact phase for a wavefunction in an energy eigenstate, or is the only thing that carries physical significance the *relative* phase *differences* between energy eigenstates?
When you look at the continuity equation that follows from the Schrödinger equation, you will notice that the phase plays the role of a velocity potential for the probability current. The absolute value of a potential is kind of physically meaningless.

The Madelung equations are considered equivalent to the Schrödinger equation, yet they get rid of the global phase factor. So yes, I guess it is just a mathematical artifact that comes from how the equation is formulated - more precisely from the linearization.
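A small numerical check of that velocity-potential reading of the phase (my own sketch, ##\hbar = m = 1##, using a Gaussian wavepacket as a test state): writing ##\psi = \sqrt{\rho}\, e^{iS/\hbar}##, the probability current ##j = \frac{\hbar}{m}\,\mathrm{Im}(\psi^* \partial_x \psi)## agrees with ##\rho\, \partial_x S / m##.

```python
import numpy as np

# Sketch, hbar = m = 1: write psi = sqrt(rho) * exp(i*S/hbar) for a Gaussian
# wavepacket and compare the probability current computed two ways:
#   j = (hbar/m) * Im(psi* dpsi/dx)   vs.   j = rho * (dS/dx) / m,
# i.e. the phase S acts as a velocity potential for the current.
hbar, m = 1.0, 1.0
x = np.linspace(-20, 20, 4001)
k0, sigma = 2.0, 2.0
psi = np.exp(-x**2 / (4 * sigma**2)) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.trapz(np.abs(psi)**2, x))

rho = np.abs(psi)**2
S = hbar * np.unwrap(np.angle(psi))          # phase, unwrapped along x
j_direct = (hbar / m) * np.imag(np.conj(psi) * np.gradient(psi, x))
j_from_S = rho * np.gradient(S, x) / m

# the difference is tiny compared to the current itself (finite-difference error)
print(np.max(np.abs(j_direct - j_from_S)), np.max(np.abs(j_direct)))
```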

Dfault said:
##= \psi_1(x)^2 + \psi_2(x)^2 + \psi_1(x)\psi_2(x)\left\{ 2\cos\left(\frac{(4E_1 - E_1)t}{\hbar}\right) \right\}##
If you are looking for a physical interpretation, that oscillatory term can also be considered within another context, found here. The probability in the Schrödinger equation doesn't behave at all like that of a classical charged point particle, but closer to what you'd expect a charged gas to do (it has strong self-interaction, which the probability of a classical particle cannot have). If you couple it classically to the Maxwell equations, you get a de-facto classical model of hydrogen that quantizes all the same, and actually models spontaneous emission, correctly predicting the emitted wavelength for which that term is responsible.

Usually attempts at classical analogs are rather bad, but this one has the funny property of actually working and even beating simple QT by describing the emission physics (in an intuitive way), which simple QT doesn't. But the Schrödinger-Maxwell equations can be obtained from full QED, so perhaps that isn't so surprising.

I am wondering if there is more to that classical interpretation, which is also the origin of Schrödinger's equation in a sense: Schrödinger's original interpretation
 
  • #9
PeroK said:
That's not a solution to the Schroedinger equation for energy ##E_1##. That's a solution for energy ##-E_1##. You can't just drop the minus sign.

vanhees71 said:
No! You want to solve the time-dependent Schrödinger equation,
$$\mathrm{i} \hbar \partial_t \psi=\hat{H} \psi.$$
For an energy eigenstate as initial state the solution obviously reads
$$\psi(t,\vec{x})=\psi_E(\vec{x}) \exp(-\mathrm{i} E t/\hbar).$$
Hahah, yeah, I noticed it produces a negative energy (or a sign-change flip-flop) that can only be resolved by changing all the ##i##'s to ##-i##'s in the operators. Whoops!

PeroK said:
What you've done there is equivalent to time reversal ##t \rightarrow -t##. That's our universe but modelling the behaviour of the system backwards in time.
Huh, interesting. Does that carry any deeper meaning? Does it imply that the rules for quantum mechanics would produce the same results moving backwards in time, or is it just an artifact of the calculation like Killtech said?

Killtech said:
I am wondering if there is more to that classical interpretation which is also the origin of Schrödingers equation in a sense: Schrödingers original interpretation
I've been interested in learning the historical context of how Schrödinger arrived at his equation for a while now since most of my textbooks gloss over the history and just present Schrödinger's equation as an axiom. I did find what looks like a pretty comprehensive history of the origins of quantum mechanics by Jagdish Mehra, but it's in six volumes and they're all about three or four hundred pages, so I may have bitten off a bit more than I can chew there 😅
 
  • #10
Dfault said:
I've been interested in learning the historical context of how Schrödinger arrived at his equation for a while now
I have read about this, but I'm not sure I remember it exactly. If I recall correctly, it was a bit of trial and error with different ansätze. I tried to find the article I read, but I couldn't find it. But I found a very interesting page here, with a video:

History and derivation of the Schrödinger equation
Dr. Wolfgang P. Schleich
University of Ulm / Texas A&M University

When Erwin Schrödinger was challenged by Peter Debye in a colloquium in Zürich in 1925 to propose a wave equation for matter he understandably faced a tremendous challenge. Therefore, it is not surprising that he first proposed several equations before he settled for the one that we call today the time-dependent Schrödinger equation. Unfortunately, he did not provide much motivation for his choice. In the present talk we provide a brief history of the birth of the Schrödinger equation and review our work on this topic which centers around three characteristic features of quantum mechanics: (i) it displays a symmetric coupling between the amplitude and the phase of the quantum wave, (ii) it allows for more freedom in phase than the one given by the classical action, and (iii) it allows for gauge invariance.

Link: https://www.pppl.gov/events/history-and-derivation-schr%C3%B6dinger-equation

I'm actually going to watch the video myself, because I'm also interested. :smile:
 
  • #11
DennisN said:
I have read about this, but I'm not sure I remember it exactly. If I recall correctly, it was a bit of trial and error with different ansätze. I tried to find the article I read, but I couldn't find it. But I found a very interesting page here, with a video:

History and derivation of the Schrödinger equation
Dr. Wolfgang P. Schleich
University of Ulm / Texas A&M University

When Erwin Schrödinger was challenged by Peter Debye in a colloquium in Zürich in 1925 to propose a wave equation for matter he understandably faced a tremendous challenge. Therefore, it is not surprising that he first proposed several equations before he settled for the one that we call today the time-dependent Schrödinger equation. Unfortunately, he did not provide much motivation for his choice. In the present talk we provide a brief history of the birth of the Schrödinger equation and review our work on this topic which centers around three characteristic features of quantum mechanics: (i) it displays a symmetric coupling between the amplitude and the phase of the quantum wave, (ii) it allows for more freedom in phase than the one given by the classical action, and (iii) it allows for gauge invariance.

Link: https://www.pppl.gov/events/history-and-derivation-schr%C3%B6dinger-equation

I'm actually going to watch the video myself, because I'm also interested. :smile:
Interesting! The video was pretty useful, though I'm afraid I don't have the background in variational calculus / the Hamiltonian reformulation of classical mechanics to quite follow Schrödinger's thought process yet. I've started reading Robert Weinstock's Calculus of Variations with Applications to Physics & Engineering since it starts off by teaching the Euler-Lagrange equation, then moves on to the Hamiltonian, and ends with a walkthrough of Schrödinger's equation. It's a pretty useful book if anyone's interested, but if anyone here has another favorite book for introducing Hamiltonians etc. I'm also open to suggestions.
 
  • #12
Schrödinger's approach was to use an analogy between the relation of ray optics ("particle aspect" of em. waves) and wave optics ("field aspect" of em. waves) to find a description of particles in terms of a wave equation, making de Broglie's idea of "wave-particle duality" concrete. Famously, when he gave a talk about de Broglie's waves at the University of Zürich, Debye told him that if you talk about waves you should have a wave equation. Then Schrödinger went on vacation with some anonymous muse and famously created "wave mechanics" in 1926.

The idea is that to derive ray optics from the more comprehensive wave optics (i.e., from the Maxwell equations of the em. field) you have to use the eikonal approximation, which is from a mathematical point of view a singular perturbation theory. The formal expansion parameter is the "typical wavelength" ##\lambda## of the radiation under consideration in comparison to the typical extensions ##L## of the objects around. The small parameter is ##\lambda/L##, and the eikonal approximation leads in leading order to an equation for the "eikonal" that looks like the Hamilton-Jacobi partial differential equation (HJPDE) for trajectories of (in the vacuum massless) particles. For particles, Schrödinger simply approached the problem in the opposite direction, i.e., looking for a wave equation whose leading-order eikonal approximation reproduces the HJPDE of the particle equations of motion. First he tried this for massive relativistic particles, leading to what is today called the Klein-Gordon equation, but this led to the prediction of a wrong hydrogen spectrum, and thus he turned to the non-relativistic case, leading to the Schrödinger equation.

The problem, however, was to establish a physical meaning for the "field" described by this equation. Schrödinger's first idea was to interpret it as a classical field, i.e., an electron should be described as a field quantity, and ##|\psi|^2## should be proportional to the density (for charge or mass), but this was in obvious contradiction to the detection of electrons as point-like objects. Shortly thereafter Born, in the famous footnote in his paper about potential-scattering theory, proposed the probabilistic interpretation of the wave function, leading to the well-known debates about the foundations of QT.
 
  • #13
vanhees71 said:
The problem, however, was to establish a physical meaning for the "field" described by this equation. Schrödinger's first idea was to interpret it as a classical field, i.e., an electron should be described as a field quantity, and ##|\psi|^2## should be proportional to the density (for charge or mass), but this was in obvious contradiction to the detection of electrons as point-like objects.
Is it an obvious contradiction, though? The interpretation works astonishingly well for the hydrogen atom, along with removing all the contradictions of classical particle self-interaction in one go.

More importantly, that interpretation necessarily leads to a nonlinear classical field theory because of how a classical charge couples to the electric field. The issue with that is that a solution for a free electron in that case is still not known up to this day, let alone in Schrödinger's time. So it couldn't have been clear back then whether it actually contradicts observation. On the other hand, some solutions for nonlinear PDEs were already known back then, some of which actually produce wave-particle duality in the form of solitons. So on what basis was this contradiction actually established? Because to me it sounds like the very opposite of obvious.

And going further, in semiclassical Dirac-Maxwell it was proven that soliton solutions do exist... so you have a theory leading to vortex-like localized stable solitons, plus a remainder of a dispersed field from Schrödinger's equation whose time evolution is technically local but gets arbitrarily fast the more dispersed it becomes (i.e. superluminal in nature), and which is interpreted as a pilot wave in Bohmian mechanics... and both interact. Does not sound entirely off to me, at least not enough to push the idea aside entirely.
 

1. What is the physical significance of the phase in Schrodinger's equation?

The phase of the wavefunction encodes the oscillatory behavior of a quantum system. Together with the magnitude, it makes up the complex probability amplitude for finding a particle at a particular location and time.

2. How does the phase affect the behavior of a quantum system?

The relative phase between components of a superposition determines the interference pattern of a quantum system. It can lead to constructive or destructive interference, changing the probabilities for the particle's position and momentum.

3. Can the phase of a quantum system be measured?

No, the overall phase of a quantum state has no observable effect and cannot be directly measured. However, the relative phase between two components of a state (or between two quantum systems) can be measured through interference experiments.

4. How does the phase change in different solutions to Schrodinger's equation?

The rate at which the phase evolves depends on the energy, which in turn depends on the potential energy function in Schrodinger's equation. For a stationary state the phase rotates uniformly at the rate ##E/\hbar##; in a superposition the relative phases evolve at rates set by the energy differences, and this is what affects the behavior of the quantum system.

5. What is the role of the phase in quantum mechanics?

The phase is a fundamental concept in quantum mechanics and is essential for understanding the behavior of particles at the quantum level. It allows us to make predictions about the behavior of quantum systems, and its effects have been verified through interference experiments.
