Quantized vs. Continuous Variables

In summary, Pauling argues that outside the range [b, a] the wavefunction curves away from the x-axis, so for a generic choice of the energy W it diverges to plus or minus infinity; only for special, quantized values of W does it approach zero at infinity.
  • #1
Tac-Tics
I'm trying to follow an argument in Pauling's Introduction to Quantum Mechanics with Applications to Chemistry. He is a bit hand-wavy in parts, and I wanted to see if anyone could clarify.

So we start off with the time-dependent Schrodinger equation. He makes the assumption that [tex]\Psi[/tex], our wavefunction, can be decomposed as the product of two functions, [tex]\psi[/tex], a function of position, and [tex]\phi[/tex], a function of time.

My first question is why can he make that assumption. From the look of it, it seems he's assuming that the position determines the magnitude of the wavefunction and the time determines the complex angle. However, he doesn't state it directly and he doesn't explain why that assumption is legitimate. My only guess is that since we're dealing with a single electron, there is no possibility for it to interfere with anything.

So, with that assumption, he continues to simplify the Schrodinger equation into a more manageable form. He reduces it to

[tex]\frac{d^2\psi}{dx^2} = \frac{2m}{\hbar^2}(V(x) - W)\psi(x)[/tex]

where W is the energy, at this point an arbitrary constant.

We assume that V(x) is a potential that grows to infinity as x goes to plus or minus infinity. We then choose the smallest and largest values of x, called b and a respectively, such that V(b) = W and V(a) = W; that is, the outermost points where the potential energy equals the energy level.

It's clear from the modified Schrodinger equation that outside the range [b, a], where V(x) - W > 0, the concavity of [tex]\psi[/tex] has the same sign as [tex]\psi[/tex] itself. That is, if [tex]\psi(x)[/tex] is positive at some x > a, then [tex]\psi[/tex] is concave up there.

(However, if I'm right in assuming that [tex]\psi[/tex] is the magnitude of the wavefunction, it would *always* be positive... but Pauling says [tex]\psi[/tex] can be negative at this point...)

So that's as far as I understand well. The next part, he loses me.

Choose any point c and a value for [tex]\frac{d\psi}{dx}(c)[/tex]. He claims that, having chosen these values, [tex]\psi[/tex] will either diverge to positive infinity, diverge to negative infinity, or approach zero (at least, that seems to be his argument).

I can't quite understand his reasoning for this. He gives three examples. In one, the derivative is positive at c, and the function diverges to infinity. In the next, the derivative is negative and the function falls to negative infinity. In the last, the derivative looks closer to zero (he doesn't specify the value required), and the function asymptotically approaches zero.

It's frustrating I can't follow his argument, because the idea is fascinating (READ: he has a really cool diagram in the next section that essentially explains why electrons are "bound" to atoms... where the energy levels above a certain threshold (the ionization energy?) become continuous).

Anyway, if anyone could offer some help, I'd be super grateful.
 
  • #2
This is standard QM, your questions should be answered by almost any modern textbook.
 
  • #3
Avodyne said:
This is standard QM, your questions should be answered by almost any modern textbook.

So could you help me out in understanding it?
 
  • #4
The first part (the assumption that the wavefunction can be decomposed into X(x)*T(t)) is a standard trick for solving partial differential equations. The resulting time-independent Schrodinger equation is an eigenvalue problem. By solving it, we get all the different eigenfunctions, which we assume form a basis of all possible solutions.

So, if you write down any arbitrary function, f(x,t), you can decompose it into a sum (or integral) of functions that go like (exp(-iEt))*(X(x)), where the X(x) are eigenfunctions of the time-ind. Schrodinger equation.

Alternately, you get the same time-independent equation if you take the Fourier-transform (w.r.t. time) of the time-dependent equation. So the solutions X(x) are the Fourier components of the total wavefunction.
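For concreteness, here is the separation step itself (the standard textbook computation, with W the separation constant). Substituting [tex]\Psi(x,t) = \psi(x)\phi(t)[/tex] into the time-dependent equation

[tex]i\hbar\,\frac{\partial \Psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2 \Psi}{\partial x^2} + V(x)\Psi[/tex]

and dividing through by [tex]\psi(x)\phi(t)[/tex] gives

[tex]i\hbar\,\frac{\phi'(t)}{\phi(t)} = -\frac{\hbar^2}{2m}\frac{\psi''(x)}{\psi(x)} + V(x)[/tex]

The left side depends only on t and the right side only on x, so both must equal a constant, call it W. The time part integrates to [tex]\phi(t) = e^{-iWt/\hbar}[/tex], and the space part rearranges to the time-independent equation, [tex]\frac{d^2\psi}{dx^2} = \frac{2m}{\hbar^2}(V(x) - W)\psi(x)[/tex].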
 
  • #5
Dyad said:
So, if you write down any arbitrary function, f(x,t), you can decompose it into a sum (or integral) of functions that go like (exp(-iEt))*(X(x)), where the X(x) are eigenfunctions of the time-ind. Schrodinger equation.

Is there a name for this decomposition technique?

Pardon my ignorance. I haven't studied PDEs very much. Would this be the same technique described in:

http://tutorial.math.lamar.edu/Classes/DE/SeparationofVariables.aspx
 
  • #6
If you are referring to "separation of variables", yes, that is exactly what Dyad was talking about.
 
  • #7
So, validity of the separation aside, what about the rest? How does the first derivative of psi, combined with the energy level W, dictate the behavior of the graph?
 
  • #8
I don't quite understand the rest of the question. Perhaps you can show us Pauling's text/picture. Let's still have a crack at it.

Let's say c > a. Then we know from the time-independent Schrodinger equation (TISE) that the wavefunction is concave up wherever it is positive, and this holds at every x > a.

Recall that the derivative of a function is its slope. If the slope of the wavefunction at c is also positive, the wavefunction is not only concave up but also increasing. Try drawing a function with these properties: it will go to infinity. Now suppose the wavefunction dips below zero somewhere; there the TISE makes it concave down as well as decreasing, and it falls to minus infinity. So, unless the value and slope at c are finely tuned, the wavefunction will blow up, as will its probability distribution. This "blowing up" is unphysical, so we reject it.

(This is all fairly hand-wavy.)
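This tuning argument can be checked numerically. Here's a minimal sketch (my own illustration, not Pauling's) in dimensionless form: for a harmonic-oscillator-like potential the TISE becomes [tex]\psi'' = (x^2 - E)\psi[/tex], whose exact eigenvalues are E = 1, 3, 5, .... Shooting from the far left with a tiny positive slope, the tail at the far right flips from plus infinity to minus infinity as E crosses an eigenvalue:

```python
import numpy as np

def shoot(E, x_min=-6.0, x_max=6.0, n=4000):
    """Integrate psi'' = (x**2 - E) * psi from the left edge, starting with
    psi = 0 and a tiny positive slope, and return psi at the right edge.
    (Dimensionless harmonic oscillator; exact eigenvalues E = 1, 3, 5, ...)"""
    x = np.linspace(x_min, x_max, n)
    h = x[1] - x[0]
    psi = np.zeros(n)
    psi[1] = 1e-6                        # tiny positive slope at x_min
    for i in range(1, n - 1):
        # 3-point recurrence from psi'' ~ (psi[i+1] - 2 psi[i] + psi[i-1]) / h^2
        psi[i + 1] = 2*psi[i] - psi[i - 1] + h*h * (x[i]**2 - E) * psi[i]
    return psi[-1]

# Just below the ground-state eigenvalue E = 1 the tail runs off toward
# +infinity; just above it, toward -infinity.  Only a finely tuned E in
# between gives a wavefunction that dies off at both ends.
tail_below, tail_above = shoot(0.8), shoot(1.2)
```

The sign flip of the tail between E = 0.8 and E = 1.2 is exactly the "shooting method" for finding eigenvalues: bisect on E until the tail stops blowing up.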
 
  • #9
Here is essentially the diagram he has listed.

The potential energy as a function of x is one such that it increases to infinity in either direction, so V(x) = x^2 or V(x) = e^|x| or something of that nature (he doesn't give an actual function, just a description of V).

I can also understand how a function with a negative first and second derivative necessarily falls to negative infinity and similarly for a positive first and second derivative. But knowing the first derivative is negative at c doesn't seem to tell you much, because the TISE shows that the second derivative will be positive at c.

Maybe I'm missing the point. Maybe the author is just giving a descriptive example of the Schrodinger equation. But it's frustrating, because it seems so interesting and he *almost* explains it to the point where you can understand it!
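One way to make the divergence claim precise (my own filling-in of the step, not Pauling's wording): far enough past a, V(x) - W is positive and roughly constant, so locally the TISE looks like [tex]\psi'' = \kappa^2 \psi[/tex] with [tex]\kappa^2 = \frac{2m}{\hbar^2}(V - W) > 0[/tex], whose general solution is

[tex]\psi(x) = A e^{\kappa x} + B e^{-\kappa x}[/tex]

A negative slope at c just means the decaying piece dominates at first; unless A is exactly zero, the growing exponential eventually takes over and [tex]\psi[/tex] runs off to plus or minus infinity, depending on the sign of A. Choosing [tex]\psi(c)[/tex] and [tex]\psi'(c)[/tex] fixes A and B, and demanding A = 0 on the right while the analogous coefficient also vanishes on the left is possible only for special values of W. That is where the quantization comes from.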
 

Attachments

  • diagram1.png
  • #10
It's been awhile since I read that book, but I think what he was trying to show here is that you have to choose a function that behaves "properly" in order to solve the wave function. I.e. if you choose either of the two functions that proceed to infinity it won't work, but if you choose the third case (where it limits out at a definitive number, in this case zero) then it is a valid function that you can use to solve the wave function. Something like that, although someone feel free to correct me if I am wrong (it's been years).
 
  • #11
Renge Ishyo said:
It's been awhile since I read that book, but I think what he was trying to show here is that you have to choose a function that behaves "properly" in order to solve the wave function. I.e. if you choose either of the two functions that proceed to infinity it won't work, but if you choose the third case (where it limits out at a definitive number, in this case zero) then it is a valid function that you can use to solve the wave function. Something like that, although someone feel free to correct me if I am wrong (it's been years).

Yeah. I think I got distracted by the way he presented it. It looks like the next chapter talks about the harmonic oscillator, where he goes into greater detail. (Whether or not I can make full sense of that chapter is uncertain. I'm still not used to physicspeak).
 
  • #12
Yes, it must have to do with the allowed functions. If you choose an energy that is not an eigenvalue, your wavefunction won't satisfy the boundary conditions; it'll blow up. Indeed, when you start doing problems, you'll see that the explicit calculation of the eigenvalues is done by considering the boundary conditions (the infinite square well is the simplest example).
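To see that boundary-condition mechanism in the simplest case (an illustration of mine, in units hbar = m = 1 with well width L = 1): inside an infinite square well the solutions vanishing at x = 0 are sin(kx) with E = k^2/2, and demanding psi(1) = 0 quantizes k to n*pi:

```python
import numpy as np

def boundary_mismatch(E):
    """For the infinite square well on [0, 1] (units hbar = m = 1), the
    solution vanishing at x = 0 is sin(k x) with k = sqrt(2 E).  Return its
    value at the right wall; eigenvalues are exactly where this vanishes."""
    k = np.sqrt(2.0 * E)
    return np.sin(k)

E1 = np.pi**2 / 2                        # ground-state energy, k = pi
miss_good = boundary_mismatch(E1)        # ~ 0: boundary condition satisfied
miss_bad = boundary_mismatch(1.1 * E1)   # clearly nonzero: not an eigenvalue
```

Only the discrete energies E_n = (n*pi)^2 / 2 make the mismatch vanish; any other choice of E "misses the wall," which is the square-well version of the blow-up in Pauling's diagram.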

By the way, that is a pretty old book (not bad, mind you, just old). If you like old books, there are many by Slater (Feynman's undergrad mentor). If you'd like something newer, and perhaps much more accessible, try Griffiths.
 

1. What is the difference between quantized and continuous variables?

Quantized variables are discrete and can only take on specific values, while continuous variables can take on any value within a certain range.

2. Can you give an example of a quantized variable?

An example of a quantized variable is the number of children in a family. It can only take on whole-number values (0, 1, 2, 3, etc.) and cannot be a fraction or decimal.

3. What is an example of a continuous variable?

Height is an example of a continuous variable. It can take on any value within a certain range (e.g. 5.5 feet, 5.6 feet, 5.7 feet, etc.) and can also be measured with greater precision (e.g. 5.634 feet).

4. How are quantized and continuous variables used in science?

Quantized variables are often used in fields such as genetics and statistics, where data is collected and analyzed in discrete categories. Continuous variables are used in fields such as physics and engineering, where precise measurements and calculations are necessary.

5. Can a variable be both quantized and continuous?

Usually a given variable is treated as one or the other, and some variables can be approximated as either quantized or continuous depending on the precision needed. In quantum mechanics, though, a single quantity can show both behaviors: the energy of an electron bound to an atom is quantized, but above the ionization energy the allowed energies become continuous.
