Mastering Perturbation Theory for Nuclear Engineering Students

AI Thread Summary
The discussion centers on the challenges nuclear engineering students face in understanding perturbation theory, a mathematical method used to approximate solutions to non-linear problems, particularly differential and integral equations. Participants explain that perturbation theory involves expressing solutions as power series, where small parameters allow for approximations of complex problems. An example illustrates how to apply this method to a quadratic equation, demonstrating its effectiveness in yielding better approximations than initial guesses. Concerns are raised about using perturbation methods with non-small parameters, yet some participants find their results align closely with numerical solutions from MATLAB, suggesting broader applicability. The conversation emphasizes the importance of understanding perturbation theory for advanced applications in quantum mechanics and engineering.
phrozenfearz
Perturbation Theory Help!

Hello physicsforums.com,

The last two weeks of my nuclear engineering course covered a mathematical topic known as 'perturbation theory'. It was presented as a 'method to solve anything'; the problem, however, is that nobody in my class understands it.

Basic Google searching has not yielded any great results, so I turn to the wise physicsforums.com community to perhaps give a new perspective or recommend some relatively easy-to-follow readings.

Thanks in advance!
 


Well, it's a method to approximately solve anything! Specifically, it is a method to approximately solve non-linear equations, in particular functional equations like differential equations or integral equations. The "WKB" method used in quantum mechanics is a perturbation method.

The basic idea of "perturbation" theory is to write the solution to your problem (typically a differential equation or integral equation, although it works for other kinds of problems) as a power series, y= y_0+ \epsilon y_1+ \epsilon^2 y_2+ \cdots, where \epsilon is some small number inherent in your problem. Write out both sides of your equation as power series in \epsilon and set coefficients of the same powers of \epsilon equal. The \epsilon^0 term gives the solution to the approximate linear problem, y_0, and each subsequent equation gives a correction in terms of the previous solutions- that is, y_1 in terms of y_0, y_2 in terms of y_0 and y_1, and so on.
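To make that coefficient-matching step concrete, here is a minimal SymPy sketch (assuming SymPy is available; the symbol names are just illustrative), applied to the quadratic example worked out below:

```python
# Minimal SymPy sketch of the recipe: write x as a truncated power series
# in epsilon, expand the equation, and match coefficients order by order.
# Applied here to the example below, x^2 + eps*x - 4 = 0.
import sympy as sp

eps, x0, x1, x2 = sp.symbols('epsilon x0 x1 x2')

# Power-series ansatz, truncated at second order in epsilon
x = x0 + x1 * eps + x2 * eps**2

# Expand and read off the coefficient of each power of epsilon
expr = sp.expand(x**2 + eps * x - 4)
eq0 = expr.coeff(eps, 0)   # x0**2 - 4
eq1 = expr.coeff(eps, 1)   # 2*x0*x1 + x0
eq2 = expr.coeff(eps, 2)   # 2*x0*x2 + x1**2 + x1

# Solve order by order; each equation involves only earlier unknowns.
x0_val = sp.Integer(2)                                         # root of eq0 near +2
x1_val = sp.solve(eq1.subs(x0, x0_val), x1)[0]                 # -1/2
x2_val = sp.solve(eq2.subs({x0: x0_val, x1: x1_val}), x2)[0]   # 1/16
print(x0_val, x1_val, x2_val)
```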

Here's a trivial example: Imagine that we know how to solve equations of the form x^2= a by just taking the square root but we don't know the "quadratic formula".

Now, we want to solve the equation x^2+ \epsilon x- 4= 0 where \epsilon is a very, very small (positive) number. We could argue that since \epsilon is small, that equation is very nearly x^2= 4 which has solutions 2 and -2 and so our equation must have solutions very close to 2 and -2.

Is that true? If it is, how could we prove it is true? And how could we use that information to get a better approximation to the true solution?

Let x= x_0+ x_1\epsilon+ x_2\epsilon^2+ \cdots, a power series in \epsilon. We will assume that \epsilon is small enough that we can ignore \epsilon^3 (assuming that \epsilon were small enough to ignore even \epsilon^1 would give just x_0, the zeroth-order solution).

If x= x_0+ x_1\epsilon+ x_2\epsilon^2, then x^2= x_0^2+ 2x_0x_1\epsilon+ 2x_0x_2\epsilon^2+ x_1^2\epsilon^2, where I have dropped the terms 2x_1x_2\epsilon^3 and x_2^2\epsilon^4 since they are of higher than second degree.

x^2+ \epsilon x- 4 then becomes x_0^2+ 2x_0x_1\epsilon+ 2x_0x_2\epsilon^2+ x_1^2\epsilon^2+ x_0\epsilon+ x_1\epsilon^2- 4, where I have dropped the term x_2\epsilon^3 from \epsilon x since it is, again, of degree higher than 2.

The equation becomes (x_0^2- 4)+ (2x_0x_1+ x_0)\epsilon+ (2x_0x_2+ x_1^2+ x_1)\epsilon^2= 0. Equating coefficients of like powers of \epsilon, we have x_0^2- 4= 0, 2x_0x_1+ x_0= x_0(2x_1+ 1)= 0, and 2x_0x_2+ x_1^2+ x_1= 0.

x_0^2- 4= 0 gives x_0= 2 or x_0= -2. Since that is not 0, we can divide both sides of x_0(2x_1+ 1)= 0 by x_0 and get x_1= -\frac{1}{2} for both values of x_0.

If x_0= 2 and x_1= -1/2, then the third equation is 4x_2+ \frac{1}{4}- \frac{1}{2}= 4x_2- \frac{1}{4}= 0 so x_2= 1/16.

If x_0= -2 and x_1= -1/2, then the third equation is -4x_2+ \frac{1}{4}- \frac{1}{2}= -4x_2- \frac{1}{4}= 0 so x_2= -1/16.

That is, our two solutions are x= 2- (1/2)\epsilon+ (1/16)\epsilon^2 and x= -2- (1/2)\epsilon- (1/16)\epsilon^2.

If, for example, \epsilon= .001, then those solutions are 2- .0005+ 0.0000000625= 1.9995000625 and -2- .0005- 0.0000000625= -2.0005000625.

We can use the quadratic formula (which we were pretending we did not know) to actually solve x^2+ .001x- 4= 0, getting x= (-.001\pm\sqrt{0.000001+ 16})/2, which gives x= 1.99950006 and -2.00050006. So what we got using "perturbation theory" agrees essentially to the accuracy shown, and it also shows that 2 and -2 are good first approximations.

An equation of the form \epsilon x^2+ 2x- 4= 0 is a much harder problem. Here, just ignoring \epsilon, the equation becomes 2x- 4= 0, which has the single solution x= 2, while we expect a quadratic equation like this to have two solutions. For this problem we need "singular perturbation", which is a whole different story!
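As a quick numerical restatement of the \epsilon= .001 comparison above (a minimal sketch assuming NumPy is available; np.roots stands in for the quadratic formula):

```python
# Check the second-order perturbation approximations against the exact
# roots of x^2 + eps*x - 4 = 0 for eps = 0.001.
import numpy as np

eps = 1e-3

# Second-order perturbation approximations derived above
x_plus  =  2 - 0.5 * eps + eps**2 / 16
x_minus = -2 - 0.5 * eps - eps**2 / 16

# Exact roots, coefficients in descending order of powers of x
exact = np.roots([1.0, eps, -4.0])

print(x_plus, x_minus)   # 1.9995000625, -2.0005000625
print(np.sort(exact))    # [-2.00050006...,  1.99950006...]
```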
 


thanks for your help!

I have an additional question about what you have explained. I have used the perturbation method to solve a set of non-linear differential equations where \epsilon was not small; however, the terms x_0, x_1, x_2, \dots get progressively smaller.

I have worked out the equations, and they seem to match the solution found using ode45 in MATLAB (with time-varying parameters in the function) quite accurately.

I am trying to find some sort of reassurance that this is OK. Everything that I have found suggests that \epsilon must be small.
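Below is a toy Python sketch of exactly that kind of check: truncate the series, compare it against a numerical solver, and watch whether the retained terms keep shrinking. The equation y'= -y+ \epsilon y^2, y(0)= 1 is invented purely for illustration (it is not the actual system from the post); SciPy's solve_ivp plays the role of MATLAB's ode45, and the series terms were worked out by hand for this toy problem.

```python
# Toy check: truncated perturbation series vs. a numerical ODE solution.
# Hand-derived series terms for y' = -y + eps*y^2, y(0) = 1:
#   y0 = e^{-t},  y1 = e^{-t} - e^{-2t},  y2 = e^{-t}(1 - e^{-t})^2.
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.8                       # deliberately not small
t = np.linspace(0.0, 5.0, 200)

# Truncated perturbation series y ~ y0 + eps*y1 + eps^2*y2
y0 = np.exp(-t)
y1 = np.exp(-t) - np.exp(-2 * t)
y2 = np.exp(-t) * (1 - np.exp(-t)) ** 2
series = y0 + eps * y1 + eps**2 * y2

# Numerical reference solution (analogue of the ode45 comparison)
sol = solve_ivp(lambda tt, y: -y + eps * y**2, (t[0], t[-1]), [1.0],
                t_eval=t, rtol=1e-9, atol=1e-12)

# Two useful diagnostics: distance from the numerical solution, and the
# size of the last retained term in the series.
print("max |series - numerical|:", np.max(np.abs(series - sol.y[0])))
print("max |eps^2 * y2|        :", np.max(np.abs(eps**2 * y2)))
```

What controls the error is really the size of the successive terms, not \epsilon by itself; in this toy problem the terms keep shrinking even at \epsilon= 0.8, which mirrors the situation you describe.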
 


Many of the ab initio quantum chemistry methods use perturbation theory directly or are closely related methods. Møller–Plesset perturbation theory uses the difference between the Hartree–Fock Hamiltonian and the exact non-relativistic Hamiltonian as the perturbation. The zero-order energy is the sum of orbital energies. The first-order energy is the Hartree–Fock energy and electron correlation is included at second-order or higher. Calculations to second, third or fourth order are very common and the code is included in most ab initio quantum chemistry programs. A related but more accurate method is the coupled cluster method.
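For reference, the general Rayleigh–Schrödinger expressions behind this kind of expansion (a standard textbook result, not specific to any one program) are, writing H= H_0+ V with unperturbed eigenpairs H_0\psi_k= E_k\psi_k,

E\approx E_0+ \langle\psi_0|V|\psi_0\rangle+ \sum_{k\ne 0}\frac{|\langle\psi_k|V|\psi_0\rangle|^2}{E_0- E_k}.

In the Møller–Plesset partitioning, H_0 is the Hartree–Fock Hamiltonian, E_0 is the sum of orbital energies, the first two terms together give the Hartree–Fock energy, and electron correlation first appears in the second-order sum.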
 