# The solution of a nonlinear equation in Schutz's book page 211 2nd edition

In summary, the author argues that if ##g## is nearly 1, i.e. ##g(u)\approx 1+\epsilon(u)##, then the solution for ##f(u)## can be obtained by substituting ##g## back into the equation. This leads to the approximation ##f(u)\approx 1-\epsilon(u)##. The derivative of this equation with respect to ##u## is then used to deduce the solution for ##f##. This is done by ignoring terms quadratic or higher in ##\epsilon## and its derivatives. The resulting equation is then solved using the ansatz of a power series in ##u##, leading to a recurrence relation for the coefficients.
MathematicalPhysicist
Gold Member
TL;DR Summary
On page 211 in equation (9.32) we have a nonlinear equation that ##f## and ##g## should satisfy, which is ##\ddot{f}/f+\ddot{g}/g=0##.
The suggested solution in the book doesn't make sense, can you help me understand it?
Continuing the summary, the author argues that if ##g## is nearly 1, i.e. ##g(u)\approx 1+\epsilon(u)##, one obtains the solution:
##f(u)\approx 1-\epsilon(u)##.
The dots in the summary denote derivatives with respect to ##u##.

Then how to deduce the solution for ##f##?
If I plug ##g## back into the equation in the summary I get:
$$\ddot{f}+(\ddot{\epsilon}/(1+\epsilon(u)))f=0$$
I don't see how to continue from here; he talks about a Fourier representation, but I don't follow his reasoning.

Thanks!

Perhaps because it's an approximation we get: ##\ddot{f}+\ddot{\epsilon}f\approx 0##, but I am not sure.

Since ##g \approx 1 + \epsilon##, we have ##g^{-1} \approx 1 - \epsilon##. Substituting this in, you get ##\ddot f / f + \ddot \epsilon (1 - \epsilon) = 0##. Terms with ##\epsilon## and its derivatives should be ignored. You get ##\ddot f / f + \ddot \epsilon = 0##. It follows that ##f = 1 - \epsilon## solves this last equation.

kent davidge said:
Terms with ##\epsilon## and its derivatives should be ignored.

Not quite; as you state this, it would mean the only remaining terms would be ##\ddot{f} / f = 0##.

What you mean is that terms quadratic or higher in ##\epsilon## and its derivatives should be ignored. That gets rid of the troublesome ##\ddot{\epsilon} \epsilon## term (and also gets rid of a similar troublesome term when ##\ddot{f} / f## is computed).

Ok, I think I see it now.
##f\approx 1-\epsilon(u)## gives ##\ddot{f} \approx -\ddot{\epsilon}##, and:
##\ddot{f}/f =-\ddot{\epsilon}/(1-\epsilon) \approx -\ddot{\epsilon}\,(1+\epsilon)##; adding this to ##\ddot{g}/g = \ddot{\epsilon}/(1+\epsilon) \approx \ddot{\epsilon}\,(1-\epsilon)##, the first-order ##\ddot{\epsilon}## terms cancel, and the leftover ##-2\epsilon\ddot{\epsilon}## is quadratic, so it is neglected.
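As a sanity check (my own, not in Schutz), one can verify symbolically that this choice of ##f## leaves only a second-order residual in equation (9.32), using a sample perturbation ##\epsilon(u)=a\sin u##; the amplitude symbol ##a## and this particular ##\epsilon## are illustrative assumptions:

```python
# Symbolic check that f = 1 - eps, g = 1 + eps leaves only a
# second-order residual in f''/f + g''/g = 0 (sample eps, not from the book).
import sympy as sp

u, a = sp.symbols('u a')
eps = a * sp.sin(u)          # sample small perturbation, amplitude a
g = 1 + eps
f = 1 - eps

residual = sp.diff(f, u, 2)/f + sp.diff(g, u, 2)/g
series = sp.series(residual, a, 0, 3).removeO()  # expand in the amplitude a
print(sp.simplify(series))   # a purely second-order expression in a
```

The first-order terms cancel exactly, which is the content of the approximation.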

I have another question.
In the book he guessed ##f## by knowing ##g##, but if I were given this ##g##, how would I arrive at ##f## without guessing that I need to expand the denominator of ##f## geometrically?

I mean I would have: ##\ddot{f}+\ddot{\epsilon}f=0##
then I would multiply by ##\dot{f}## and integrate, obtaining:
##\tfrac{1}{2}\dot{f}^2+\tfrac{1}{2}\int \ddot{\epsilon}\,\frac{d(f^2)}{du}\,du=E##, which is a difficult equation to solve, if it's even possible analytically.
It seems like a lucky guess and a lot of neglecting terms...
Not something my mathematical part of mathematicalphysicist will like... :-)
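For what it's worth, one can check numerically how good the "lucky guess" is (again my own sketch, not from the book): integrate ##\ddot{f}+\ddot{\epsilon}f=0## directly for a sample ##\epsilon(u)=a\sin u## and compare with ##f=1-\epsilon##; the sample ##\epsilon## and the interval are arbitrary choices of mine.

```python
# Numerical check: solve f'' + eps'' f = 0 for a sample eps(u) = a*sin(u)
# and compare with the approximate solution f = 1 - eps.
import numpy as np
from scipy.integrate import solve_ivp

a = 0.01                            # small amplitude, so eps stays "nearly 0"
eps = lambda u: a * np.sin(u)
epsdd = lambda u: -a * np.sin(u)    # second u-derivative of eps

def rhs(u, y):
    f, fdot = y
    return [fdot, -epsdd(u) * f]    # f'' = -eps'' f

# initial data matching f = 1 - eps at u = 0 (so f(0)=1, f'(0)=-a)
sol = solve_ivp(rhs, (0.0, 6.0), [1.0, -a], dense_output=True,
                rtol=1e-10, atol=1e-12)

u = np.linspace(0.0, 6.0, 200)
err = np.max(np.abs(sol.sol(u)[0] - (1.0 - eps(u))))
print(err)   # much smaller than the amplitude a itself
```

The deviation comes out at order ##a^2##, so the guess is accurate to exactly the order at which terms were neglected.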

Well, I can use a power-series ansatz in ##u##, i.e. ##f(u)=\sum_n a_n u^n## and ##\epsilon(u) = \sum_n b_n u^n##, differentiate both ##f## and ##\epsilon## twice, and plug them back into the ODE.

I'll get a recurrence relation between the ##a_n##'s and the ##b_n##'s.
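If it helps, here is a sketch of that recurrence (my own working, not from the book). Inserting the two series into ##\ddot{f}+\ddot{\epsilon}f=0## and matching the coefficient of ##u^n## gives ##(n+2)(n+1)\,a_{n+2} = -\sum_{k=0}^{n}(n-k+2)(n-k+1)\,b_{n-k+2}\,a_k##; the function name and the test case below are my own choices:

```python
# Recurrence from inserting f = sum a_n u^n, eps = sum b_n u^n
# into f'' + eps'' f = 0 and matching powers of u:
#   (n+2)(n+1) a_{n+2} = -sum_{k=0}^{n} (n-k+2)(n-k+1) b_{n-k+2} a_k

def series_coeffs(b, a0, a1, nmax):
    """Coefficients a_0..a_nmax of f, given eps-coefficients b_0..b_{nmax+2}."""
    a = [a0, a1]
    for n in range(nmax - 1):
        s = sum((n - k + 2) * (n - k + 1) * b[n - k + 2] * a[k]
                for k in range(n + 1))
        a.append(-s / ((n + 2) * (n + 1)))
    return a

# Check against a case with a known solution: eps = c*u**2 has eps'' = 2c,
# so the ODE becomes f'' + 2c f = 0, solved by f = cos(sqrt(2c) u).
c = 0.5                              # then sqrt(2c) = 1, i.e. f = cos(u)
b = [0.0, 0.0, c] + [0.0] * 10
a = series_coeffs(b, a0=1.0, a1=0.0, nmax=6)
print(a[:5])   # matches the Taylor coefficients of cos(u): 1, 0, -1/2, 0, 1/24
```

Note this solves the linearized ODE exactly; it agrees with ##f = 1-\epsilon## only up to the neglected quadratic terms.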

