The solution of a nonlinear equation in Schutz's book page 211 2nd edition

  • Context: Undergrad
  • Thread starter: MathematicalPhysicist
  • Tags: Book, Nonlinear
SUMMARY

The discussion focuses on solving a nonlinear equation as presented in Schutz's book, specifically on page 211 of the second edition. The key insight is that if the function ##g(u)## is approximated as ##g(u) \approx 1 + \epsilon(u)##, then the solution for ##f(u)## can be expressed as ##f(u) \approx 1 - \epsilon(u)##. Participants clarify that terms quadratic or higher in ##\epsilon## and its derivatives can be neglected, leading to the simplified relation ##\ddot{f} \approx -\ddot{\epsilon}##. The conversation also touches on the challenge of deriving ##f## from ##g## without prior knowledge of the functional forms.

PREREQUISITES
  • Understanding of nonlinear differential equations
  • Familiarity with Taylor series expansion
  • Knowledge of Fourier representation techniques
  • Proficiency in calculus, particularly second derivatives
NEXT STEPS
  • Study the method of power series for solving differential equations
  • Explore the application of Fourier series in solving nonlinear equations
  • Learn about perturbation methods in mathematical physics
  • Investigate the implications of neglecting higher-order terms in approximations
USEFUL FOR

Mathematical physicists, applied mathematicians, and students studying nonlinear dynamics will benefit from this discussion, particularly those interested in differential equations and approximation methods.

MathematicalPhysicist
TL;DR
On page 211 in equation (9.32) we have a nonlinear equation that ##f## and ##g## should satisfy, which is ##\ddot{f}/f+\ddot{g}/g=0##.
The suggested solution in the book doesn't make sense to me; can you help me understand it?
Continuing the summary: the author argues that if ##g## is nearly 1, i.e. ##g(u)\approx 1+\epsilon(u)##, one obtains the solution
##f(u)\approx 1-\epsilon(u)##.
The dots denote derivatives with respect to ##u##.

How, then, does one deduce the solution for ##f##?
If I plug ##g## back into the equation in the summary, I get:
$$\ddot{f}+\frac{\ddot{\epsilon}}{1+\epsilon(u)}\,f=0$$
I don't see how to continue from here; he talks about a Fourier representation, but I don't follow his reasoning.

Thanks!
 
Perhaps, because it's an approximation, we get ##\ddot{f}+\ddot{\epsilon}f\approx 0##, but I am not sure.
 
Since ##g \approx 1 + \epsilon##, then ##g^{-1} \approx 1 - \epsilon##. Substituting this in, you get ##\ddot f / f + \ddot \epsilon (1 - \epsilon) = 0##. Terms with ##\epsilon## and its derivatives should be ignored. You get ##\ddot f / f + \ddot \epsilon = 0##. It follows that ##f = 1 - \epsilon## solves this last equation.
 
kent davidge said:
Terms with ##\epsilon## and its derivatives should be ignored.

Not quite; as you state this, it would mean the only remaining terms would be ##\ddot{f} / f = 0##.

What you mean is that terms quadratic or higher in ##\epsilon## and its derivatives should be ignored. That gets rid of the troublesome ##\ddot{\epsilon} \epsilon## term (and also gets rid of a similar troublesome term when ##\ddot{f} / f## is computed).
 
Ok, I think I see it now.
##f\approx 1-\epsilon(u)## gives ##\ddot{f} \approx -\ddot{\epsilon}##, and:
##\ddot{f}/f =-\ddot{\epsilon}/(1-\epsilon) \approx -\ddot{\epsilon}\,(1+\epsilon)##; adding this to ##\ddot{\epsilon}/(1+\epsilon) \approx \ddot{\epsilon}\,(1-\epsilon)##, the ##\ddot{\epsilon}## terms cancel and the remaining ##\ddot{\epsilon}\,\epsilon## terms are second order, so they can be neglected.
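For a concrete sanity check (a minimal numerical sketch, not from the thread, assuming a sample perturbation ##\epsilon(u)=0.01\sin u##): with ##f=1-\epsilon## and ##g=1+\epsilon##, the residual ##\ddot f/f+\ddot g/g## should indeed come out second order in ##\epsilon##, roughly ##-2\ddot\epsilon\,\epsilon##.

import numpy as np

# Sanity check of the first-order cancellation (sketch, not from the thread):
# with eps(u) = a*sin(u), f = 1 - eps and g = 1 + eps, the residual
# f''/f + g''/g should be second order in eps (roughly -2*eps''*eps).
a = 0.01                                    # arbitrary small amplitude
u = np.linspace(0.0, 10.0, 2001)
du = u[1] - u[0]

eps = a * np.sin(u)
eps_dd = -a * np.sin(u)                     # exact second derivative of eps

f = 1.0 - eps                               # proposed first-order solution
g = 1.0 + eps                               # assumed form of g

f_dd = np.gradient(np.gradient(f, du), du)  # numerical second derivatives
g_dd = np.gradient(np.gradient(g, du), du)

residual = f_dd / f + g_dd / g              # left-hand side of (9.32)
print("max |residual|     :", np.abs(residual[2:-2]).max())
print("max |2*eps_dd*eps| :", np.abs(2.0 * eps_dd * eps).max())

Both printed numbers should be of order ##2\times 10^{-4}##, i.e. of size ##\epsilon^2##, confirming that the leftover term is exactly the neglected second-order piece.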
 
I have another question.
In the book he guessed ##f## by knowing ##g##, but if I were given this ##g##, how would I arrive at ##f## without guessing that I need to expand the denominator as a geometric series?

I mean, I would have ##\ddot{f}+\ddot{\epsilon}f=0##;
then I would multiply by ##\dot{f}## and integrate, getting:
##\dot{f}^2+\int \ddot{\epsilon}\,\dot{(f^2)}\,du=E## (with constants absorbed into ##E##), which is a difficult equation to solve, if it's even possible analytically.
It seems like a lucky guess and a lot of neglecting of terms...
Not something the mathematical part of MathematicalPhysicist will like... :-)
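For reference, the intermediate step behind that first integral is the standard energy-like manipulation (spelled out here as a sketch rather than quoted from the thread): multiplying ##\ddot f+\ddot\epsilon f=0## by ##\dot f## and using ##\dot f\ddot f=\tfrac12\frac{d}{du}(\dot f^2)## and ##f\dot f=\tfrac12\frac{d}{du}(f^2)## gives
$$\frac{1}{2}\dot{f}^2+\frac{1}{2}\int \ddot{\epsilon}\,\frac{d(f^2)}{du}\,du=E,$$
which reproduces the relation above after absorbing the factor of ##\tfrac12## into the constant ##E##. The remaining integral still contains the unknown ##f##, which is why this first integral does not close into an explicit solution in general.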
 
Well, I can use a power series ansatz in ##u##, i.e. ##f(u)=\sum_n a_n u^n## and ##\epsilon(u) = \sum_n b_n u^n##, differentiate both ##f## and ##\epsilon## twice, and plug them back into the ODE.

I'll get a recurrence relation between the ##a_n##'s and ##b_n##'s.
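Spelling that out (a sketch of the recurrence, assuming the first-order equation ##\ddot f+\ddot\epsilon f=0## discussed above): with ##\ddot f=\sum_{n\ge 0}(n+2)(n+1)\,a_{n+2}u^n## and ##\ddot\epsilon\, f=\sum_{n\ge 0}\big[\sum_{k=0}^{n}(k+2)(k+1)\,b_{k+2}\,a_{n-k}\big]u^n##, matching powers of ##u## gives
$$(n+2)(n+1)\,a_{n+2}=-\sum_{k=0}^{n}(k+2)(k+1)\,b_{k+2}\,a_{n-k},\qquad n\ge 0,$$
so once the ##b_n## (i.e. ##\epsilon##) and the two initial data ##a_0,a_1## are given, all higher ##a_n## follow.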
 
