Why Does the Iteration Method Give the Asymptotic Solution for x = \epsilon \log(1/x)?

  • Context: Graduate
  • Thread starter: rsq_a
  • Tags: Asymptotics

Discussion Overview

This discussion revolves around the asymptotic behavior of the equation x = ε log(1/x) as x approaches 0. Participants explore the validity of using the iteration method for finding solutions and the implications of initial guesses in this context.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant expresses concern that the initial guess x = ε is not a good approximation, arguing that it is "infinitely bad" as ε tends to 0.
  • Another participant suggests that a more appropriate starting point would be x = ε ln(1/ε) + f₁(ε), where f₁(ε) is asymptotically negligible compared to the leading order term.
  • A participant clarifies that the iterative method was used correctly with the formula xₙ₊₁ = ε log(1/xₙ) and acknowledges that while the initial guess works, the reasoning behind it is unclear.
  • Some participants propose that proving the iteration is contractive for a range of initial values could provide insights into the method's reliability.
  • Another participant suggests using a dominant balance argument to explore the leading order behavior, indicating that both extreme assumptions fail and hinting at a potential middle ground for a better guess.

Areas of Agreement / Disagreement

Participants generally disagree on the appropriateness of the initial guess x = ε and the method's application. Multiple competing views remain regarding the best approach to find the asymptotic solution.

Contextual Notes

Participants highlight the limitations of using simplistic power series expansions and the need for careful consideration of asymptotic behavior in their calculations. The discussion remains open-ended regarding the validity of initial assumptions and the iterative method's effectiveness.

rsq_a
(I don't believe this is classified as a 'homework question' since the solution is provided. Plus it's not really my homework. Apologies if I'm mistaken.)

This question involves the asymptotic behaviour of [itex]x = \epsilon \log(1/x)[/itex] as x tends to 0.

The lecturer provided a solution using the method of iteration. Basically,

As x tends to 0, the logarithm varies much more slowly than x. So the solution is going to be roughly at x = epsilon

Afterwards, it is shown that the iteration provides the leading-order behaviour

[tex]x \sim \epsilon\log(1/\epsilon)[/tex]

I found this to be deeply disturbing. The truth is that x is NOT "roughly" around x = epsilon. It's a whole factor of log(1/epsilon) away. I suppose this deals with the semantics of how you're using the word "roughly", but the fact is that your initial guess for the iterative method is not the correct leading order solution! In fact, as epsilon tends to 0, your first guess is INFINITELY bad!

Putting semantics aside, I'm trying to teach someone this question and motivate WHY we should be putting x = epsilon into the iterative method.

I can't find a reason.
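For concreteness, the lecturer's iteration is easy to sketch numerically. The following is an illustration only (the value of ε is arbitrary), showing that starting from the disputed guess x₀ = ε, the iterates settle down, and that the first step already lands on ε ln(1/ε):

```python
import math

def iterate(eps, x0, n_iter=50):
    """Fixed-point iteration x_{n+1} = eps * log(1/x_n)."""
    x = x0
    for _ in range(n_iter):
        x = eps * math.log(1.0 / x)
    return x

eps = 1e-6
x_star = iterate(eps, x0=eps)           # start from the disputed guess x0 = eps
leading = eps * math.log(1.0 / eps)     # proposed leading order: eps*ln(1/eps)
print(x_star, leading, x_star / leading)
```

Note that the very first iterate is x₁ = ε log(1/ε), exactly the claimed leading-order term; the remaining iterations only pick up lower-order corrections.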
 
Well, to insert x = epsilon is just plain false, for the reasons you mention yourself.

This CANNOT be captured by a simplistic power series expansion, which is what your lecturer effectively attempted.

Rather, we need a clever start!

That start happens to be to set [tex]x=\epsilon\ln(\frac{1}{\epsilon})+f_{1}(\epsilon)[/tex] where we must require that f is asymptotically negligible compared to the leading-order term.
If our calculations turn out wrong for that requirement, we made a wrong initial guess.

Let us see where this leads us:
[tex]\epsilon\ln(\frac{1}{\epsilon})+f_{1}(\epsilon)\sim{-}\epsilon\ln((\epsilon\ln(\frac{1}{\epsilon}))(1+\frac{f_{1}(\epsilon)}{\epsilon\ln(\frac{1}{\epsilon})}))\approx{-}\epsilon\ln\epsilon-\epsilon\ln(\ln(\frac{1}{\epsilon}))-\frac{f_{1}(\epsilon)}{\ln(\frac{1}{\epsilon})}[/tex]
Thus, we gain another asymptotic requirement on f (unless this is compatible with our first requirement, our initial hypothesis about leading order will have been proved wrong!):
[tex]f_{1}(\epsilon)\sim{-}\epsilon\ln(\ln(\frac{1}{\epsilon}))-\frac{f_{1}(\epsilon)}{\ln(\frac{1}{\epsilon})}[/tex]
which can be simplified to:
[tex]f_{1}(\epsilon)\sim(\epsilon\ln(\frac{1}{\epsilon}))(\frac{-\ln(\ln(\frac{1}{\epsilon}))}{1+\ln(\frac{1}{\epsilon})})[/tex]

The expression for f is, indeed, asymptotically smaller than our assumed leading-order term, so all is well. :smile:
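The two-term expansion above can be checked against a numerically computed root. This is a rough verification sketch (the choice of ε values is arbitrary, and the root-finder is just the same fixed-point map, which is contractive near the root):

```python
import math

def root(eps):
    """Numerically solve x = eps*log(1/x) by fixed-point iteration."""
    x = eps * math.log(1.0 / eps)
    for _ in range(200):
        x = eps * math.log(1.0 / x)
    return x

def two_term(eps):
    """Leading order eps*ln(1/eps) plus the f1 correction derived above."""
    L = math.log(1.0 / eps)
    return eps * L - eps * L * math.log(L) / (1.0 + L)

for eps in (1e-3, 1e-6, 1e-9):
    x = root(eps)
    lead_err = abs(eps * math.log(1.0 / eps) - x) / x
    two_err = abs(two_term(eps) - x) / x
    print(eps, lead_err, two_err)
```

The relative error of the two-term approximation is far smaller than that of the leading-order term alone, consistent with f₁ being a genuine correction.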
 
arildno said:
Well, to insert x=epsilon is just plain false, of reasons mentioned by yourself.

This CANNOT be represented by a simplistic power series expansion, which is what your lecturer did wrongly.

Hi, thanks for your reply. However...

You misunderstood my (badly written) explanation. The person who wrote up the answer did not use a power series expansion. The iterative method was used with,

[tex]x_{n+1} = \epsilon \log(1/x_n)[/tex]

and the initial guess [tex]x_0 = \epsilon[/tex].

This does work.

However, my qualm was with their use of [tex]x_0 = \epsilon[/tex]. It works, but the reason why eludes me.
 
Well, perhaps you might be able to prove the iteration is contractive for a wide range of initial values; I'm uncertain as to how to conduct that proof.
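One piece of evidence for contractivity is easy to obtain numerically: for g(x) = ε log(1/x) we have |g'(x)| = ε/x, so the map contracts wherever the iterates stay well above ε. The sketch below (ε chosen arbitrarily for illustration) tracks the local contraction factor along the orbit starting from x₀ = ε:

```python
import math

# For g(x) = eps*log(1/x), |g'(x)| = eps/x. Starting from x0 = eps, the
# factor is exactly 1 at the first step, but the very first iterate jumps
# to eps*ln(1/eps) >> eps, after which eps/x ~ 1/ln(1/eps) is small.
eps = 1e-6
x = eps
for n in range(6):
    print(n, x, eps / x)       # iterate and its local contraction factor
    x = eps * math.log(1.0 / x)
```

This is not a proof, but it suggests why the "infinitely bad" start is harmless: after one step the iteration is already in the contractive region.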
 
arildno said:
Well, perhaps you might be able to prove the iteration is contractive for a wide range of initial values; I'm uncertain as to how to conduct that proof.

Is there any way to 'see', a priori, that the leading order term is going to scale like [tex]\epsilon \log(1/\epsilon)[/tex]?
 
Well, we can make a dominant balance argument:
1. Assume that x ~ ε as ε goes to zero.
Then the equation would tell us that [tex]\epsilon\sim\epsilon\ln(\frac{1}{\epsilon})[/tex]
which is totally false: the right-hand side dominates the left-hand side as ε goes to zero.

2. Assume that x ~ ln(1/ε) as ε goes to zero. In this case the left-hand side would dominate completely, since the right-hand side ε ln(1/ln(1/ε)) tends to zero.

Thus, having established two wrong choices, each failing on a different side, we might ask whether something "in between" those two pitfalls would work.

The product of the two behaviours, [tex]x\sim\epsilon\ln(\frac{1}{\epsilon})[/tex], is the natural third guess, and it happens to balance both sides.
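The three trial balances can be compared side by side. A minimal numerical sketch (ε arbitrary) plugs each scaling into the two sides of x = ε log(1/x):

```python
import math

def sides(eps, x):
    """Left and right sides of x = eps*log(1/x) for a trial scaling x(eps)."""
    return x, eps * math.log(1.0 / x)

eps = 1e-8
for label, x in [("x ~ eps",            eps),
                 ("x ~ log(1/eps)",     math.log(1.0 / eps)),
                 ("x ~ eps*log(1/eps)", eps * math.log(1.0 / eps))]:
    lhs, rhs = sides(eps, x)
    print(f"{label:20s} lhs={lhs:.3e} rhs={rhs:.3e}")
```

Only the product scaling leaves the two sides at the same order of magnitude, which is exactly the dominant-balance criterion.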
 
