Why Does Iteration Method Give Asymptotic Solution for x = \epsilon \log(1/x)?

  • Thread starter: rsq_a
  • Tags: Asymptotics
rsq_a
(I don't believe this is classified as a 'homework question' since the solution is provided. Plus it's not really my homework. Apologies if I'm mistaken.)

This question involves the asymptotic behaviour of x = \epsilon \log(1/x) as x tends to 0.

The lecturer provided a solution using the method of iteration. Basically,

As x tends to 0, the logarithm varies much more slowly than x. So the solution is going to be roughly at x = epsilon

Afterwards, it is shown that the iteration provides the leading-order behaviour of

x \sim \epsilon\log(1/\epsilon)

I found this to be deeply disturbing. The truth is that x is NOT "roughly" around x = epsilon. It's a whole factor of log(1/epsilon) away. I suppose this deals with the semantics of how you're using the word "roughly", but the fact is that your initial guess for the iterative method is not the correct leading order solution! In fact, as epsilon tends to 0, your first guess is INFINITELY bad!

Putting semantics aside, I'm trying to teach someone this question and motivate WHY we should be putting x = epsilon into the iterative method.

I can't find a reason.
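For what it's worth, here is a quick numerical check (my own sketch, not from the notes; the helper solve is hypothetical) that the root really is a factor of \log(1/\epsilon) away from \epsilon:

```python
import math

def solve(eps, lo=1e-300, hi=1.0, iters=200):
    # Root of F(x) = x - eps*log(1/x); F is increasing on (0, 1),
    # negative near 0 and positive at 1, so bisection applies.
    f = lambda x: x - eps * math.log(1.0 / x)
    for _ in range(iters):
        mid = math.sqrt(lo * hi)  # geometric bisection suits tiny roots
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return math.sqrt(lo * hi)

for eps in (1e-2, 1e-4, 1e-8):
    x = solve(eps)
    # x/eps grows without bound; x/(eps*log(1/eps)) approaches 1 (slowly)
    print(eps, x / eps, x / (eps * math.log(1.0 / eps)))
```

The first ratio diverges as \epsilon\to 0, which is exactly the "infinitely bad" complaint; the second creeps towards 1.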
 
Well, to insert x=\epsilon is just plain false, for the reasons you mention yourself.

This CANNOT be represented by a simplistic power series expansion, which is what your lecturer wrongly did.

Rather, we need a clever start!

That start happens to be to set x=\epsilon\ln(\frac{1}{\epsilon})+f_{1}(\epsilon), where we must require that f_{1} is asymptotically negligible compared to the leading-order term.
If our calculations turn out to contradict that requirement, we made a wrong initial guess.

Let us see where this leads us (using \ln(1+u)\approx u in the last step):
\epsilon\ln(\tfrac{1}{\epsilon})+f_{1}(\epsilon)\sim{-}\epsilon\ln\left((\epsilon\ln(\tfrac{1}{\epsilon}))\left(1+\frac{f_{1}(\epsilon)}{\epsilon\ln(1/\epsilon)}\right)\right)\approx{-}\epsilon\ln\epsilon-\epsilon\ln\left(\ln(\tfrac{1}{\epsilon})\right)-\frac{f_{1}(\epsilon)}{\ln(1/\epsilon)}
Thus, we gain another asymptotic requirement on f (unless this is compatible with our first requirement, our initial hypothesis about leading order will have been proved wrong!):
f_{1}(\epsilon)\sim{-}\epsilon\ln(\ln(\frac{1}{\epsilon}))-\frac{f_{1}(\epsilon)}{\ln(\frac{1}{\epsilon})}
which can be simplified to:
f_{1}(\epsilon)\sim(\epsilon\ln(\tfrac{1}{\epsilon}))\left(\frac{-\ln(\ln(\frac{1}{\epsilon}))}{1+\ln(\frac{1}{\epsilon})}\right)

The expression for f_{1} is, indeed, asymptotically smaller than our assumed leading-order term, so all is well. :smile:
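A quick numerical sanity check of this two-term result (my own sketch; the fixed-point iteration is used only to produce an accurate root, and root is a hypothetical helper):

```python
import math

def root(eps, n=400):
    # Fixed-point iteration x_{k+1} = eps*log(1/x_k), started at the
    # leading-order term; it converges since the map contracts there.
    x = eps * math.log(1.0 / eps)
    for _ in range(n):
        x = eps * math.log(1.0 / x)
    return x

for eps in (1e-3, 1e-6, 1e-9):
    L = math.log(1.0 / eps)
    x = root(eps)
    one_term = eps * L
    two_term = eps * L - eps * math.log(L)  # leading part of f_1 above
    print(eps, abs(x - one_term) / x, abs(x - two_term) / x)
```

The relative error of the two-term approximation is much smaller than that of \epsilon\ln(1/\epsilon) alone, as the analysis predicts.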
 
arildno said:
Well, to insert x=\epsilon is just plain false, for the reasons you mention yourself.

This CANNOT be represented by a simplistic power series expansion, which is what your lecturer wrongly did.

Hi, thanks for your reply. However...

You misunderstood my (badly written) explanation. The person who wrote up the answer did not use a power series expansion. The iterative method was used with,

x_{n+1} = \epsilon \log(1/x_n)

and the initial guess x_0 = \epsilon.

This does work.

However, my qualm was with their use of x_0 = \epsilon. It works, but the reason why eludes me.
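In case it helps, here is a small numerical experiment (my own sketch, not from the write-up) showing that the limit of the iteration is quite insensitive to x_0, which is presumably why the "infinitely bad" first guess does no lasting harm:

```python
import math

eps = 1e-6

def iterate(x0, n=100):
    # x_{n+1} = eps*log(1/x_n), as in the write-up
    x = x0
    for _ in range(n):
        x = eps * math.log(1.0 / x)
    return x

# Wildly different starting points all land on the same fixed point:
for x0 in (eps, 0.5, eps ** 2):
    print(x0, iterate(x0))
```

So the choice x_0=\epsilon seems to matter only in that it is a convenient point inside the basin of attraction, not because it is asymptotically close to the answer.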
 
Well, perhaps you might be able to prove the iteration is contractive for a wide range of initial values; I'm uncertain as to how to conduct that proof.
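For what it's worth, a heuristic version of that contraction claim (a sketch, not a proof): the iteration map is g(x)=\epsilon\ln(\frac{1}{x}), so
g'(x)={-}\frac{\epsilon}{x}
and at the fixed point x^{*}\sim\epsilon\ln(\frac{1}{\epsilon}) this gives
|g'(x^{*})|\sim\frac{1}{\ln(\frac{1}{\epsilon})}<1
for small \epsilon. So once an iterate is within a constant factor of x^{*}, the map contracts, with a contraction factor that actually improves as \epsilon\to 0.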
 
arildno said:
Well, perhaps you might be able to prove the iteration is contractive for a wide range of initial values; I'm uncertain as to how to conduct that proof.

Is there any way to 'see', a priori, that the leading order term is going to scale like \epsilon \log(1/\epsilon)?
 
Well, we can make a dominant balance argument:
1. Assume that x\sim\epsilon as \epsilon\to 0.
Then the equation would tell us that \epsilon\sim\epsilon\ln(\frac{1}{\epsilon}),
which is false: the right-hand side dominates the left-hand side.

2. Assume that x\sim\ln(\frac{1}{\epsilon}) as \epsilon\to 0. In this case, the left-hand side would dominate completely.

Thus, having established two wrong choices, each failing in a different direction, we might speculate that we should try something "in between" those two pitfalls.

The product of the two behaviours might be our natural third guess, which happens to work.
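The three trial scalings can also be checked numerically (my own sketch; sides is a hypothetical helper that evaluates the two sides of the equation at a trial value of x):

```python
import math

def sides(eps, x):
    # Left- and right-hand sides of x = eps*log(1/x) at a trial scaling x
    return x, eps * math.log(1.0 / x)

eps = 1e-8
for label, x in [("x ~ eps", eps),
                 ("x ~ log(1/eps)", math.log(1.0 / eps)),
                 ("x ~ eps*log(1/eps)", eps * math.log(1.0 / eps))]:
    lhs, rhs = sides(eps, x)
    # Guess 2 even makes the RHS negative, since x > 1 there;
    # the imbalance is all the more severe.
    print(label, lhs, rhs)
```

Only the product scaling gives two sides of comparable size, which is the dominant-balance argument in numbers.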
 