(I don't believe this is classified as a 'homework question' since the solution is provided. Plus it's not really my homework. Apologies if I'm mistaken.)

This question involves the asymptotic behaviour of [itex]x = \epsilon \log(1/x)[/itex] as x tends to 0.

The lecturer provided a solution using the method of iteration: basically, one iterates

[tex]x_{n+1} = \epsilon \log(1/x_n)[/tex]

starting from the initial guess [tex]x_0 = \epsilon[/tex]. Afterwards, it is shown that the iteration produces the leading order behaviour of

[tex]x \sim \epsilon\log(1/\epsilon)[/tex]

I found this to be deeply disturbing. The truth is that x is NOT "roughly" x = epsilon; it is a whole factor of log(1/epsilon) away. I suppose this comes down to the semantics of the word "roughly", but the fact is that the initial guess for the iterative method is not the correct leading order solution! In fact, as epsilon tends to 0, the ratio of the true solution to the first guess diverges, so the first guess is INFINITELY bad!

Putting semantics aside, I'm trying to teach someone this question and motivate WHY we should be putting x = epsilon into the iterative method.

Well, to insert x = epsilon as the leading order is just plain false, for the reasons you mention yourself.

This solution CANNOT be represented by a simplistic power series expansion in epsilon, which is what your lecturer wrongly attempted.

Rather, we need a clever start!

That start happens to be to set [tex]x=\epsilon\ln(\frac{1}{\epsilon})+f_{1}(\epsilon)[/tex], where we must require that f_1 be asymptotically negligible compared to the leading order term.
If our calculations turn out to violate that requirement, we made a wrong initial guess.

Let us see where this leads us:
[tex]\epsilon\ln(\frac{1}{\epsilon})+f_{1}(\epsilon)\sim{-}\epsilon\ln((\epsilon\ln(\frac{1}{\epsilon}))(1+\frac{f_{1}(\epsilon)}{\epsilon\ln(\frac{1}{\epsilon})}))\approx{-}\epsilon\ln\epsilon-\epsilon\ln(\ln(\frac{1}{\epsilon}))-\frac{f_{1}(\epsilon)}{\ln(\frac{1}{\epsilon})}[/tex]
Thus, we gain another asymptotic requirement on f_1 (if this turns out to be incompatible with our first requirement, our initial hypothesis about the leading order will have been proved wrong!):
[tex]f_{1}(\epsilon)\sim{-}\epsilon\ln(\ln(\frac{1}{\epsilon}))-\frac{f_{1}(\epsilon)}{\ln(\frac{1}{\epsilon})}[/tex]
which can be simplified to:
[tex]f_{1}(\epsilon)\sim(\epsilon\ln(\frac{1}{\epsilon}))\left(\frac{-\ln(\ln(\frac{1}{\epsilon}))}{1+\ln(\frac{1}{\epsilon})}\right)[/tex]
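(The simplification is just a rearrangement: collect the f_1 terms on the left and then multiply numerator and denominator by [itex]\ln(\frac{1}{\epsilon})[/itex],

[tex]f_{1}(\epsilon)\left(1+\frac{1}{\ln(\frac{1}{\epsilon})}\right)\sim{-}\epsilon\ln(\ln(\frac{1}{\epsilon}))\quad\Rightarrow\quad f_{1}(\epsilon)\sim\frac{-\epsilon\ln(\frac{1}{\epsilon})\ln(\ln(\frac{1}{\epsilon}))}{1+\ln(\frac{1}{\epsilon})}[/tex]

which is the expression above.)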

The expression for f_1 is, indeed, asymptotically smaller than our assumed leading order term, so all is well.
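This can also be checked numerically. Here is a quick sketch of my own (the helper solve_x and the tolerance are assumptions, not part of the argument): solve x = ε log(1/x) by fixed-point iteration and compare the error of the leading term against the predicted f_1:

```python
import math

def solve_x(eps, tol=1e-15):
    """Solve x = eps*log(1/x) by fixed-point iteration (assumed helper)."""
    x = eps * math.log(1.0 / eps)        # start at the leading-order guess
    for _ in range(200):
        x_new = eps * math.log(1.0 / x)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

for eps in [1e-3, 1e-6, 1e-9]:
    L = math.log(1.0 / eps)
    x = solve_x(eps)
    lead = eps * L                           # eps*log(1/eps)
    f1 = -eps * L * math.log(L) / (1.0 + L)  # the correction derived above
    # (x - lead)/f1 should approach 1 as eps -> 0
    print(eps, x / lead, (x - lead) / f1)
```

The ratio (x - lead)/f1 creeps toward 1 as ε shrinks, which is consistent with f_1 capturing the next order.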

You misunderstood my (badly written) explanation. The person who wrote up the answer did not use a power series expansion. The iterative method was used with,

[tex]x_{n+1} = \epsilon \log(1/x_n)[/tex]

and the initial guess [tex]x_0 = \epsilon[/tex].

This does work.

However, my qualm was with their use of [tex]x_0 = \epsilon[/tex]. It works, but the reason why eludes me.
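For what it's worth, here is a numerical sketch of my own (not from the write-up) of the iteration with that initial guess. Note that the very first step already maps ε to exactly ε log(1/ε): the flatness of the logarithm repairs the "infinitely bad" guess in a single iteration.

```python
import math

eps = 1e-8
lead = eps * math.log(1.0 / eps)   # the claimed leading order, eps*log(1/eps)
xs = [eps]                         # the lecturer's initial guess x0 = eps
for n in range(8):
    xs.append(eps * math.log(1.0 / xs[-1]))

for n, x in enumerate(xs):
    print(n, x, x / lead)          # ratio to eps*log(1/eps); x1/lead is exactly 1
```

After the first step the iterates settle quickly to the true root, which sits a bit below ε log(1/ε).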

Well, perhaps you could prove that the iteration is contractive for a wide range of initial values; I'm not sure how to carry out that proof, though.
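One way to make the contraction plausible without a full proof (this is my own back-of-envelope check, not a rigorous argument): the iteration map g(x) = ε log(1/x) has g'(x) = -ε/x, so near the fixed point x* ≈ ε log(1/ε) the derivative magnitude is roughly 1/log(1/ε), which is small for small ε:

```python
import math

eps = 1e-6
# Find the fixed point of g(x) = eps*log(1/x) by iterating from the leading order
x = eps * math.log(1.0 / eps)
for _ in range(100):
    x = eps * math.log(1.0 / x)

# |g'(x*)| = eps/x*; a value well below 1 suggests a local contraction
print(eps / x, 1.0 / math.log(1.0 / eps))
```

Both numbers are far below 1, so at least locally the iteration should contract; a proof over a wide basin of initial values is another matter.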

Well, we can make a dominant balance argument:
1. Assume that x ~ ε as ε goes to zero.
Then the equation would tell us that [tex]\epsilon\sim\epsilon\ln(\frac{1}{\epsilon})[/tex],
which is totally false: the right-hand side dominates the left-hand side.

2. Assume that x ~ log(1/ε) as ε goes to zero. In this case the left-hand side would dominate completely, since the right-hand side is then only of order ε log(log(1/ε)).

Thus, having established two wrong choices, each failing on a different side, we might speculate whether we should try something "in between" those two pitfalls.

The product of the two behaviours, x ~ ε log(1/ε), is a natural third guess, and it happens to work.
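The three balances can be checked numerically (a sketch of mine, not part of the argument above): plug each candidate guess x_g into the ratio ε log(1/x_g)/x_g; a consistent balance should give a ratio tending to 1.

```python
import math

# Ratio eps*log(1/x_g)/x_g for each candidate guess x_g:
#  - x ~ eps           : ratio blows up (RHS dominates)
#  - x ~ log(1/eps)    : ratio collapses to 0 (LHS dominates)
#  - x ~ eps*log(1/eps): ratio tends to 1 (consistent balance)
for eps in [1e-4, 1e-8, 1e-12]:
    L = math.log(1.0 / eps)
    for name, xg in [("eps", eps), ("log(1/eps)", L), ("eps*log(1/eps)", eps * L)]:
        print(eps, name, eps * math.log(1.0 / xg) / xg)
```

Only the third candidate keeps both sides of the equation in balance as ε shrinks.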