Recently my dad wrote a computer program that finds an approximate solution to the "crossed-ladders problem" for input ladder lengths and height of the ladder intersection. It uses what I just learned is called "fixed point iteration" to find the square of the width (approximately), and then after about ten cycles returns the square root of the result.

We are both impressed with how efficiently it finds so precise an answer, but neither of us completely understands why it works, especially since if one takes a random polynomial function and attempts to find a solution by isolating the exponent=1 term, making an initial guess, plugging it into the other side, and iterating, the result almost always diverges.

So my question is: under what circumstances will fixed point iteration converge? I've played around with a few sample functions, and right now I have the impression that it always works if the absolute value of the slope of the function (with the exponent=1 term removed and its coefficient made into unity by dividing both sides as necessary) never gets bigger than one. But I don't know if that's in fact true.

Borek
Mentor
Square root? I guess you are using

$$x_{n+1} = \frac 1 2 (x_n + \frac a {x_n})$$

and $x_n$ is an approximation of the square root of $a$?

As far as I remember this is a variant of the Newton method, and a fixed point iteration of this kind converges only if the absolute value of the derivative of the function between the starting point and the solution is smaller than 1. Or something close to that; it could be that it depends on the side from which you approach the solution, my imagination went on strike.
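For concreteness, here is a quick Python sketch of that square-root iteration (purely illustrative; this is not the actual program being discussed):

```python
def babylonian_sqrt(a, x0=1.0, iterations=10):
    """Approximate sqrt(a) by iterating x_{n+1} = (x_n + a/x_n) / 2.

    This is Newton's method applied to f(x) = x^2 - a, so it
    converges very quickly for any positive starting guess.
    """
    x = x0
    for _ in range(iterations):
        x = 0.5 * (x + a / x)
    return x

print(babylonian_sqrt(2.0))  # converges rapidly toward 1.41421356...
```

Ten iterations is far more than enough here: Newton's method roughly doubles the number of correct digits per step.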

Unfortunately, LaTeX is no longer displaying correctly on this old computer of mine, so I'll have to just type the characters and hope they appear correctly.

The program starts with the relationship

$$\frac {1}{h} = \frac {1}{\sqrt{l^2 - w^2}} + \frac {1}{\sqrt{s^2 - w^2}}$$

where h is the intersection height, l and s are the lengths of the long and short ladders respectively, and w is the width between the buildings.

It rearranges this to

$$w^2 = s^2 - ( \frac {1}{h} - \frac {1}{\sqrt{l^2 - w^2}} ) ^ {-2}$$

Then it starts with a guess value for $w^2$, puts that into the right side of the equation to produce a new $w^2$, plugs that back into the $w^2$ on the right, and so on. After about ten cycles it takes the square root of the last value of $w^2$ it found, and that's the approximate value of $w$.

Since this is of the form $a_{n+1} = f(a_n)$ (where $w^2$ plays the role of $a$), it's an example of fixed point iteration as I understand it.
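For reference, that loop can be sketched in a few lines of Python. This is my own reconstruction, not the actual program, and the test values below (l = 40, s = 30, h = 10, a standard instance of the problem) and the initial guess are my own choices:

```python
import math

def crossed_ladders_width(l, s, h, iterations=10):
    """Fixed point iteration for w^2 in the crossed-ladders problem.

    Iterates  w2 <- s^2 - (1/h - 1/sqrt(l^2 - w2))^(-2)
    and then returns w = sqrt(w2).
    """
    w2 = 1.0  # arbitrary starting guess for w^2 (my assumption)
    for _ in range(iterations):
        w2 = s ** 2 - (1.0 / h - 1.0 / math.sqrt(l ** 2 - w2)) ** (-2)
    return math.sqrt(w2)

# The classic instance l=40, s=30, h=10 has width w ≈ 26.0329.
print(crossed_ladders_width(40, 30, 10))
```

With these inputs the iterates settle down quickly; the error shrinks by roughly a constant factor each cycle, which is why about ten iterations already give an answer good to many digits.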


Fixed point iteration will converge provided the function involved decreases distances, i.e. it is a contraction. More precisely, the condition is that

$$|f(x) - f(y)| < k \; |x - y|$$ for all x, y
for some $$k < 1$$. (This is the content of the Banach fixed point theorem, which also guarantees that the fixed point is unique.)

Another test that guarantees convergence is that $$|f'(x)| < k$$ for some $$k < 1$$ on an interval containing the fixed point and your iterates; by the mean value theorem, this implies the contraction condition above.

Thanks awkward; that's about what I suspected.