Simple Fixed Point Iteration for Root-solving


Discussion Overview

The discussion revolves around the concept of simple fixed point iteration for root-solving, specifically examining why this iterative method converges to the root of a function. Participants explore the mechanics of the method, its theoretical underpinnings, and the conditions necessary for convergence.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant questions the rationale behind the convergence of fixed point iteration, expressing confusion about why the method seems arbitrary yet effective.
  • Another participant explains that each iteration brings the estimate closer to the root, suggesting that even inaccurate initial guesses can lead to convergence, albeit more slowly.
  • A third participant provides a link to external resources for understanding the arithmetic behind root-solving algorithms.
  • One participant proposes an approximate formula for small perturbations around a known value, inviting others to consider solving for small changes.
  • Another participant warns that convergence is not guaranteed for all initial estimates, citing an example where a poor choice leads to divergence, emphasizing the need for the initial guess to be sufficiently close to the actual root.
  • A later reply mentions the requirement that the derivative of the function must be less than 1 around the root and initial guess for the fixed point method to converge.
  • One participant shares a personal anecdote about discovering fixed point iteration through repeated square root calculations on a calculator.

Areas of Agreement / Disagreement

Participants express differing views on the conditions necessary for convergence, with some emphasizing the importance of a close initial guess while others discuss broader theoretical aspects. The discussion remains unresolved regarding the general applicability of fixed point iteration and the specific conditions under which it converges.

Contextual Notes

Participants mention various mathematical concepts such as contraction mappings and the Banach fixed point theorem, indicating that the discussion involves advanced mathematical reasoning. There are also references to specific functions and their behavior under iteration, which may depend on the choice of initial estimates.

Noesis
I just want to know why in the world this works?

I am speaking about the simple iteration of taking a function f(x), setting it to 0, f(x) = 0, rearranging the equation into the form x = g(x)...and then iterating.

For example, take the function f(x) = x^2 + 2x - 1

Setting it to 0 and solving for x: x = (1 - 2x)/x

and then you pick a 'guess' and iterate.

Why in the world does the iteration narrow down onto the root? It seems so arbitrary. We just keep plugging in the values, keep iterating on, and it just converges on the root.

Why oh why is this behavior so? Btw, I promise to learn LaTeX soon..so it won't look as funky next time.
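The question above can be made concrete with a short Python sketch (not from the thread; the function g and starting guess are just the example values used in this post). Note that from this starting guess the iteration actually settles on the negative root, -1 - sqrt(2), a point that later replies in the thread help explain.

```python
import math

def g(x):
    # Rearranged form of x^2 + 2x - 1 = 0:  x = (1 - 2x) / x
    return (1.0 - 2.0 * x) / x

x = 1.0  # initial guess
for _ in range(50):
    x = g(x)

print(x)                      # -2.4142135...
print(-1.0 - math.sqrt(2.0))  # the negative root of x^2 + 2x - 1
```

Trying other starting guesses (or other rearrangements of the same equation) changes which root, if any, the iteration finds.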
 
Because each iteration gets you a bit closer to the root. Even if your guess is inaccurate, the iteration will still work, it will just take longer to converge. The iteration narrows down onto the root because the algorithm is designed to get closer to the root.

E.g. Newton's method for square roots:

[tex]E=\frac{\frac{N}{E_0} + E_0}{2}[/tex] Where E is the estimate.

The simple theory behind it is: when we divide a number by its square root, we get the root again. When we divide it by a number bigger than its root, the result is a number smaller than its root. When we divide it by a number smaller than its root, the result is a number bigger than its root.

So the idea is: say E is less than the root. We divided the number by something less than the root, so we get a number bigger than the root. By averaging the one that is bigger and the one that is smaller, we get a new approximation, which can be iterated again and again.

It doesn't matter how bad the initial approximation is; eventually it will converge, because the averaging is designed to squeeze the estimate between a value above the root and a value below it.
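The averaging scheme described above can be written directly in Python (a minimal sketch; the function name and iteration count are my own choices, not from the thread):

```python
def sqrt_estimate(n, e0=1.0, iterations=20):
    # Repeatedly average the estimate with n / estimate,
    # i.e. the update E = (N/E + E) / 2 described above.
    e = e0
    for _ in range(iterations):
        e = (n / e + e) / 2.0
    return e

print(sqrt_estimate(2.0))  # 1.4142135...
```

Even a poor starting estimate like e0 = 100.0 converges quickly, since each average lands between the over- and under-estimate.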
 
If you want to know the arithmetical reasons for all those algorithms to converge to a value, then take a look at:

http://mipagina.cantv.net/arithmetic/rmdef.htm

There you will find the full story about root solving.

Regards,

Ing. Domingo Gomez
 
Suppose x is small; do you know an approximate formula for f(a+x)? If you set f(a+x) ~ 0, can you solve for x?

(this may be easier if you forget that you know exactly what f is)
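Spelling out where that hint leads (this step is not stated explicitly in the thread): for small x, the first-order approximation is

[tex]f(a+x) \approx f(a) + f'(a)\,x[/tex]

Setting the right-hand side to 0 and solving gives [itex]x \approx -f(a)/f'(a)[/itex], so the improved estimate is [itex]a - f(a)/f'(a)[/itex], which is exactly one step of Newton's method.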
 
It doesn't necessarily. For example, if you pick 1/2 as your "estimate", the calculation "blows up" almost immediately. For this to work, your "estimate" must be relatively close to the solution: here that is [itex]-1+\sqrt{2}[/itex], and your initial guess must be larger than 1.

Notice that
[tex]\frac{1-2x}{x}-\frac{1-2y}{y}= \frac{y-2xy-x+2xy}{xy}= \frac{y-x}{xy}[/tex]
As long as xy > 1 (which is certainly true if both x and y are larger than 1), that will be smaller in magnitude than y - x. That's a "contraction" map: f(x) is closer to f(y) than x is to y. The "Banach fixed point theorem" shows that the sequence obtained by iterating a contraction map converges to the fixed point of the map.
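The algebraic identity above is easy to spot-check numerically (a quick sketch, not from the thread; the sample points are arbitrary values larger than 1):

```python
def g(x):
    # The iteration function from earlier in the thread: g(x) = (1 - 2x) / x
    return (1.0 - 2.0 * x) / x

# Verify g(x) - g(y) = (y - x) / (x * y), and that for x, y > 1
# the gap |g(x) - g(y)| is strictly smaller than |x - y|.
for x, y in [(1.5, 2.0), (2.0, 5.0), (1.1, 1.2)]:
    print(x, y, abs(g(x) - g(y)), abs(y - x))
```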
 
IIRC you must have |g'(x)| < 1 around your root and initial guess for convergence of the fixed point method, where g is the iteration function in x = g(x).

I "discovered" fixed point iteration by repeatly taking the square root on one of my first calculators.
 
