Numerical method: iteration to converge on a root

In summary, fixed point iteration can be a useful tool for finding roots of equations, but it does not always work. A solution must exist, and the absolute value of the derivative of the iterating function must be less than 1 near the solution, for the iteration to converge. It is called "fixed point iteration" because a point at which x=g(x) is a "fixed point" of the function g(x).
  • #1
thisischris
Hello!

I'm having trouble understanding the method and reasoning behind finding the root of an equation through iterative convergence.

x² − 4x + 1 = 0
x² = 4x − 1
x = 4 − 1/x

I can understand that once we input a 'root', the equation will be equal on both sides (due to the remainder theorem). However, I can't grasp why it converges to a root in all cases...

I also can't seem to understand how, if we set x as 3, we then get x (or x0) as 3.667...
x = 4 − 1/(3) = 3.667
I obviously understand the equation; it just doesn't quite fit how x is equal to 2 different values, neither of which is a root?

How would it account for an equation that has a 'higher' y value for x0, which 'goes past' the root? (I've drawn a blue line over the graph.)
 

Attachments

  • iteration.png
  • #2
Are you familiar with the Newton–Raphson method, in particular the form which involves the derivative of a function?
I don't think the iterative process needs to converge to a root in all cases. It depends on the value chosen for the initial x. (E.g. if you take the initial x to be close to a stationary point, x(1) will be given by drawing a tangent line at the stationary point, and thus it might take you further from your initial approximation.)
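As a side note, the Newton–Raphson step for the quadratic in post #1 can be sketched like this (a minimal illustration, not from the thread; the function name `newton` and step count are choices made here):

```python
# Newton-Raphson iteration for f(x) = x^2 - 4x + 1 (the quadratic from post #1).
# Each step replaces x with x - f(x)/f'(x), where f'(x) = 2x - 4.

def newton(x, steps=20):
    for _ in range(steps):
        x = x - (x * x - 4 * x + 1) / (2 * x - 4)
    return x

print(newton(3.0))  # approaches the root 2 + sqrt(3) ~ 3.7320508
```

Note that starting exactly at the stationary point x = 2 makes f'(x) = 0 and the step divides by zero, which mirrors the warning above about initial values near a stationary point.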
 
  • #3
thisischris said:
Hello!

I'm having trouble understanding the method and reasoning behind finding the root of an equation through iterative convergence.

x² − 4x + 1 = 0
x² = 4x − 1
x = 4 − 1/x

I can understand that once we input a 'root', the equation will be equal on both sides (due to the remainder theorem). However, I can't grasp why it converges to a root in all cases...
Good! You shouldn't grasp that, because it isn't true. IF the sequence converges THEN it must converge to a solution of the equation. But there will always exist initial values for which the sequence does not converge.

I also can't seem to understand how, if we set x as 3, we then get x (or x0) as 3.667...
x = 4 − 1/(3) = 3.667
I obviously understand the equation; it just doesn't quite fit how x is equal to 2 different values, neither of which is a root?
They are not. What you should have is [itex]x_{n+1}= 4- 1/x_n[/itex] where [itex]x_n[/itex] and [itex]x_{n+1}[/itex] are different numbers. They are successive terms in the sequence.

How would it account for an equation that has a 'higher' y value for x0, which 'goes past' the root? (I've drawn a blue line over the graph.)
I don't know what you mean by that.
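To make the "successive terms" point concrete, here is a small sketch (illustrative Python, not from the thread; `iterate` is a name chosen here) that generates the sequence [itex]x_{n+1}= 4- 1/x_n[/itex] starting from x₀ = 3:

```python
# Fixed-point iteration x_{n+1} = 4 - 1/x_n starting from x_0 = 3.
# 3 and 3.667 are *different terms of the sequence*, not two values of one x.

def iterate(x0, n):
    xs = [x0]
    for _ in range(n):
        xs.append(4 - 1 / xs[-1])
    return xs

seq = iterate(3.0, 10)
print(seq[:4])  # 3.0, then ~3.667, ~3.727, ~3.732, ...
```

The terms approach 2 + √3 ≈ 3.732, one of the two roots of x² − 4x + 1.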
 
  • #4
HallsofIvy said:
I don't know what you mean by that.

Thank you for your help!

Let's say the function (the iterative function?) equals x(n+1) = x² + 9999999999999999,
which translates to y = x² + 9999999999999999.
If we assume x is a root and hence intersects the y = x line, let's say there is a root at x = 3... If we input 2.5/2.6/2.8/2.99999999, however, we would get a very large y value, and if we input that very large y value back into the iterative function, it would go 'way past' the root (3 in this case).

Obviously the root isn't 3 in this example, but I'm only asking if this could 'occur'; I'm not too sure how I could 'find' an actual equation that replicates this behavior, if that makes sense.
 
  • #5
You have chosen an example which clearly does not have a real root! It is impossible for your sequence to converge to a non-existent root!
 
  • #6
HallsofIvy said:
You have chosen an example which clearly does not have a real root! It is impossible for your sequence to converge to a non-existent root!

I do realize that. I'm guessing, though, that I'm thinking of something that doesn't exist.

Thank you for everyone's help!
 
  • #7
thisischris said:
I can understand that once we input a 'root', the equation will be equal on both sides (due to the remainder theorem). However, I can't grasp why it converges to a root in all cases...
What you have done is to re-write the expression [itex]f(x)=0[/itex] with [itex]f(x)=x^2-4x+1[/itex] into the form [itex]x=g(x)[/itex] with [itex]g(x)=4-1/x[/itex] and then used this latter expression as an iterative method [itex]x_{n+1} = g(x_n)[/itex]. There's a name for this technique: It's called "fixed point iteration". The reason it's called this is that the point at which [itex]x=g(x)[/itex] is called a "fixed point" of the function [itex]g(x)[/itex].

And it doesn't always work.

What if there isn't such a point? For example, the expression [itex]x^2-x+1=0[/itex] can be rewritten as [itex]x=1-1/x[/itex]. The sequence [itex]x_{n+1} = 1 - 1/x_n[/itex] won't converge. It will instead either blow up or it will cycle between three values. Note that there are no real zeros of [itex]x^2-x+1[/itex]. The function [itex]g(x)=1-1/x[/itex] does have fixed points, but they're complex.
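The three-value cycle mentioned above is easy to observe directly (a quick sketch, with the starting value x = 2 chosen here for illustration):

```python
# Iterating g(x) = 1 - 1/x, which has no real fixed point.
# Starting from 2.0 the orbit cycles with period 3: 2.0 -> 0.5 -> -1.0 -> 2.0 -> ...

def g(x):
    return 1 - 1 / x

x = 2.0
orbit = [x]
for _ in range(6):
    x = g(x)
    orbit.append(x)

print(orbit)  # [2.0, 0.5, -1.0, 2.0, 0.5, -1.0, 2.0]
```

The period-3 behaviour holds for other starting values too, since g(g(g(x))) = x algebraically: g(x) = (x−1)/x, g(g(x)) = −1/(x−1), and g(g(g(x))) = x.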

Another problem arises with the example at hand. There are two real roots of [itex]f(x)=x^2-4x+1[/itex], 2+√3 and 2-√3. Pick an initial value that is not one of these roots and the iterator [itex]x_{n+1} = 4-1/x_n[/itex] will either blow up (because you happened to hit [itex]x_n=1/4[/itex], which sends the next iterate to zero) or it will converge to 2+√3. It won't find that other root, 2-√3. The reason is that for small epsilon in the vicinity of x=2-√3, [itex]g(x+\epsilon) \approx x + (7+4\sqrt 3)\epsilon[/itex]. That factor of 7+4√3 makes an initial guess near 2-√3 diverge away from that solution.
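That repelling behaviour near 2 − √3 can be seen numerically (an illustrative sketch; the perturbation 1e-6 and step counts are choices made here):

```python
# x_{n+1} = 4 - 1/x_n started a tiny distance above the repelling root 2 - sqrt(3).
r_lo = 2 - 3 ** 0.5        # ~0.2679, the repelling fixed point
r_hi = 2 + 3 ** 0.5        # ~3.7321, the attracting fixed point

x = r_lo + 1e-6            # start one millionth above the smaller root
for _ in range(5):
    x = 4 - 1 / x          # each step multiplies the error by ~7 + 4*sqrt(3) ~ 13.9
print(abs(x - r_lo))       # already far from 2 - sqrt(3)

for _ in range(40):        # keep going: the iteration settles on the *other* root
    x = 4 - 1 / x
print(abs(x - r_hi))       # tiny: converged to 2 + sqrt(3)
```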

Yet another example is [itex]x^2-2x+1=0[/itex]. Here the fixed point iterator is [itex]x_{n+1} = 2-1/x_n[/itex]. This will converge to 1 for any initial guess greater than 1, but the convergence is extremely slow. The reason for this slow convergence is that the derivative of [itex]g(x)=2-1/x[/itex] is exactly 1 at the solution point x=1.
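The slow convergence of [itex]x_{n+1} = 2-1/x_n[/itex] can be counted directly (a small sketch; the tolerance 1e-3 and starting value 2.0 are choices made here). From x₀ = 2 the iterates are 2, 3/2, 4/3, 5/4, ..., i.e. xₙ = (n+2)/(n+1), so the error 1/(n+1) needs about a thousand steps just to fall below 10⁻³:

```python
# Fixed-point iteration for x^2 - 2x + 1 = 0 rewritten as x = 2 - 1/x.
# The fixed point is x = 1, but g'(1) = 1, so the error only shrinks like 1/n.

x = 2.0
steps = 0
while abs(x - 1) > 1e-3:
    x = 2 - 1 / x
    steps += 1

print(steps)  # roughly 1000 iterations for three digits of accuracy
```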

Fixed point iteration can be a useful tool because it is so easy to formulate. It doesn't always work. A solution has to exist, and the absolute value of the derivative needs to be less than 1 in the vicinity of the solution in order for the technique to converge to that solution.
 
  • #8
D H said:
Another problem arises with the example at hand. There are two real roots of [itex]f(x)=x^2-4x+1[/itex], 2+√3 and 2-√3. Pick an initial value that is not one of these roots and the iterator [itex]x_{n+1} = 4-1/x_n[/itex] will either blow up (because you happened to hit [itex]x_n=1/4[/itex], which sends the next iterate to zero) or it will converge to 2+√3. It won't find that other root, 2-√3. The reason is that for small epsilon in the vicinity of x=2-√3, [itex]g(x+\epsilon) \approx x + (7+4\sqrt 3)\epsilon[/itex]. That factor of 7+4√3 makes an initial guess near 2-√3 diverge away from that solution.

This is exactly what I'm asking about, if I understand what you mean by 'small epsilon'. I assume this behavior isn't likely to come up in a question we'd be asked?
 
  • #9
Suppose you use the iterator [itex]x_{n+1} = 4-1/x_n[/itex] with an initial value that is just 1/100 away from one of the roots 2±√3. If you start with [itex]x_0 = 2+\sqrt 3 + 1/100[/itex], the next element in the sequence is closer to 2+√3 than is the initial guess. That initial error of 1/100 shrinks by a factor of almost 14. Now look at what happens with a guess near the smaller root. If you start with [itex]x_0 = 2-\sqrt 3 + 1/100[/itex], the next element in the sequence is further from 2-√3 than is the initial guess. Instead of shrinking, that initial error of 1/100 grows by a factor of over 13.
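The two factors quoted above can be checked with a single iteration step on each side of the roots (illustrative Python; eps = 1/100 as in the post):

```python
# One step of x -> 4 - 1/x applied 1/100 away from each root of x^2 - 4x + 1.
r_hi = 2 + 3 ** 0.5
r_lo = 2 - 3 ** 0.5
eps = 0.01

shrink = abs((4 - 1 / (r_hi + eps)) - r_hi) / eps  # ratio ~ 0.072: error shrinks ~14x
grow = abs((4 - 1 / (r_lo + eps)) - r_lo) / eps    # ratio ~ 13.4: error grows ~13x

print(shrink, grow)
```

As eps shrinks, the ratios approach g'(2±√3) = 7∓4√3, which is exactly the 7+4√3 factor discussed in post #7.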
 

What is a numerical method?

A numerical method is a mathematical technique used to find approximate solutions to equations and problems that cannot be solved analytically. It involves using numerical calculations and algorithms to obtain a numerical answer.

What is iteration?

Iteration is the process of repeating a sequence of steps or calculations in order to improve the accuracy of a solution. In the context of numerical methods, it involves repeatedly applying a formula or algorithm to approach a desired value or solution.

What is a root in numerical methods?

A root in numerical methods refers to the value at which an equation or function equals zero. It is the solution to the equation or the point where the function crosses the x-axis on a graph. Finding the root is often the goal of using numerical methods.

What is convergence in numerical methods?

Convergence in numerical methods refers to the process of approaching a desired value or solution as the number of iterations increases. It is a measure of how closely the calculated values are getting to the true solution and is typically expressed through a convergence criterion or tolerance value.
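As a sketch of such a stopping criterion (illustrative Python; the function name `fixed_point` and the default tolerance are choices made here), a fixed-point loop typically stops once successive iterates agree to within the tolerance:

```python
# Generic fixed-point iteration with a tolerance-based convergence criterion.
def fixed_point(g, x0, tol=1e-10, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:   # criterion: successive values agree to tol
            return x_next
        x = x_next
    raise RuntimeError("did not converge within max_iter steps")

root = fixed_point(lambda x: 4 - 1 / x, 3.0)
print(root)  # ~3.7320508, i.e. 2 + sqrt(3), the root found in the thread
```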

What is the importance of convergence in numerical methods?

Convergence is important in numerical methods because it ensures the accuracy and reliability of the solution. If an algorithm does not converge, the calculated solution may not be close enough to the true solution and may not be useful. Convergence also allows for the comparison of different numerical methods to determine which one produces the most accurate results.
