Is this predictive iterative method the same as the Newton-Raphson method?

  • Context: Undergrad
  • Thread starter: yuiop
  • Tags: Method
SUMMARY

The discussion centers on a predictive iterative method, developed while writing a Java simulation, that converges rapidly without requiring initial bounds or derivatives. The method is conceptually simple, applies to a wide range of problems, and takes fewer iteration steps than the bisection method. Participants identify it as a variant of the Newton-Raphson method, namely the secant method, which estimates the derivative from two discrete points. It is particularly efficient in simulations where the initial guess is close to the final solution.

PREREQUISITES
  • Understanding of iterative methods in numerical analysis
  • Familiarity with Java programming language
  • Knowledge of the Newton-Raphson method
  • Basic concepts of function approximation
NEXT STEPS
  • Research the secant method and its applications in numerical analysis
  • Explore the differences between the secant method and Newton-Raphson method
  • Learn about error analysis in iterative methods
  • Investigate optimization techniques for initial guesses in iterative algorithms
USEFUL FOR

Mathematicians, software developers, and engineers interested in numerical methods, particularly those looking to optimize iterative solutions in simulations and computational problems.

yuiop
Hi,

While writing a Java simulation, I came up with a predictive iterative method that converges very rapidly. It turns out that this method can be used for a wide variety of problems and is very simple: unlike the bisection method it does not require upper and lower bounds to be found initially, and unlike the Newton-Raphson method it does not require that the derivative of the equation be found first. This predictive method is conceptually simple, so I am almost certain it is already well known and has a name, but here it is anyway in Liberty BASIC (free):

Code:
T=63 ' <Target output value>
x1 = 5 ' <Initial input guess - (very bad guess in this example)>
epsilon=0.0000000000001 ' <Acceptable error margin>
x0 = x1+0.1 ' <First guess plus increment - Can be optimised for a given problem>

' <Initial first two guesses:>
t1 = ComplicatedFunction(x1)
print"guess1 = ";x1;" result1 = ";t1
t0 = ComplicatedFunction(x0)
print"guess0 = ";x0;" result0 = ";t0

do ' <Iterative loop>
x2=x1
t2=t1
x1=x0
t1=t0
x0 = (x1-x2)*(T-t2)/(t1-t2)+x2 ' < *Key iterative predictive step* >
t0 = ComplicatedFunction(x0)
print"guess = ";x0;" result = ";t0
loop while (abs(T-t0)>epsilon)
end

function ComplicatedFunction(x)
ComplicatedFunction = 56+x^2+2^(x+5)-3*x^4 ' <Insert function to be solved here>
end function

Example output:
Code:
guess1 = 5 result1 = -770
guess0 = 5.1 result0 = -850.054274
guess = 3.95945594 result = -167.843532
guess = 3.57352355 result = -39.4866854
guess = 3.26537535 result = 33.2835597
guess = 3.13954002 result = 56.390209
guess = 3.1035441 result = 62.355382
guess = 3.09965425 result = 62.9833978
guess = 3.09955142 result = 62.9999564
guess = 3.09955115 result = 63.0

Does anyone know if this method already has a name? It takes significantly fewer iteration steps than the bisection method for almost any nontrivial function. It does require that the initial guess be somewhere close to the final solution (with a fair degree of latitude), but this is not a big problem for simulations with small time increments, where the last solution provides the basis for the next step. I think it would compare well with the Newton-Raphson method (although I have not tested that yet), and it is simpler to set up for an arbitrary function.

{EDIT} It may be that the above method is in fact the Newton-Raphson method in disguise. Would anyone agree?
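For reference, the same predictive loop can be sketched in Java (the language of the original simulation). This is my own minimal translation, not code from the thread: the class name `SecantSolver` and method names are invented, an iteration cap is added as a safety guard, and the tolerance is loosened to 1e-9 because double-precision arithmetic may not reliably reach the 1e-13 used above.

```java
public class SecantSolver {
    // The example function from the thread: 56 + x^2 + 2^(x+5) - 3x^4.
    static double f(double x) {
        return 56 + x * x + Math.pow(2, x + 5) - 3 * Math.pow(x, 4);
    }

    // Predictive step: fit a straight line through the last two points
    // and jump to the x where that line reaches the target value.
    static double solve(double target, double guess, double eps) {
        double x2 = guess;        // older point
        double x1 = guess + 0.1;  // newer point (increment can be tuned)
        double t2 = f(x2);
        double t1 = f(x1);
        for (int i = 0; i < 100 && Math.abs(target - t1) > eps; i++) {
            double x0 = (x1 - x2) * (target - t2) / (t1 - t2) + x2;
            x2 = x1; t2 = t1;     // shift the points along
            x1 = x0; t1 = f(x1);
        }
        return x1;
    }

    public static void main(String[] args) {
        // Same target and (deliberately bad) initial guess as above.
        System.out.println(solve(63, 5, 1e-9)); // about 3.09955115
    }
}
```

Run from x = 5 this reproduces the root found above, about 3.09955115.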
 
Yes, I believe it is the same thing, using a discrete derivative: it constructs a straight line through the last two points and then jumps to the x-value where that line crosses the target value T.
 
The standard name for this is the secant method.

There are several versions of exactly how to do it, depending on which points you use to estimate the derivative. For example, you can use the last two points you calculated, or the two with the smallest function values, or keep one point fixed for every iteration, etc.

It behaves very similarly to Newton's method, but when the two points are close together you can lose accuracy, because the slope is calculated from the difference of numbers that are almost equal. If you find the derivative directly, as in Newton's method, you avoid that problem.
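To make the comparison concrete, here is a hedged Java sketch of Newton's method for the same equation f(x) = 63, using the analytic derivative instead of a secant slope. The class and method names (`NewtonDemo`, `newton`) and the iteration cap are invented for this illustration.

```java
public class NewtonDemo {
    // Same function as the thread: 56 + x^2 + 2^(x+5) - 3x^4.
    static double f(double x) {
        return 56 + x * x + Math.pow(2, x + 5) - 3 * Math.pow(x, 4);
    }

    // Analytic derivative: 2x + ln(2)*2^(x+5) - 12x^3.
    // Finding this by hand is the extra work the secant method avoids.
    static double fPrime(double x) {
        return 2 * x + Math.log(2) * Math.pow(2, x + 5) - 12 * Math.pow(x, 3);
    }

    // Newton's method for f(x) = target. The exact slope avoids the
    // cancellation a finite-difference slope suffers when the two
    // sample points are almost equal.
    static double newton(double target, double x, double eps) {
        for (int i = 0; i < 100 && Math.abs(f(x) - target) > eps; i++) {
            x = x - (f(x) - target) / fPrime(x);
        }
        return x;
    }

    public static void main(String[] args) {
        System.out.println(newton(63, 5, 1e-9)); // about 3.09955115
    }
}
```

Starting from the same bad guess x = 5, this converges to the same root as the secant version, but only because someone first differentiated f by hand.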
 
