Absolute and Relative Approximate Errors for the Secant Method in MATLAB

AI Thread Summary
The discussion focuses on using the secant method in MATLAB to find the root of fn=40*n^1.5-875*n+35000 with initial guesses n1=60 and n2=68. The user encounters an infinite loop and asks how to compute the absolute relative approximate error correctly at each iteration. The replies point out that err(iter) measures the relative change between successive iterates rather than a true relative error, and that abs(fnnew) works as a stopping test here only because the target value is zero, so it acts as a residual. The conversation also covers removing the factor of 100 used to express the error as a percentage, choosing an appropriate tolerance, and convergence problems that can arise from floating-point error or deficiencies in the algorithm.
chronicals
I am trying to solve this equation with the secant method in MATLAB:

fn=40*n^1.5-875*n+35000

My initial guesses are n1=60 and n2=68. I want to find the root and the absolute relative approximate error at the end of each iteration, but I have an infinite loop. Can you help me repair my file? This is my m-file:

clc
clear
n1=60;
n2=68;
tol=1e-3;
err0=3;
iter=0;
fprintf('iteration n relative approximate error\n')
while err0>=tol
    iter=iter+1;
    fn1=40*(n1).^1.5-875*(n1)+35000;
    fn2=40*(n2).^1.5-875*(n2)+35000;
    nnew=n2-fn2*((n2-n1)/(fn2-fn1));
    fnnew=40*(nnew).^1.5-875*(nnew)+35000;
    err(iter)=(abs((nnew-n2)/nnew))*100;
    fprintf('%2d %f %f\n',iter,nnew,err(iter))
    if nnew>n1
        n1=nnew;
    else
        n2=nnew;
    end
end
nnew
iter
 
I rearranged my m-file and fixed the infinite loop, but I think I am making a mistake when calculating the absolute relative approximate error at the end of each iteration. I think this command is wrong:

err(iter)=(abs((nnew-n2)/nnew))*100;

How can I fix this error calculation?



My m-file:

clc
clear
n1=60;
n2=68;
tol=1e-5;
err0=3;
iter=0;
fprintf('iteration n relative approximate error\n')
while err0>=tol
    iter=iter+1;
    fn1=40*(n1).^1.5-875*(n1)+35000;
    fn2=40*(n2).^1.5-875*(n2)+35000;
    nnew=n2-fn2*((n2-n1)/(fn2-fn1));
    fnnew=40*(nnew).^1.5-875*(nnew)+35000;
    err(iter)=(abs((nnew-n2)/nnew))*100;
    fprintf('%2d %f %f\n',iter,nnew,err(iter))
    err0=abs(fnnew);
    if nnew>n1
        n1=nnew;
    else
        n2=nnew;
    end
end
nnew
iter
 
Why are you using abs(fnnew) as your error instead of err(iter)? I would also remove the factor of 100 from the relative error unless you mean to express it as a percent relative error. A trivial quibble, but the nice thing about the relative error is that the base-10 log of it gives you an estimate of the number of decimal digits of accuracy. So by setting your tol to 1e-5, you are asking for accuracy to at least five digits.
 
Born2bwire said:
Why are you using abs(fnnew) as your error instead of err(iter)? I would also remove the factor of 100 from the relative error unless you mean to express it as a percent relative error. A trivial quibble, but the nice thing about the relative error is that the base-10 log of it gives you an estimate of the number of decimal digits of accuracy. So by setting your tol to 1e-5, you are asking for accuracy to at least five digits.

If I use err(iter), I have an infinite loop, so I use abs(fnnew).

These are my results:

iteration n relative approximate error
1 62.759758 8.349685
2 62.689966 8.470309
3 62.691698 0.002762
4 62.691697 0.000001

nnew =
62.6917

iter =
4

How can the second iteration's relative approximate error be 8.470309? I think this m-file is calculating the relative approximate error wrongly. Please help me fix this error command: err(iter)=(abs((nnew-n2)/nnew))*100;
 
It is 8.4 because you are scaling the relative error by 100. Like I said, if you do not scale it, the log of the relative error indicates the number of digits of accuracy. Indeed, note that the first two digits, 62, do not change as you converge. Thus, you started out with roughly two digits of accuracy, which correlates with log(0.08).
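To make that rule of thumb concrete, here is a minimal MATLAB sketch (the variable names and the explicit -log10 step are mine, not from the posted m-file; the percent value is the first-iteration figure from the output above):

pct_err = 8.349685;      % percent relative approximate error from iteration 1 above
rel_err = pct_err/100;   % undo the factor of 100
% -log10 of the unscaled relative error roughly estimates how many leading
% decimal digits of the current iterate can be trusted
fprintf('unscaled relative error = %.4f, -log10 = %.2f\n', rel_err, -log10(rel_err))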

Using err(iter) is the proper thing to do. Right now abs(fnnew) only works because it equals the absolute error, since you are trying to find a zero; if you were trying to converge to any value other than zero, you would never converge properly. Which brings us to the relative error: what you compute is not really a relative error, because you do not know the true answer. It is, I think, a measure of the difference between your old and new values. That is still a reasonable metric to use as long as you assume you are always converging at a constant or increasing rate (which would even let you use it to estimate the actual relative error, though that is unnecessary for most applications). For example, I use this when I do semi-infinite integrations.
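As a minimal sketch of that idea (my own rewrite under stated assumptions, not the poster's m-file): the loop below uses the standard secant shift, so the two retained points are always the two most recent iterates, and it stops on the unscaled relative change between them. The anonymous function handle, the maxit guard, and the variable names are my additions.

f = @(n) 40*n.^1.5 - 875*n + 35000;   % function from the thread
n1 = 60;  n2 = 68;                    % initial guesses from the thread
tol = 1e-5;                           % unscaled relative tolerance
maxit = 50;                           % guard so the loop cannot run forever
for iter = 1:maxit
    nnew = n2 - f(n2)*(n2 - n1)/(f(n2) - f(n1));   % secant update
    relchange = abs((nnew - n2)/nnew);             % compares the two newest iterates, no *100
    fprintf('%2d  %f  %e\n', iter, nnew, relchange)
    if relchange < tol
        break                                      % stop once the relative change is small
    end
    n1 = n2;                                       % standard secant shift:
    n2 = nnew;                                     % discard the oldest point
end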

From your sample output, it should have converged in four iterations, since the "error" there is 1.0e-6. If you removed the scaling factor of 100, that is what would happen. If it is not ending, it is probably because the result is fluctuating back and forth around the true answer; increasing your tolerance slightly may then allow you to achieve convergence. This can happen due to floating-point errors, getting trapped around an incorrect guess, or deficiencies in the algorithm.

EDIT: Hmmm... I should probably be a little more blunt. In this case, you can use abs(fnnew), because you know the result should be zero, so abs(fnnew) is the amount by which you miss zero; in other words, it is the residual. That is, you cannot find the error in the x you wish to find (since you obviously do not know x a priori), but you can find the error in the f(x) you want to achieve. This is not always feasible: sometimes, say with an integration, you do not have such a metric. So using the relative change in the result, as in your err(), is often a valid metric, though, as you have found, it may not always be perfectly reliable.
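A small sketch of that distinction, using the converged values printed earlier in the thread (the variable names and the side-by-side comparison are my own illustration, not part of the original posts):

f = @(n) 40*n.^1.5 - 875*n + 35000;    % function from the thread
nold = 62.691698;  nnew = 62.691697;   % last two iterates reported above
% (1) residual test: usable here because the target is f(n) = 0
residual = abs(f(nnew));
% (2) step test: usable even when no residual is available
relchange = abs((nnew - nold)/nnew);
fprintf('|f(nnew)| = %.3e, relative change = %.3e\n', residual, relchange)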
 