- #1
guerom00
Hello all
I have a very complicated function; let's call it f.
My problem: f[0.01] does not give the same result as f[1*^-2] because of precision. In the first case, the argument 0.01 is a machine-precision number, which is not precise enough in my case, and the result is wrong.
In the second case, the argument 1*^-2 is exact, i.e. it has infinite precision. This exactness is carried through the whole function, and the final result is correct.
So you see my point: how can I force a certain precision on the argument of a function?
Thanks in advance
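A minimal sketch of two common approaches (assuming f is defined elsewhere; the digit count 30 is an arbitrary choice for illustration):

```mathematica
(* Promote the machine-precision input to 30 significant digits before calling f *)
f[SetPrecision[0.01, 30]]

(* Or intercept machine numbers in the definition of f itself, so every
   machine-precision call is automatically promoted; this recursion terminates
   because SetPrecision returns an arbitrary-precision Real, for which
   MachineNumberQ is False *)
f[x_?MachineNumberQ] := f[SetPrecision[x, 30]]

(* Alternatively, pass an exact number and numericize only at the end *)
N[f[1/100], 30]
```

With SetPrecision the computation runs in Mathematica's arbitrary-precision arithmetic throughout, much like passing the exact 1*^-2 does.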