 Quote by eljose the question is that I have given a formula for $$\pi(x^a)$$ with error $$O(x^d)$$, so let's suppose we want to calculate $$\pi(10^{1000})$$; then set a=1000 and x=10, so the total error would be $$O(10^d)$$. That is the advantage of my method.... if you want to judge the formula for yourselves, it is:
So you're saying that you are free to choose a to be whatever you like, apparently not affecting d or the error term at all? This is as ridiculous as having an error term in the first place for what you're also claiming is an exact formula.

Again you're missing the distinction between runtime and an error term. If you have some kind of exact algorithm, then there's no error term at all. What's interesting is how long the algorithm takes to run, usually measured as a count of some basic operation, such as the number of multiplications.
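To make that distinction concrete, here's a minimal sketch (my own illustration, not anyone's formula from this thread): an exact sieve-based count of $$\pi(x)$$. The answer it returns has no error term whatsoever; the only thing worth analyzing is the amount of work, which the `ops` counter tracks.

```python
def pi_exact(x):
    """Return (pi(x), ops): the exact number of primes <= x,
    plus a count of crossing-off operations as a cost measure."""
    if x < 2:
        return 0, 0
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    ops = 0
    i = 2
    while i * i <= x:
        if sieve[i]:
            for j in range(i * i, x + 1, i):
                sieve[j] = False
                ops += 1  # each crossing-off counts as one basic operation
        i += 1
    return sum(sieve), ops

count, ops = pi_exact(10**6)
print(count)  # 78498, exactly -- no O(...) term; the cost lives in `ops`
```

The point: the runtime (here, `ops`, roughly proportional to x log log x) grows with x, but the *answer* is exact for every x. An "exact formula with an error term" conflates these two things.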

Maybe you're claiming that for a fixed amount of work you get a certain error. If that's the case, then there is no way your error can be independent of the size of the argument, which is what you would be claiming.
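You can see why with any fixed-cost approximation. As a hedged illustration (again my own example, nothing to do with the claimed formula), compare an exact sieve count of $$\pi(x)$$ against the classical approximation x/ln(x): the absolute error grows steadily with x, so no fixed amount of work buys an x-independent error.

```python
import math

def pi_exact(x):
    """Exact pi(x) via a plain Sieve of Eratosthenes."""
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    i = 2
    while i * i <= x:
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
        i += 1
    return sum(sieve)

# Absolute error |pi(x) - x/ln(x)| at increasing arguments.
errs = [abs(pi_exact(x) - x / math.log(x)) for x in (10**3, 10**4, 10**5, 10**6)]
print([round(e) for e in errs])  # -> [23, 143, 906, 6116]: the error climbs with x
```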

Why do I get the feeling that this is going to be a complete repetition of your earlier threads on this?