Quote by eljose
..in fact if you knew [tex]\pi(x^{a})[/tex] with a total error O(x^d) by setting a=Ad and making A>oo (infinite) the total error would be e=1/A O(x^e) with e the smallest positive number...

This looks like nonsense to me. If you can compute [tex]\pi(x^{a})[/tex] with a total error of O(x^d), then your a and d are fixed. Once you let this A vary, your error analysis is no longer valid. Earlier you tried some kind of change of variables to make the error look smaller, which actually did nothing at all; is that what you're doing again?
then there's the whole "smallest positive number" problem... there is no smallest positive real number: for any e>0, e/2 is positive and smaller still.
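To spell out why the substitution accomplishes nothing (a sketch, using the notation of the quote above, where y = x^a is my own label for the argument):

```latex
% Suppose \pi(x^{a}) is computed with error O(x^{d}) for some FIXED a, d.
% Writing y = x^{a}, the same bound reads
%     O(x^{d}) = O\!\left(y^{d/a}\right).
% Setting a = Ad just relabels the exponent:
%     a = Ad \;\Longrightarrow\; d/a = 1/A,
% so the bound becomes O\!\left(y^{1/A}\right).
% But for each new A this is a claim about a DIFFERENT computation,
% one that has never been shown to achieve error O(x^{d}) for that
% larger a; nothing lets you pass to A \to \infty from the one fixed
% (a, d) pair you actually have.
```

The exponent shrinks only on paper; the underlying error analysis was done for one fixed pair (a, d) and doesn't transfer.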
Quote by eljose
( i know this exist i have seen proofs with results a+e being e an infinitesimal number.

No you haven't, not in any number theory paper at least. You'll often see an error term like [tex]O(x^{1+\epsilon})[/tex] with the remark that you can take any [tex]\epsilon>0[/tex]. They are not saying that [tex]\epsilon[/tex] is an "infinitesimal number", just that a big O bound of that form holds for any fixed [tex]\epsilon>0[/tex] you like, though the big O constant may depend on epsilon (sometimes they write [tex]O_\epsilon (x^{1+\epsilon})[/tex] to make this dependence explicit, but not usually).
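A standard example of this convention (not from the thread, just an illustration) is the divisor function d(n):

```latex
% For EVERY fixed \epsilon > 0 there is a constant C_\epsilon with
%     d(n) \le C_\epsilon \, n^{\epsilon} \qquad (n \ge 1),
% i.e. d(n) = O_\epsilon\!\left(n^{\epsilon}\right).
% However, C_\epsilon \to \infty as \epsilon \to 0, and the statement
%     d(n) = O(1)
% is false (d(n) is unbounded). So the family of bounds for each
% fixed \epsilon > 0 does NOT let you take "\epsilon = 0" or treat
% \epsilon as an infinitesimal.
```

The whole content of "for any [tex]\epsilon>0[/tex]" is that you get one honest bound per fixed epsilon, with a constant that may blow up as epsilon shrinks.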
Quote by eljose
i can not calculate the error it will be in general O(x^d) with d=a+b+c
a=number of time used to evaluate the integral.

So you're saying that the more time you spend calculating the integral (presumably to get a more accurate value for it), the larger your error term gets? That sounds bad...