So you're saying that you are free to choose a to be whatever you like, apparently not affecting d or the error term at all? This is as ridiculous as having an error term in the first place for what you're also claiming is an exact formula.
Again, you're missing the distinction between runtime and an error term. If you have an exact algorithm, then there's no error term at all. What's interesting is how long the algorithm takes to run, usually measured as a count of some basic operation, such as the number of multiplications.
Maybe you're claiming that for a fixed runtime you get a certain error. If that's the case, then there is no way your error is independent of the size of the argument, which is what you'd be claiming.
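To make the distinction concrete, here's a toy sketch (my own example, nothing to do with your formula): computing n! exactly has no error term, and its cost is simply n-1 multiplications; Stirling's approximation runs in a handful of operations regardless of n, but its (absolute) error grows with the argument. You can't have both "exact" and "error term" at once.

```python
import math

def factorial_exact(n):
    # Exact algorithm: no error term. The "cost" is the
    # number of multiplications performed, namely n - 1.
    result, mults = 1, 0
    for k in range(2, n + 1):
        result *= k
        mults += 1
    return result, mults

def factorial_stirling(n):
    # Stirling's approximation: constant number of operations,
    # but the result carries an error that depends on n.
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (10, 20):
    exact, mults = factorial_exact(n)
    approx = factorial_stirling(n)
    print(f"n={n}: {mults} multiplications, "
          f"absolute error of Stirling = {abs(exact - approx):.3g}")
```

Running this shows the exact routine's cost scaling with n (9 multiplications for n=10, 19 for n=20), while Stirling's absolute error grows with n even though its relative error shrinks like 1/(12n).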
Why do I get the feeling that this is going to be a complete repetition of your earlier threads on this?