While discussing Taylor's theorem, my professor pointed out that for n = 2, Taylor's theorem says:

[itex] f(x) = f(x_{0}) + f'(x_{0})(x - x_{0}) + O(|x - x_{0}|^{2}) [/itex]
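(As a small numerical sketch of what that remainder term means — my own illustration, not from the lecture — here the linear approximation of exp about x0 = 0 is compared to the true value, and the remainder divided by h^2 settles near a constant, which is exactly the O(|x - x0|^2) behavior. The choice of exp and x0 = 0 is just for concreteness.)

```python
import math

# Linear Taylor approximation of f(x) = exp(x) about x0 = 0:
#   exp(h) ~ 1 + h,  with remainder R(h) = exp(h) - (1 + h).
# Taylor's theorem says R(h) = O(h^2); here R(h)/h^2 stays bounded
# (it approaches f''(0)/2 = 1/2) as h -> 0.
for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    remainder = math.exp(h) - (1 + h)
    print(f"h = {h:1.0e}:  R(h)/h^2 = {remainder / h**2:.6f}")
```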

He then emphasized that [itex]O(|x - x_{0}|^{2}) [/itex] is a much better approximation than [itex] o(|x - x_{0}|)[/itex].

But how is [itex]O(|x - x_{0}|^{2}) [/itex] a better approximation than [itex] o(|x - x_{0}|)[/itex]?

(I'm assuming he means as x goes to x_0)

I know that in this situation (as x goes to x_0), if something is little-o of a quantity, it goes to zero faster than that quantity does. And if something is big-O of that quantity squared, it is bounded by a constant multiple of the quantity squared as the quantity goes to zero. I also understand that if something goes to zero, that same thing squared goes to zero much faster.

But I can't see exactly why big-O of the quantity squared is a better guarantee than little-o of the quantity itself. For instance, if something is little-o of a quantity that goes to zero, how do you know it isn't also little-o of that quantity squared? In that case, little-o would certainly be better than big-O.
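(A concrete case of the question above — my own hypothetical remainder, not anything from the thread — is r(h) = h^1.5: it is o(h), since r/h = sqrt(h) goes to 0, but it is not O(h^2), since r/h^2 = 1/sqrt(h) blows up. So o(h) alone does not rule out a remainder this large, while O(h^2) does.)

```python
# r(h) = h**1.5 is o(h):       r/h   = sqrt(h)   -> 0 as h -> 0,
# but it is NOT O(h^2):        r/h^2 = 1/sqrt(h) -> infinity.
# Conversely, anything that is O(h^2) is automatically o(h),
# since h^2 / h = h -> 0, so O(h^2) is the strictly stronger claim.
for h in [1e-2, 1e-4, 1e-6]:
    r = h ** 1.5
    print(f"h = {h:1.0e}:  r/h = {r / h:.4f}   r/h^2 = {r / h**2:.1f}")
```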

I get the basic concept of big-O/little-o, but I guess I'm still prone to confusion when applying it.

**Physics Forums | Science Articles, Homework Help, Discussion**


# Big O notation: I'm confused about quality of Big O approx vs little o approximation
