Edgardo
I have a question about Big Oh notation, specifically the Landau symbol O(n).
The Landau symbol is defined as follows:
f(n) = O(g(n)) if there exist an integer n_0 and a constant C > 0 such that f(n) \leq C \cdot g(n) for all integers n \geq n_0.
For example, f(n) = 5 \cdot n = O(n).
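For concreteness, here is one way to check this example against the definition (the particular choice of C and n_0 is just one of many that work):
Take f(n) = 5 \cdot n, g(n) = n, C = 5 and n_0 = 1. Then for all n \geq 1,
f(n) = 5 \cdot n \leq 5 \cdot n = C \cdot g(n),
so the definition is satisfied and 5 \cdot n = O(n).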
Now the equality sign is not to be understood as equality in the usual sense, but rather as f(n) \in O(n); see Big Oh Fallacies and Pitfalls and
the Wikipedia article.
(5 \cdot n = O(n) and 2 \cdot n = O(n), but 5 \cdot n \neq 2 \cdot n.)
So it would be better to think of O(n) as a set of functions.
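Written out explicitly, this is just the set-builder form of the same definition, not a different one:
O(g(n)) = \{ f(n) : \exists C > 0, \exists n_0 \text{ such that } f(n) \leq C \cdot g(n) \text{ for all } n \geq n_0 \},
and f(n) = O(g(n)) then really means f(n) \in O(g(n)).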
My question:
Does anyone know why it became common to use the notation f(n)=O(g(n))?
I remember my professor mentioning that physicists introduced this sloppy notation.