Was the Professor's Big O Notation Example Correct?

Allday
I decided to watch an MIT OpenCourseWare class on algorithms and the second lecture has me checking my understanding. A link to the lecture is here (http://ocw.mit.edu/courses/electric...ation-recurrences-substitution-master-method/)

My question is pretty basic (I think). Right in the beginning of the lecture (2:50 on the video) the prof. writes on the blackboard,

2 n^2 = O(n^3)

as an example. I was thinking that should be O(n^2) but then I thought the example statement is true (i.e. 2 n^2 grows slower than n^3 ... and n^4 and n^5 ... ) but the statement with the most information is 2 n^2 = O(n^2). Do you think it was a mistake on the blackboard or he did it to show that the O(n^3) is also true?
 
I was under the impression that you can ignore constant factors and lower-order terms when deducing the Big O of a function. So 2n^2 => O(n^2).

Or 5n^3 + 8000000000n^2 + 4612876198274194109438109284n + 8 => O(n^3). When you reach large values of n, the n^3 term eclipses the other terms.
 
I agree:
2 n^2 => O(n^2)
 
If I recall correctly, big O notation refers to an upper bound. So strictly speaking, an algorithm that is O(n^2) is also O(n^3), because the latter will be larger than the former. However, it is best to hold a fair amount of precision in your upper bound: simply stating that an algorithm is O(infinity) is technically always true, and also never useful (unless your algorithm will literally never finish).
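For what it's worth, the formal definition can be sanity-checked numerically. Here is a small Python sketch added purely for illustration (the helper name `witnesses_big_o` and the finite cutoff are made up; a finite scan is of course a sanity check of a proposed witness pair, not a proof):

```python
# f(n) = O(g(n)) iff there exist constants c > 0 and n0 such that
# f(n) <= c * g(n) for all n >= n0.  The finite scan below only checks
# a proposed witness pair (c, n0) up to n_max -- it is not a proof.

def witnesses_big_o(f, g, c, n0, n_max=10_000):
    """Check f(n) <= c * g(n) for every n in [n0, n_max]."""
    return all(f(n) <= c * g(n) for n in range(n0, n_max + 1))

f = lambda n: 2 * n * n                                   # 2n^2
print(witnesses_big_o(f, lambda n: n**2, c=2, n0=1))      # True: 2n^2 = O(n^2)
print(witnesses_big_o(f, lambda n: n**3, c=1, n0=2))      # True: 2n^2 = O(n^3) as well
```

Both calls succeed, which matches the point above: the same function can sit under many different upper bounds, and only the tightest one is informative.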
 
DimReg said:
If I recall correctly, big O notation refers to an upper bound. So strictly speaking, an algorithm that is O(n^2) is also O(n^3), because the latter will be larger than the former.
There are several uses - but in all cases it is related to an asymptote approached towards infinity. So 2n^2 => O(n^3) would be inaccurate.
 
.Scott said:
There are several uses - but in all cases it is related to an asymptote approached towards infinity. So 2n^2 => O(n^3) would be inaccurate.

Inaccurate, but not incorrect, at least according to: http://en.wikipedia.org/wiki/Big_O_notation#Formal_definition

(If you use n^2 as f(x) and n^3 as g(x), the formal definition is easily satisfied)

This is just semantics, but my guess is that the prof was trying to illustrate this point.
 
DimReg said:
Inaccurate, but not incorrect, at least according to: http://en.wikipedia.org/wiki/Big_O_notation#Formal_definition

(If you use n^2 as f(x) and n^3 as g(x), the formal definition is easily satisfied)

This is just semantics, but my guess is that the prof was trying to illustrate this point.
At least in computer science, if you called a merge sort O(n^2), you would be both inaccurate and incorrect - because it is O(n log n).
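To make that n log n claim concrete, here is a short Python sketch (not from the thread; the function name and test sizes are invented for illustration) that counts the element comparisons merge sort performs:

```python
# Hypothetical illustration: count the comparisons merge sort makes, to see
# growth much closer to n log n than to n^2.
import random

def merge_sort(a):
    """Return (sorted copy of a, number of element comparisons made)."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, comps_left = merge_sort(a[:mid])
    right, comps_right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    comps = comps_left + comps_right
    while i < len(left) and j < len(right):
        comps += 1                      # one comparison per merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, comps

for n in (1000, 2000, 4000):
    _, comps = merge_sort([random.random() for _ in range(n)])
    print(n, comps)   # comparison counts stay far below n^2
```

Doubling n roughly doubles the comparison count (plus a bit), which is the n log n signature; an O(n^2) sort would quadruple it.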
 
Well, I know of at least one computer science textbook that agrees with me, and it's where I learned this definition. The book is http://sist.sysu.edu.cn/~isslxm/DSA/textbook/Skiena.-.TheAlgorithmDesignManual.pdf , and on its 36th page Skiena writes:

"The Big Oh notation provides for a rough notion of equality when comparing
functions. It is somewhat jarring to see an expression like n^2 = O(n^3), but its
meaning can always be resolved by going back to the definitions in terms of upper
and lower bounds. It is perhaps most instructive to read the “=” here as meaning
one of the functions that are. Clearly, n^2 is one of the functions that are O(n^3)."

Maybe the greater computer science community disagrees with him on the definition of Big Oh notation.
 
Allday said:
I decided to watch an MIT OpenCourseWare class on algorithms and the second lecture has me checking my understanding. A link to the lecture is here (http://ocw.mit.edu/courses/electric...ation-recurrences-substitution-master-method/)

My question is pretty basic (I think). Right in the beginning of the lecture (2:50 on the video) the prof. writes on the blackboard,

2 n^2 = O(n^3)

as an example. I was thinking that should be O(n^2) but then I thought the example statement is true (i.e. 2 n^2 grows slower than n^3 ... and n^4 and n^5 ... ) but the statement with the most information is 2 n^2 = O(n^2). Do you think it was a mistake on the blackboard or he did it to show that the O(n^3) is also true?

As already pointed out, there is a formal definition of big O and then there is a typical usage of it in computer science. From the formal standpoint, 2n^2 = O(n^3) is just fine since it satisfies the formal definition. In other words, O(n^3) is an upper limit for 2n^2; however, it is not the best upper limit we can obtain here.

So I think he was just saying be careful. There is a difference between having an upper limit and the best upper limit.
 
Thanks everyone. Thinking of the "=" sign not as a true equality but as indicating membership in a set makes it clearer for me.
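That set-membership reading can even be written down directly. Here is a small Python sketch (my own illustration; the names `big_o` and `in_O_n3` are made up, and a finite scan with fixed witnesses stands in for the real "there exist c, n0" quantifier):

```python
# Model O(g) as a set, and "f = O(g)" as the membership test "f in O(g)",
# matching the "one of the functions that are O(n^3)" reading quoted above.
def big_o(g, c=1, n0=1, n_max=5_000):
    """Return a membership predicate for the set O(g).
    Finite check with fixed witnesses c, n0 -- an illustration, not a proof."""
    def contains(f):
        return all(f(n) <= c * g(n) for n in range(n0, n_max + 1))
    return contains

in_O_n3 = big_o(lambda n: n**3, c=2, n0=1)
print(in_O_n3(lambda n: 2 * n * n))   # True: 2n^2 is "one of the functions"
print(in_O_n3(lambda n: n**4))        # False: n^4 outgrows any fixed c * n^3
```

Under this view the blackboard statement 2n^2 = O(n^3) is just an (unsurprising) membership claim, while 2n^2 = O(n^2) names the smaller, more informative set.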
 