What is Big-Oh Notation and How Does it Measure Algorithm Efficiency?

  • Thread starter: Montag42
  • Tags: Notation

Discussion Overview

The discussion revolves around explaining Big-Oh notation, particularly in the context of sorting algorithms, for a report aimed at a middle school audience. Participants explore how to present the concept clearly and concisely while addressing potential misconceptions and the importance of examples.

Discussion Character

  • Conceptual clarification
  • Debate/contested
  • Technical explanation

Main Points Raised

  • One participant suggests that Big-O notation provides the upper bound for time complexity, using examples like O(n) and O(n^2) to illustrate the concept.
  • Another participant corrects the initial explanation, stating that an O(n) algorithm takes at most kn steps for some k, which may be greater than 1.
  • Some participants propose simplifying the language used to describe Big-O notation, suggesting phrases like "Big O tells you how slow a certain algorithm may be in the worst case."
  • There is a suggestion to clarify that the "n" in O(n) refers to the size of the problem, not the number of instructions in the algorithm.
  • Participants discuss the importance of providing examples to illustrate why O(n log n) algorithms are generally faster than O(n^2) algorithms.
  • One participant mentions that sorting algorithms must take at least O(n) time, with a note that comparison-based sorting algorithms require at least log(n!) = O(n log n) steps in the worst case.
  • Another participant argues against including details like "in the worst case," advocating for a simpler explanation that focuses on the relationship between time and the size of the task.
  • There is agreement that using "time" instead of "instructions" is preferable, but some emphasize the significance of the worst-case scenario in sorting algorithms.

Areas of Agreement / Disagreement

Participants express differing views on how to best explain Big-Oh notation, with no consensus on a single approach. Some advocate for simpler language, while others emphasize the importance of precision and examples.

Contextual Notes

Participants note that the explanations may depend on the audience's prior knowledge and that the definitions of terms like "instructions" and "steps" can lead to confusion. The discussion highlights the challenge of balancing technical accuracy with accessibility.

Who May Find This Useful

This discussion may be useful for educators, students, and anyone interested in learning how to communicate complex algorithmic concepts in a simplified manner.

Montag42
Hello everyone,

I'm doing a relatively simple report/project in my science class (8th grade) on the efficiencies of various sorting algorithms, and I need to explain Big-O notation in a way that is:

1) Easy to follow for a middle-aged teacher whose knowledge of computers pretty much stops at "it uses electricity"

and

2) Compact enough to fit on a poster board so that people don't give up on reading it and walk away.

So what I've come up with so far is
Big O gives the upper bound for time complexity of an algorithm. For example, an algorithm that is O(n) (where n is the number of items to be processed) means that the highest amount of instructions executed is equal to the number of instructions in the algorithm. If an algorithm is O(n^2), that would mean that the highest number of instructions executed is equal to the number of instructions in the program squared.

Any suggestions as to how I could make this better, and (even better) is anything up there wrong?
 
Strictly, what you wrote is wrong. An algorithm that is O(n) takes at most kn steps for some k, which is often greater than 1.

In the context of sorting, to give your readers intuition, I would certainly explain that O(n^2) algorithms are "slow" and that O(n log n) algorithms are "fast".
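To make the "slow" vs. "fast" contrast concrete, here is a minimal Python sketch (my addition, not from the thread) that just tabulates the two growth rates; the step counts are stand-ins for, e.g., bubble-sort-like and merge-sort-like comparison counts, with constant factors ignored:

```python
import math

# Tabulate how an O(n^2) step count and an O(n log n) step count
# grow as the input size n increases (constant factors ignored).
for n in [10, 100, 1000, 10000]:
    quadratic = n * n                  # roughly bubble-sort-like work
    linearithmic = n * math.log2(n)    # roughly merge-sort-like work
    print(f"n={n:>6}: n^2 = {quadratic:>12,}  n*log2(n) = {linearithmic:>12,.0f}")
```

The gap widens quickly: by n = 10,000 the quadratic count is roughly 750 times the linearithmic one, which is the whole "slow vs. fast" story in one table.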
 
What is the actual intent? I suspect it's unlikely that explaining Big-Oh notation is the best way to convey the point you're trying to make...
 
"Big O gives the upper bound for time complexity of an algorithm."
This is a little wordy for a general, non-technical description of big-oh notation. You may want to make it more colloquial. For instance...
"Big O tells you how slow a certain algorithm may be in the worst case."
"Big O notation provides a means for describing the worst-case running time of a certain computational procedure"
etc.
The content is good, though.


"For example, an algorithm that is O(n) (where n is the number of items to be processed) means that the highest amount of instructions executed is equal to the number of instructions in the algorithm."
This is misleading... the wording is good, but the "n" in O(n) has nothing to do with the number of instructions in the algorithm, but the size of the problem. It's generally true that larger problems cause more instructions to be executed. I'd phrase it a little more precisely.

"If an algorithm is O(n^2), that would mean that the highest number of instructions executed is equal to the number of instructions in the program squared."
See above.

I like CRGreatHouse's idea... I would explain why an O(n lg n) algorithm will beat an O(n^2) algorithm (usually... you could provide examples where it didn't, though) and why any sorting algorithm must take at least O(n)...

It's an interesting idea, and I think you could really make it work. Examples will be key. Include examples, if you can.
 
csprof2000 said:
I like CRGreatHouse's idea... I would explain why an O(n lg n) algorithm will beat an O(n^2) algorithm (usually... you could provide examples where it didn't, though) and why any sorting algorithm must take at least O(n)...

Right. It would be worth mentioning that sorting algorithms* take at least O(n) time, and that comparison-based sorting algorithms take at least log(n!) = O(n log n) time in the worst case.

* Assuming they always produce correct output and accept all inputs.
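The log(n!) bound can be checked numerically. A small Python sketch (my addition, not from the thread): any comparison sort must distinguish among the n! possible orderings of its input, so it needs at least log2(n!) comparisons in the worst case, and by Stirling's approximation log2(n!) grows like n*log2(n):

```python
import math

# Information-theoretic lower bound for comparison sorting:
# at least log2(n!) comparisons are needed in the worst case,
# and log2(n!) / (n * log2(n)) approaches 1 as n grows.
for n in [10, 100, 1000]:
    bound = math.log2(math.factorial(n))
    print(f"n={n:>5}: log2(n!) = {bound:>8.0f}   n*log2(n) = {n * math.log2(n):>8.0f}")
```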
 
I would not bother with details like 'in the worst case', etc.
What's wrong with simply saying something like:
Big-O notation is a way of describing how the time an algorithm takes grows with the size of the task. For instance, in the case of sorting, O(n) means the time taken is proportional to the number of items to sort (n). If it's O(n^2), the time grows as the square of the number of items. And so forth.

Saying 'number of instructions executed' is a pretty complicated way of conveying the simple idea 'amount of time'. It's also incorrect by virtue of being more specific than the actual definition, which is in terms of abstract 'steps'. Which can be assumed to be proportional to instructions, but you might as well assume it's proportional to time - which is what people are interested in anyway.
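To illustrate the "abstract steps" idea, here is a sketch (hypothetical helper, Python) of a linear search that counts its own steps; the count never exceeds n, which is exactly what O(n) promises, whatever one step costs in actual time:

```python
def linear_search(items, target):
    """Return (index, steps); index is -1 if target is absent."""
    steps = 0
    for i, x in enumerate(items):
        steps += 1          # one abstract "step" per element examined
        if x == target:
            return i, steps
    return -1, steps

# Worst case: the target is last, or missing entirely, so steps == n.
print(linear_search(list(range(100)), 99))   # → (99, 100)
print(linear_search(list(range(100)), 500))  # → (-1, 100)
```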
 
I agree that saying 'time' is probably better than 'instructions', since you need a proportionality constant either way. But I think that the worst-case distinction is important in the case of sorting, where one popular algorithm (you know the one... Hoare's QuickSort) has very different behavior on average and in the worst case.
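That average-vs-worst-case gap is easy to demonstrate. Below is a simplified quicksort sketch (first-element pivot and list partitioning, not Hoare's in-place scheme, and my addition rather than anything from the thread) that counts its partition comparisons; on random input it does roughly n*log2(n) work, while already-sorted input triggers the quadratic worst case:

```python
import random

def quicksort_comparisons(items):
    """Count partition comparisons made by a first-element-pivot quicksort."""
    count = 0

    def sort(a):
        nonlocal count
        if len(a) <= 1:
            return a
        pivot, rest = a[0], a[1:]
        count += len(rest)  # one comparison per element partitioned
        left = [x for x in rest if x < pivot]
        right = [x for x in rest if x >= pivot]
        return sort(left) + [pivot] + sort(right)

    sort(list(items))
    return count

n = 300
random.seed(0)
print("random input:", quicksort_comparisons(random.sample(range(n), n)))  # ~ n*log2(n)
print("sorted input:", quicksort_comparisons(list(range(n))))  # n*(n-1)/2 = 44850
```

On sorted input every pivot is the minimum, so each recursive call only shrinks the problem by one element, giving the (n-1) + (n-2) + ... + 1 = n(n-1)/2 quadratic comparison count.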
 
