What is Big-Oh Notation and How Does it Measure Algorithm Efficiency?

  • Thread starter Montag42
In summary, Big-O notation is a way of describing the efficiency of an algorithm in relation to the size of the problem. It helps us understand how fast an algorithm will run: O(n) means the time is proportional to the size of the problem, while O(n^2) means the time grows as the square of the problem size. It is important to note that this notation usually describes the worst-case scenario, and running times can vary greatly depending on the algorithm used.
  • #1
Montag42
Hello everyone,

I'm doing a relatively simple report/project in my science class (8th grade) on the efficiencies of various sorting algorithms, and I need to explain Big-O notation in a way that is:

1) Easy for a middle-aged teacher whose knowledge of a computer pretty much stops at "It uses electricity"

and

2) Compact enough to fit on a poster board so that people don't give up on reading it and walk away.

So what I've come up with so far is
Big O gives the upper bound for time complexity of an algorithm. For example, an algorithm that is O(n) (where n is the number of items to be processed) means that the highest amount of instructions executed is equal to the number of instructions in the algorithm. If an algorithm is O(n^2), that would mean that the highest number of instructions executed is equal to the number of instructions in the program squared.

Any suggestions as to how I could make this better, and (even better) is anything up there wrong?
 
  • #2
Strictly, what you wrote is wrong. An algorithm that is O(n) takes at most kn steps for some k, which is often greater than 1.

In the context of sorting, to give your readers intuition, I would certainly explain that O(n^2) algorithms are "slow" and that O(n log n) algorithms are "fast".
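The "kn steps for some k" point can be made concrete with a small Python sketch (the function name is mine, purely illustrative): bubble sort, a classic O(n^2) sorter, makes about n^2/2 comparisons, i.e. kn^2 steps with k = 1/2 -- the constant that the O() notation hides.

```python
# Illustrative sketch: bubble sort makes roughly n^2/2 comparisons,
# i.e. k*n^2 steps with k = 1/2 -- the constant the O() hides.
import random

def bubble_sort_comparisons(items):
    """Bubble-sort a copy of items and return how many comparisons were made."""
    a = list(items)
    count = 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            count += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return count

data = [random.random() for _ in range(1000)]
print(bubble_sort_comparisons(data))  # always n*(n-1)/2 = 499500 comparisons
```

Note that the comparison count does not depend on the data: the nested loops always run n(n-1)/2 times, which is why the algorithm is Θ(n^2) in every case, not just the worst one.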
 
  • #3
What is the actual intent? I suspect it's unlikely that explaining Big-Oh notation is the best way to convey the point you're trying to make...
 
  • #4
"Big O gives the upper bound for time complexity of an algorithm."
This is a little wordy for a general, non-technical description of big-oh notation. You may want to make it more colloquial. For instance...
"Big O tells you how slow a certain algorithm may be in the worst case."
"Big O notation provides a means for describing the worst-case running time of a certain computational procedure"
etc.
The content is good, though.


"For example, an algorithm that is O(n) (where n is the number of items to be processed) means that the highest amount of instructions executed is equal to the number of instructions in the algorithm."
This is misleading... the wording reads well, but the "n" in O(n) has nothing to do with the number of instructions in the algorithm; it is the size of the problem. It's generally true that larger problems cause more instructions to be executed. I'd phrase it a little more precisely.

"If an algorithm is O(n^2), that would mean that the highest number of instructions executed is equal to the number of instructions in the program squared."
See above.

I like CRGreatHouse's idea... I would explain why an O(n lg n) algorithm will beat an O(n^2) algorithm (usually... you could provide examples where it didn't, though) and why any sorting algorithm must take at least O(n)...

It's an interesting idea, and I think you could really make it work. Examples will be key. Include examples, if you can.
 
  • #5
csprof2000 said:
I like CRGreatHouse's idea... I would explain why an O(n lg n) algorithm will beat an O(n^2) algorithm (usually... you could provide examples where it didn't, though) and why any sorting algorithm must take at least O(n)...

Right. It would be worth mentioning that sorting algorithms* take at least n steps (every item must be examined), and that comparison-based sorting algorithms take at least log2(n!) = Θ(n log n) comparisons in the worst case.

* Assuming they always produce correct output and accept all inputs.
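That lower bound can be checked numerically with a quick sketch: log2(n!) tracks n·log2(n) (by Stirling's approximation), which is why roughly n log n comparisons are unavoidable for any comparison sort.

```python
# Sketch: the information-theoretic lower bound for comparison sorting.
# log2(n!) grows like n*log2(n) (Stirling's approximation).
import math

for n in (10, 100, 1000):
    lower_bound = math.log2(math.factorial(n))
    print(f"n={n:5d}  log2(n!)={lower_bound:9.1f}  n*log2(n)={n * math.log2(n):9.1f}")
```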
 
  • #6
I would not bother with details like 'in the worst case', etc.
What's wrong with simply saying something like:
Big-O notation is a way of describing how fast an algorithm is compared to the size of the task. For instance, in the case of sorting, O(n) means the time taken is proportional to the number of items to sort (n). If it's O(n^2), then it grows as the square of the number of items. And so forth.

Saying 'number of instructions executed' is a pretty complicated way of conveying the simple idea 'amount of time'. It's also incorrect by virtue of being more specific than the actual definition, which is in terms of abstract 'steps'. Which can be assumed to be proportional to instructions, but you might as well assume it's proportional to time - which is what people are interested in anyway.
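The "proportional to" framing also suggests a poster-friendly table: double the task size and see what happens to each growth rate. A tiny sketch:

```python
# Sketch: how each growth rate responds when the task size doubles.
# O(n) work doubles, O(n log n) work a bit more than doubles,
# and O(n^2) work quadruples.
import math

for n in (1000, 2000, 4000):
    print(f"n={n:5d}  n log2 n={n * math.log2(n):8.0f}  n^2={n * n:9d}")
```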
 
  • #7
I agree that saying 'time' is probably better than 'instructions', since you need a proportionality constant either way. But I think that the worst-case distinction is important in the case of sorting, where one popular algorithm (you know the one... Hoare's QuickSort) has very different behavior on average and in the worst case.
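That average/worst-case gap is easy to demonstrate. A naive quicksort that always picks the first element as pivot (a sketch; the function name is mine) does about n log n comparisons on shuffled input, but exactly n(n-1)/2 on input that is already sorted:

```python
# Sketch: first-element-pivot quicksort -- fast on average, but it
# degrades to n*(n-1)/2 comparisons when the input is already sorted.
import random

def quicksort_comparisons(a):
    """Return the number of comparisons made by a first-pivot quicksort of a."""
    if len(a) <= 1:
        return 0
    pivot, rest = a[0], a[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return len(rest) + quicksort_comparisons(left) + quicksort_comparisons(right)

n = 500
print(quicksort_comparisons(random.sample(range(n), n)))  # typically a few thousand
print(quicksort_comparisons(list(range(n))))              # exactly n*(n-1)/2 = 124750
```

(Production quicksorts avoid this by picking a random or median-of-three pivot.)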
 

What is Big-Oh Notation?

Big-Oh Notation, also known as asymptotic notation, is a mathematical notation used to describe the limiting behavior of a function as its input approaches infinity. It is commonly used in computer science to analyze the time and space complexity of algorithms.

How is Big-Oh Notation used in computer science?

In computer science, Big-Oh Notation is used to classify the efficiency of algorithms in terms of time and space complexity. It allows us to compare the performance of different algorithms and determine which one is more efficient for a given problem.

What does the "O" in Big-Oh Notation stand for?

The "O" in Big-Oh Notation stands for "order of". It represents the upper bound of the growth rate of a function as its input size increases.

What is the difference between Big-Oh Notation and Big-Omega Notation?

Big-Oh Notation represents an upper bound on a function's growth rate, while Big-Omega Notation represents a lower bound. Informally, Big-Oh is often used to state a worst-case guarantee ("it takes no more than this order of time"), while Big-Omega states a minimum ("it takes at least this order of time").

How do I calculate the Big-Oh Notation of an algorithm?

To calculate the Big-Oh Notation of an algorithm, you need to determine the dominant term of the algorithm's time complexity, ignoring any constant factors. This dominant term will be the term inside the "O" in the Big-Oh Notation.
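As a sketch of the "dominant term" rule: for f(n) = 3n^2 + 5n + 2, the n^2 term dominates, and f(n)/n^2 settles toward the constant 3 as n grows, so f(n) is O(n^2).

```python
# Sketch: for f(n) = 3n^2 + 5n + 2, lower-order terms fade and
# f(n)/n^2 approaches the constant 3, so f(n) is O(n^2).
for n in (10, 1_000, 100_000):
    f = 3 * n**2 + 5 * n + 2
    print(f"n={n:6d}  f(n)/n^2 = {f / n**2:.4f}")
```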
