Opposites Attract: Understanding O Notation in Math & CS

In summary, the O symbol is used in mathematics and computer science to describe how a term T behaves relative to another term x^p. In the convention quoted here for mathematics, T is in O(x^p) if lim x->0 T/x^p is a constant, while in computer science T is in O(x^p) if lim x->∞ T/x^p is a constant. This may seem contradictory, but it is up to the user to specify the point that x approaches in the limit. The O symbol is therefore valid in both cases, though the results may differ depending on the chosen limit point.
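As a quick check of how the two conventions diverge, take the example raised in the thread below, comparing x^7 against x^5 (a worked computation added here purely for illustration):

```latex
% x^7 compared with x^5 under the two limit conventions:
% as x -> 0 the ratio tends to a constant (zero), so x^7 = O(x^5) in that sense;
% as x -> infinity the ratio diverges, so x^7 is not O(x^5) in that sense.
\[
  \lim_{x \to 0} \frac{x^{7}}{x^{5}} = \lim_{x \to 0} x^{2} = 0,
  \qquad
  \lim_{x \to \infty} \frac{x^{7}}{x^{5}} = \lim_{x \to \infty} x^{2} = \infty .
\]
```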
  • #1
Max.Planck
Hi, I noticed in mathematics the O symbol is used in the following way:

A term T is in O(x^p), if lim x->0 T/x^p=c, for a constant c.

While in computer science the O symbol is used in this way:

A term T is in O(x^p), if lim x->∞ T/x^p is a constant.

What gives, these two notations seem to be the complete opposite of each other?
 
  • #2
The O symbol is valid in both cases. It is up to you to define what the x limit is.
 
  • #3
mathman said:
The O symbol is valid in both cases. It is up to you to define what the x limit is.

But don't they contradict each other?

For example, in the first case x^7 is in O(x^5), but in the second case it is not.
 
  • #4
No, they are just two distinct cases of a general concept. We should always say "f(x) = O(g(x)) as x -> a" and specify a. They are using two different values of a and so getting two different results.
 
  • #5
HallsofIvy said:
No, they are just two distinct cases of a general concept. We should always say "f(x) = O(g(x)) as x -> a" and specify a. They are using two different values of a and so getting two different results.

Aha, thanks!
 

1. What is O Notation and why is it important in math and computer science?

O Notation, also known as Big O Notation, is a mathematical notation used to describe the asymptotic behavior of a function as the input size approaches infinity. It is important in math and computer science because it allows us to analyze the efficiency of algorithms and make informed decisions about which algorithm is the best to use in a given situation.

2. How is O Notation calculated and what does the result represent?

O Notation is worked out by looking at how fast an algorithm's runtime grows in relation to the input size. The result represents an upper bound on that growth, usually quoted for the worst-case scenario.
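As a rough illustration of that idea (a minimal Python sketch, not from the thread; the function name is made up), the loop below counts its own basic operations while summing a list, and the count grows in direct proportion to the input size, which is why such a loop is classified as O(n):

```python
def sum_list(values):
    """Sum a sequence while counting loop iterations as a proxy for 'basic operations'."""
    total = 0
    operations = 0
    for v in values:      # runs once per element
        total += v
        operations += 1
    return total, operations

# The operation count scales linearly with the input size, i.e. the loop is O(n).
for n in (10, 100, 1000):
    _, ops = sum_list(range(n))
    print(f"n = {n:4d} -> {ops} operations")
```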

3. What is the difference between O(1), O(n), and O(n^2) in terms of efficiency?

O(1) represents constant time, meaning the algorithm's runtime does not change regardless of the input size. O(n) represents linear time, meaning the runtime grows proportionally to the input size. O(n^2) represents quadratic time, meaning the runtime grows proportionally to the square of the input size. In terms of efficiency, O(1) is the most efficient, followed by O(n), with O(n^2) being the least efficient.
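To make the three classes concrete, here is a minimal Python sketch (the functions are illustrative examples, not taken from the discussion above): an indexed lookup does one step no matter how long the list is, a membership scan does at most one comparison per element, and a pairwise duplicate check compares on the order of n^2 pairs:

```python
def first_element(items):
    """O(1): one step, independent of len(items)."""
    return items[0]

def contains(items, target):
    """O(n): at most one comparison per element."""
    for item in items:
        if item == target:
            return True
    return False

def has_duplicate(items):
    """O(n^2): compares every pair of elements in the worst case."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

data = [3, 1, 4, 1, 5]
print(first_element(data), contains(data, 4), has_duplicate(data))
```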

4. How can O Notation be useful in practical applications?

O Notation can be useful in practical applications because it allows us to compare different algorithms and choose the most efficient one. It also helps us predict how an algorithm will perform as the input size increases, allowing us to plan and optimize our code for better performance.
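As one simplified example of that kind of comparison (assumed here for illustration rather than taken from the thread): summing the integers 1..n with a loop takes O(n) time, while the closed-form formula takes O(1), so for large n the formula is the clear choice:

```python
def sum_loop(n):
    """O(n): adds the integers 1..n one at a time."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    """O(1): closed-form n*(n+1)/2, the same amount of work for any n."""
    return n * (n + 1) // 2

# Both return the same value; only the growth of the work with n differs.
n = 1_000_000
assert sum_loop(n) == sum_formula(n)
print(sum_formula(n))  # 500000500000
```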

5. Are there any drawbacks to using O Notation?

One drawback of O Notation is that it is typically quoted for the worst-case scenario, so it may not accurately represent the algorithm's performance in all cases. It also does not take into account other factors such as memory usage or implementation details, so it should be used in conjunction with other analysis techniques for a complete understanding of an algorithm's efficiency.
