Can an O(n) algorithm's actual growth rate exceed n?

  • Thread starter: MinusTheBear
  • Tags: Notation
AI Thread Summary
Big-O notation measures the upper bound of an algorithm's resource requirements as input size n increases. An O(n) algorithm indicates linear growth, meaning it will not exceed a constant multiple of n, even though the actual growth rate can be expressed as kn + m, where k and m are constants. The worst-case growth rate for an O(n) algorithm remains linear, as it is bounded by a constant multiple of n. While O(n) can be classified under O(n^c) for c ≥ 1, it does not imply that the algorithm's growth exceeds linear behavior. Understanding these nuances is essential for accurately assessing algorithm efficiency.
MinusTheBear
I am learning about Big-O notation, and am kind of confused. So, Big-O notation is the upper-bound order that measures the amount of required resources as n grows sufficiently large. It essentially measures the scalability of an algorithm. My question, however, is with the notation. If an algorithm is O(n), this means it is linear and that the algorithm never exceeds this upper-bound order. I realize this means that the algorithm is technically also ##O(n^c)## where c ≥ 1.

Now, my question is two-fold, and maybe I've answered them myself:
1) Can the ACTUAL growth rate of an O(n) algorithm EXCEED n?
2) Is the actual growth rate of an O(n) algorithm in the worst case, then, n?

Based on my understanding it would seem that an O(n) algorithm can exceed n, since the actual growth rate may be kn+m, where k, m are positive integers, which leads me to believe that the actual growth rate in the worst case is not n.

Sorry if this all seems obvious, it all just seems very weird to me and very broad in what it actually means.
 
MinusTheBear said:
I am learning about Big-O notation, and am kind of confused. So, Big-O notation is the upper-bound order that measures the amount of required resources as n grows sufficiently large. It essentially measures the scalability of an algorithm. My question, however, is with the notation. If an algorithm is O(n), this means it is linear and that the algorithm never exceeds this upper-bound order. I realize this means that the algorithm is technically also ##O(n^c)## where c ≥ 1.
No, this is wrong. It means the algorithm takes less than ##c\cdot n## steps with some constant ##c##. ##O(n^c)## wouldn't be linear anymore.
Now, my question is two-fold, and maybe I've answered them myself:
1) Can the ACTUAL growth rate of an O(n) algorithm EXCEED n?
The growth, yes, as it is the ##c##-fold of ##n##; however, this is a constant multiple, which doesn't change the "times n" behavior. The growth rate, no, since this rate ##\dfrac{\Delta (c\cdot n)}{\Delta n} = c## is constant.
2) Is the actual growth rate of an O(n) algorithm in the worst case, then, n?
See above. Otherwise, please define growth rate.
Based on my understanding it would seem that an O(n) algorithm can exceed n, since the actual growth rate may be kn+m, where k, m are positive integers, which leads me to believe that the actual growth rate in the worst case is not n.
Whatever you mean by growth rate. Also, are ##k## and ##m## constant here? In that case we have ##kn+m \le (k+m)\cdot n = c\cdot n## for ##n \ge 1##, and the algorithm goes with ##O(n)##. Or is, e.g., ##m## dependent on ##n##, in which case you'll have to quantify this dependency?
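
For reference, this is the formal definition the estimate above relies on (a standard statement, added here as a side note):

$$f(n) \in O(g(n)) \iff \exists\, c > 0,\ n_0 \in \mathbb{N} : \ f(n) \le c \cdot g(n) \ \text{ for all } n \ge n_0.$$

With ##f(n) = kn+m## and ##g(n) = n##, the choice ##c = k+m## and ##n_0 = 1## witnesses ##kn+m \in O(n)##.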
 
MinusTheBear said:
I am learning about Big-O notation, and am kind of confused. So, Big-O notation is the upper-bound order that measures the amount of required resources as n grows sufficiently large. It essentially measures the scalability of an algorithm. My question, however, is with the notation. If an algorithm is O(n), this means it is linear and that the algorithm never exceeds this upper-bound order. I realize this means that the algorithm is technically also ##O(n^c)## where c ≥ 1.

Now, my question is two-fold, and maybe I've answered them myself:
1) Can the ACTUAL growth rate of an O(n) algorithm EXCEED n?
2) Is the actual growth rate of an O(n) algorithm in the worst case, then, n?

Based on my understanding it would seem that an O(n) algorithm can exceed n, since the actual growth rate may be kn+m, where k, m are positive integers, which leads me to believe that the actual growth rate in the worst case is not n.

Sorry if this all seems obvious, it all just seems very weird to me and very broad in what it actually means.

1) No; if an algorithm is in class O(n), then it runs in ##C\cdot n## time or less for any input size, where ##C## is a constant.
2) No; it is always at most ##C\cdot n## for some constant ##C## (in your question you implicitly assumed ##C=1##). So, yes, the running time for input size n can exceed n, but only by a constant multiple.
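
To make that concrete, here is a minimal sketch (the function and the step-counting scheme are invented for illustration) of a linear scan whose exact cost has the ##kn+m## shape yet stays within a constant multiple of ##n##:

```python
def count_steps_linear_scan(data):
    """Linear maximum scan that also reports its step count.

    The count is k*n + m with k = 1 and m = 2: one unit of work per
    element plus a constant setup cost. It exceeds n itself, but is
    bounded by C*n for C = 3 and all n >= 1 -- i.e. the scan is O(n).
    """
    steps = 2                 # constant setup: best value and counter
    best = None
    for x in data:            # n iterations
        steps += 1            # one unit of work per element
        if best is None or x > best:
            best = x
    return best, steps

# The exact cost n + 2 exceeds n, but 3*n dominates it for all n >= 1.
for n in (1, 10, 1000):
    _, steps = count_steps_linear_scan(list(range(n)))
    assert steps == n + 2 and steps <= 3 * n
```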
 
fresh_42 said:
No, this is wrong. It means the algorithm takes less than ##c\cdot n## steps with some constant ##c##. ##O(n^c)## wouldn't be linear anymore.
I know. It is quadratic if c > 1. However, by definition, and according to my book, since O(n) is no worse than ##O(n^c)## where c > 1, saying that an O(n) algorithm is O(n^2), for example, is correct -- but not the best fit. In fact, the author gives multiple examples, saying that the only case where it'd be wrong is saying that an O(n) algorithm is something smaller, like O(log n). Is this wrong?

fresh_42 said:
The growth, yes, as it is the ##c##-fold of ##n##; however, this is a constant multiple, which doesn't change the "times n" behavior. The growth rate, no, since this rate ##\dfrac{\Delta (c\cdot n)}{\Delta n} = c## is constant.

See above. Otherwise, please define growth rate.

Whatever you mean by growth rate. Also, are ##k## and ##m## constant here? In that case we have ##kn+m \le (k+m)\cdot n = c\cdot n## for ##n \ge 1##, and the algorithm goes with ##O(n)##. Or is, e.g., ##m## dependent on ##n##, in which case you'll have to quantify this dependency?

According to the author, the growth rate would be the growing resource (time/space) requirements that the algorithm has as the input (n) increases. I guess my confusion goes back to my first point about the algorithm with the quadratic order. Since the author made that point, it seems as though the growth of an O(n) algorithm could exceed n. However, it may just be the wording. I'll have to take a look once I have my book in front of me and provide a direct quote.

Thanks.
 
MinusTheBear said:
I know. It is quadratic if c > 1. However, by definition, and according to my book, since O(n) is no worse than ##O(n^c)## where c > 1, saying that an O(n) algorithm is O(n^2), for example, is correct -- but not the best fit. In fact, the author gives multiple examples, saying that the only case where it'd be wrong is saying that an O(n) algorithm is something smaller, like O(log n). Is this wrong?
According to the author, the growth rate would be the growing resource (time/space) requirements that the algorithm has as the input (n) increases. I guess my confusion goes back to my first point about the algorithm with the quadratic order. Since the author made that point, it seems as though the growth of an O(n) algorithm could exceed n. However, it may just be the wording. I'll have to take a look once I have my book in front of me and provide a direct quote.

Thanks.
You're right; if algorithm A is ##O(f)## for some function ##f##, then it is also ##O(g)## whenever ##f < g## (if ##f < g##, then ##f < 1\cdot g##, so ##A \le Cf < Cg##). In particular, for ##n > 1##, ##n^2 > n##, so if A is in ##O(n)## then it is in ##O(n^2)##, as well as in ##O(n^k)## for ##k \ge 2##, but, as you said, this is not the best fit.
 
If you have ##1 < c < 2## then you also have ##O(n) < O(n^c) < O(n^2)##, which are three distinct classes. And for ##c > 2## it isn't even quadratically bounded. So with only ##c > 1## given, all that can be said is that ##O(n^c)## isn't linear. Classes between linear and quadratic are e.g. ##O(n \log n)## or ##O(n \log n \log \log n)##. Take the matrix multiplication exponent as an example. It is less than cubic but more than quadratic, and nobody knows the least lower bound. Probably the infimum is ##O(n^2)##, but I'm not sure, it was too long ago.
https://en.wikipedia.org/wiki/Matrix_multiplication#Algorithms_for_efficient_matrix_multiplication
 
fresh_42 said:
If you have ##1 < c < 2## then you also have ##O(n) < O(n^c) < O(n^2)##, which are three distinct classes. And for ##c > 2## it isn't even quadratically bounded. So with only ##c > 1## given, all that can be said is that ##O(n^c)## isn't linear. Classes between linear and quadratic are e.g. ##O(n \log n)## or ##O(n \log n \log \log n)##. Take the matrix multiplication exponent as an example. It is less than cubic but more than quadratic, and nobody knows the least lower bound.
Least upper bound?
 
Matrix multiplication is ##O(n^c)## with ##\omega = \inf \{c\}## and I'm not sure whether ##\omega =2 ## or ##\omega > 2## or if it is known at all. The current record holder is ## c = 2.3728639##.
 
fresh_42 said:
Matrix multiplication is ##O(n^c)## with ##\omega = \inf \{c\}## and I'm not sure whether ##\omega =2 ## or ##\omega > 2## or if it is known at all. The current record holder is ## c = 2.3728639##.
No, I was referring to the expression "least lower bound". If L is a lower bound and L' < L, then L' is also a lower bound, so we can go on indefinitely finding lower bounds. Usual usage is the terms "least upper bound" (lub) and "greatest lower bound" (glb).
 
Yes, but ##\omega## is a least lower bound. All lower bounds are bounded by ##2## from below; the least needed amount is unknown, at least to me.

Correction: Guess you're right, it has to be the least upper and greatest lower.
 
fresh_42 said:
Yes, but ##\omega## is a least lower bound. All lower bounds are bounded by ##2## from below; the least needed amount is unknown, at least to me.

Correction: Guess you're right, it has to be the least upper and greatest lower.
Yes, I think you would want the greatest lower bound L, so that if L' > L then L' is not a lower bound, as I see it.
 
Lower bounds are far harder to get, so the greatest lower bound would be much more interesting than the least upper. I had been confused by the boundary of two, as we don't know how close we can get from above, and whether two is the infimum or two plus something is a lower bound. So a lower bound of 2.1 would be a sensation; an upper bound of 2.1 would only be nice, as the matrix dimensions required would probably be astronomical. But being able to prove ##\omega## greater than two would actually draw a line between the asymptotic linear limit (linear in the number of matrix entries) and the need for a fundamentally non-linear algorithm. The old question: are we too stupid to find a linear one, or is it inherently more difficult than linear?
 
Just to add to all the above: in order to describe the order of growth we use the Big Theta notation (##\Theta##). This gives the asymptotic order of growth. For example, ##\Theta(N^2)## is a shorthand for e.g. ##\frac{1}{2}N^2##, ##10N^2##, i.e. anything times ##N^2##, possibly plus lower-order linear or logarithmic terms of ##N##, etc. The Big Theta notation is used to classify algorithms.

Now, Big Oh notation (##O##) is used to develop upper bounds. So, when we say for instance ##O(N)## we mean less than some constant times ##N##, ##O(N^2)## is less than some constant times ##N^2## etc.

For lower bounds we use Big Omega notation (##\Omega##). When we say for instance ##\Omega(N^2)## we mean greater than some constant times ##N^2##.
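
For completeness, the three notations can be stated formally like this (standard definitions, added here for reference):

$$\begin{aligned}
f(N) \in O(g(N)) &\iff \exists\, c > 0,\ N_0 :\ f(N) \le c\, g(N) \text{ for all } N \ge N_0,\\
f(N) \in \Omega(g(N)) &\iff \exists\, c > 0,\ N_0 :\ f(N) \ge c\, g(N) \text{ for all } N \ge N_0,\\
f(N) \in \Theta(g(N)) &\iff f(N) \in O(g(N)) \text{ and } f(N) \in \Omega(g(N)).
\end{aligned}$$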
 
In order for something to be in a Big Oh class of something else, you want to show that some constant multiple of the something else outgrows your something in the long run. Let's look at a concrete example. Let's say that you want to prove that ##T(n) = n^2+3n+5 \in O(n^2)##. We want to find some ##c## and some ##n_0 \in \mathbb{N}## such that ##c \cdot n^2 \geq T(n)## for all ##n \ge n_0##. Let's see if we can do that. We know that for all ##n \ge 1##, ##n^2+3n^2+5n^2 \geq T(n)##, since all I did was replace every term by a multiple of ##n^2##, if it wasn't one already. Summing that up, I get ##9n^2 \geq T(n)##. Now I can pick ##n_0=1## and ##c=9##. Thus, I have shown that there exists a combination of ##c## and ##n_0## such that ##c \cdot n^2 \geq T(n)## for all ##n \geq n_0##, and therefore that ##T(n) \in O(n^2)##.
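
A quick numerical sanity check of that bound (a throwaway sketch, not part of the proof itself):

```python
# Verify that 9*n**2 >= n**2 + 3*n + 5 over a range of n >= 1,
# matching the constants c = 9 and n0 = 1 chosen in the proof above.
def T(n):
    return n**2 + 3*n + 5

assert all(9 * n**2 >= T(n) for n in range(1, 10_000))
print("9*n^2 dominates T(n) for all tested n >= 1")
```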
 
MinusTheBear said:
Based on my understanding it would seem that an O(n) algorithm can exceed n, since the actual growth rate may be kn+m, where k, m are positive integers, which leads me to believe that the actual growth rate in the worst case is not n.
This is correct. It simply shows a linear relationship. It does not tell you that sending twice as much data to your function takes exactly twice as much time; it means that, asymptotically, sending it twice as much data will take half as long as sending it four times as much data.

After O notation, you'll likely go into actually counting CPU cycles and creating maths for that, so don't get hung up on it. Honestly the only Os that matter are O(n), O(n^2), O(log n), and occasionally O(n^3). I rarely have algorithms that fit in any categories other than those.
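
One way to see that doubling behavior empirically (a rough sketch; the timings are machine-dependent and `linear_work` is just an arbitrary linear-time stand-in):

```python
import time

def linear_work(data):
    """An arbitrary O(n) stand-in: one pass over the input."""
    total = 0
    for x in data:
        total += x
    return total

# Time the same function on n, 2n, and 4n items; for a linear-time
# algorithm the measured times should roughly double at each step.
for n in (1_000_000, 2_000_000, 4_000_000):
    data = list(range(n))
    start = time.perf_counter()
    linear_work(data)
    print(f"n = {n:>9,}: {time.perf_counter() - start:.4f} s")
```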
 
newjerseyrunner said:
This is correct. It simply shows a linear relationship. It does not tell you that sending twice as much data to your function takes exactly twice as much time; it means that, asymptotically, sending it twice as much data will take half as long as sending it four times as much data.

After O notation, you'll likely go into actually counting CPU cycles and creating maths for that, so don't get hung up on it. Honestly the only Os that matter are O(n), O(n^2), O(log n), and occasionally O(n^3). I rarely have algorithms that fit in any categories other than those.
I also see O(n log n) when looking at sorting algorithms, so that might be useful as well.
 
  • Like
Likes newjerseyrunner