Is the Worst Runtime of This Algorithm Θ(n^2)?

  • Thread starter: TiberiusK
  • Tags: Algorithm, Runtime
AI Thread Summary
The discussion centers on analyzing the worst-case runtime of an algorithm that calculates the squared distance of the closest pair of points from a set of n points. The algorithm's structure involves a double loop, leading to a runtime of T(n) = O(n^2) due to the combination of n points taken two at a time. It is also established that T(n) = Ω(n^2), indicating that the algorithm's runtime grows at least as fast as n^2 for all inputs. Consequently, it is concluded that T(n) = Θ(n^2), as it satisfies both upper and lower bounds. The conversation emphasizes understanding big O, big Ω, and their implications for algorithm analysis.
TiberiusK
Hello guys, I decided to try a problem about analyzing the worst-possible runtime of an algorithm. Since I'm a beginner and I want to understand this properly, I'd appreciate some help.
I came across this problem in a book that uses the following algorithm:
Input: A set of n points (x1, y1), . . . , (xn, yn) with n ≥ 2.
Output: The squared distance of a closest pair of points.
ClosePoints
1. if n = 2 then return (x1 − x2)^2 + (y1 − y2)^2
2. else
3.   d ← ∞
4.   for i ← 1 to n − 1 do
5.     for j ← i + 1 to n do
6.       t ← (xi − xj)^2 + (yi − yj)^2
7.       if t < d then
8.         d ← t
9. return d
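For concreteness, here is a minimal runnable Python sketch of the pseudocode above (the function name and list-of-tuples representation are my own choices). Note that d must start at infinity, not zero, for the comparison t < d to ever update the running minimum.

```python
def close_points(points):
    """Return the squared distance of a closest pair among n >= 2 points."""
    n = len(points)
    if n == 2:
        (x1, y1), (x2, y2) = points
        return (x1 - x2) ** 2 + (y1 - y2) ** 2
    d = float("inf")                 # running minimum; must not start at 0
    for i in range(n - 1):           # i = 1 .. n-1 in the pseudocode
        for j in range(i + 1, n):    # j = i+1 .. n
            xi, yi = points[i]
            xj, yj = points[j]
            t = (xi - xj) ** 2 + (yi - yj) ** 2
            if t < d:
                d = t
    return d
```

Every pair (i, j) with i < j is examined exactly once, which is where the n-choose-2 count in the analysis below comes from.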
The questions are the following (T(n) represents the worst-possible runtime):
1. Prove that T(n) = O(n^2). For this part you must use the O(·) notation along with any appropriate properties.
2. Prove that T(n) = Ω(n^2). (In fact, the runtime of the algorithm is Ω(n^2) for all inputs of n points.)
3. Is it true to say that T(n) = Θ(n^2)? Justify your answer very briefly.

Now 3 depends on 1 and 2. As for 1, I know that we say f is O(g) (pronounced "f is big-oh of g") if and only if there exist n0 ∈ N and c > 0 in R such that for all n ≥ n0 we have
f(n) ≤ c·g(n).
We also say that f is Ω(g) if there exist n0 ∈ N and c > 0 in R such that for all n ≥ n0 we have
f(n) ≥ c·g(n).
I was thinking of either using small positive constants c1, c2, c3, c4, ... to represent the cost of executing lines 1, 2, 3, 4, ..., or maybe just writing O(1), O(1), ... directly instead of constants.
How should I solve the questions? Thanks in advance.
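One way to carry the per-line-constant idea through (the constants c_i below are illustrative labels for line costs, not taken from the book): the work outside the loops is constant, and the inner-loop body runs once per pair, so

T(n) \le c_1 + c_2 + \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} (c_3 + c_4) = c_1 + c_2 + (c_3 + c_4)\,\frac{n(n-1)}{2} \le c\, n^2

for all n \ge 1 with c = c_1 + c_2 + c_3 + c_4, which is exactly the definition of T(n) = O(n^2). For part 2, the inner-loop body always executes \frac{n(n-1)}{2} \ge \frac{n^2}{4} times once n \ge 2, so T(n) \ge c'\, n^2 with c' equal to a quarter of the cost of one iteration, giving T(n) = \Omega(n^2).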
 
In this case, the double loop produces all combinations of n things taken 2 at a time,

T(n) = c\ \binom{n}{2} = c\ \frac{n\,(n-1)}{2}

where c is some constant. The only variation in overhead is how often statement #8 (d ← t) is performed. I'm assuming you're not supposed to be reinventing how to derive this. If you list the number of iterations of the inner loop, (n-1, n-2, ... 2, 1), and then reverse the list and add the two lists, you get:

Code:
n-1 n-2 n-3 ...   3   2   1
  1   2   3 ... n-3 n-2 n-1
---------------------------
  n   n   n ...   n   n   n

There are (n-1) instances of n in the sum, and since two lists were used, divide this sum by 2 to get the number of iterations, (n) (n-1) / 2.

You could also use calculus, integrating the equation y = x + 1/2 from 0 to n−1, noting that the area under the line from 0 to 1 is 1, from 1 to 2 is 2, ..., from n−2 to n−1 is n−1, and the total area = 1/2 (n−1)^2 + 1/2 (n−1) = (n)(n−1)/2.
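The pairing argument above can be checked numerically; this small sketch (my own helper, not from the thread) counts the inner-loop iterations directly and compares the count with n(n−1)/2:

```python
def inner_iterations(n):
    """Count how many times the inner loop body of ClosePoints runs."""
    count = 0
    for i in range(1, n):              # i = 1 .. n-1
        for j in range(i + 1, n + 1):  # j = i+1 .. n
            count += 1
    return count

# The count matches n(n-1)/2 for every n tried.
for n in [2, 3, 10, 100]:
    assert inner_iterations(n) == n * (n - 1) // 2
```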
 
Thanks... I guess writing O(1)+O(1)+O(1)+... for the first lines doesn't make any difference, as the final result is not changed.
 
Why the equation y = x + 1/2? Also, how can I find a constant to prove 2?
 
TiberiusK said:
why the equation y = x + 1/2?

Because of this relationship between a summation of sequential integers and the integral of x + 1/2:

\sum_{i=1}^{n} i = \int_0^n (x + 1/2)\, dx
 
Thanks rcgldr, you've been most helpful... any advice on proving part 2? I have something, but I don't find it very solid as a proof.
 
I don't know what Ω(n^2) even means.
 