Is the Worst Runtime of This Algorithm Θ(n^2)?

  • Thread starter: TiberiusK
  • Tags: Algorithm, Runtime

Discussion Overview

The discussion revolves around analyzing the worst-case runtime of an algorithm designed to find the squared distance of the closest pair of points from a set of n points. Participants explore the implications of Big O, Big Omega, and Big Theta notations in the context of this algorithm's performance, focusing on theoretical aspects of algorithm analysis.

Discussion Character

  • Technical explanation
  • Mathematical reasoning
  • Homework-related

Main Points Raised

  • One participant outlines the algorithm and poses questions regarding the runtime analysis, specifically asking for help in proving T(n) = O(n^2) and T(n) = Ω(n^2).
  • Another participant calculates the number of iterations of the double loop, suggesting that T(n) can be expressed as c(n(n-1)/2), where c is a constant, and discusses the relationship between summation and integration to support their reasoning.
  • A participant questions the relevance of using the equation y = x + 1/2 in the context of proving the runtime and seeks clarification on how to find a constant for the Ω(n^2) proof.
  • One participant expresses confusion about the meaning of Ω(n^2).
  • Another participant suggests that writing O(1) for the initial lines does not affect the final result of the runtime analysis.

Areas of Agreement / Disagreement

There is no consensus on the proofs for T(n) = Ω(n^2) or the implications of the runtime notations. Participants express varying levels of understanding and confidence in their approaches, indicating that multiple competing views remain on how to effectively analyze the algorithm's runtime.

Contextual Notes

Some participants express uncertainty about the definitions and implications of the notations used in runtime analysis, indicating a potential gap in foundational knowledge that may affect their contributions.

TiberiusK
Hello guys, I decided to try a problem about analyzing the worst possible runtime of an algorithm. Since I'm a beginner and I want to understand this properly, I could use some help.
I came across this problem in a book that uses the following algorithm:
Input: A set of n points (x1, y1), . . . , (xn, yn) with n ≥ 2.
Output: The squared distance of a closest pair of points.
Code:
ClosePoints
1. if n = 2 then return (x1 − x2)^2 + (y1 − y2)^2
2. else
3.     d ← 0
4.     for i ← 1 to n − 1 do
5.         for j ← i + 1 to n do
6.             t ← (xi − xj)^2 + (yi − yj)^2
7.             if t < d then
8.                 d ← t
9. return d
The questions are the following; T(n) represents the worst possible runtime.
1. Prove that T(n) = O(n^2). For this part you must use the O(·) notation along with any appropriate properties.
2. Prove that T(n) = Ω(n^2). (In fact the runtime of the algorithm is Ω(n^2) for all inputs of n points.)
3. Is it true to say that T(n) = Θ(n^2)? Justify your answer very briefly.

Now 3 depends on 1 and 2. As for 1, I know that we say f is O(g) if and only if there is an n0 ∈ N and a c > 0 in R such that for all n ≥ n0 we have
f(n) ≤ c·g(n).
We also say that f is Ω(g) if there is an n0 ∈ N and a c > 0 in R such that for all n ≥ n0 we have
f(n) ≥ c·g(n).
I was thinking of either using small positive constants c1, c2, c3, c4, ... to represent the cost of executing lines 1, 2, 3, 4, ..., or maybe just writing O(1), O(1), ... directly instead of constants.
How should I solve the questions? Thanks in advance.
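To get a feel for the counting, here is a minimal Python sketch of the pseudocode above with an iteration counter added. One assumption: I initialize d to infinity instead of the book's d ← 0, since with d = 0 the test t < d would never fire; the loop structure, which is what the runtime analysis cares about, is unchanged.

```python
import math

def close_points(pts):
    """Return (squared distance of a closest pair, inner-loop iteration count)."""
    n = len(pts)
    assert n >= 2
    d = math.inf  # assumption: start at infinity so t < d can actually update d
    count = 0     # counts executions of the inner-loop body (lines 6-8)
    for i in range(n - 1):          # i = 1 .. n-1 in the pseudocode
        for j in range(i + 1, n):   # j = i+1 .. n
            count += 1
            xi, yi = pts[i]
            xj, yj = pts[j]
            t = (xi - xj) ** 2 + (yi - yj) ** 2
            if t < d:
                d = t
    return d, count
```

For any input of n points, count comes out to n(n−1)/2, which is exactly the quantity the worst-case analysis below bounds.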
 
In this case, the double loop produces all combinations of n things taken 2 at a time,

T(n) = c\ \left ( \begin{array}{c} n \\ 2 \end{array} \right ) = c\ \frac{n\ (n-1)}{2}

where c is some constant. The only variation in overhead is how often statement #8 (d ← t) is performed. I'm assuming you're not supposed to be reinventing how to derive this. If you list the number of iterations of the inner loop, (n-1, n-2, ... 2, 1), and then reverse the list and add the two lists, you get:

Code:
n-1 n-2 n-3 ...   3   2   1
  1   2   3 ... n-3 n-2 n-1
---------------------------
  n   n   n ...   n   n   n

There are (n-1) instances of n in the sum, and since two lists were used, divide this sum by 2 to get the number of iterations, (n) (n-1) / 2.

You could also use calculus, integrating the equation y = x + 1/2 from 0 to n-1, noting that the area under the line from 0 to 1 is 1, from 1 to 2 is 2, ..., from n-2 to n-1 is n-1, so the total area is 1/2 (n-1)^2 + 1/2 (n-1) = (n) (n-1) / 2.
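The reversed-list trick can be checked numerically; a quick Python sketch (the function name is mine):

```python
def pair_count(n):
    """Sum 1 + 2 + ... + (n-1) via the reversed-list pairing argument."""
    forward = list(range(n - 1, 0, -1))   # n-1, n-2, ..., 2, 1
    backward = forward[::-1]              # 1, 2, ..., n-2, n-1
    column_sums = [a + b for a, b in zip(forward, backward)]
    assert all(s == n for s in column_sums)  # every column adds to n
    # (n-1) columns of n, counted twice, so halve the total
    return sum(column_sums) // 2

for n in range(2, 20):
    assert pair_count(n) == n * (n - 1) // 2
```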
 
Thanks... I guess writing O(1)+O(1)+O(1)+... for the first lines doesn't make any difference, as the final result is not changed.
 
Why the equation y = x + 1/2? Also, how can I find a constant to prove 2?
 
TiberiusK said:
why the equation y = x + 1/2?

Because of this relationship between a summation of sequential integers and the integral of x + 1/2:

\sum _{i=1} ^{n} i \ = \ \int _0 ^n \left ( x + \frac{1}{2} \right ) dx
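The identity holds exactly, since the antiderivative of x + 1/2 is x^2/2 + x/2 and (n^2 + n)/2 is the usual closed form for the sum. A small exact check with rationals:

```python
from fractions import Fraction

def antiderivative(x):
    # F(x) = x^2/2 + x/2 is an antiderivative of x + 1/2
    return Fraction(x * x + x, 2)

def check(n):
    lhs = sum(range(1, n + 1))                   # sum_{i=1}^{n} i
    rhs = antiderivative(n) - antiderivative(0)  # integral_0^n (x + 1/2) dx
    return lhs == rhs

assert all(check(n) for n in range(1, 50))
```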
 
thanks rcgldr you've been most helpful...any advice on proving part 2?I have something but I don't find it very solid as a proof
 
I don't know what Ω(n^2) even means.
 