Theta(n^2) running time of Quicksort Algorithm


Discussion Overview

The discussion revolves around the running time of the Quicksort algorithm, specifically exploring scenarios that lead to Theta(n^2) complexity. Participants examine different partitioning strategies and their implications for the algorithm's performance, focusing on theoretical analysis and recurrence relations.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant describes a scenario where Quicksort partitions an array into sub-arrays of sizes i and i(j-1), leading to a proposed recurrence relation for the running time.
  • Another participant challenges the factorization of n, arguing that the worst-case scenario involves a pivot splitting the array into sizes 0 and n-1, leading to a different analysis of the running time.
  • A participant clarifies that while the worst-case scenario is well-documented, they are attempting to show that their specific case also results in a Theta(n^2) running time.
  • Further clarification is provided regarding the asymptotic behavior of i and j, with i being a constant and j increasing as n increases, suggesting that the running time remains Theta(n^2) regardless of the value of i.
  • One participant suggests that if it can be shown that T(ij) = Theta(j^2), then it follows that the running time is also Theta(n^2) since i is constant.

Areas of Agreement / Disagreement

Participants express differing views on the scenarios leading to Theta(n^2) running time, with some supporting the worst-case analysis and others advocating for the validity of the proposed case. The discussion remains unresolved regarding the specific implications of the partitioning strategy on the running time.

Contextual Notes

Participants note that the analysis depends on the definitions of i and j, as well as the assumptions made about their behavior as n increases. The discussion highlights the difficulty of proving running-time bounds for specific partitioning scenarios, and no consensus is reached.

ATroelstein
I am trying to prove the following scenario of Quicksort has a running time of Theta(n^2). Initially, we only have arrays of size n, where n = ij. The idea is that at every partition step of Quicksort, you end up with two sub-arrays where one is of size i and the other is of size i(j-1). i in this case is an integer constant greater than 0. I have drawn out the recursive tree of some examples and understand why this is a worst-case scenario and that the running time will be Theta(n^2). To prove this, I've used repeated substitution to solve the recurrence equation:

T(n) = T(ij) = m if j = 1, where m just denotes a constant
T(n) = T(ij) = T(i) + T(i(j-1)) + cn if j > 1

T(i) = m
T(2i) = m + m + c*2i = 2m + 2ci
T(3i) = m + 2m + 2ci + 3ci = 3m + 5ci

So it looks like the unrolled recurrence gives:

$$T(n) = jm + ci\left(\sum_{k=1}^{j}k - 1\right)$$

At this point, I'm a bit lost as to what to do. It looks like the summation at the end will be on the order of j^2 when expanded out, but I need to show that this somehow gives Theta(n^2). Any explanation of how to continue with this would be appreciated.
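
Not as a proof, but as a quick sanity check, the recurrence can be evaluated directly as written above. The sketch below assumes the constants are m = c = 1 and prints the ratio T(n)/n^2, which should level off at a constant rather than shrink to 0 or blow up if the growth really is quadratic.

Code:
# Evaluate T(ij) = T(i) + T(i(j-1)) + c*i*j iteratively, with base case T(i) = m.
# m and c are set to 1 purely for illustration; i is a fixed positive integer.
def T(i, j, m=1, c=1):
    total = m                     # T(i) = m
    for k in range(2, j + 1):     # build up T(2i), T(3i), ..., T(ij)
        total = m + total + c * i * k
    return total

i = 3
for j in (10, 100, 1000):
    n = i * j
    print(n, T(i, j), T(i, j) / n**2)   # the ratio settles near c/(2*i)

Seeing the ratio flatten out is consistent with the Theta(n^2) claim, but the algebraic argument still has to be made.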
 
ATroelstein said:
I am trying to prove the following scenario of Quicksort has a running time of Theta(n^2). Initially, we only have arrays of size n, where n = ij. The idea is that at every partition step of Quicksort, you end up with two sub-arrays where one is of size i and the other is of size i(j-1). i in this case is an integer constant greater than 0. I have drawn out the recursive tree of some examples and understand why this is a worst-case scenario and that the running time will be Theta(n^2). To prove this, I've used repeated substitution to solve the recurrence equation:

T(n) = T(ij) = m if j = 1, where m just denotes a constant
T(n) = T(ij) = T(i) + T(i(j-1)) + cn if j > 1

T(i) = m
T(2i) = m + m + c*2i = 2m + 2ci
T(3i) = m + 2m + 2ci + 3ci = 3m + 5ci

So it looks like the unrolled recurrence gives:

$$T(n) = jm + ci\left(\sum_{k=1}^{j}k - 1\right)$$

At this point, I'm a bit lost as to what to do. It looks like the summation at the end will be on the order of j^2 when expanded out, but I need to show that this somehow gives Theta(n^2). Any explanation of how to continue with this would be appreciated.


Why are you factorising \(n\)?

In the worst case the pivot splits the array of size \(n\) into arrays of size \(0\) and size \(n-1\). This splitting requires \(n-1\) comparisons, so:

\[T_{wc}(n)= k(n-1)+T_{wc}(n-1)=k\sum_{r=1}^{n-1} r=k\,\frac{n(n-1)}{2}=\frac{k}{2}(n^2-n)\]

which implies that \(T_{wc}\in \Theta(n^2)\).

CB
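
A quick numerical check of this closed form, assuming a base case of T_wc(1) = 0 and taking the comparison constant k to be 1, compares the unrolled recurrence against n(n-1)/2:

Code:
# Worst-case recurrence T_wc(n) = k*(n-1) + T_wc(n-1) with T_wc(1) = 0 and k = 1,
# evaluated iteratively and compared with the closed form n*(n-1)/2.
def T_wc(n, k=1):
    total = 0
    for size in range(2, n + 1):
        total += k * (size - 1)
    return total

for n in (10, 100, 1000):
    print(n, T_wc(n), n * (n - 1) // 2)   # the last two columns agree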
 
CaptainBlack said:
Why are you factorising \(n\)?

In the worst case the pivot splits the array of size \(n\) into arrays of size \(0\) and size \(n-1\). This splitting requires \(n-1\) comparisons, so:

\[T_{wc}(n)= k(n-1)+T_{wc}(n-1)=k\sum_{r=1}^{n-1} r=k\,\frac{n(n-1)}{2}=\frac{k}{2}(n^2-n)\]

which implies that \(T_{wc}\in \Theta(n^2)\).

CB
Thank you for your reply CaptainBlack. I am factorizing n because the specific problem I am working on asks me to show that, while the case you described, with sub-arrays of size 0 and n-1, is the absolute worst-case scenario, the scenario I gave also has a running time of Theta(n^2). I am able to find many examples of the scenario you gave in textbooks and online, but unfortunately I am unable to find any examples of the case I have given above, which is why I've become so stuck at this point. Thanks.
 
ATroelstein said:
Thank you for your reply CaptainBlack. I am factorizing n because the specific problem I am working on asks me to show that, while the case you described, with sub-arrays of size 0 and n-1, is the absolute worst-case scenario, the scenario I gave also has a running time of Theta(n^2). I am able to find many examples of the scenario you gave in textbooks and online, but unfortunately I am unable to find any examples of the case I have given above, which is why I've become so stuck at this point. Thanks.

Can you tell us exactly what the case you are considering is?

In particular, how are the i's and j's behaving asymptotically as n becomes large?

CB
 
Since i is an integer constant, its asymptotic behavior won't depend on n. The asymptotic behavior of j, however, is that it increases as n increases, since n = ij. To make my scenario a little clearer, we could look at the case where i = 1. This falls into my scenario and is just one step away from the worst case you described, as now we have sub-arrays of size 1 and n-2. The next case in my scenario would be where i = 2, giving sub-arrays of size 2 and n-3, and so on. So now I'm trying to show that no matter what the integer constant i is, the running time will still be Theta(n^2), since j becomes large as n becomes large but i does not. Thank you.
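
To illustrate this numerically, the rough sketch below reuses the recurrence from the first post with m = c = 1 and prints T(n)/n^2 for a few fixed values of i. The ratio levels off at a constant that depends on i but stays strictly positive and finite, which is exactly the Theta(n^2) behaviour being claimed.

Code:
# For several fixed values of i, evaluate T(ij) = T(i) + T(i(j-1)) + c*i*j
# (with m = c = 1) and watch T(n)/n^2 approach the constant c/(2*i).
def T(i, j, m=1, c=1):
    total = m
    for k in range(2, j + 1):
        total = m + total + c * i * k
    return total

for i in (1, 2, 5):
    j = 10_000
    n = i * j
    print(f"i={i}: T(n)/n^2 = {T(i, j) / n**2:.4f}")   # roughly 0.5, 0.25, 0.1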
 
ATroelstein said:
Since i is an integer constant, its asymptotic behavior won't depend on n. The asymptotic behavior of j, however, is that it increases as n increases, since n = ij. To make my scenario a little clearer, we could look at the case where i = 1. This falls into my scenario and is just one step away from the worst case you described, as now we have sub-arrays of size 1 and n-2. The next case in my scenario would be where i = 2, giving sub-arrays of size 2 and n-3, and so on. So now I'm trying to show that no matter what the integer constant i is, the running time will still be Theta(n^2), since j becomes large as n becomes large but i does not. Thank you.

If you can show that:

\[T(ij)=\Theta(j^2)\]

you are done, since in this context \(i\) is a constant and so \(\Theta(j^2)=\Theta(i^2j^2)=\Theta(n^2) \)

CB
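
To spell that last step out using the closed form from the first post (with m and c the constants defined there), one possible way to finish is:

\[T(n) = T(ij) = jm + ci\left(\frac{j(j+1)}{2} - 1\right) = \frac{c}{2}\,ij^2 + \left(m + \frac{ci}{2}\right)j - ci.\]

Since \(i\) is a fixed positive constant, substituting \(j = n/i\) gives

\[T(n) = \frac{c}{2i}\,n^2 + \left(\frac{m}{i} + \frac{c}{2}\right)n - ci,\]

so the quadratic term dominates and \(T(n)\in\Theta(n^2)\) for every fixed \(i\), as claimed.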
 
