MHB Theta(n^2) running time of Quicksort Algorithm

AI Thread Summary
The discussion revolves around proving that a specific scenario of the Quicksort algorithm has a running time of Theta(n^2). The user describes a case where, at every partition step, the array is split into sub-arrays of sizes i and i(j-1), with i a positive integer constant and n = ij. They derive a recurrence relation and ask how to show that the resulting summation leads to n^2. A participant points out that if the user can show T(ij) is Theta(j^2), they can conclude that the running time is indeed Theta(n^2), since i remains constant as n increases. The conversation emphasizes the relationship between the parameters and the asymptotic behavior of the algorithm.
ATroelstein
I am trying to prove the following scenario of Quicksort has a running time of Theta(n^2). Initially, we only have arrays of size n, where n = ij. The idea is that at every partition step of Quicksort, you end up with two sub-arrays where one is of size i and the other is of size i(j-1). i in this case is an integer constant greater than 0. I have drawn out the recursive tree of some examples and understand why this is a worst-case scenario and that the running time will be Theta(n^2). To prove this, I've used repeated substitution to solve the recurrence equation:

T(n) = T(ij) = m if j = 1, where m just denotes a constant
T(n) = T(ij) = T(i) + T(i(j-1)) + cn if j > 1

T(i) = m
T(2i) = m + m + c*2i = 2m + 2ci
T(3i) = m + 2m + 2ci + 3ci = 3m + 5ci

So it looks like the solution to the recurrence is:

$$T(n) = jm + ci\left(\sum_{k=1}^{j}k - 1\right)$$

At this point, I'm a bit lost as to what to do. It looks like the summation at the end will give a j^2 term if expanded out, but I need to show that it somehow leads to n^2. Any explanation on how to continue with this would be appreciated.
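
As a sanity check on that pattern, a quick Python sketch can evaluate the recurrence directly and compare it with the conjectured closed form. The helper names cost and closed_form and the placeholder constants m = c = 1 are illustrative assumptions, not part of the problem:

def cost(i, j, m=1, c=1):
    # T(i) = m; T(ij) = T(i) + T(i(j-1)) + c*ij for j > 1
    if j == 1:
        return m
    return m + cost(i, j - 1, m, c) + c * i * j

def closed_form(i, j, m=1, c=1):
    # conjectured: T(ij) = jm + ci*(sum_{k=1}^{j} k - 1) = jm + ci*(j(j+1)/2 - 1)
    return j * m + c * i * (j * (j + 1) // 2 - 1)

for i in (1, 2, 5):
    for j in (1, 2, 3, 10, 50):
        assert cost(i, j) == closed_form(i, j)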
 
ATroelstein said:
I am trying to prove the following scenario of Quicksort has a running time of Theta(n^2). Initially, we only have arrays of size n, where n = ij. The idea is that at every partition step of Quicksort, you end up with two sub-arrays where one is of size i and the other is of size i(j-1). i in this case is an integer constant greater than 0. I have drawn out the recursive tree of some examples and understand why this is a worst-case scenario and that the running time will be Theta(n^2). To prove this, I've used repeated substitution to solve the recurrence equation:

T(n) = T(ij) = m if j = 1, where m just denotes a constant
T(n) = T(ij) = T(i) + T(i(j-1)) + cn if j > 1

T(i) = m
T(2i) = m + m + c*2i = 2m + 2ci
T(3i) = m + 2m + 2ci + 3ci = 3m + 5ci

So it looks like the solution to the recurrence is:

$$T(n) = jm + ci\left(\sum_{k=1}^{j}k - 1\right)$$

At this point, I'm a bit lost as to what to do. It looks like the summation at the end will give a j^2 term if expanded out, but I need to show that it somehow leads to n^2. Any explanation on how to continue with this would be appreciated.


Why are you factorising \(n\)?

In the worst case the pivot splits the array of size \(n\) into arrays of size \(0\) and size \(n-1\). This splitting requires \(n-1\) comparisons, so:

\[T_{wc}(n)= k(n-1)+T_{wc}(n-1)=k\sum_{r=1}^{n-1} r=k\,\frac{n(n-1)}{2}=\frac{k}{2}(n^2-n)\]

which implies that \(T_{wc}\in \Theta(n^2)\).

CB
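
As a rough check of that count, here is a minimal Python sketch, assuming a first-element pivot and charging \(n-1\) comparisons per partition call as in the model above (a simplification; the two list comprehensions actually scan the rest of the array twice):

def quicksort_comparisons(a):
    # First-element pivot; charge len(rest) = n-1 comparisons per partition call.
    if len(a) <= 1:
        return 0
    pivot, rest = a[0], a[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return len(rest) + quicksort_comparisons(left) + quicksort_comparisons(right)

# On already-sorted input every split is 0 versus n-1:
for n in (2, 10, 100):
    assert quicksort_comparisons(list(range(n))) == n * (n - 1) // 2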
 
CaptainBlack said:
Why are you factorising \(n\)?

In the worst case the pivot splits the array of size \(n\) into arrays of size \(0\) and size \(n-1\). This splitting requires \(n-1\) comparisons, so:

\[T_{wc}(n)= k(n-1)+T_{wc}(n-1)=k\sum_{r=1}^{n-1} r=k\,\frac{n(n-1)}{2}=\frac{k}{2}(n^2-n)\]

which implies that \(T_{wc}\in \Theta(n^2)\).

CB
Thank you for your reply, CaptainBlack. I am factorizing n because the specific problem I am working on is to show that, while the case you described, with sub-arrays of size 0 and n-1, is the absolute worst-case scenario, the scenario I gave also has a running time of Theta(n^2). I am able to find many examples of the scenario you gave in textbooks and online, but unfortunately I am unable to find any examples of the case that I have given above, and that's why I've become so stuck at this point. Thanks.
 
ATroelstein said:
Thank you for your reply, CaptainBlack. I am factorizing n because the specific problem I am working on is to show that, while the case you described, with sub-arrays of size 0 and n-1, is the absolute worst-case scenario, the scenario I gave also has a running time of Theta(n^2). I am able to find many examples of the scenario you gave in textbooks and online, but unfortunately I am unable to find any examples of the case that I have given above, and that's why I've become so stuck at this point. Thanks.

Can you tell us exactly what the case you are considering is?

In particular, how are the i's and j's behaving asymptotically as n becomes large?

CB
 
Since i is an integer constant, its asymptotic behavior won't depend on n. The asymptotic behavior of j, however, is that it increases as n increases, since n = ij. To make my scenario a little more clear, we could look at the case where i = 1. This falls into my scenario and is just one step away from the worst case you described, as now we have sub-arrays of size 1 and n-2. The next case in my scenario would be where i = 2 and we have sub-arrays of size 2 and n-3, and so on. So now I'm trying to show that no matter what the integer constant i is, the running time will still be Theta(n^2), since j becomes large as n becomes large while i does not. Thank you.
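
To picture that behavior, here is a small sketch (the function levels is a hypothetical helper, not part of the problem): each recursion level peels off a chunk of size i, so there are roughly j = n/i levels and each level still pays a partition cost proportional to the remaining array.

def levels(n, i):
    # sizes of the two sub-arrays produced at each partition step
    sizes = []
    while n > i:
        sizes.append((i, n - i))
        n -= i
    return sizes

print(levels(12, 2))  # [(2, 10), (2, 8), (2, 6), (2, 4), (2, 2)]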
 
ATroelstein said:
Since i is an integer constant, its asymptotic behavior won't depend on n. The asymptotic behavior of j, however, is that it increases as n increases, since n = ij. To make my scenario a little more clear, we could look at the case where i = 1. This falls into my scenario and is just one step away from the worst case you described, as now we have sub-arrays of size 1 and n-2. The next case in my scenario would be where i = 2 and we have sub-arrays of size 2 and n-3, and so on. So now I'm trying to show that no matter what the integer constant i is, the running time will still be Theta(n^2), since j becomes large as n becomes large while i does not. Thank you.

If you can show that:

\[T(ij)=\Theta(j^2)\]

you are done, since in this context \(i\) is a constant and so \(\Theta(j^2)=\Theta(i^2j^2)=\Theta(n^2) \)

CB
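
For completeness, one way to carry out that last step, assuming the closed form conjectured in the first post, is to substitute \(j = n/i\):

\[T(n) = jm + ci\left(\frac{j(j+1)}{2} - 1\right) = \frac{m}{i}\,n + \frac{c}{2i}\,n^2 + \frac{c}{2}\,n - ci,\]

and since \(c > 0\) and \(i\) is a positive constant, the \(n^2\) term dominates, giving \(T(n)\in\Theta(n^2)\).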
 