MHB Explaining Proof of Quick-Sort's O(n log n) Expected Running Time

AI Thread Summary
The discussion centers on understanding the proof of quick-sort's expected running time of O(n log n). The algorithm involves selecting a random pivot and recursively sorting the partitions. The expected time T(n) is derived using a recurrence relation that accounts for the time complexity of partitioning and the recursive calls. The participants explore how to establish that T(n) remains bounded by k n log n through mathematical induction. The conversation emphasizes the importance of the recurrence relation and the need for a structured approach to complete the proof.
evinda
Hello! (Wave)

View attachment 4422

View attachment 4423

I am trying to understand the proof that the expected running time of quick-sort is $O(n \log n)$.

Could you maybe explain it to me?
 

Attachments

  • worst.PNG
  • proof1.PNG
Suppose that we use the following algorithm:

Code:
Quicksort(S):
1.  if S contains at most one element then return S
    else
2.       choose an element a randomly from S;
3.       let S1, S2, and S3 be the sequences of elements in S less than, equal to, and greater than a, respectively;
4.       return Quicksort(S1) followed by S2 followed by Quicksort(S3)
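For concreteness, here is a minimal runnable sketch of the same three-way scheme in Python; the function name quicksort and the use of random.choice are my own choices for illustration, not part of the pseudocode above.

Code:
import random

def quicksort(S):
    # Line 1: a sequence with at most one element is already sorted.
    if len(S) <= 1:
        return list(S)
    # Line 2: choose a pivot a uniformly at random from S.
    a = random.choice(S)
    # Line 3: split S into elements smaller than, equal to, and greater than a.
    S1 = [x for x in S if x < a]
    S2 = [x for x in S if x == a]
    S3 = [x for x in S if x > a]
    # Line 4: sort the two strict sides recursively and concatenate.
    return quicksort(S1) + S2 + quicksort(S3)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]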
I have tried the following:

Let $T(n)$ be the expected time required by the Quicksort algorithm to sort a sequence of $n$ elements. We have $T(0)=T(1)=b$ for a constant $b$.

We suppose that the chosen element $a$ is the $i$-th smallest of the $n$ elements of the sequence $S$.
Then the two recursive calls of Quicksort have expected time $T(i-1)$ and $T(n-i)$, respectively.
Since lines 2–3 require $O(n)$ time, line 1 requires $O(1)$ time, and the pivot is equally likely to be any of the $n$ ranks, we have the following recurrence relation:

$$T(n) \leq cn + \frac{1}{n} \sum_{i=1}^n \left[ T(i-1)+T(n-i)\right], \qquad n \geq 2$$

Since $\sum_{i=1}^n T(i-1) = \sum_{i=1}^n T(n-i) = \sum_{i=0}^{n-1} T(i)$, this becomes

$$T(n) \leq cn + \frac{2}{n} \sum_{i=0}^{n-1} T(i) \qquad (1)$$
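As a quick numerical sanity check of recurrence (1) against the claimed $k n \log n$ bound, one can iterate the recurrence directly; the concrete constants $c = b = 1$ below are my own assumption, chosen only for illustration.

Code:
import math

# Assumed constants for illustration only: c = b = 1, and k = 2(c + b) as in the post.
c, b = 1.0, 1.0
k = 2 * (c + b)

# Take recurrence (1) with equality as a pessimistic estimate of T(n).
T = [b, b]  # T(0) = T(1) = b
for n in range(2, 51):
    T.append(c * n + (2.0 / n) * sum(T))

# Compare T(n) with k * n * log2(n) for a few values of n.
for n in (2, 5, 10, 25, 50):
    print(n, round(T[n], 1), round(k * n * math.log2(n), 1))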

We will show that, for $n \geq 2$, $T(n) \leq k n \log n$, where $k=2(c+b) > 0$.

For $n=2$: from relation (1) we have $T(2) \leq 2c + \frac{2}{2}\big(T(0)+T(1)\big) = 2c + 2b = 2(c+b) = k \leq k \cdot 2 \log 2$ $\checkmark$

We suppose that $T(n) \leq k n \log n$ for some $n \geq 2$.

We will show that $T(n+1) \leq k (n+1) \log (n+1)$.

From relation (1) we have

$$T(n+1) \leq c(n+1) + \frac{2}{n+1} \sum_{i=0}^{n} T(i) \leq c(n+1) + \frac{2}{n+1} (n+1) T(n) = c(n+1) + 2T(n) \leq c(n+1) + 2 k n \log n \leq (n+1) (c + 2k \log n)$$

How could we continue? Or should we use strong induction?
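Strong induction is the natural choice here, since relation (1) involves all of $T(0), \dots, T(n-1)$ rather than only the previous term. A sketch of one common way to finish (not necessarily the argument in the attached proof, with $\log = \log_2$ and the constant $k$ possibly enlarged): assume $T(i) \leq k i \log i$ for all $2 \leq i < n$. Using the bound $\sum_{i=2}^{n-1} i \log i \leq \frac{1}{2} n^2 \log n - \frac{1}{8} n^2$, which can be shown by splitting the sum at $n/2$, relation (1) gives

$$T(n) \leq cn + \frac{2}{n} \Big( 2b + k \sum_{i=2}^{n-1} i \log i \Big) \leq cn + \frac{4b}{n} + k n \log n - \frac{k n}{4} \leq k n \log n,$$

provided $k$ is chosen large enough that $cn + \frac{4b}{n} \leq \frac{kn}{4}$ for all $n \geq 2$ (for example $k \geq 4(c+b)$).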
 
