How do you reason that an algorithm has n*log n time complexity?

AI Thread Summary
The discussion centers on the time complexity of the quicksort algorithm, specifically the misconception that all divide-and-conquer algorithms inherently run in O(n*log n). Quicksort achieves this bound in the best case, but the bound does not follow automatically from an algorithm being divide-and-conquer. To determine the runtime complexity, one must analyze the execution structure of the algorithm, particularly the nesting of loops and recursive calls, examining the bounds of each loop or recursive call to derive a complexity factor. The conversation also briefly mentions merge sort as an alternative to quicksort, noting its higher memory usage. Overall, understanding the specific operations and how they are nested is crucial for evaluating algorithm complexity.
Pithikos
Some time ago I wrote an exam where I had to write pseudocode to sort elements in a list. I used a quicksort with the pivot in the middle and wrote that the algorithm has a best-case time complexity of O(n*log n) because it is a divide-and-conquer algorithm. That didn't satisfy my teacher, and he pointed out that not all divide-and-conquer algorithms are n*log n.
So how would someone reason about this? (Taking quicksort as an example)
 
You'd either have to add counters for the number of operations done in a quicksort program and actually run it with some test sets of data, or derive a mathematical expression for the number of operations by analyzing the program. I'm not aware of any generic method that would apply to all divide-and-conquer algorithms.
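As a minimal sketch of the empirical approach, here is a middle-pivot quicksort in Python with a comparison counter (the function and variable names are illustrative, not from the original posts). For random input, the printed counts grow roughly in proportion to n*log2(n):

```python
import random
import math

comparisons = 0  # global counter for element comparisons

def quicksort(a, lo, hi):
    """In-place quicksort, pivot taken from the middle of the current range."""
    global comparisons
    if lo >= hi:
        return
    pivot = a[(lo + hi) // 2]
    i, j = lo, hi
    while i <= j:
        while a[i] < pivot:
            comparisons += 1
            i += 1
        comparisons += 1  # count the failing comparison that ended the left scan
        while a[j] > pivot:
            comparisons += 1
            j -= 1
        comparisons += 1  # count the failing comparison that ended the right scan
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i += 1
            j -= 1
    quicksort(a, lo, j)
    quicksort(a, i, hi)

# Random data keeps the recursion depth near 2*log2(n), the "typical" case.
for n in (1_000, 10_000, 100_000):
    comparisons = 0
    data = [random.random() for _ in range(n)]
    quicksort(data, 0, n - 1)
    print(f"n={n:>7}  comparisons={comparisons:>10}  n*log2(n)={n * math.log2(n):>12.0f}")
```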

On a side note, an alternative to quicksort is merge sort, but it uses more memory (double if sorting the values directly, or double the number of pointers if sorting an array of pointers to objects).

Previous threads on sorting. I included some test results in post #16 of the thread in the first link:

https://www.physicsforums.com/showthread.php?t=218369

https://www.physicsforums.com/showthread.php?t=181589
 
Pithikos said:
Some time ago I wrote an exam where I had to write pseudocode to sort elements in a list. I used a quicksort with the pivot in the middle and wrote that the algorithm has a best-case time complexity of O(n*log n) because it is a divide-and-conquer algorithm. That didn't satisfy my teacher, and he pointed out that not all divide-and-conquer algorithms are n*log n.
So how would someone reason about this? (Taking quicksort as an example)

If you want to figure out the runtime complexity of an algorithm, the best way is to look at the execution nesting within the algorithm.

For example, let's say you have two nested loops, each going from 1 to some variable n. The run-time is going to be O(n^2). Also note that run-time complexity typically means "worst case" unless otherwise specified.
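A minimal sketch of that case (the function name is just for illustration):

```python
def count_pairs(items):
    """Two nested loops over n items: the body runs n * n times, so O(n^2)."""
    n = len(items)
    count = 0
    for i in range(n):         # outer loop: n iterations
        for j in range(n):     # inner loop: n iterations per outer iteration
            count += 1         # constant-time body, executed n * n times in total
    return count
```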

You basically use each level of nesting as a hint that another factor appears. Look at the bounds of each loop and use that information to build up your complexity factor.

The same idea applies if you use recursion, although the analysis is typically more complex and you have to keep track of more variables (think of the state carried between calls) when doing it.

So, in a nutshell: translate every loop or recursive call, together with its bounds, into a factor, and combine the nested factors to get the complexity bound.
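As a concrete sketch of that reasoning for quicksort, assuming the partition step does linear work (about cn operations on a sublist of length n): in the best case the pivot splits the list in half, so there are about log2(n) levels of recursion with O(n) work per level; in the worst case the pivot is always an extreme element, so there are about n levels. The corresponding recurrences are

$$T_{\text{best}}(n) = 2\,T(n/2) + cn \;\Rightarrow\; T_{\text{best}}(n) = O(n \log n)$$

$$T_{\text{worst}}(n) = T(n-1) + cn \;\Rightarrow\; T_{\text{worst}}(n) = O(n^2)$$

This is why "it's divide-and-conquer" is not enough on its own: you have to show how evenly the divide step actually splits the input.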
 
