How to reason that an algorithm has n*logn time complexity?

In summary, the conversation discusses how to determine the runtime complexity of algorithms, focusing on divide-and-conquer algorithms such as quicksort. It is noted that not all divide-and-conquer algorithms run in n*logn time, and that there are several ways to reason about runtime complexity, such as analyzing the loop nesting within the algorithm or analyzing its recursion. Merge sort is mentioned as an alternative to quicksort, and links to previous threads on sorting (one of which includes test results) are provided.
  • #1
Pithikos
Some time ago I wrote an exam where I had to write pseudocode to sort the elements in a list. I used a quicksort with the pivot in the middle and wrote that the algorithm has best-case time complexity O(n*logn) because it is a divide-and-conquer algorithm. That didn't satisfy my teacher, and he noted that not all divide-and-conquer algorithms run in O(n*logn).
So how would someone reason about this? (Taking quicksort as an example)
 
  • #2
You'd either have to add counters for the number of operations done in a quicksort program and actually run it with some test sets of data, or derive a mathematical expression for the number of operations by analyzing the program. I'm not aware of any generic method that would apply to all divide-and-conquer algorithms.

On a side note, an alternative to quicksort is a merge sort, but it uses more memory (double if sorting values, double the number of pointers if sorting an array of pointers to objects).
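For comparison, a minimal top-down merge sort sketch (again my own illustration) shows where that extra memory goes: the merge step allocates an auxiliary list proportional to the input size.

```python
def merge_sort(a):
    """Top-down merge sort: O(n*logn) in all cases, but O(n) extra space."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    merged = []  # auxiliary storage -- the extra memory mentioned above
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

Unlike quicksort, the split is always exactly in half, so the n*logn bound holds regardless of the input order.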

Previous threads on sorting. I included some test results in post #16 of the thread in the first link:

https://www.physicsforums.com/showthread.php?t=218369

https://www.physicsforums.com/showthread.php?t=181589
 
  • #3
Pithikos said:
Some time ago I wrote an exam where I had to write pseudocode to sort the elements in a list. I used a quicksort with the pivot in the middle and wrote that the algorithm has best-case time complexity O(n*logn) because it is a divide-and-conquer algorithm. That didn't satisfy my teacher, and he noted that not all divide-and-conquer algorithms run in O(n*logn).
So how would someone reason about this? (Taking quicksort as an example)

If you want to figure out the runtime complexity of an algorithm, the best way is to look at the execution nesting within the algorithm.

For example, suppose you have two nested loops, each running from 1 to some variable n. The run-time is going to be O(n^2). Also note that run-time complexity typically means "worst case" unless otherwise specified.
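That nested-loop case is easy to check directly. This small sketch counts how many times the innermost statement executes:

```python
def count_inner_steps(n):
    """Two nested loops, each from 1 to n: the body runs exactly n*n times."""
    steps = 0
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            steps += 1
    return steps


print(count_inner_steps(10))  # 100 = 10^2
```

Each loop contributes a factor of n, so the product n*n is the O(n^2) bound.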

You basically use nesting as a hint for introducing another factor. Look at the bounds of each loop and multiply those factors together to get your complexity bound.

The same idea applies if you use recursion, although the analysis is typically more complex and you have to keep track of more variables (think memory state) while doing it.

So, in a nutshell: translate all loops and recursive calls, together with their boundary information, into nested factors, and combine those factors to generate the complexity bound.
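For quicksort's best case, that translation gives the recurrence T(n) = 2*T(n/2) + n: an O(n) partition pass plus two half-size subproblems. A quick numerical check (my own sketch) confirms this solves to n*log2(n) exactly for powers of two:

```python
import math


def T(n):
    """Best-case quicksort recurrence: linear partition + two half-size calls.

    T(1) = 0;  T(n) = 2*T(n/2) + n
    """
    if n <= 1:
        return 0
    return 2 * T(n // 2) + n


for n in (2**10, 2**15, 2**20):
    # for n = 2^k the recurrence gives exactly k * 2^k = n * log2(n)
    print(n, T(n), n * int(math.log2(n)))
```

In the worst case (pivot always ends up at one extreme) the recurrence becomes T(n) = T(n-1) + n instead, which solves to O(n^2) — this is exactly why not every divide-and-conquer algorithm is n*logn: it depends on how evenly the divide step splits the problem.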
 

1. What is an algorithm?

An algorithm is a step-by-step procedure or set of instructions used to solve a problem or perform a task. It is a fundamental part of computer science and is used in a wide range of applications, from simple calculations to complex data analysis.

2. What is time complexity?

Time complexity is a measure of how long an algorithm takes to run based on the size of the input. The input size is typically denoted by the letter "n", and the complexity is expressed using mathematical notation such as "O(n)" or "O(n*logn)". This measure helps in understanding how efficient an algorithm is and how it will perform as the input size increases.

3. What does n*logn time complexity mean?

n*logn time complexity means that the time taken by the algorithm to run is proportional to the size of the input (n) multiplied by the logarithm of the input size (logn). In simple terms, as the input size increases, the time taken grows slightly faster than linear time complexity (where time is directly proportional to input size), but far slower than quadratic time complexity.
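A quick illustration of that growth rate: doubling the input only slightly more than doubles n*log2(n), whereas it quadruples n^2.

```python
import math


def nlogn(n):
    """Evaluate n * log2(n), the growth function under discussion."""
    return n * math.log2(n)


# doubling n: n*logn roughly doubles, n^2 exactly quadruples
for n in (1_000, 2_000, 4_000):
    print(n, round(nlogn(n)), n * n)
```

This is why n*logn sorting algorithms remain practical for inputs where an O(n^2) sort would be hopeless.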

4. How do you determine the time complexity of an algorithm?

The time complexity of an algorithm can be determined by analyzing the number of operations or steps it takes to complete based on the size of the input. This can be done by counting the number of loops, recursive calls, and other operations in the algorithm. The Big O notation is commonly used to express the time complexity of an algorithm.

5. What are the advantages of using algorithms with n*logn time complexity?

The main advantage of using algorithms with n*logn time complexity is that they are more efficient compared to algorithms with higher time complexities. This means they can handle larger input sizes without significantly increasing the time taken to run. This makes them ideal for applications that require fast and efficient processing of large datasets.
