Why does insertion sort work better than quick-sort for small data?

  • Thread starter Avichal
  • #1
Avichal
I have seen in books that when the number of elements is small (~30-40), insertion sort is recommended. Why is that? The worst case of insertion sort is n², whereas quick-sort is n log n (on average).

n log n beats n² for all values of n, doesn't it? Then why?
 

Answers and Replies

  • #2
jtbell
Mentor
n log n beats n² for all values of n, doesn't it?
Each step of quicksort normally requires more "work" than a step of insertion sort. Compare 10000 n log n with 10 n². Of course, this is an exaggeration, but it illustrates the basic idea.
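To make the per-step cost concrete, here is a minimal insertion sort in C (an illustrative sketch, not code from any particular library): each element is placed with a single tight compare-and-shift loop, with no recursion, no partitioning, and almost no bookkeeping.

Code:
/* Minimal insertion sort: one key, one index, no recursion.
   Illustrative sketch only. */
void insertion_sort(int a[], int n)
{
    for (int i = 1; i < n; i++) {
        int key = a[i];
        int j = i - 1;
        /* Shift larger elements one slot to the right. */
        while (j >= 0 && a[j] > key) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;
    }
}

Quicksort, by contrast, has to pick pivots, partition, and recurse, so each of its n log n steps carries more overhead.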
 
  • #3
Avichal
Merge sort is much easier for me, so let's compare merge sort with insertion sort instead.
In the worst case, insertion sort takes 1 + 2 + 3 + ... + n, which is n(n+1)/2 iterations/steps.
There is no worst case in merge sort. Each step (dividing the array in two) takes n iterations (the merging). This happens log n or log n + 1 times, so the total is n log n or n log n + n iterations/steps.

Where does the constant in front come from?
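For example, putting n = 32 into the formulas above:

Code:
insertion sort (worst case): n(n+1)/2  = 32·33/2 = 528 steps
merge sort:                  n·log2(n) = 32·5    = 160 steps

Even at that small size the raw step count favours merge sort, so I don't see where a big constant could come from.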
 
  • #4
rcgldr
Homework Helper
Merge sort is much easier for me, so let's compare merge sort with insertion sort instead.
Using a bottom up merge sort is easier to explain. In a top down merge sort, no merging takes place until sub-arrays of size 1 are produced by recursively dividing the array by 2 (or creating a tree of indexes on the stack).

The initial condition of a merge sort considers an array of size n as n arrays of size 1 (an array of size 1 can be considered sorted) and the first pass merges those n arrays of size 1 into a secondary buffer producing n/2 sorted arrays of size 2. The next pass merges the n/2 sorted arrays of size 2 back into the primary buffer producing n/4 sorted arrays of size 4. This merge process is repeated until there is 1 sorted array. The number of elements moved on each pass is n, and the number of passes is log2(n), so the number of moves is n log2(n). The worst case number of compares is n log2(n) - n + 1. This is explained next:

On the first pass, each pair of arrays of size 1 is merged with 1 compare, so that's n/2 compares. On the second pass, the worst case number of compares for each pair of arrays of size 2 is 3, so that's 3n/4 compares. For the third pass, 7n/8. On the final (log2(n)-th) pass, the single pair of arrays of size n/2 is merged with n-1 compares in the worst case, for a total of (n-1)·n/n = n-1. To calculate the sum, subtract 1x the number of compares from 2x the number of compares:

Code:
  2x # compares = 1n/1 + 3n/2 + 7n/4 + ... 
- 1x # compares =        1n/2 + 3n/4 + ... + (n-1)
------------------------------------------------
  1x # compares =    n +    n +    n + ... - (n-1) = n log2(n) - n + 1
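
For reference, a bottom-up merge sort along these lines might look like the following in C (a rough sketch, assuming a scratch buffer tmp of the same size as the array): the run width doubles on each pass, and each pass merges adjacent runs from one buffer into the other.

Code:
#include <string.h>

/* Merge the two adjacent sorted runs a[lo..mid) and a[mid..hi) into b[lo..hi). */
static void merge(const int a[], int b[], int lo, int mid, int hi)
{
    int i = lo, j = mid;
    for (int k = lo; k < hi; k++) {
        if (i < mid && (j >= hi || a[i] <= a[j]))
            b[k] = a[i++];   /* take from the left run */
        else
            b[k] = a[j++];   /* take from the right run */
    }
}

/* Bottom-up merge sort: pass 1 merges runs of width 1, pass 2 runs of
   width 2, and so on, alternating between a and the scratch buffer tmp. */
void merge_sort(int a[], int tmp[], int n)
{
    int *src = a, *dst = tmp;
    for (int width = 1; width < n; width *= 2) {
        for (int lo = 0; lo < n; lo += 2 * width) {
            int mid = lo + width < n ? lo + width : n;
            int hi  = lo + 2 * width < n ? lo + 2 * width : n;
            merge(src, dst, lo, mid, hi);
        }
        int *t = src; src = dst; dst = t;   /* swap buffer roles each pass */
    }
    if (src != a)                           /* result may have ended up in tmp */
        memcpy(a, src, n * sizeof a[0]);
}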
 
Last edited:
  • #5
Avichal
Using a bottom up merge sort is easier to explain. In a top down merge sort, no merging takes place until sub-arrays of size 1 are produced by recursively dividing the array by 2 (or creating a tree of indexes on the stack).

The initial condition of a merge sort considers an array of size n as n arrays of size 1 (an array of size 1 can be considered sorted) and the first pass merges those n arrays of size 1 into a secondary buffer producing n/2 sorted arrays of size 2. The next pass merges the n/2 sorted arrays of size 2 back into the primary buffer producing n/4 sorted arrays of size 4. This merge process is repeated until there is 1 sorted array. The number of elements moved on each pass is n, and the number of passes is log2(n), so the number of moves is n log2(n). The worst case number of compares is n log2(n) - n + 1.
Thanks for the explanation.
Each step of quicksort normally requires more "work" than a step of insertion sort. Compare 10000 n log n with 10 n². Of course, this is an exaggeration, but it illustrates the basic idea.
Anyway, where do the constants 10000 and 10 come from? As explained above, the number of steps/comparisons is n log2(n) ... there are no big constants in it.
 
  • #6
rcgldr
Homework Helper
Thanks for the explanation.
I updated my previous post to explain how the worst case number of compares is calculated.

Anyway, where do the constants 10000 and 10 come from?
That was an exaggeration. The issue is the number of temporary variables, like indexes and array sizes, and how often they have to be updated in order to implement a sort algorithm. So the overhead per comparison, swap, or move may be higher for the "better" algorithms, making them slower if the number of elements in an array is very small.
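This is also why practical quicksort implementations typically hand small partitions over to insertion sort. A rough sketch of that hybrid in C (the cutoff of 16 is an arbitrary illustrative value, not a tuned one):

Code:
#define CUTOFF 16   /* arbitrary illustrative threshold */

static void insertion_sort_range(int a[], int lo, int hi)
{
    for (int i = lo + 1; i <= hi; i++) {
        int key = a[i];
        int j = i - 1;
        while (j >= lo && a[j] > key) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;
    }
}

/* Quicksort that falls back to insertion sort for small partitions,
   where the partitioning overhead isn't worth it. */
void hybrid_quicksort(int a[], int lo, int hi)
{
    if (hi - lo + 1 <= CUTOFF) {
        insertion_sort_range(a, lo, hi);
        return;
    }
    int pivot = a[lo + (hi - lo) / 2];
    int i = lo, j = hi;
    while (i <= j) {                      /* Hoare-style partition */
        while (a[i] < pivot) i++;
        while (a[j] > pivot) j--;
        if (i <= j) {
            int t = a[i]; a[i] = a[j]; a[j] = t;
            i++; j--;
        }
    }
    hybrid_quicksort(a, lo, j);
    hybrid_quicksort(a, i, hi);
}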
 
  • #7
Avichal
That was an exaggeration. The issue is the number of temporary variables, like indexes and array sizes, and how often they have to be updated in order to implement a sort algorithm. So the overhead per comparison, swap, or move may be higher for the "better" algorithms, making them slower if the number of elements in an array is very small.
Yeah, I knew they were an exaggeration but couldn't see where any constants could arise. I completely forgot about the overhead of dividing the array in two, calling other functions, swapping, etc.
I was just looking at the comparisons!
Anyway, thank you for clearing up my doubt.
 
