Sorting Algorithms and Their Run Times

Discussion Overview

The discussion revolves around the run times of sorting algorithms, specifically focusing on the challenges of measuring these times accurately and understanding their theoretical complexities. Participants explore the relationship between empirical run times and expected big-O notation, particularly in the context of selection sort.

Discussion Character

  • Homework-related
  • Technical explanation
  • Conceptual clarification

Main Points Raised

  • One participant expresses confusion over inconsistent run time measurements for sorting algorithms and questions whether the observed patterns are approximations.
  • Another participant suggests that external factors, such as caching, can affect run time measurements and recommends running tests multiple times to obtain an average.
  • A participant requests guidance on how to analyze their algorithm to determine its big-O complexity, indicating a desire for a clearer understanding of theoretical performance.
  • One participant explains that the big-O notation reflects the worst-case time complexity of an algorithm, illustrating this with examples of counting operations in loops to derive the complexity.

Areas of Agreement / Disagreement

Participants generally agree that measuring run times can be complicated by various factors, and there is a shared understanding of the importance of big-O notation. However, there is no consensus on the specific patterns observed in the run times or the best methods for analysis.

Contextual Notes

Participants mention the influence of caching on run time measurements, which may introduce variability. There is also an emphasis on the need for careful analysis to derive theoretical complexities, but no specific methods are universally accepted or detailed.

Who May Find This Useful

This discussion may be useful for students learning about sorting algorithms, those interested in algorithm analysis, and individuals seeking to understand the relationship between empirical performance and theoretical complexity.

wisvuze

Homework Statement



Hi, I was asked to notice a pattern involving the run times of sorting algorithms. However, the run-time timer program I have keeps giving me different values for lists of the same length. Will the pattern/formula be an approximation, then? Also, I read that the run time should follow n^2 or some kind of quadratic relation, but I just can't see it; the run-time values I'm getting are in no way quadratic. Could anybody help me?

thanks

Homework Equations


The Attempt at a Solution



Here is my selection sort code; it is written in Python:

Code:
def swapper(L, i):
    # return the index of the largest element in L[i:]
    larger = i
    for e in range(i + 1, len(L)):
        if L[e] > L[larger]:
            larger = e
    return larger

def non_ascending(L):
    # selection sort, in place, into non-ascending (descending) order
    i = 0
    while i < len(L) - 1:  # < rather than != so an empty list terminates
        larger = swapper(L, i)
        L[i], L[larger] = L[larger], L[i]
        i += 1
 
It's difficult to test because of things like caches.
You either need to do the run hundreds of times and take an average,
or you should analyse the algorithm to determine its big-O.
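The averaging advice above can be sketched with Python's standard-library `timeit` module. This is an illustrative sketch, not code from the thread: `average_time`, the repeat count, and the list sizes are all made up for the example, and the sort is re-implemented inline so the snippet is self-contained.

```python
import random
import timeit

def non_ascending(L):
    # selection sort, in place, into non-ascending order (same algorithm as above)
    for i in range(len(L) - 1):
        larger = i
        for e in range(i + 1, len(L)):
            if L[e] > L[larger]:
                larger = e
        L[i], L[larger] = L[larger], L[i]

def average_time(n, repeats=5):
    """Time the sort on fresh random lists of length n and average the runs."""
    total = 0.0
    for _ in range(repeats):
        data = [random.random() for _ in range(n)]
        # number=1 so each measurement sorts exactly one fresh list
        total += timeit.timeit(lambda: non_ascending(data), number=1)
    return total / repeats

for n in (200, 400, 800):
    print(n, average_time(n))
```

Averaging over fresh random inputs smooths out both cache effects and scheduler noise; with a quadratic sort, doubling `n` should roughly quadruple the averaged time.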
 
mgb_phys said:
It's difficult to test because of things like caches.
You either need to do the run hundreds of times and take an average,
or you should analyse the algorithm to determine its big-O.

Ah, thanks for the speedy reply. I was certain that something was off with my testing method. Could you show me (briefly, at least) how I could analyze my algorithm to determine its big-O?

thanks a lot :)
 
wisvuze said:
also, I read that the run time should follow n^2 or some kind of quadratic relation..


That's its big-O run time, which you get by taking the worst-case time of your algorithm. The worst-case question is: given N (N > 1) values, how long will it take to sort? You figure this out by counting your operations and adding or multiplying the counts together.

Example:
Code:
for i in range(N):        # outer loop runs N times
    for j in range(N):    # inner loop runs N times per outer pass
        k[i] = k[j]       # one assignment per inner iteration
This is O(n^2) because n assignments in each of n outer passes gives n * n = n^2.

If your code is
Code:
for i in range(N):        # outer loop runs N times
    for j in range(N):    # inner loop runs N times per outer pass
        k[i] = k[j]       # one assignment
        l[i] = l[j]       # a second assignment
then it is still O(n^2): 2n assignments in each of n outer passes gives n * 2n = 2n^2, but the 2 (like all constant factors) is dropped, so O(n^2).
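The same counting can be applied directly to the selection sort posted earlier. The sketch below instruments the algorithm with a comparison counter (`count_comparisons` is a hypothetical helper written for this example, not part of the thread) and checks it against the closed form (n-1) + (n-2) + ... + 1 = n(n-1)/2, which grows like n^2.

```python
def count_comparisons(n):
    """Run selection sort on a list of length n, counting comparisons."""
    L = list(range(n))
    count = 0
    for i in range(n - 1):
        larger = i
        for e in range(i + 1, n):
            count += 1          # one comparison per inner iteration
            if L[e] > L[larger]:
                larger = e
        L[i], L[larger] = L[larger], L[i]
    return count

# The count matches n(n-1)/2 exactly, regardless of the input's order,
# which is why selection sort's run time is quadratic in every case.
for n in (10, 100, 1000):
    print(n, count_comparisons(n), n * (n - 1) // 2)
```

Note that selection sort performs the same comparisons no matter how the input is arranged, so here the best, average, and worst cases are all Θ(n^2).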
 
