Sorting Algorithms and Their Run Times

AI Thread Summary
The discussion centers on the challenges of measuring the run times of sorting algorithms, specifically selection sort, and the confusion surrounding expected quadratic run times. Users note that inconsistent timing results may stem from factors like caching, suggesting that averaging multiple runs could yield more reliable data. The conversation emphasizes the importance of analyzing the algorithm to determine its big-O notation, which reflects the worst-case scenario for sorting N values. A detailed explanation of calculating big-O is provided, illustrating how to derive it from counting operations in loops. Overall, understanding the algorithm's complexity is crucial for accurate performance assessment.
wisvuze

Homework Statement



hi, I was asked to notice a pattern involving the run times of sorting algorithms.. however, the run-time timer program I have keeps giving me different values for lists of the same length :| will the pattern/formula be an approximation then? also, I read that the run time should follow n^2 or some kind of quadratic relation.. I just can't see it; the run-time values I'm getting are in no way quadratic. could anybody help me?

thanks

Homework Equations


The Attempt at a Solution



here is my selection sort code , it is written in python:

Code:
def swapper(L, i):
    # return the index of the largest element in L[i:]
    larger = i
    for e in range(i+1, len(L)):
        if L[e] > L[larger]:
            larger = e

    return larger

def non_ascending(L):
    # selection sort, largest element first
    i = 0
    while i < len(L) - 1:  # i != len(L)-1 raises an IndexError on an empty list
        larger = swapper(L, i)
        L[i], L[larger] = L[larger], L[i]
        i += 1
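As a sanity check, the routine can be compared against Python's built-in `sorted` on random inputs (the definitions are repeated here so the snippet runs on its own):

```python
import random

def swapper(L, i):
    # return the index of the largest element in L[i:]
    larger = i
    for e in range(i + 1, len(L)):
        if L[e] > L[larger]:
            larger = e
    return larger

def non_ascending(L):
    # selection sort, largest element first
    i = 0
    while i < len(L) - 1:
        larger = swapper(L, i)
        L[i], L[larger] = L[larger], L[i]
        i += 1

# compare against sorted(..., reverse=True) on many random lists
for _ in range(100):
    data = [random.randint(0, 50) for _ in range(random.randint(0, 20))]
    expected = sorted(data, reverse=True)
    non_ascending(data)
    assert data == expected
```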
 
It's difficult to test because of things like caches.
You either need to do the run 100s of times and take an average,
or you should analyse the algorithm to decide the big-O.
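A minimal sketch of the averaging approach using the standard `timeit` module (the `selection_sort` here is a stand-in for the poster's `non_ascending`, and `average_time` is a hypothetical helper, not anything from the thread):

```python
import random
import timeit

def selection_sort(L):
    # stand-in for the poster's non_ascending(): selection sort, largest first
    for i in range(len(L) - 1):
        largest = i
        for e in range(i + 1, len(L)):
            if L[e] > L[largest]:
                largest = e
        L[i], L[largest] = L[largest], L[i]

def average_time(n, repeats=20):
    # time `repeats` sorts of fresh random lists of length n, return the mean
    total = 0.0
    for _ in range(repeats):
        data = [random.random() for _ in range(n)]
        total += timeit.timeit(lambda: selection_sort(data), number=1)
    return total / repeats
```

Once n is large enough for the quadratic term to dominate the noise, doubling n should roughly quadruple the averaged time.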
 
mgb_phys said:
It's difficult to test because of things like caches.
You either need to do the run 100s of times and take an average,
or you should analyse the algorithm to decide the big-O.

ah, thanks for the speedy reply. I was certain that something was off with my testing method. could you show me (briefly, at least) how I could analyze my algorithm to determine its big-O?

thanks a lot :)
 
wisvuze said:
also, I read that the run time should follow n^2 or some kind of quadratic relation..


That's its big-O run time, which bounds the worst case of your algorithm: given N values (N > 1), how long can the sort take? You figure this out by counting the operations your code performs and adding or multiplying the counts together.

Example:
Code:
while i < N:          # outer loop runs N times
    j = 0
    while j < N:      # inner loop runs N times per outer pass
        k[i] = k[j]   # so this assignment runs N*N times
        j += 1
    i += 1
The run time is O(n^2) because there are n assignments in each of n passes through the outer loop: n(n) = n^2.

if your code is
Code:
while i < N:          # outer loop runs N times
    j = 0
    while j < N:      # inner loop runs N times per outer pass
        k[i] = k[j]   # N*N assignments
        l[i] = l[j]   # another N*N assignments
        j += 1
    i += 1
the run time is still O(n^2), because n + n assignments in each of n outer passes gives n(2n) = 2n^2, but the 2 (as are all constant factors) is negligible, so O(n^2).
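Applying that counting argument to the selection sort above: for a list of n values, the inner loop makes (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons, which grows like n^2/2, hence O(n^2). A small sketch that counts comparisons directly (an instrumented copy of the algorithm, not the poster's exact code):

```python
def count_comparisons(n):
    # selection sort over a list of length n, counting element comparisons
    L = list(range(n))
    comparisons = 0
    for i in range(n - 1):
        largest = i
        for e in range(i + 1, n):
            comparisons += 1      # one comparison per inner-loop pass
            if L[e] > L[largest]:
                largest = e
        L[i], L[largest] = L[largest], L[i]
    return comparisons

# matches the closed form n(n-1)/2 exactly
for n in (10, 100, 1000):
    assert count_comparisons(n) == n * (n - 1) // 2
```

The count is exactly n(n-1)/2 regardless of the input order, which is why the measured times should look quadratic once timing noise is averaged out.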
 