# Empirically estimating the big-O of small programs

Gold Member

## Homework Statement

First, I am a teacher preparing to give a discrete mathematics course, in which I will not do any computing, but the students will (in Python). I want them to make empirical estimates for the growth of their (short) recursive programs by putting in some sort of "counter" in their programs, and then run it on several sets of data of varying sizes. (I will also ask them to analyze theoretically, but that is not my question here.) But since I am not a programmer, I am not sure how to do this.

## Homework Equations

N/A

## The Attempt at a Solution

Simply having the program count the number of operations seems to me to be incorrect, because different operations take different amounts of time and/or memory. I am also aware that different machines (not to mention different versions of Python) will give different answers, but they should be within a constant factor of one another. Reading off the time for the completion of the program is not going to work, because the programs will be short. Some websites classify the complexity of some operations, but not all of them, and besides, that brings us back to a theoretical approach. Anyway, I don't want to actually give them the Python program construction (my students can program; I can't), but I will be suggesting the mathematical steps to put in. (Therefore an answer in mathematical language rather than computer language would be appreciated. I think the first mathematician to have seen a computer program reading "n = n + 1" probably had a heart attack.)
Thanks for any hints.

mfb
Mentor
> Reading off the time for the completion of the program is not going to work, because the programs will be short.
You can always make the input longer (or let the programs run multiple times).
Most of the time you have a set of loops with just one or two different operations executed many times. If they are not called the same number of times, you can count them separately, or add them. If one is just a factor of 2-3 slower than the other, that matters little compared to a difference in complexity.
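This advice can be sketched as follows. The merge sort below is a hypothetical example program (not from the thread); it threads a `counts` dictionary through the recursion so that comparisons and copies are tallied separately:

```python
def merge_sort(data, counts):
    """Sort `data`, tallying comparisons and copies in the `counts` dict."""
    if len(data) <= 1:
        return data
    mid = len(data) // 2
    left = merge_sort(data[:mid], counts)
    right = merge_sort(data[mid:], counts)
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        counts["comparisons"] += 1          # one comparison per loop pass
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
        counts["copies"] += 1               # one element copied per pass
    # leftover elements are copied without further comparisons
    counts["copies"] += (len(left) - i) + (len(right) - j)
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

# run on inputs of doubling size and report both counters
for n in (100, 200, 400, 800):
    counts = {"comparisons": 0, "copies": 0}
    merge_sort(list(range(n))[::-1], counts)
    print(n, counts)
```

Dividing successive counts as n doubles lets students see that each counter grows slightly faster than linearly, which is a hands-on hint at n log n growth.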
> my students can program; I can't
I don't think it is a good idea to give students tasks you cannot solve.

jedishrfu
Mentor
You can also query the clock repeatedly and do statistics on the deltas:

Code:
import time

def do_something_here():
    print("hello world")

# init timer (time.perf_counter replaces the deprecated time.clock)
oldticks = time.perf_counter()

# loop with timing
for i in range(1000):
    do_something_here()

    newticks = time.perf_counter()
    delta = newticks - oldticks
    oldticks = newticks

    print("loop: ", i, "  delta: ", delta, " secs")

mfb
Mentor
Printing can take some time on its own, so it is better to do it outside the region being timed.
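A version following this advice — a minimal sketch, with `do_something_here` as a placeholder workload — stores the deltas in a list and leaves all printing until after the measurements are done:

```python
import time
import statistics

def do_something_here():
    """Placeholder workload (an assumption, not from the thread)."""
    sum(range(1000))

deltas = []
for _ in range(1000):
    start = time.perf_counter()        # time.clock() was removed in Python 3.8
    do_something_here()
    deltas.append(time.perf_counter() - start)

# all printing happens after the timed region
print("mean:", statistics.mean(deltas), "stdev:", statistics.stdev(deltas))
```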

Gold Member
Thanks, mfb and jedishrfu. I guess combining the answers, having the students run the program lots of times, subtracting the time taken to record the result from each run, and then doing stats on the results, should do it.
I do agree that giving students tasks the instructor cannot solve is not a great idea; however, in this course the computing part is delegated to a colleague with whom I split the course (I just do the maths part). That is, I know that the assignments I give are mathematically possible, but I need to give assignments that I know are computable in Python, even though my colleague will help the students through the details. Therefore I am informing myself via this magnificent forum.

jedishrfu
Mentor
Since you're doing Python and math together, you both might want to check out Pyzo at pyzo.org. It's a collection of Python libraries plus an interactive environment that comes set up and ready to go. They don't have any example code, but searching the web should turn up examples of how to use the various libraries available, namely numpy, scipy, matplotlib, ...
