Why is the complexity of this code O(n^2)?

  • Thread starter rem1618
  • #1
rem1618
Code:
def fib(n):
    f0, f1 = 0, 1
    for i in range(n - 1):
        f0, f1 = f1, f0 + f1
    return f1

It looks like it should be linear, given there's only one loop, but when I plotted n against runtime, the relationship was quadratic. Why?
 

Answers and Replies

  • #2
Does your addition happen in constant time, independent of the size of the numbers?
 
  • #3
rem1618
So it's the addition of big numbers that's contributing to the overall complexity? That makes sense, but how do I quantify it? So far my understanding of determining complexity has just been counting the number of operations.
 
  • #4
jim mcnamara
Mentor
The addition primitive in hardware, for two operands of the same fixed-width datatype [assuming no overflow], does not take more time as the numbers grow. It's a bad assumption that adding larger numbers changes the complexity.
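As an illustration of the contrast (a minimal sketch; the fib_u64/fib_exact names and the 64-bit mask are just for demonstration): a fixed-width adder is constant time per step but silently wraps, whereas Python's arbitrary-precision int just keeps growing.

Code:
# Minimal sketch: emulate a fixed-width 64-bit adder by masking Python's
# arbitrary-precision ints. Each step is constant time, but results wrap
# once they no longer fit in 64 bits.
MASK = (1 << 64) - 1

def fib_u64(n):
    f0, f1 = 0, 1
    for i in range(n - 1):
        f0, f1 = f1, (f0 + f1) & MASK
    return f1

def fib_exact(n):
    f0, f1 = 0, 1
    for i in range(n - 1):
        f0, f1 = f1, f0 + f1
    return f1

print(fib_u64(93) == fib_exact(93))   # True: F(93) still fits in 64 bits
print(fib_u64(94) == fib_exact(94))   # False: F(94) overflows a 64-bit adder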
 
  • #5
If the numbers are so long that the computer has to add them in several steps (well beyond 64 bits), addition can take linear time (linear in the length of the number in bits).

@rem1618: What is the range of n you tested?
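To see the linear-time addition directly, here is a rough sketch (the bit sizes and the repeat count are arbitrary choices, and the absolute timings depend on the machine): time the addition of two k-bit integers for growing k and watch the per-addition cost grow roughly in proportion to k.

Code:
# Rough sketch: time adding two k-bit integers for increasing k.
# Expect the total to grow roughly linearly with k once the operands are
# much wider than the CPU's 64-bit words.
# (time.perf_counter() needs Python 3; the thread's script used time.clock().)
import time

for bits in (10000, 100000, 1000000, 10000000):
    a = (1 << bits) - 1
    b = (1 << bits) - 3
    start = time.perf_counter()
    for _ in range(1000):
        c = a + b
    elapsed = time.perf_counter() - start
    print(bits, elapsed)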
 
  • #6
rem1618
They're not huge numbers; I ran it over range(0, 100000, 10000). This was the plot I got.

The rest of my code is just for clocking runtime and plotting, so there should be nothing special there, but here it is.

Code:
import time
from pylab import *

def fib(n):
    f0, f1 = 0, 1
    for i in xrange(n - 1):
        f0, f1 = f1, f0 + f1
    return f1

limits = (0, 100000, 10000)
n_N = range(*limits)
n_t = []
n2_N = [i**2 for i in n_N]

for i in n_N:
    start = time.clock()
    fib(i)
    end = time.clock()
    diff = end - start
    n_t.append(diff)

figure()

subplot(211)
title('Complexity of Iterative Fib')
xlabel('n')
ylabel('runtime (s)')
plot(n_N, n_t)

subplot(212)
title('Complexity of n^2')
xlabel('n')
ylabel('runtime (s)')
plot(n_N, n2_N)

show()
 
  • #7
The 100,000th Fibonacci number is huge compared to the width of the adders in your CPU (probably 64 bits). It has about 69,000 binary digits.
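To put numbers on the quadratic behaviour, a back-of-the-envelope sketch: F(i) has roughly 0.694·i bits (since F(i) ≈ φ^i/√5), so the addition at step i of the loop costs on the order of i bit operations, and summing i from 1 to n−1 gives on the order of n² total work, which matches the plot. The size claim itself is easy to check (a minimal sketch reusing the fib() from post #1 and int.bit_length()):

Code:
# Minimal sketch: check how wide the 100,000th Fibonacci number actually is.
def fib(n):
    f0, f1 = 0, 1
    for i in range(n - 1):
        f0, f1 = f1, f0 + f1
    return f1

print(fib(100000).bit_length())   # about 69,400 bits -- far wider than one 64-bit word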
 
  • #8
AlephZero
Science Advisor
Homework Helper
Check out the difference between "plain integers" and "long integers" in your python documentation.

Take-home lesson from this: the downside of an "ultra-user-friendly" language like Python is that many users don't know what their programs are actually doing, as long as they "seem to work OK".

And statements like "from pylab import *" don't make it any easier to find out what's going on!
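As a minimal illustration of that point (a sketch only, assuming matplotlib is installed; matplotlib.pyplot and time.perf_counter() stand in for the original pylab and time.clock() calls), here is the same measurement with explicit imports:

Code:
# Minimal sketch: the same measurement with explicit imports, so it is obvious
# that the plotting calls come from matplotlib and the timer from time.
# (time.perf_counter() replaces the original script's time.clock().)
import time
import matplotlib.pyplot as plt

def fib(n):
    f0, f1 = 0, 1
    for i in range(n - 1):
        f0, f1 = f1, f0 + f1
    return f1

ns, ts = [], []
for n in range(0, 100000, 10000):
    start = time.perf_counter()
    fib(n)
    ts.append(time.perf_counter() - start)
    ns.append(n)

plt.plot(ns, ts)
plt.xlabel('n')
plt.ylabel('runtime (s)')
plt.title('Complexity of Iterative Fib')
plt.show()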
 
  • #9
rem1618
Ah right. I had thought 100000 was small because I've read that Python can represent numbers up to around 10^308, but that was for floats. When I changed fib(n) to calculate with floats instead the runtime did indeed turn linear.
 
  • #10
10^308 has only ~1000 binary digits. The 100,000th Fibonacci number is way larger than that.

Floats can be added in constant time, but you lose the precision for large numbers.
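A minimal sketch of that trade-off (the fib_int/fib_float names are just for illustration): the float version stays constant time per step, but its results stop being exact once values pass 2^53.

Code:
# Minimal sketch: integer vs. float Fibonacci. Doubles carry a 53-bit
# significand, so exactness is lost once F(n) exceeds 2**53 (around n = 79).
def fib_int(n):
    f0, f1 = 0, 1
    for i in range(n - 1):
        f0, f1 = f1, f0 + f1
    return f1

def fib_float(n):
    f0, f1 = 0.0, 1.0
    for i in range(n - 1):
        f0, f1 = f1, f0 + f1
    return f1

for n in (70, 80, 90):
    print(n, fib_int(n) == int(fib_float(n)))
# Expect True for n = 70 and False once F(n) no longer fits in 53 bits.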
 
  • #11
Grep
Ah right. I had thought 100000 was small because I've read that Python can represent numbers up to around 10^308, but that was for floats. When I changed fib(n) to calculate with floats instead the runtime did indeed turn linear.

Sure, but keep in mind that floats can still only represent a certain number of digits, and some numbers can't be represented exactly at all. Floating point is imprecise. I bet your answers are essentially wrong past a certain number of digits in the answer. Make sure you understand how floating point numbers work before using them. They do have pitfalls.

Might want to read the wiki page on them: http://en.wikipedia.org/wiki/Floating_point
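For example, a tiny illustration of the representability point:

Code:
# Tiny sketch: many decimal fractions have no exact binary representation.
print(0.1 + 0.2 == 0.3)               # False
print("{0:.20f}".format(0.1 + 0.2))   # 0.30000000000000004441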
 
  • #12
Borek
Mentor
I've read that Python can represent numbers up to around 10^308

That's just 64-bit float (double precision float, IEEE 754), standard format on Intel processors (and others). Nothing specifically pythonesque about it.
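That limit is easy to inspect directly (a one-liner sketch using the standard library's sys.float_info):

Code:
# Tiny sketch: the ~1e308 ceiling is the IEEE 754 double-precision maximum.
import sys
print(sys.float_info.max)       # 1.7976931348623157e+308
print(sys.float_info.max * 2)   # inf -- overflow past the double range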
 
  • #13
rem1618
10^308 has only ~1000 binary digits. The 100,000th Fibonacci number is way larger than that.

Floats can be added in constant time, but you lose the precision for large numbers.

You're totally right. I don't know what I was thinking, haha. And yes, I'm aware of the precision issue with floats. I was mostly just experimenting with the runtime, because I've only recently realized the iterative algorithm is way faster than the recursive one for Fibonacci numbers.
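For reference, a minimal sketch of why the naive recursive version is so much slower (the call-counting is just for illustration): it recomputes the same subproblems over and over, so the number of calls grows exponentially in n.

Code:
# Minimal sketch: count how many calls the naive recursive Fibonacci makes.
calls = 0

def fib_recursive(n):
    global calls
    calls += 1
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

fib_recursive(30)
print(calls)   # about 2.7 million calls for n = 30; the loop needs only 29 iterations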
 
