Why is the complexity of this code O(n^2)?

  • Thread starter: rem1618
  • Tags: Code, Complexity

Discussion Overview

The discussion revolves around the time complexity of a Fibonacci sequence implementation in Python, specifically questioning why the observed complexity appears to be O(n^2) despite the presence of a single loop. Participants explore factors influencing runtime, including the handling of large integers and the implications of using different data types.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • One participant suggests that the code should be linear due to the single loop, but observes a quadratic relationship in runtime when plotted.
  • Another participant questions whether the addition of large numbers affects the overall complexity.
  • Some participants propose that the addition of large integers could contribute to increased complexity, while others argue that addition in hardware does not depend on the size of the numbers, assuming no overflows.
  • There is a suggestion that if numbers exceed the capacity of standard data types, addition may take linear time relative to the number of bits.
  • A participant shares their testing range and provides code for measuring runtime, indicating that their results show a quadratic relationship.
  • Discussion includes the size of Fibonacci numbers and their representation in Python, with some noting that the 100,000th Fibonacci number has tens of thousands of binary digits, far wider than a 64-bit word.
  • Some participants discuss the differences between "plain integers" and "long integers" in Python, highlighting potential misunderstandings about how Python handles large numbers.
  • Concerns are raised about the precision of floating-point numbers when used for large Fibonacci calculations, with a participant noting that switching to floats resulted in linear runtime.

Areas of Agreement / Disagreement

Participants express differing views on the impact of number size on addition complexity, with some asserting that it does not change while others suggest it can under certain conditions. The discussion remains unresolved regarding the exact reasons for the observed quadratic complexity.

Contextual Notes

Participants mention limitations related to the handling of large integers and the precision of floating-point arithmetic, indicating that these factors may influence runtime but are not fully resolved in the discussion.

rem1618
Code:
def fib(n):
    f0, f1 = 0, 1
    for i in range(n - 1):
        f0, f1 = f1, f0 + f1
    return f1

It looks like it'd be linear, given there's only one loop, but when I plotted n against runtime, the relationship was quadratic. Why?
 
Does your addition happen in constant time, independent of the size of the numbers?
 
So it's the adding of big numbers that's contributing to the overall complexity? That makes sense, but how do I quantify it? So far my approach to determining complexity has just been counting the number of operations.
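One way to quantify it, as a sketch (Python 3, not from the thread): each addition of two m-bit integers costs time proportional to m, and the i-th Fibonacci number has about 0.694·i bits, so the total work is roughly proportional to 1 + 2 + ... + n ≈ n²/2. The snippet below tallies those bit lengths directly:

Code:
# Sketch: tally the bit lengths of the operands of every addition
# fib(n) performs; the total grows like n^2.

def total_addition_bits(n):
    f0, f1 = 0, 1
    total = 0
    for i in range(n - 1):
        total += f1.bit_length()  # one big-int addition costs ~ this many bits
        f0, f1 = f1, f0 + f1
    return total

for n in (1000, 2000, 4000):
    print(n, total_addition_bits(n))  # doubling n roughly quadruples the total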
 
The addition primitive in hardware for two operands of the same data type (assuming no overflow) does not take more time as the numbers grow. It's a bad assumption that adding larger numbers changes the complexity.
 
If the numbers are so long that the computer has to add them in several steps (>>64 bits), addition may take time linear in the number of bits (see the timing sketch after this post).

@rem1618: What is the range of n you tested?
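(To see that linear cost directly, here is a timing sketch — Python 3, not from the thread — measuring the per-addition time for two integers of a given width:)

Code:
import time

def time_one_addition(bits, reps=1000):
    # two integers of roughly 'bits' bits each
    a = (1 << bits) - 1
    b = (1 << bits) + 1
    start = time.perf_counter()
    for _ in range(reps):
        c = a + b
    return (time.perf_counter() - start) / reps

for bits in (10000, 100000, 1000000):
    # per-addition time should grow roughly linearly with the bit count
    print(bits, time_one_addition(bits))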
 
They're not huge numbers; I ran it over range(0, 100000, 10000). This was the plot I got.

The rest of my code is just for clocking runtime and plotting, so there should be nothing special there, but here it is.

Code:
import time
from pylab import *

def fib(n):
    f0, f1 = 0, 1
    for i in xrange(n - 1):
        f0, f1 = f1, f0 + f1
    return f1

limits = (0, 100000, 10000)
n_N = range(*limits)
n_t = []
n2_N = [i**2 for i in n_N]  # reference quadratic for comparison

# time fib(i) at each sample point
for i in n_N:
    start = time.clock()
    fib(i)
    end = time.clock()
    diff = end - start
    n_t.append(diff)

figure()

subplot(211)
title('Complexity of Iterative Fib')
xlabel('n')
ylabel('runtime (s)')
plot(n_N, n_t)

subplot(212)
title('Complexity of n^2')
xlabel('n')
ylabel('n^2')
plot(n_N, n2_N)

show()
 
The 100,000th Fibonacci number is huge compared to the size of the adders in your CPU (probably 64 bit). It has about 69,000 binary digits.
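(That figure is easy to check — a quick sketch, not from the thread, using Python 3's int.bit_length:)

Code:
def fib(n):
    f0, f1 = 0, 1
    for _ in range(n - 1):
        f0, f1 = f1, f0 + f1
    return f1

# F(100000) is about 69,000 bits wide -- vastly wider than a 64-bit adder
print(fib(100000).bit_length())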
 
Check out the difference between "plain integers" and "long integers" in your python documentation.

Take-home lesson from this: the downside of an "ultra-user-friendly" language like Python is that many users don't know what their programs are actually doing, so long as they "seem to work OK".

And statements like "from pylab import *" don't make it any easier to find out what's going on!
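(For the record, in Python 2 — which this thread's xrange suggests — the promotion happens silently. A minimal sketch, assuming a 64-bit build:)

Code:
# Python 2 sketch: "plain" ints are machine words; overflowing results are
# silently promoted to arbitrary-precision longs. (Python 3 unified the two.)
import sys

print sys.maxint             # largest plain int, 2**63 - 1 on 64-bit builds
print type(sys.maxint)       # <type 'int'>
print type(sys.maxint + 1)   # <type 'long'> -- promoted automatically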
 
Ah right. I had thought 100000 was small because I've read that Python can represent numbers up to around 10^308, but that was for floats. When I changed fib(n) to calculate with floats instead, the runtime did indeed turn linear.
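(A minimal sketch of that float variant — my reconstruction, not rem1618's exact code: the adds become constant-time, so the loop is linear, but once F(n) exceeds the largest double, around n = 1477, the values overflow to inf and the later "additions" are just inf + inf:)

Code:
def fib_float(n):
    # constant-time float adds -> linear runtime, but imprecise,
    # and overflow to inf once F(n) exceeds ~1.8e308 (around n = 1477)
    f0, f1 = 0.0, 1.0
    for _ in range(n - 1):
        f0, f1 = f1, f0 + f1
    return f1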
 
10^308 has just ~1000 binary digits. The 100,000th Fibonacci number is way larger than that.

Floats can be added in constant time, but you lose precision for large numbers.
 
rem1618 said:
Ah right. I had thought 100000 was small because I've read that Python can represent numbers up to around 10^308, but that was for floats. When I changed fib(n) to calculate with floats instead, the runtime did indeed turn linear.

Sure, but keep in mind that floats can only represent a limited number of significant digits, and some numbers can't be represented exactly at all. Floating point is imprecise: I'd bet your answers are essentially wrong once they exceed a certain number of digits. Make sure you understand how floating-point numbers work before relying on them; they do have pitfalls.

Might want to read the wiki page on them: http://en.wikipedia.org/wiki/Floating_point
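(A small experiment along those lines — a sketch, not from the thread: run the integer and float loops side by side and report the first n where they disagree. Since doubles carry 53 significand bits, this happens around n = 79:)

Code:
def fib_pair(n):
    # exact integer and float versions of the same loop
    i0, i1 = 0, 1
    f0, f1 = 0.0, 1.0
    for _ in range(n - 1):
        i0, i1 = i1, i0 + i1
        f0, f1 = f1, f0 + f1
    return i1, f1

n = 2
while True:
    exact, approx = fib_pair(n)
    if int(approx) != exact:  # doubles hold 53 significand bits, so ~n = 79
        print("first inexact result at n =", n)
        break
    n += 1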
 
rem1618 said:
I've read that Python can represent numbers up to around 10^308

That's just the 64-bit float (double-precision, IEEE 754), the standard format on Intel processors (and others). Nothing specifically Pythonesque about it.
 
mfb said:
10^308 has just ~1000 binary digits. The 100,000th Fibonacci number is way larger than that.

Floats can be added in constant time, but you lose precision for large numbers.

You're totally right. I don't know what my mind was thinking, haha. And yes, I'm aware of the precision issue with floats. I was mostly just experimenting with the runtime, because I've only recently realized the iterative algorithm is way faster than the recursive one for Fibonacci numbers.
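(For anyone curious about that last point, a quick sketch — not from the thread — contrasting the two: the naive recursion makes on the order of phi^n calls, so it's exponential, while the loop does one addition per step:)

Code:
import time

def fib_rec(n):
    # naive recursion: ~phi**n calls, exponential time
    if n < 3:
        return 1 if n > 0 else 0
    return fib_rec(n - 1) + fib_rec(n - 2)

def fib_iter(n):
    # one (big-int) addition per step
    f0, f1 = 0, 1
    for _ in range(n - 1):
        f0, f1 = f1, f0 + f1
    return f1

for f in (fib_rec, fib_iter):
    start = time.perf_counter()
    f(30)
    print(f.__name__, time.perf_counter() - start)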
 
