Why is the complexity of this code O(n^2)?

In summary, the iterative Fibonacci calculation turns out to be quadratic in n rather than linear, because the Fibonacci numbers themselves grow without bound: each addition of Python's arbitrary-precision integers takes time proportional to the numbers' length in bits, and that length grows linearly with n. Switching to floats makes the runtime linear again, at the cost of precision.
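A rough way to quantify this (a back-of-the-envelope sketch of my own, not from the thread): the i-th Fibonacci number has on the order of i bits, so the i-th addition costs about c·i, and the loop's total cost is

$$\sum_{i=1}^{n} c\,i = \frac{c\,n(n+1)}{2} = O(n^2).$$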
  • #1
rem1618
Code:
def fib(n):
    f0, f1 = 0, 1
    for i in range(n - 1):
        f0, f1 = f1, f0 + f1
    return f1

It looks like it should be linear, given there's only one loop, but when I plotted n against runtime, the relationship was quadratic. Why?
 
  • #2
Does your addition happen in constant time, independent of the size of the numbers?
 
  • #3
So it's the adding of big numbers that's contributing to the overall complexity? That makes sense, but how do I quantify it? So far, my approach to determining complexity has just been to count the number of operations.
 
  • #4
The hardware addition primitive for two operands of the same datatype (assuming no overflow) does not take more time as the numbers grow, so it's a bad assumption that adding larger numbers changes the complexity.
 
  • #5
If the numbers are so long that the computer has to add them in several steps (>>64 bits), addition can take linear time (note: linear in the number of bits).

@rem1618: What is the range of n you tested?
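To see this effect directly, here is a minimal sketch of my own (Python 3, not code from the thread) that times the addition of two integers of increasing bit length; on CPython the time per addition grows roughly linearly with the number of bits:

Code:
# Timing big-integer addition (my own sketch, not from the thread).
import time

for bits in (100000, 1000000, 10000000):
    a = (1 << bits) - 1      # an integer with `bits` binary digits
    b = (1 << bits) - 3
    start = time.time()
    for _ in range(1000):
        c = a + b            # result discarded; we only care about the time
    elapsed = (time.time() - start) / 1000
    print(bits, elapsed)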
 
  • #6
They're not huge numbers; I ran it over range(0, 100000, 10000). This was the plot I got.

The rest of my code is just for clocking runtime and plotting, so there should be nothing special there, but here it is.

Code:
import time
from pylab import *

def fib(n):
    f0, f1 = 0, 1
    for i in xrange(n - 1):
        f0, f1 = f1, f0 + f1
    return f1

limits = (0, 100000, 10000)
n_N = range(*limits)        # the values of n to test
n_t = []                    # measured runtimes
n2_N = [i**2 for i in n_N]  # reference quadratic for comparison

# time fib(i) for each test value of n
for i in n_N:
    start = time.clock()
    fib(i)
    end = time.clock()
    diff = end - start
    n_t.append(diff)

figure()

subplot(211)
title('Complexity of Iterative Fib')
xlabel('n')
ylabel('runtime (s)')
plot(n_N, n_t)

subplot(212)
title('Complexity of n^2')
xlabel('n')
ylabel('runtime (s)')
plot(n_N, n2_N)

show()
 
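As an aside (my suggestion, not part of the original post), the standard-library timeit module is usually more robust than a single time.clock() measurement, since it repeats the call and picks an appropriate clock. A minimal sketch, assuming the fib defined above:

Code:
# Timing sketch using timeit (my suggestion, not the poster's code).
import timeit

for n in range(0, 100000, 10000):
    t = timeit.timeit(lambda: fib(n), number=3) / 3  # average of 3 runs
    print(n, t)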
  • #7
The 100,000th Fibonacci number is huge compared to the width of the adders in your CPU (probably 64 bits). It has roughly 70,000 binary digits.
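You can check this directly in Python (a quick check of mine, not from the thread), reusing the iterative fib from post #1:

Code:
# Quick check (mine, not from the thread): the size of F(100000) in bits.
def fib(n):
    f0, f1 = 0, 1
    for _ in range(n - 1):
        f0, f1 = f1, f0 + f1
    return f1

print(fib(100000).bit_length())  # on the order of 69,000 bits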
 
  • #8
Check out the difference between "plain integers" and "long integers" in the Python documentation.

Take-home lesson from this: the downside of an "ultra-user-friendly" language like Python is that many users don't know what their programs are actually doing, so long as they "seem to work OK".

And statements like "from pylab import *" don't make it any easier to find out what's going on!
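For instance (my suggestion, not code from the thread), the wildcard import can be replaced with explicit imports so it's obvious which library each name comes from:

Code:
# Explicit imports instead of "from pylab import *" (my suggestion).
import time
import matplotlib.pyplot as plt

# ...then call plt.figure(), plt.subplot(), plt.title(), plt.plot(), plt.show()
# so every plotting call is clearly attributed to matplotlib.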
 
  • #9
Ah right. I had thought 100000 was small because I've read that Python can represent numbers up to around 10^308, but that was for floats. When I changed fib(n) to calculate with floats instead the runtime did indeed turn linear.
 
  • #10
10^308 has only about 1000 binary digits. The 100,000th Fibonacci number is way larger than that.

Floats can be added in constant time, but you lose the precision for large numbers.
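A quick way to see both points at once (my own sketch in Python 3, not code from the thread): compare a float-based fib with the exact integer version.

Code:
# Sketch (mine, not from the thread): floats keep additions constant time,
# but lose exactness once values exceed 2**53.
def fib_int(n):
    f0, f1 = 0, 1
    for _ in range(n - 1):
        f0, f1 = f1, f0 + f1
    return f1

def fib_float(n):
    f0, f1 = 0.0, 1.0
    for _ in range(n - 1):
        f0, f1 = f1, f0 + f1
    return f1

print(fib_int(70), fib_float(70))    # still identical: F(70) fits in 53 bits
print(fib_int(100), fib_float(100))  # the float result has lost the low-order digits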
 
  • #11
rem1618 said:
Ah right. I had thought 100000 was small because I've read that Python can represent numbers up to around 10^308, but that was for floats. When I changed fib(n) to calculate with floats instead the runtime did indeed turn linear.

Sure, but keep in mind that floats can still only represent a certain number of digits, and some numbers can't be represented exactly at all. Floating point is imprecise. I bet your answers are essentially wrong past a certain number of digits in the answer. Make sure you understand how floating point numbers work before using them. They do have pitfalls.

Might want to read the wiki page on them: http://en.wikipedia.org/wiki/Floating_point
 
  • #12
rem1618 said:
I've read that Python can represent numbers up to around 10^308

That's just 64-bit float (double precision float, IEEE 754), standard format on Intel processors (and others). Nothing specifically pythonesque about it.
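You can read those limits off directly (a quick check of mine, not from the thread):

Code:
# The ~1.8e308 ceiling and the 53-bit mantissa come from IEEE 754 doubles,
# which Python exposes via sys.float_info (quick check, not from the thread).
import sys
print(sys.float_info.max)       # 1.7976931348623157e+308
print(sys.float_info.mant_dig)  # 53 bits of mantissa, ~15-16 significant decimal digits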
 
  • #13
mfb said:
10^308 has just ~1000 binary digits. The 100000. fibonacci number is way larger than that.

Floats can be added in constant time, but you lose the precision for large numbers.

You're totally right, I don't know what I was thinking, haha. And yes, I'm aware of the precision issue with floats. I was mostly just experimenting with the runtime, because I've only recently realized the iterative algorithm is way faster than the recursive one for Fibonacci numbers.
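For reference, this is roughly what the naive recursive version looks like (my sketch, not the poster's code); it recomputes the same subproblems over and over, so its call count grows exponentially with n, which is why the loop wins by such a wide margin:

Code:
# Naive recursive Fibonacci (illustrative sketch, not from the thread).
# fib_recursive(n) triggers on the order of F(n) calls -- exponential in n.
def fib_recursive(n):
    if n <= 2:
        return 1 if n >= 1 else 0
    return fib_recursive(n - 1) + fib_recursive(n - 2)

print(fib_recursive(30))  # 832040, already noticeably slower than the loop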
 

1. What does "O(n^2)" mean in terms of code complexity?

In computer science, O(n^2) describes how an algorithm's running time grows with the size n of its input: as n increases, the running time grows roughly in proportion to n^2, i.e. quadratically. It is most often quoted for the worst case.

2. How do I determine the complexity of a code?

The complexity of a piece of code can be estimated by counting how many basic operations it performs as a function of the input size. A single loop over the input is typically O(n); two nested loops over the input typically give O(n^2), because the number of operations grows quadratically with the input size. As this thread shows, you also have to check that the "basic" operations really are constant time (adding very large integers is not).
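A small illustration (mine, not from the thread) of counting operations for nested loops:

Code:
# Counting operations (illustrative sketch, not from the thread).
def count_pairs(items):
    count = 0
    for a in items:       # runs n times
        for b in items:   # runs n times for each a
            count += 1    # executed n * n times in total -> O(n^2)
    return count

print(count_pairs(range(10)))  # 100 == 10**2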

3. Why is O(n^2) considered a bad complexity for code?

O(n^2) complexity is considered bad because doubling the input size quadruples the running time, which quickly becomes impractical for large inputs.

4. What are some common examples of code with O(n^2) complexity?

Some common examples of code with O(n^2) complexity are nested for loops over the same input, bubble sort, and selection sort. In these cases the number of operations grows quadratically with the input size.
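For example, a textbook bubble sort (illustrative sketch, not from the thread):

Code:
# Bubble sort: a classic O(n^2) algorithm (illustrative sketch, not from the thread).
def bubble_sort(values):
    a = list(values)
    n = len(a)
    for i in range(n):              # outer loop: n passes
        for j in range(n - 1 - i):  # inner loop: up to n-1 comparisons
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

print(bubble_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]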

5. How can I improve the complexity of my code from O(n^2) to a better time complexity?

To improve the complexity of your code from O(n^2), you can try to optimize your algorithms and data structures. For example, divide-and-conquer techniques or more efficient sorting algorithms like merge sort or quicksort reduce sorting to O(n log n). It is also important to analyze and design your code carefully to avoid unnecessary loops and operations.
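A sketch of merge sort as one such O(n log n) divide-and-conquer algorithm (my illustration, not from the thread):

Code:
# Merge sort: divide, sort the halves recursively, then merge -- O(n log n).
# (Illustrative sketch, not from the thread.)
def merge_sort(a):
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):  # merge the two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]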
