
Why is the complexity of this code O(n^2)?

  1. Sep 21, 2013 #1
    def fib(n):
        f0, f1 = 0, 1
        for i in range(n - 1):
            f0, f1 = f1, f0 + f1
        return f1

    It looks like it should be linear, given there's only one loop, but when I plotted n against runtime, the relationship was quadratic. Why?
     
  3. Sep 21, 2013 #2

    mfb

    Staff: Mentor

    Does your addition happen in constant time, independent of the size of the numbers?
     
  4. Sep 21, 2013 #3
    So it's the adding of big numbers that's contributing to the overall complexity? That makes sense, but how do I quantify it? So far, my understanding of determining complexity comes down to counting the number of operations.
     
  5. Sep 21, 2013 #4

    jim mcnamara

    Staff: Mentor

    The hardware addition primitive for two objects of the same datatype [assuming no overflow] does not take more time as the numbers grow; it's a bad assumption that adding larger numbers changes the complexity.
     
  6. Sep 21, 2013 #5

    mfb

    Staff: Mentor

    If the numbers are so long that the computer has to add them in several steps (>> 64 bits), addition takes time linear in the number of bits.
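    To make that quantitative (a sketch, assuming Python's arbitrary-precision integers and that adding two k-bit numbers costs on the order of k bit operations): the i-th Fibonacci number grows like ##\varphi^i/\sqrt{5}##, so it has ##\Theta(i)## bits, and the loop does one such addition per iteration, giving

    $$T(n) \approx \sum_{i=1}^{n-1} \Theta(i) = \Theta(n^2).$$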

    @rem1618: What is the range of n you tested?
     
  7. Sep 21, 2013 #6
    They're not huge numbers; I ran it over range(0, 100000, 10000). This was the plot I got.

    The rest of my code is just for clocking runtime and plotting, so there should be nothing special there, but here it is.

    Code (Text):
    import time
    from pylab import *

    def fib(n):
        f0, f1 = 0, 1
        for i in xrange(n - 1):
            f0, f1 = f1, f0 + f1
        return f1

    limits = (0, 100000, 10000)
    n_N = range(*limits)            # values of n to test
    n_t = []                        # measured runtimes
    n2_N = [i**2 for i in n_N]      # reference n^2 curve

    # time a single call of fib(i) for each tested n
    for i in n_N:
        start = time.clock()
        fib(i)
        end = time.clock()
        diff = end - start
        n_t.append(diff)

    figure()

    subplot(211)
    title('Complexity of Iterative Fib')
    xlabel('n')
    ylabel('runtime (s)')
    plot(n_N, n_t)

    subplot(212)
    title('Complexity of n^2')
    xlabel('n')
    ylabel('n^2')
    plot(n_N, n2_N)

    show()
     
    Last edited by a moderator: Sep 28, 2013
  8. Sep 22, 2013 #7

    mfb

    Staff: Mentor

    The 100000th Fibonacci number is huge compared to the width of the adders in your CPU (probably 64 bits). It has about 70000 binary digits.
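    A quick way to check the size directly (a sketch, using the iterative fib from post #1; int.bit_length() is available in Python 2.7+ and 3.x):

    Code (Text):
    def fib(n):
        f0, f1 = 0, 1
        for i in range(n - 1):
            f0, f1 = f1, f0 + f1
        return f1

    x = fib(100000)
    print(x.bit_length())   # roughly 69000 bits
    print(len(str(x)))      # roughly 21000 decimal digits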
     
  9. Sep 22, 2013 #8

    AlephZero

    Science Advisor
    Homework Helper

    Check out the difference between "plain integers" and "long integers" in your Python documentation.

    Take-home lesson from this: the downside of an "ultra-user-friendly" language like Python is that many users don't know what their programs are actually doing, as long as they "seem to work OK".

    And statements like "from pylab import *" don't make it any easier to find out what's going on!
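    For what it's worth, a sketch of the more explicit style being suggested here, assuming matplotlib (which pylab wraps) and the n_N / n_t lists from the timing code above:

    Code (Text):
    import matplotlib.pyplot as plt   # explicit import instead of "from pylab import *"

    plt.figure()
    plt.title('Complexity of Iterative Fib')
    plt.xlabel('n')
    plt.ylabel('runtime (s)')
    plt.plot(n_N, n_t)   # n_N, n_t built by the timing loop in post #6
    plt.show()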
     
  10. Sep 22, 2013 #9
    Ah right. I had thought 100000 was small because I've read that Python can represent numbers up to around 10^308, but that limit is for floats. When I changed fib(n) to calculate with floats instead, the runtime did indeed become linear.
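    For reference, a minimal float variant along those lines (fib_float is just an illustrative name; each addition is constant time, but precision is limited and the value overflows to inf somewhere around n = 1476):

    Code (Text):
    def fib_float(n):
        # same loop as before, but with 64-bit floats:
        # constant-time additions, limited precision
        f0, f1 = 0.0, 1.0
        for i in range(n - 1):
            f0, f1 = f1, f0 + f1
        return f1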
     
  11. Sep 22, 2013 #10

    mfb

    Staff: Mentor

    10^308 has only about 1000 binary digits. The 100000th Fibonacci number is way larger than that.

    Floats can be added in constant time, but you lose precision for large numbers.
     
  12. Sep 22, 2013 #11
    Sure, but keep in mind that floats can only represent a limited number of significant digits, and some numbers can't be represented exactly at all. Floating point is imprecise: past a certain size, your answers will be essentially wrong. Make sure you understand how floating-point numbers work before relying on them; they do have pitfalls.

    Might want to read the wiki page on them: http://en.wikipedia.org/wiki/Floating_point
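    A small illustration of that point, assuming the exact fib from post #1 and the fib_float sketch above: integers stop being exactly representable in a double once they pass 2^53, which for Fibonacci numbers happens around F(79).

    Code (Text):
    print(fib(79))              # 14472334024676221 (exact)
    print(int(fib_float(79)))   # already differs in the last digit, since F(79) > 2**53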
     
  13. Sep 22, 2013 #12

    Borek

    Staff: Mentor

    That's just the 64-bit float (double-precision, IEEE 754), the standard format on Intel processors (and others). Nothing specifically Pythonesque about it.
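    Python exposes those IEEE 754 double parameters directly, if you want to check them on your own platform:

    Code (Text):
    import sys
    print(sys.float_info.max)        # about 1.7976931348623157e+308
    print(sys.float_info.mant_dig)   # 53 significand bits, i.e. ~15-16 decimal digits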
     
  14. Sep 22, 2013 #13
    You're totally right. I don't know what I was thinking, haha. And yes, I'm aware of the precision issue with floats. I was mostly just experimenting with the runtime, because I've only recently realized the iterative algorithm is way faster than the recursive one for Fibonacci numbers.
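    For reference, the naive recursive version usually meant in that comparison (a sketch; it re-solves the same subproblems over and over, so the number of calls grows roughly like phi^n):

    Code (Text):
    def fib_recursive(n):
        # exponential time: the two recursive calls recompute
        # the same smaller Fibonacci numbers again and again
        if n <= 2:
            return 1
        return fib_recursive(n - 1) + fib_recursive(n - 2)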
     