Increasing the efficiency of Python code

  1. Sep 8, 2014 #1

    adjacent

    User Avatar
    Gold Member

    Here is a python function to check whether a given list or string contains duplicate values:
    Code (Text):

    def has_duplicates(x):
        t = []
        for i in x:
            if i not in t:
                t.append(i)
        return t != list(x)
     
    But I am sure this approach will be slow for large lists. Is there any other, faster way to do this?
     
  3. Sep 8, 2014 #2
    You could use something like this:

    Code (Text):

    def has_duplicates(seq):
        return len(seq) != len(set(seq))
     
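    A quick check of that one-liner (the example calls are mine): it works on both lists and strings, since set() accepts any iterable of hashable items.

    ```python
    def has_duplicates(seq):
        # a set discards duplicates, so the lengths differ iff seq has any
        return len(seq) != len(set(seq))

    print(has_duplicates([1, 2, 3]))   # False: all values distinct
    print(has_duplicates("hello"))     # True: 'l' appears twice
    ```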
     
  4. Sep 8, 2014 #3

    AlephZero

    User Avatar
    Science Advisor
    Homework Helper

    If Python has a simple-minded implementation of lists, each "i not in t" test will take n/2 comparisons on average for a list of n elements, so the total time will be proportional to n². You could check that out experimentally.

    It might be faster to use a dictionary instead of a list. There is an OrderedDict that remembers the order you added the elements, if that ordering is important for some reason. That should give you a time roughly proportional to n.

    Another idea would be to sort the lists first and then check for duplicates. The time to do that should be proportional to n log n.

    The "best" option to choose depends on what the whole application needs to do, not necessarily on optimizing just one function.
     
    Last edited: Sep 8, 2014
  5. Sep 8, 2014 #4
    There are a zillion answers for this on Stack Overflow. Integrand has a good solution if you are only interested in whether there are duplicates.

    For the efficiency of the built-ins, see https://wiki.python.org/moin/TimeComplexity.
    See also: https://wiki.python.org/moin/PythonSpeed/PerformanceTips

    To estimate the efficiency of your routine, think about the worst-case scenario: a duplicate appearing only at the very end of the list. "i not in t" looks simple, but each check requires a traversal of t (it's an O(n) operation). If n = len(x), there are up to (n-1) append calls and about 1 + 2 + ... + (n-1) = n(n-1)/2 comparisons in total. Our complexity is O(n**2). Ouch.

    If we use a set for t instead of a list, and t.add(i) instead of append, then "i not in t" is O(1) on average instead of O(n), so the algorithm is O(n). For that matter, you don't need the if statement: just add() the value to the set, and duplicates are squashed for you. Then you can see that

    Code (Text):
    t = set()
    for i in x:
        t.add(i)
     
    is the same as t = set(x), though set(x) is about twice as fast; that one-line set(x) is the method Integrand gave.

    So...TIP: when using the "in" operator on a longish collection of items in an inner loop, convert the collection to a set first.
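    A minimal sketch of that tip (the data sizes and function names are mine): both functions count the same hits, but the second converts the collection to a set once and then does O(1) average-case lookups.

    ```python
    import timeit

    data = list(range(10_000))
    targets = list(range(5_000, 15_000))

    def count_with_list():
        # each "t in data" scans the list: O(len(data)) per lookup
        return sum(1 for t in targets if t in data)

    def count_with_set():
        data_set = set(data)  # one O(n) conversion up front
        # then each "t in data_set" is an O(1) average-case hash lookup
        return sum(1 for t in targets if t in data_set)

    print(timeit.timeit(count_with_list, number=1))
    print(timeit.timeit(count_with_set, number=1))
    ```

    On any recent machine the set version should finish orders of magnitude faster.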
     
    Last edited: Sep 8, 2014
  6. Sep 9, 2014 #5

    adjacent

    User Avatar
    Gold Member

    Thanks guys. I haven't studied sets yet. I will try that method when I study it. :smile:
     
  7. Sep 9, 2014 #6
    Try it with a dictionary, which is how one would have done it before Python had sets. The "in" operator is O(1) for dictionaries.
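    A sketch of that dictionary version (the helper name is my own): the keys play the role the set played above, and the loop can short-circuit on the first duplicate it meets.

    ```python
    def has_duplicates_dict(seq):
        seen = {}                 # dict keys act as a hash-based membership test
        for item in seq:
            if item in seen:      # average-case O(1) lookup
                return True
            seen[item] = True     # the value is irrelevant; only the key matters
        return False

    print(has_duplicates_dict([1, 2, 2]))  # True
    print(has_duplicates_dict("abc"))      # False
    ```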
     
  8. Sep 10, 2014 #7

    adjacent

    User Avatar
    Gold Member

    I have learned sets. It's exactly what I needed. No duplicate values :smile:
     