Python: Are Terms in this List Monotonic?

In summary: we are working with real numbers, so I assumed that duplicates were as acceptable as a one-LSB difference in any floating point representation; the critical thing is that it does not flip the sign of the difference. The program detects the first non-zero difference in a list of real numbers and compares that difference with all the following differences. If the list fails the test for monotonicity, the program returns a no.
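A minimal sketch of that approach (my reconstruction from the description above, not the original code):
Python:
def is_monotonic(lst):
    sense = 0                           # sign of the first non-zero difference
    for i in range(len(lst) - 1):
        diff = lst[i + 1] - lst[i]      # indexes the list twice per entry
        if diff != 0:
            if sense == 0:
                sense = diff            # first non-zero difference sets the direction
            elif (diff > 0) != (sense > 0):
                return False            # sign of the difference flipped
    return True
Duplicates produce a zero difference and are simply skipped, so they never flip the sign.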
  • #36
I have python 3 installed, but typing python invokes python2. I should probably change the behaviour...
 
  • #37
@Ibix just realized your algorithm is ## O(n) ## in memory against ## O(1) ## for a list traversal (either via iteration per @Baluncore or functools.reduce per my #7), but I won't hold that against you :biggrin:
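Post #7 isn't quoted in this excerpt, but a reduce-based ## O(1) ##-memory traversal might look something like this (a sketch for the non-decreasing case only, assuming a non-empty list):
Python:
from functools import reduce

def is_non_decreasing(lst):
    # the fold carries (still_monotonic, previous_element): constant extra memory,
    # though note that reduce cannot exit early on the first failure
    ok, _ = reduce(lambda acc, x: (acc[0] and acc[1] <= x, x),
                   lst[1:], (True, lst[0]))
    return ok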
 
  • #38
It actually seems to be quite a lot slower than Baluncore's approach as well, which surprises me since I understood python loops to be slow and list comprehensions to be better. Perhaps I need to do some reading.
 
  • #39
Ibix said:
I understood python loops to be slow and list comprehensions to be better.
The difference is not so much loops vs. comprehensions but how much code written in Python (as opposed to just built-in functions or operations, which are really C code running inside the interpreter) is executed on each iteration.

However, @Baluncore's algorithm exits early if it spots a failure of monotonicity, whereas yours will traverse the entire list each time, even if a failure in monotonicity occurs early in the traversal. That's probably why yours is slower.
 
  • #40
PeterDonis said:
However, @Baluncore's algorithm exits early if it spots a failure of monotonicity, whereas yours will traverse the entire list each time, even if a failure in monotonicity occurs early in the traversal. That's probably why yours is slower.
It actually seems to be random noise. I set up a version that runs the two algorithms a thousand times on range(1000) and range(1000)[::-1] and the average is pretty close for monotonic cases. Looks like my two-or-three-runs-and-eyeball-it approach wasn't entirely reliable from a statistical point of view. Baluncore's approach is, as you note, way faster for non-monotonic cases since it exits early.
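A harness along those lines might look like this (a sketch; check_loop and check_comprehension are hypothetical stand-ins for the two implementations under test):
Python:
import timeit

# hypothetical stand-ins: bind these names to the implementations being compared
setup = """
from __main__ import check_loop, check_comprehension
data = list(range(1000))        # increasing, monotonic
rev = data[::-1]                # decreasing, still monotonic
"""

for fn in ("check_loop", "check_comprehension"):
    for arg in ("data", "rev"):
        t = timeit.timeit(f"{fn}({arg})", setup=setup, number=1000)
        print(f"{fn}({arg}): {t:.4f} s for 1000 runs")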
 
  • #41
Ibix said:
It actually seems to be quite a lot slower than Baluncore's approach as well
Bear in mind that you are doing FOUR list traversals (the for ... in, the zip, and two calls to all) instead of just one.
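Ibix's code isn't quoted here, but a version with those four traversals might look like this (a hypothetical reconstruction, not the posted code):
Python:
def is_monotonic(lst):
    pairs = list(zip(lst, lst[1:]))           # one traversal to build the pairs
    return (all(a <= b for a, b in pairs) or  # each all() is another full pass,
            all(a >= b for a, b in pairs))    # driven by its for ... in generator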

Ibix said:
which surprises me since I understood python loops to be slow and list comprehensions to be better
This is a common belief, partly based on misunderstanding and partly based on early implementations of Python which were really slow.

PeterDonis said:
The difference is not so much loops vs. comprehensions but how much code written in Python (as opposed to just built-in functions or operations, which are really C code running inside the interpreter) is executed on each iteration.
Exactly this. For instance, finding the dot product of two vectors represented as numpy arrays is implemented entirely in C and runs much quicker than iterating over plain lists. But when, as in this case, you are executing the same Python code to compare elements of a list, it doesn't matter much whether that list is traversed via a comprehension, an explicit for loop, or functools.reduce. And where the iteration might terminate early, the explicit for loop will always win because it is the only one you can break out of.
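To illustrate with the monotonicity problem itself, a vectorized check might look like this (a sketch assuming numpy is available; not code from the thread):
Python:
import numpy as np

def is_monotonic_np(a):
    d = np.diff(np.asarray(a))                # all the differences, computed in C
    return bool(np.all(d >= 0) or np.all(d <= 0))
This still traverses the whole array with no early exit, but each pass runs at C speed rather than one interpreted comparison per element.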
 
  • #42
pbuk said:
This is a common belief [...] partly based on early implementations of Python which were really slow.
That'll be it. I learned python originally from an O'Reilly book well over fifteen years ago, so it's probably 20+ year old info now. How time flies. I guess I need to update more than my install.
 
  • #43
Succinct version, haven't tested performance:
Python:
from itertools import accumulate

def is_monotonic(lst):
    def all_eq(acc):
        # compare each element with the running acc (max or min) of the list;
        # the generator lets all() stop at the first mismatch
        return all(x == y for x, y in zip(lst, accumulate(lst, acc)))
    # non-decreasing iff every element equals its running max;
    # non-increasing iff every element equals its running min
    return all_eq(max) or all_eq(min)
accumulate is like reduce except that it returns an iterator over all the intermediate values (so the surrounding all can stop early).
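For example, the running maxima it yields:
Python:
from itertools import accumulate

list(accumulate([3, 1, 4, 1, 5], max))   # -> [3, 3, 4, 4, 5]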

It is possible to rearrange this so everything is done in "one loop" but it's hairier and doesn't really seem worth it in Python.
 
  • #44
The Baluncore technique was to imagine the stream of sequential data arriving from secondary storage, then to construct the minimum systolic processor to filter it. My Python code indexes the list twice for each entry, which can be avoided by simply saving the previous data item. I did not test that because I don't have Python 3 running here, but that is OK because I don't have Python 2 either. Thanks for checking my code. I must admit that the more I see of Python, the more human, hilarious and Pythonesque I find it.
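A sketch of that save-the-previous-item variant (assuming a non-empty list; the original code isn't quoted here):
Python:
def is_monotonic(lst):
    sense = 0
    prev = lst[0]                  # save the previous item: one index per entry
    for x in lst[1:]:
        diff = x - prev
        if diff != 0:
            if sense == 0:
                sense = diff       # first non-zero difference sets the direction
            elif (diff > 0) != (sense > 0):
                return False       # direction flipped: exit early
        prev = x
    return True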
 
