Python: Are Terms in this List Monotonic?

The discussion focuses on determining if a list of numbers in Python is monotonic, meaning it is either non-decreasing or non-increasing. Participants explore various methods to achieve this, including using loops to check differences between adjacent elements and suggestions to utilize Python's built-in functions like `reduce`. There is a debate over the efficiency of different approaches, with a preference for single-pass algorithms over sorting methods due to their lower time complexity. The conversation also touches on handling special cases like zero and duplicates in the list. Ultimately, the goal is to find a concise and effective way to implement a monotonicity check in Python.
  • #31
Baluncore said:
You only need to scan the list once if you test for a sign change of the first difference.
If you use sort you must do it twice, once for ascending and once for descending.
Yes, and even if the algorithm used is ## O(n \log n) ##, in practice performance is much worse than a constant-## k ## model ## k n \log n ## would suggest, because there is a step change in ## k ## when the list no longer fits in processor cache, and again, dramatically, when it no longer fits in main memory. There is also the memory usage to consider.

Baluncore said:
But with Python the aim is to write the minimum source code, in a way that can be quickly understood.
Even if "minimum source code" were a criterion (and I don't see that anywhere in the question) I don't think it would ever be the only criterion without any regard to performance.

Baluncore said:
Sorting twice will win under that measure.
But it will lose against an equally or even more concise implementation of a list traversal such as the one in #9.

Baluncore said:
A monotonicity test is not the type of algorithm I would expect to find inside a tight loop, or running 50,000 times each day. If you only run the test once each day, on a 100 element list, then efficiency is irrelevant.
No list length was specified in the OP. Edit: I agree that premature optimization is to be avoided, but selecting an (at best) ## O(n \log n) ## time, ## O(n) ## memory implementation when an at least equally trivial ## O(n) ## time, ## O(1) ## memory one is available is premature pessimization, and that is never good.
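For reference, here is a minimal sketch of such a single-pass, ## O(1) ##-memory loose check (an illustration of the approach, not the exact code from #9):
Python:
def is_monotonic(lst):
    # Loose check: a monotonic list never shows both a rise and a fall
    # in its first differences. One pass, O(1) extra memory, early exit.
    it = iter(lst)
    prev = next(it, None)
    rising = falling = False
    for x in it:
        if x > prev:
            rising = True
        elif x < prev:
            falling = True
        if rising and falling:
            return False  # first sign change: stop immediately
        prev = x
    return True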
 
  • #32
Since we all seem to be posting solutions, here's what I had in mind:
Python:
MONOTONIC_INCREASING = 1
MONOTONIC_DECREASING = -1
NOT_MONOTONIC = 0

def isMonotonic(lst):
    # Compare each element with the next, dropping equal pairs.
    ordered = [cmp(l1, l) for (l, l1) in zip(lst[:-1], lst[1:]) if cmp(l, l1) != 0]
    if len(ordered) == 0 or all(o == 1 for o in ordered):
        return MONOTONIC_INCREASING
    if all(o == -1 for o in ordered):
        return MONOTONIC_DECREASING
    return NOT_MONOTONIC
The cmp ("compare") function returns 1, 0, or -1 depending on whether the first argument is greater than, equal to, or less than the second, and it's used with zip to get a list of each element compared to the next, dropping equalities. Then it just checks that all differences were positive (or all elements were equal, arbitrarily) or all negative.
 
  • #33
Ibix said:
The cmp ("compare") function returns 1, 0, or -1 depending on whether the first argument is greater than, equal to, or less than the second
Only in Python 2: it does not exist in Python 3, although it is easy enough to implement. Also I believe your algorithm checks strict monotonicity, the OP (or a later clarification) asked for a loose check.
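For example, the usual Python 3 stand-in is the one-liner below (a sketch; any equivalent definition would do):
Python:
def cmp(a, b):
    # Python 3 replacement for Python 2's built-in cmp():
    # returns 1, 0 or -1 for a > b, a == b, a < b respectively.
    return (a > b) - (a < b)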
 
  • #34
pbuk said:
Only in Python 2: it does not exist in Python 3
Spot the person with the elderly python install. In that case, try
Python:
ordered = [(1 if l1 > l else -1) for (l, l1) in zip(lst[:-1], lst[1:]) if l != l1]
(I think I have the comparison the right way round).
pbuk said:
Also I believe your algorithm checks strict monotonicity, the OP (or a later clarification) asked for a loose check.
I don't think so. It drops the cases where they are equal, so [1, 2, 2, 3] gives [1, 1], which I think matches the loose check criterion.
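For instance, running the comprehension above on that list (a quick check; output noted in the comment):
Python:
lst = [1, 2, 2, 3]
ordered = [(1 if l1 > l else -1) for (l, l1) in zip(lst[:-1], lst[1:]) if l != l1]
print(ordered)  # [1, 1] -- the equal pair (2, 2) is dropped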
 
  • #35
Ibix said:
It drops the cases where they are equal, so [1, 2, 2, 3] gives [1, 1], which I think matches the loose check criterion.
Ah yes, didn't see that (and I couldn't check because I am not running Py2, which is now 18 months past EOL!)
 
  • #36
I have python 3 installed, but typing python invokes python2. I should probably change the behaviour...
 
  • #37
@Ibix just realized your algorithm is ## O(n) ## in memory against ## O(1) ## for a list traversal (either via iteration per @Baluncore or functools.reduce per my #7), but I won't hold that against you :biggrin:
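(#7 isn't quoted in this excerpt; as a rough sketch of what a constant-memory functools.reduce check for the non-decreasing case might look like, with names assumed:)
Python:
from functools import reduce

def is_non_decreasing(lst):
    # Fold (previous element, still ok?) through the list; assumes
    # numeric elements so float('-inf') works as the initial value.
    def step(acc, x):
        prev, ok = acc
        return (x, ok and x >= prev)
    return reduce(step, lst, (float('-inf'), True))[1]
Note that, unlike an explicit loop, reduce always consumes the whole list; there is no early exit.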
 
  • #38
It actually seems to be quite a lot slower than Baluncore's approach as well, which surprises me since I understood python loops to be slow and list comprehensions to be better. Perhaps I need to do some reading.
 
  • #39
Ibix said:
I understood python loops to be slow and list comprehensions to be better.
The difference is not so much loops vs. comprehensions but how much code written in Python (as opposed to just built-in functions or operations, which are really C code running inside the interpreter) is executed on each iteration.

However, @Baluncore's algorithm exits early if it spots a failure of monotonicity, whereas yours will traverse the entire list each time, even if a failure in monotonicity occurs early in the traversal. That's probably why yours is slower.
 
  • #40
PeterDonis said:
However, @Baluncore's algorithm exits early if it spots a failure of monotonicity, whereas yours will traverse the entire list each time, even if a failure in monotonicity occurs early in the traversal. That's probably why yours is slower.
It actually seems to be random noise. I set up a version that runs the two algorithms a thousand times on range(1000) and range(1000)[::-1] and the average is pretty close for monotonic cases. Looks like my two-or-three-runs-and-eyeball-it approach wasn't entirely reliable from a statistical point of view. Baluncore's approach is, as you note, way faster for non-monotonic cases since it exits early.
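The harness isn't quoted; an assumed reconstruction using timeit (function and variable names hypothetical) might look like this:
Python:
import timeit

def is_monotonic(lst):
    # Stand-in for whichever implementation is being measured.
    ordered = [(1 if l1 > l else -1) for (l, l1) in zip(lst[:-1], lst[1:]) if l != l1]
    return all(o == 1 for o in ordered) or all(o == -1 for o in ordered)

inc = list(range(1000))
dec = inc[::-1]
for data in (inc, dec):
    # 1000 runs each, averaged, to smooth out run-to-run noise.
    print(timeit.timeit(lambda: is_monotonic(data), number=1000) / 1000)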
 
  • #41
Ibix said:
It actually seems to be quite a lot slower than Baluncore's approach as well
Bear in mind that you are doing FOUR list traversals (the for...in comprehension, zip, and two calls to all) instead of just one.

Ibix said:
which surprises me since I understood python loops to be slow and list comprehensions to be better
This is a common belief, partly based on misunderstanding and partly based on early implementations of Python which were really slow.

PeterDonis said:
The difference is not so much loops vs. comprehensions but how much code written in Python (as opposed to just built-in functions or operations, which are really C code running inside the interpreter) is executed on each iteration.
Exactly this. For instance, finding the dot product of two vectors represented as numpy arrays is implemented entirely in C and runs much quicker than iterating over plain lists; but when, as in this case, you are executing the same Python code to compare elements of a list, it doesn't matter whether that list is traversed via a for...in comprehension, a for...next iteration, or functools.reduce. And where it is possible that the iteration might terminate early, the for...next loop will always win because it is the only one you can break out of.
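For example, a loose monotonicity check that pushes the per-element work into numpy's C loops (a sketch, assuming numeric data that fits in an array):
Python:
import numpy as np

def is_monotonic_np(a):
    d = np.diff(np.asarray(a))  # first differences, computed in C
    # No Python-level early exit, but each pass runs at C speed.
    return bool(np.all(d >= 0) or np.all(d <= 0))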
 
  • #42
pbuk said:
This is a common belief [...] partly based on early implementations of Python which were really slow.
That'll be it. I learned python originally from an O'Reilly book well over fifteen years ago, so it's probably 20+ year old info now. How time flies. I guess I need to update more than my install.
 
  • #43
Succinct version, haven't tested performance:
Python:
from itertools import accumulate

def is_monotonic(lst):
    def all_eq(acc):
        # Each element equals its running max (or min) exactly when the
        # list is loosely increasing (or decreasing).
        return all(x == y for x, y in zip(lst, accumulate(lst, acc)))
    return all_eq(max) or all_eq(min)
accumulate is like reduce except it returns an iterator of all the intermediate values (so a consumer such as all can stop early).

It is possible to rearrange this so everything is done in "one loop" but it's hairier and doesn't really seem worth it in Python.
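Quick sanity checks with the is_monotonic above:
Python:
print(is_monotonic([1, 2, 2, 3]))  # True  (loosely increasing)
print(is_monotonic([3, 2, 2, 1]))  # True  (loosely decreasing)
print(is_monotonic([1, 3, 2]))     # False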
 
  • #44
The Baluncore technique was to imagine the stream of sequential data arriving from secondary storage, then constructing the minimum systolic processor to filter that. My Python code indexes the list twice for each entry, which can be reduced by simply saving the previous data item. I did not test that because I don't have Python 3 running here, but that is OK because I don't have Python 2 either. Thanks for checking my code. I must admit that the more I see of Python, the more human, hilarious and Pythonesque I find it.
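For illustration (Baluncore's actual code appears earlier in the thread and isn't reproduced in this excerpt), the two forms being contrasted might look like this, with early exit omitted for brevity:
Python:
def has_sign_change_indexed(lst):
    # Indexes the list twice for each entry: lst[i] and lst[i - 1].
    rising = falling = False
    for i in range(1, len(lst)):
        rising = rising or lst[i] > lst[i - 1]
        falling = falling or lst[i] < lst[i - 1]
    return rising and falling

def has_sign_change_saved(lst):
    # Same test, saving the previous item instead of re-indexing.
    rising = falling = False
    prev = lst[0] if lst else None
    for x in lst[1:]:
        rising = rising or x > prev
        falling = falling or x < prev
        prev = x
    return rising and falling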
 
