Loren Booda said:
Which has the greatest potential complexity -- a random or a non-random distribution?
How do you define complexity?
If you define complexity by the amount of order or disorder a system has, then entropy, in its various forms, is probably the best statistical way to do that.
The example I showed above, the periodic 0,1,2,3,4,5 process, shows that at one level the entropy is actually 0, meaning that at that particular level the system is completely orderly.
The most complex system to analyze is one which has maximal entropy across many different entropic measures. In data compression we usually refer to such data sources as uniformly random, because no probabilistic compressor can take advantage of any distribution information: the data has this kind of 'maximal' entropy under the various measures.
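As a rough illustration of that point, here is a quick sketch using Python's standard zlib as a stand-in for 'a probabilistic compressor' (the sizes in the comments are what I would expect on a typical run, not exact figures):

```python
import os
import zlib

n = 100_000

# Uniformly random bytes: no distribution for a compressor to exploit.
random_data = os.urandom(n)

# Highly patterned bytes: the same six-symbol period repeated.
patterned_data = bytes([0, 1, 2, 3, 4, 5] * (n // 6))

for name, data in [("random", random_data), ("patterned", patterned_data)]:
    compressed = zlib.compress(data, 9)
    print(f"{name:9s} {len(data):6d} -> {len(compressed):6d} bytes")

# Expected: the random input comes out slightly *larger* than it went in
# (container overhead), while the patterned input collapses to a tiny
# fraction of its original size.
```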
Now if you want to get even more technical, you would also have to show that no transform of the data changes those entropy measures.
If you showed that, for a particular class of data sets, there existed a transform that lowered some particular entropy measure, then depending on the actual measure you used, you could take something that is 'disordered' and 'random' and make it 'less disordered'. If you were able to keep doing this until you arrived at a set of entropy measures that made the data set 'completely orderly', then you would have described a transformation (or composition of transformations) that exposes an order in the data.
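Here is a toy sketch of what such a transform can look like (my own illustrative example, with mod_diff being a name I made up: an invertible modular-difference transform applied to the periodic process from earlier):

```python
from collections import Counter
import math

def marginal_entropy(seq):
    """Zeroth-order (i.i.d.) entropy estimate, in bits per symbol."""
    counts = Counter(seq)
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def mod_diff(seq, m=6):
    """Each symbol minus its predecessor, mod m. Keeping the first
    symbol unchanged makes the transform invertible."""
    return [seq[0]] + [(b - a) % m for a, b in zip(seq, seq[1:])]

original = [0, 1, 2, 3, 4, 5] * 100
transformed = mod_diff(original)

print(marginal_entropy(original))     # ~2.585 bits (log2 6): looks maximally mixed
print(marginal_entropy(transformed))  # ~0 bits: every symbol after the first is 1
```

Marginally the original sequence looks like a fair six-sided die, but one invertible transform exposes the order completely, which is exactly the sense of 'making it less disordered' I mean above.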
So to answer your question: if the kind of transform mentioned above exists, then in a sense you have shown that the data has an order under a transformation, and that order is represented by the final transformation, at which point your exhaustive entropy is zero for some exhaustive measure. If no transformation gets you that far, you simply go as low as you can in exhaustive entropy, and that becomes your way of describing the 'order' of the process in terms of a transformation.
When I say 'exhaustive' I haven't really given a proper definition, but essentially it depends on which entropic measure you are taking.
For example, in the 0,1,2,3,4,5 process the first-order conditional entropy is 0, which means that at this level (first-order conditional, i.e. the ordinary Markov property) the process is completely orderly. Now, since we are looking at first-order conditioning, we have to consider all the possibilities this entails, which means we have to 'exhaust' all of them. This is what I mean when I say 'exhaustive' or 'exhaustion'.
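A quick sketch of how that might be estimated empirically (a plug-in estimator I am improvising here; cond_entropy is my own name, and the parameter k is the conditioning order, so k = 1 is the Markov case just described and larger k gives the higher orders mentioned next):

```python
from collections import Counter, defaultdict
import math

def cond_entropy(seq, k):
    """Empirical order-k conditional entropy, in bits per symbol.

    Conditions each symbol on the k symbols before it, visiting
    ('exhausting') every context that actually occurs in the data.
    k = 0 reduces to the plain marginal entropy.
    """
    contexts = defaultdict(Counter)
    for i in range(k, len(seq)):
        contexts[tuple(seq[i - k:i])][seq[i]] += 1
    total = sum(sum(c.values()) for c in contexts.values())
    h = 0.0
    for counter in contexts.values():
        n = sum(counter.values())
        for c in counter.values():
            h -= (n / total) * (c / n) * math.log2(c / n)
    return h

periodic = [0, 1, 2, 3, 4, 5] * 1000

print(cond_entropy(periodic, 0))  # ~2.585 bits: marginally the symbols look uniform
print(cond_entropy(periodic, 1))  # 0.0 bits: each symbol fully determines the next
```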
If you go to even higher orders, then more 'exhaustion' is required. The idea is that if you reach some order, exhaust the entire space, and the entropy with respect to those states comes out 0, then you have found absolute order at that level of your process.
A purely random source should never let you get the above result, although I do not know whether that still holds once you consider some kind of complex transformation of the data, especially a non-linear one.
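As a small empirical sanity check of the first claim, reusing the helpers from the sketches above (this says nothing about cleverer non-linear transforms, of course): a simple invertible transform applied to uniformly random symbols leaves the entropy estimate pinned at log2 6.

```python
from collections import Counter
import math
import random

def marginal_entropy(seq):
    counts = Counter(seq)
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def mod_diff(seq, m=6):
    return [seq[0]] + [(b - a) % m for a, b in zip(seq, seq[1:])]

random.seed(0)
noise = [random.randrange(6) for _ in range(100_000)]

print(marginal_entropy(noise))            # ~2.585 bits (log2 6)
print(marginal_entropy(mod_diff(noise)))  # still ~2.585 bits: no order exposed
```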