
Is there any similarity between random and non-random?

  1. Mar 4, 2012 #1
    How can a random distribution relate to a non-random one?
     
  2. Mar 4, 2012 #2

    mathman

    Science Advisor
    Gold Member

    You could look at it the other way around. A non-random event is a random event where one outcome has probability one.
     
  3. Mar 4, 2012 #3

    chiro

    Science Advisor

    Following on from what mathman said: essentially, you need to gather enough information that, once you have exhausted the relevant cases, the right conditional probabilities all come out equal to 1.

    I'll give you an example of what I mean.

    Let's say you have a process that goes 0, 1, 2, 3, 4, 5 and repeats forever.

    Now P(X = a) = 1/6 for each a in {0, 1, ..., 5}, which on its own suggests a purely random process. But once you take into account that P(X_{b+1} = a | X_b = (a - 1) mod 6) = 1, you have exhausted the space of first-order conditional probabilities, and that means your process is in fact completely deterministic.
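
    To make this concrete, here is a minimal Python sketch (my own illustration; the sequence length is arbitrary) that tallies the marginal and first-order conditional frequencies of that repeating 0-5 process:

    Code:
    from collections import Counter

    # The periodic process 0,1,2,3,4,5,0,1,... (length chosen arbitrarily)
    seq = [i % 6 for i in range(6000)]

    # Marginal frequencies: every symbol occurs with probability 1/6
    marginal = Counter(seq)
    for a in range(6):
        print(a, marginal[a] / len(seq))            # ~0.1667 for each symbol

    # First-order conditional frequencies: P(X_{b+1} = a | X_b = (a-1) mod 6)
    pairs = Counter(zip(seq, seq[1:]))
    prev_counts = Counter(seq[:-1])
    for a in range(6):
        prev = (a - 1) % 6
        print(prev, '->', a, pairs[(prev, a)] / prev_counts[prev])   # exactly 1.0

    The marginal distribution alone looks uniform, but the conditional view exposes the determinism.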
     
  4. Mar 6, 2012 #4
    Which has the greater potential complexity -- a random or a non-random distribution?
     
  5. Mar 6, 2012 #5
    Which has the greater potential entropy -- a random or a non-random distribution?
     
  6. Mar 6, 2012 #6

    chiro

    Science Advisor

    How do you define complexity?

    If you define complexity by the amount of order or disorder a system has, then entropy, in its various forms, is probably the best statistical way to measure it.

    The periodic 0,1,2,3,4,5 process I showed above has entropy 0 at one level, meaning that at that particular level the system is completely orderly.

    The most complex system to analyze is one with maximal entropy across many different entropic measures. In data compression, we usually call such data sources uniform random data, because no probabilistic compressor can exploit distributional information when the data sits at this kind of 'maximal' entropy under the various measures.
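
    As a rough illustration of that point (a sketch, not a proof), you can compare how a general-purpose compressor such as Python's zlib handles the periodic source versus uniform random bytes:

    Code:
    import os
    import zlib

    n = 60000
    periodic = bytes(i % 6 for i in range(n))   # the repeating 0..5 process
    uniform = os.urandom(n)                     # approximately uniform random bytes

    # The deterministic source compresses to almost nothing; the uniform source
    # barely compresses, because there is no distributional structure to exploit.
    print(len(zlib.compress(periodic)))   # a few hundred bytes at most
    print(len(zlib.compress(uniform)))    # close to n bytes: essentially no gain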

    Now, if you want to get even more technical, you would also have to show that no transform of the data lowers those measures of the entropy.

    If you showed that, for a particular class of data sets, there existed a transform that lowered some particular entropy measure, then (depending on the measure used) you could take something 'disordered' and 'random' and make it 'less disordered'. If you could keep doing this until you reached a set of entropy measures that made the data set 'completely orderly', you would have described a transformation (or composition of transformations) that exposes some order in the data.

    So to answer your question: if such a transform exists, then in a sense you have shown that the data has an order under transformation, and that order is represented by the final transformation at which your exhaustive entropy reaches zero for some exhaustive measure. If no transformation gets you that far, you go as low in exhaustive entropy as you can, and that minimum becomes a way to describe the 'order' of the process in terms of a transformation.
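
    Here is a toy version of such a transform (my own example, not a general recipe): taking successive differences mod 6 maps the periodic 0-5 process to a constant sequence, driving the plain Shannon entropy from near its maximum down to zero.

    Code:
    from collections import Counter
    from math import log2

    seq = [i % 6 for i in range(6000)]

    # Transform: first differences mod 6. For the periodic process every
    # difference equals 1, so the transformed sequence is constant.
    diffs = [(b - a) % 6 for a, b in zip(seq, seq[1:])]

    def entropy(xs):
        # Plain (marginal) Shannon entropy of a symbol sequence, in bits
        n = len(xs)
        return -sum(c / n * log2(c / n) for c in Counter(xs).values())

    print(entropy(seq))     # ~log2(6) ~ 2.585 bits: looks maximally disordered
    print(entropy(diffs))   # 0 bits: the transform has exposed the order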

    I haven't given a proper definition of 'exhaustive', but essentially it depends on which entropic measure you are taking.

    For example, in the 0,1,2,3,4,5 process the first-order conditional entropy is 0, which means that at this level (first-order conditional, i.e. the ordinary Markov property) the process is completely orderly. Since we are looking at first-order conditionals, we have to consider, and thus 'exhaust', all the possibilities this entails. That is what I mean by 'exhaustive' or 'exhaustion'.
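
    In code, that first-order conditional entropy can be estimated directly from pair frequencies via the identity H(X_{b+1} | X_b) = H(X_b, X_{b+1}) - H(X_b) (same toy setup as before):

    Code:
    from collections import Counter
    from math import log2

    seq = [i % 6 for i in range(6000)]

    def entropy(counter):
        # Shannon entropy (in bits) of the distribution given by a Counter
        n = sum(counter.values())
        return -sum(c / n * log2(c / n) for c in counter.values())

    H_pair = entropy(Counter(zip(seq, seq[1:])))   # H(X_b, X_{b+1})
    H_prev = entropy(Counter(seq[:-1]))            # H(X_b)
    print(H_pair - H_prev)   # ~0.0: the first-order conditional entropy vanishes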

    Higher orders require even more 'exhaustion': the idea is that if, at some order, you exhaust the entire space and the entropy with respect to those states is 0, then you have found absolute order at some level of your process.

    A purely random source should never allow you to get the above result, though I am not sure what happens once you allow complex transformations of the data, especially non-linear ones.
     