Is there any similarity between random and non-random?

  • Context: Undergrad
  • Thread starter: Loren Booda
  • Tags: Random

Discussion Overview

The discussion explores the relationship between random and non-random distributions, focusing on concepts of complexity and entropy. Participants examine how randomness can be interpreted in relation to deterministic processes and the implications of transformations on data sets.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • Some participants inquire about how a random distribution can relate to a non-random one, suggesting a need for deeper understanding of the underlying processes.
  • One participant proposes that a non-random event can be viewed as a random event with a probability of one, indicating a potential overlap between the two concepts.
  • A participant provides an example of a periodic process (0,1,2,3,4,5) to illustrate how a seemingly random distribution can be deterministic when considering conditional probabilities.
  • Questions are raised regarding which type of distribution—random or non-random—has greater potential complexity and entropy, with some suggesting that complexity may depend on how it is defined.
  • Another participant argues that if complexity is defined by order or disorder, then entropy measures could be used to assess it, citing the periodic process as an example of low entropy.
  • There is a discussion about the possibility of transformations that could alter the entropy of a data set, with implications for how randomness and order are perceived in data analysis.
  • Concerns are expressed about the limitations of defining exhaustive entropy and the implications of higher-order conditional probabilities on the understanding of order in processes.

Areas of Agreement / Disagreement

Participants express differing views on the definitions of complexity and entropy, and whether transformations can create order from randomness. The discussion remains unresolved, with multiple competing perspectives presented.

Contextual Notes

Limitations in definitions of complexity and entropy are noted, as well as the dependence on specific entropic measures and the unresolved nature of transformations discussed.

Loren Booda
How can a random distribution relate to a non-random one?
 
mathman
You could look at it the other way around: a non-random event is a random event where you have one outcome with probability one.
 
Loren Booda said:
How can a random distribution relate to a non-random one?

Following on from what mathman said: essentially, you need to gather enough information that, once you exhaust the relevant conditional probabilities, the right events end up with probability 1.

I'll give you an example of what I mean.

Let's say you have a process that goes 0,1,2,3,4,5 and repeats forever.

Now P(X = a) = 1/6 for each a in {0, 1, ..., 5}, which suggests a purely random process. But once you take into account that P(X_{b+1} = a | X_b = (a-1) mod 6) = 1, you have exhausted the space of first-order conditional probabilities, which means the process is in fact completely deterministic.
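This is easy to check empirically. The sketch below (my own illustration, not from the thread) estimates both the marginal and the first-order conditional probabilities of the periodic sequence:

```python
from collections import Counter

# The periodic process 0,1,2,3,4,5 repeating.
seq = [n % 6 for n in range(6000)]

# Marginal distribution: each symbol occurs with probability 1/6,
# which on its own looks like a fair six-sided die.
counts = Counter(seq)
marginal = {s: c / len(seq) for s, c in counts.items()}

# First-order conditional probabilities P(X_{b+1} = a | X_b = (a-1) mod 6):
# every observed transition is forced, so each is (essentially) 1.
pair_counts = Counter(zip(seq, seq[1:]))
conditional = {pair: c / counts[pair[0]] for pair, c in pair_counts.items()}

print(marginal)     # every value is 1/6
print(conditional)  # every value is ~1 (the last symbol has no successor)
```

Only six of the thirty-six possible transitions ever occur, and each occurs with probability 1, which is the "exhaustion" that reveals the determinism.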
 
Which has the greatest potential complexity -- a random or a non-random distribution?
 
Which has the greatest potential entropy -- a random or a non-random distribution?
 
Loren Booda said:
Which has the greatest potential complexity -- a random or a non-random distribution?

How do you define complexity?

If you define complexity by the amount of order or disorder a system has, then entropy in its various forms is probably the best statistical way to measure it.

The periodic 0,1,2,3,4,5 process I showed above has an entropy of 0 at one level, meaning that at that particular level the system is completely orderly.

The most complex system to analyze is one that has maximal entropy across many different entropic measures. In data compression, we usually call such data sources uniform random data, because no probabilistic compressor can take advantage of the distribution: the data has that kind of 'maximal' entropy under the various measures.
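A quick way to see the compression claim in practice (a sketch, using Python's zlib as a stand-in for "a probabilistic compressor"):

```python
import os
import zlib

# Highly ordered data: the periodic 0..5 process, 60,000 symbols.
periodic = bytes(n % 6 for n in range(60000))

# Uniform random bytes: nothing for a general-purpose compressor to exploit.
random_data = os.urandom(60000)

# The periodic stream collapses to a few hundred bytes; the random stream
# cannot be shrunk (it typically grows slightly from container overhead).
print(len(zlib.compress(periodic, 9)))
print(len(zlib.compress(random_data, 9)))
```

zlib only sees low-order structure, so this is one entropic measure among many; a source can defeat zlib and still be orderly under some other measure.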

Now, if you want to get even more technical, you would also have to show that no transform of the data changes those entropy measures.

If you showed that, for a particular class of data sets, there existed a transform that lowered some particular entropy measure, then depending on the measure used you could take data that is 'disordered' and 'random' and make it 'less disordered'. If you could keep doing this until the entropy measures described a 'completely orderly' data set, you would have found a transformation (or composition of transformations) under which the data has order.
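As a concrete sketch of such an entropy-lowering transform (my own example, not from the thread): a byte ramp has maximal symbol-frequency entropy, yet a simple delta transform drives that same measure to zero.

```python
import math
from collections import Counter

def symbol_entropy(data):
    """Zeroth-order Shannon entropy in bits per symbol."""
    n = len(data)
    counts = Counter(data)
    return sum(-(c / n) * math.log2(c / n) for c in counts.values())

# A ramp: every byte value occurs equally often, so the symbol-frequency
# entropy is maximal (8 bits/symbol) -- "random-looking" to that measure.
ramp = bytes(n % 256 for n in range(256 * 100))

# Delta transform: X_n - X_{n-1} mod 256 maps the ramp to a constant,
# and the same entropy measure drops to 0. The order was there all along;
# the transformation merely exposes it.
delta = bytes((ramp[i] - ramp[i - 1]) % 256 for i in range(1, len(ramp)))

print(symbol_entropy(ramp))   # 8.0
print(symbol_entropy(delta))  # 0.0
```

The delta transform is invertible, so no information is lost; only the chosen entropy measure changes.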

So to answer your question: if such a transform exists, then you have shown, in a sense, that the data has an order under a transformation, and that order is represented by the final transformation at which your exhaustive entropy is zero for some exhaustive measure. If no transformation gets that far, you go as low as you can in terms of exhaustive entropy, and that minimum becomes a way to describe the 'order' of the process in terms of a transformation.

When I say exhaustive I haven't really given a proper definition, but essentially it means that it depends on which entropic measure you are taking.

For example, in the 0,1,2,3,4,5 process the first-order conditional entropy is 0, which means that at this level (first-order conditional, i.e. the ordinary Markov property) the process is completely orderly. Since we are looking at first-order conditioning, we have to consider every possibility this entails, which means we have to 'exhaust' all of them. That is what I mean by 'exhaustive' or 'exhaustion'.
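The first-order conditional entropy mentioned here can be estimated directly. A sketch (the helper name `cond_entropy` is mine) comparing the periodic process with fair-die rolls:

```python
import math
import random
from collections import Counter

def cond_entropy(seq):
    """Plug-in estimate of the first-order conditional entropy
    H(X_{b+1} | X_b) in bits per symbol."""
    n = len(seq) - 1
    pair_counts = Counter(zip(seq, seq[1:]))
    prev_counts = Counter(seq[:-1])
    h = 0.0
    for (prev, nxt), c in pair_counts.items():
        # weight of this transition times its conditional surprisal
        h -= (c / n) * math.log2(c / prev_counts[prev])
    return h

periodic = [n % 6 for n in range(6000)]            # the 0,1,2,3,4,5 process
random.seed(0)
dice = [random.randrange(6) for _ in range(6000)]  # fair-die rolls

print(cond_entropy(periodic))  # 0.0: the previous symbol determines the next
print(cond_entropy(dice))      # close to log2(6), about 2.585 bits
```

A value of 0 at this order is exactly the "absolute order at some level" described below; the die rolls stay near the maximum log2(6) no matter the order of conditioning.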

If you go to even higher orders, more 'exhaustion' is required. The idea is that if, at some order, you exhaust the entire space and the entropy with respect to those states is 0, then you have found absolute order at some level of your process.

A purely random process should never allow you to get the above result, although I do not know whether that still holds under some kind of complex transformation of the data, especially a non-linear one.
 
