What's the meaning of "random" in Mathematics?

  • Thread starter fbs7

jbriggs444

Science Advisor
We could, but we don't need to. Any positive number, or for that matter, any negative number or floating point number can be represented by a string of 0s or 1s. At heart, any data in a computer is composed of binary strings.
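A quick Python illustration of that point (the helper names are mine, just for this sketch): integers, floats, and text all reduce to strings of 0s and 1s.

```python
import struct

def bits_of_int(n: int) -> str:
    """Binary digits of a non-negative integer."""
    return bin(n)[2:]

def bits_of_float(x: float) -> str:
    """IEEE-754 double as a 64-bit string (sign bit first)."""
    raw, = struct.unpack(">Q", struct.pack(">d", x))
    return format(raw, "064b")

def bits_of_text(s: str) -> str:
    """UTF-8 bytes of a string, as bits."""
    return "".join(format(b, "08b") for b in s.encode("utf-8"))

print(bits_of_int(42))      # 101010
print(bits_of_float(-1.5))  # 64 bits; leading 1 is the sign bit
print(bits_of_text("TM"))   # 0101010001001101
```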
Early in a course on the Theory of Computing, one encounters a number of basic proofs of equivalence(*). Among these are proofs that Turing Machines running on a binary alphabet have power equivalent to Turing Machines running on arbitrary alphabet sizes. Or with an arbitrary number of tapes. Or with a one-way unbounded tape, etc. Or that an arbitrary TM can be simulated by a fixed finite Universal Turing Machine that is provided with a description of the emulated TM on its input tape.

(*) Equivalent in the sense that if there is a TM of the one sort that can solve a given problem, there is also a TM of the other sort that can also do so.
 
The abstract definition of a program always means "infinite memory". All "real world" programs are of course really finite automata in a sense (unless we somehow assume the ability to attach bigger and bigger memories as required). But thinking about them in an abstract way (as idealised programs) is often quite helpful.
Ah, thank you!
 
In layman's terms, I think "random" means that there is no exact pattern to follow; thus there is no mathematical function that could describe the phenomenon except the phenomenon being the function itself, which means no shorter description exists than what it is. If one considers that the universe is random, then trying to put it into a theory seems possible only as what it is here and now, imo.
 
In layman's terms, I think "random" means that there is no exact pattern to follow; thus there is no mathematical function that could describe the phenomenon except the phenomenon being the function itself, which means no shorter description exists than what it is.
I do not think this complies with the compressibility-based definition (given in the wiki link in post #74) [.... though I think I understand what you are trying to say .... you are equating "random" with "non-deterministic" in a sense]. One could easily give finite descriptions of functions which are simply not compressible. [But then there is the question of what ought to count as a description ..... which I want to avoid.]
 

FactChecker

Science Advisor
In layman's terms, I think "random" means that there is no exact pattern to follow; thus there is no mathematical function that could describe the phenomenon except the phenomenon being the function itself, which means no shorter description exists than what it is. If one considers that the universe is random, then trying to put it into a theory seems possible only as what it is here and now, imo.
As a working definition, this would have a problem with something that combines a very predictable part with a truly random part. Suppose that ##\{a_n\}## is a truly random sequence of numbers between 0 and 1. The derived sequence ##\{n+a_n\}## would be impossible to generate by any program, even though it is easy to compress in part (the formula ##n+a_n## is itself a compression of the predictable part). It would not be considered random for practical use.
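A small sketch of this point, with a seeded PRNG standing in for the "truly random" ##a_n## (the names are mine): the integer part of each term is described by one short rule, so only the ##a_n## part resists compression.

```python
import random
from fractions import Fraction

random.seed(0)  # a PRNG stands in for the "truly random" source here
a = [Fraction(random.getrandbits(32), 2**32) for _ in range(10)]  # a_n in [0, 1)
b = [n + a[n] for n in range(10)]  # the derived sequence {n + a_n}

# The integer part of each b_n is just n: one short rule describes it all.
assert [int(x) for x in b] == list(range(10))

# So a full description of b needs only the rule "n + a_n" plus the a_n
# themselves; only the a_n part resists compression.
recovered_a = [b[n] - n for n in range(10)]
assert recovered_a == a
```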
 
291
36
As a working definition, this would have a problem with something that combines a very predictable part with a truly random part. Suppose that ##\{a_n\}## is a truly random sequence of numbers between 0 and 1. The derived sequence ##\{n+a_n\}## would be impossible to generate by any program, even though it is easy to compress in part (the formula ##n+a_n## is itself a compression of the predictable part). It would not be considered random for practical use.
I am having some difficulty understanding what you are saying. If you assume the sequence ##\{a_n\}## is not compressible (beyond a constant number of bits), then wouldn't it be possible (at least under some circumstances) that ##\{n+a_n\}## is also not compressible? Generally speaking, I do not see any obvious way to give short programs for computing the terms of the latter sequence (starting from blank) when we don't have short programs for computing terms of the former sequence. So it seems that such an assertion would need a general justification?

But then again, maybe I have missed something since I am not fully familiar with this topic.

Or perhaps your premise for the above statement is a little different.
 

FactChecker

Science Advisor
I am having some difficulty understanding what you are saying. If you assume the sequence ##\{a_n\}## is not compressible (beyond a constant number), then wouldn't it be possible that ##\{n+a_n\}## is also not compressible?
It wouldn't be completely compressible. The integer parts can be compressed, but the ##\{a_n\}## part cannot.
 
My feeling is that, quite likely, there are already quite general theorems in the field of computable randomness that cover the case mentioned above (and probably much more).

P.S. The identity function ##n \mapsto n## should definitely be compressible, somewhat trivially (since we can give very basic demonstrations of small programs computing very large numbers ...... hence going outside of the constant range required by definition).

Also, in a similar way, all computable functions that at least grow faster than some very basic rate (say ##n \mapsto 2n##) should also be compressible (should be fairly easy to see I think).
 
These criteria blow my mind. Say I have a 1,000,000-byte string that I can compress to 1,000 bytes using zip compression; great, then I know for sure it is not random.

Now say I have a 1,000,000-byte string that I cannot compress with any algorithm I try. Then I assume it's random. But tomorrow someone invents some super-fractal-evolving-zop algorithm that manages to compress it to 900,000 bytes. Now the same string is no longer random. It seems odd to me that a mathematical criterion would be true one day and false the next because someone in Alpha Centauri invented a smart algorithm...
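A rough version of this experiment in Python, using zlib as the "zip compression" (a sketch, not a definition of randomness):

```python
import os
import zlib

structured = b"0123456789" * 100_000   # 1,000,000 very regular bytes
noise = os.urandom(1_000_000)          # 1,000,000 bytes from the OS entropy pool

small = len(zlib.compress(structured, 9))
big = len(zlib.compress(noise, 9))

print(small)  # a few thousand bytes at most: certainly not random
print(big)    # roughly 1,000,000 bytes: zlib found no pattern
# Caveat: failing to compress only shows that *this* algorithm found no
# pattern; it does not prove that no shorter description exists.
```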
 

jbriggs444

Science Advisor
These criteria blow my mind. Say I have a 1,000,000-byte string that I can compress to 1,000 bytes using zip compression; great, then I know for sure it is not random.

Now say I have a 1,000,000-byte string that I cannot compress with any algorithm I try. Then I assume it's random. But tomorrow someone invents some super-fractal-evolving-zop algorithm that manages to compress it to 900,000 bytes. Now the same string is no longer random. It seems odd to me that a mathematical criterion would be true one day and false the next because someone in Alpha Centauri invented a smart algorithm...
The mathematical criterion did not change. An algorithm "exists" whether it is known or not.

There is also a minor nit. If you want an objective measure of the compressibility of a particular string, you have to count the size of the compression algorithm along with the size of the compressed string. Anybody and his brother can compress a specific 1,000,000 byte string using a custom built 2,000,000 byte compression tool tuned for that string. That motivates the use of a minimal programming environment (e.g. a small Universal Turing Machine) if one is trying to define these things.

Suppose we have two competing definitions of randomness/compressibility: yours using your programming environment and mine using mine. If I can write an emulator of size n for your environment in my environment, and you can write an emulator of size m for my environment in yours, then (with a few more reasonable assumptions) there can be at most an n+m difference between our two measures of compressibility for any given string.

[Maybe it's max(n,m) -- it has been a while since I glanced at that result and said "oh, that makes sense"]
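For what it's worth, the standard form of this result (the invariance theorem for Kolmogorov complexity) can be sketched as follows, writing ##K_{\text{mine}}## and ##K_{\text{yours}}## for the shortest-program lengths in the two environments and n, m for the emulator sizes as above:

```latex
% Each direction of emulation gives a one-sided bound: prepending my n-byte
% emulator of your environment to any of your programs shows
%   K_mine(x)  <= K_yours(x) + n,   and symmetrically
%   K_yours(x) <= K_mine(x)  + m.
% Together these give, for every string x,
\lvert K_{\text{yours}}(x) - K_{\text{mine}}(x) \rvert \le \max(n, m),
% (up to the usual additive O(1) bookkeeping), which supports the
% max(n,m) version of the bound.
```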
 
The mathematical criterion did not change. An algorithm "exists" whether it is known or not.
Ah, I see! Good point!!

Now, how about this question: say that P is the set of all possible programs for some given machine; P is certainly infinite. Then for a certain string s we want to consider only algorithms p ∈ P with size(p) < size(s); we don't want to consider algorithms that are too big. Even so, the number of algorithms will be immense; if that string is 1,000,000 bytes, then the programs cannot be more than, say, 500,000 bytes, and that is still up to 256^500,000 programs, or about 10^1,204,120. Then how do we prove that among that gazillion of programs there isn't one that compresses the string to 500,000 bytes?
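For scale, the exponent of that search space can be computed directly (counting all byte strings of length exactly 500,000, which overcounts valid programs but shows the magnitude):

```python
from math import log10

program_bytes = 500_000

# Number of decimal digits in 256**500_000, i.e. 500_000 * log10(256):
count_digits = program_bytes * log10(256)
print(round(count_digits))  # the count is a number with about 1,204,120 digits
```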

This criterion seems very interesting, but I suspect applying it may be challenging at best! I'm so happy I didn't become an Information Theory guy, because my poor old brain would be fried many times in the first week! :biggrin:
 

jbriggs444

Science Advisor
Then how do we prove that among that gazillion of programs there isn't one that compresses the string to 500,000 bytes?
In general, it is not a solvable problem. Are you familiar with the halting problem? There is no general way to tell which of those programs compute anything at all.
 
Right; I wonder what's the benefit of the compressibility criterion thingie, other than being an interesting statement.

Man, this random string stuff is a tough cookie to crack!
 

FactChecker

Science Advisor
The compressibility criterion is a rather profound insight that allows one to immediately say that pseudorandomly generated numbers are not random, no matter what statistical tests they pass. So, although proving a sequence is random may still be challenging, proving that some sequences are not random can be done formally. It also allows one to say that replaying a recorded sequence of random numbers does not make the sequence nonrandom, even though the sequence is determined and repeatable.

However, that definition of random does not address the statistical properties of the sequence. We are almost always more interested in the distribution and certain statistical properties than we are in pure randomness. (There are some exceptions.)
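A minimal illustration of the first point: a pseudorandom sequence of any length is fully reproduced by a short program plus a seed, so its shortest description stays small no matter how long the sequence gets (a sketch; the names are mine).

```python
import random

def prng_output(seed: int, n: int) -> list:
    """n pseudorandom 32-bit words, entirely determined by this short function."""
    rng = random.Random(seed)
    return [rng.getrandbits(32) for _ in range(n)]

long_sequence = prng_output(12345, 1_000_000)

# The million-word sequence is reproducible from a description a few lines
# long (this function plus one seed), so in the compressibility sense it
# cannot be random, no matter which statistical tests it passes.
assert prng_output(12345, 1_000_000) == long_sequence
```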
 
Hornbein
Having a masters degree in probability and statistics, I can answer this question with authority. I can't be bothered to read this whole thread, so forgive me if I'm redundant.

Probability was axiomatized by Kolmogorov in 1933. Probability is the study of normalized measure spaces (measure spaces of total measure 1). There is nothing random about the math that is used to compute probabilities. One can do the math with no concept of probability at all if one so wishes. It can of course be applied to random processes, but as with all math, the responsibility for the validity of such an application lies with the applier, not the mathematician.

It doesn't help that "random" often implies a uniform distribution. Things that have a 1% chance of going wrong aren't considered random. You just have to keep that in mind.

In my opinion, for the most part "random" means "unpredictable." It is subjective.

However it seems possible to prove more, that some things can never be predicted, and are hence essentially or truly random. Sometimes but not always this is what physicists mean when they say "random."

If you want to prove that something will never be predictable, then you have to show that if you could predict it, some contradiction would arise. Perhaps Conway and Kochen succeeded in doing that with quantum spin in their Free Will Theorem. Arguments that the spin of entangled quantum particles cannot be used to communicate are of this sort. If you could communicate in that way, then the concept of causality falls apart.

I would say that presidential elections are random. But this is not how the word is commonly used. We think -- rather dubiously, say I -- that there is some reason that so-and-so won. So this sort of thing is excluded. More often, "random" describes something done for incomprehensible reasons, or a mechanical system that is not well understood. But maybe tomorrow such reasons or systems will be better understood. Then they will no longer be random to someone who understands them, though still random to someone who doesn't. A good example is the magnetic field reversals of Earth. Today they are random. But sooner or later, I believe, they will become at least somewhat predictable. Or the orbits of the planets: they were random until Ptolemy came up with his epicycles.

I have personal experience with this. I have some videos I made on Youtube. The viewership is quite random day to day. But recently I looked at some graphs of views over the last two years. One was linear, another exponential, and a third logarithmic. It was amazing how closely the curves fit what I had thought were isolated blips. I feel I can confidently predict that these trends will continue. Then again, some of the vids had viewership graphs that just wiggled around. Random.

Is the CMB random or not? Hard to say, hard to say. So I don't worry about this issue much. I just use my simple definition, that "random" and "unpredictable" are the same, and let others discuss definitions.
 
Wow, what an impressive post! Thank you so much for it - I'll definitely re-read it several times, as it refers to several concepts I'm not familiar with! Thank you!

Let me ask a related question... say that Civilization 9 From Outer Space decides to announce themselves by sending a radio signal that is beyond any possibility of confusion with any natural process. So they send say 100 billion trillion bits of the number pi in binary. Their thought is... "the chance of any natural process to generate pulses that are the same as the number pi with 100 billion trillion bits is nil, and any Civilization 9 child with IQ of 1 billion will recognize pi in a blink!".

We know that the bits of pi in binary are not random at all; but they pass our randomness tests, and as we don't have an IQ of 1 billion, we can't really distinguish that signal from random noise. So, in that case, we may not be able to distinguish an intelligent, designed, procedural, calculated and predictable sequence from random noise.

The question is: where's the mathematical gap in scenarios like this? Once you know the context of a sequence (say the algorithm or a "meaning"), it's easy to know that something is not random. But context is not mathematical, so short of having some additional information about a sequence, or being lucky in compressing it, it seems in general to be an impossible problem to separate whether something is truly random or just generated by some mechanism we don't understand - that would be my guess.

ps: Now extend the scenario to say alpha decay. We can get timings of alpha decay, and they may seem completely random to us... but maybe there is an underlying deterministic process for alpha decay, but it's too complicated for our low IQ and "random" there just means we don't really know the rules that control it?
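On the ##\pi## example: that the bits (or digits) of ##\pi## are maximally compressible can be made concrete, since a few lines of integer arithmetic produce as many digits as desired. A sketch using Machin's formula ##\pi = 16\arctan(1/5) - 4\arctan(1/239)## (the helper names are mine):

```python
def atan_inv(x: int, prec: int) -> int:
    """floor-ish of 10**prec * arctan(1/x), via the alternating series
    arctan(1/x) = sum_{k>=0} (-1)**k / ((2k+1) * x**(2k+1))."""
    scale = 10 ** prec
    total, k = 0, 0
    term = scale // x
    while term:
        total += term if k % 2 == 0 else -term
        k += 1
        term = scale // ((2 * k + 1) * x ** (2 * k + 1))
    return total

def pi_digits(n: int) -> str:
    """First n decimal digits of pi, by Machin's formula."""
    prec = n + 10  # guard digits absorb the truncation error
    pi_scaled = 4 * (4 * atan_inv(5, prec) - atan_inv(239, prec))
    return str(pi_scaled)[:n]

print(pi_digits(15))  # 314159265358979
```

So the whole unbounded digit stream has a constant-size description, which is exactly why it fails the compressibility definition of random even while passing statistical tests.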
 

FactChecker

Science Advisor
@Hornbein , Good post. I have a couple of thoughts.
It doesn't help that "random" often implies a uniform distribution. Things that have a 1% chance of going wrong aren't considered random.
One field where very small probabilities are studied is in reliability engineering with tools like fault trees. They often study rare events that have catastrophic consequences.
In my opinion for the most part "random" means "unpredicatable." It is subjective.
I am not sure that I would call this subjective. I agree that the interpretation of "random" is often dependent on the amount of information available. But, given a certain set of information, either there is more than one possibility to guess about or the result is predictable. That would make it objective, but dependent on the information available.

There are common examples where the lack of information is key to treating something as random. Suppose a coin is tossed, caught, and the result is covered. After that, there is nothing physically random about the result -- it has already happened and the result is determined but unknown. If we try to guess the result, we still treat this as a random process.
 
@Hornbein , Good post. I have a couple of thoughts.
One field where very small probabilities are studied is reliability engineering, with tools like fault trees. They often study rare events that have catastrophic consequences.

I am not sure that I would call this subjective. I agree that the interpretation of "random" is often dependent on the amount of information available. But, given a certain set of information, either there is more than one possibility to guess about or the result is predictable. That would make it objective, but dependent on the information available.

There are common examples where the lack of information is key to treating something as random. Suppose a coin is tossed, caught, and the result is covered. After that, there is nothing physically random about the result -- it has already happened and the result is determined but unknown. If we try to guess the result, we still treat this as a random process.
And even if it's an unfair coin, if I don't know which way it's biased, my odds of guessing correctly or incorrectly which way up it is are still 50% each.
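That remark can be made precise, assuming the guess itself is a fair coin flip:

```python
from fractions import Fraction

def p_correct(p: Fraction) -> Fraction:
    """Chance of guessing a biased coin correctly when the guess itself is
    uniform: P = p*(1/2) + (1-p)*(1/2), which simplifies to 1/2."""
    half = Fraction(1, 2)
    return p * half + (1 - p) * half

# Whatever the unknown bias p, the uniform guesser is right half the time:
for p in (Fraction(1, 10), Fraction(1, 2), Fraction(9, 10)):
    assert p_correct(p) == Fraction(1, 2)
```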
 
