What's the meaning of "random" in Mathematics?

Summary
The discussion explores the concept of "random" in mathematics, questioning its foundational basis compared to established mathematical principles. Participants note that while randomness is often treated as a variable in probability theory, it lacks a clear axiomatic framework, leading to confusion between randomness and uncertainty. The conversation emphasizes that randomness is more about human interpretation than a distinct mathematical concept, with random variables being rigorously defined within probability distributions. Additionally, the relationship between probability theory and measure theory is highlighted as often underemphasized in introductory courses. Ultimately, the term "random" may complicate rather than clarify mathematical discourse.
  • #31
fresh_42 said:
... is the mistake. However, it hits a notorious weakness of mine. I'm no friend of the common pedagogic concepts which proceed along the lines:
  1. It <insert a content of your choice, e.g. calculating 3-5 or introducing partial differentials, or complex numbers etc.> is impossible.
  2. It is too difficult for you.
  3. We will deal with it later.
  4. Btw., it is now possible, not difficult at all, and now is the time. :-p
I really hate this approach. It is based on the assumption of stupidity, and it fools students. In my opinion we should start to teach actual mathematics instead of procrastinating content over and over again. No wonder that people think ##17 - 25 \cdot 0## is mathematics!

I do not see any difficulties in the introduction of sigma algebras...

Yes, I picked up on this in a spat over prime numbers in the pre-calc forum. I actually am (semi) sympathetic to this in general.

I don't think it applies here though, in particular the underlined part.

Feller vol 1 does not assume stupidity on the part of the reader. It is rigorous; it is the book that got probability accepted by mathematicians outside the USSR; it explicitly constrains itself to denumerable sample spaces so as to focus on probabilistic, not analytic, challenges (it even tells us that the sequel, vol 2, introduces measures to generalize the setting); it contains an awful lot of analysis; and it includes original, difficult results (e.g. Feller-Erdos-Pollard). It also spawned real research that was incorporated into editions 2 and 3 of vol 1, either directly or as footnotes. One example I like: after the first edition of vol 1 was published, KL Chung pointed out that countable-state Markov chain results from Kolmogorov, applied to a very well chosen chain, imply Feller-Erdos-Pollard.

My approach of starting simple and building up really is close to how Polya would proceed (I think).
 
Last edited:
  • #32
I was just rolling in my bed, unable to sleep, and now I know the reason for that -- it's because the concept of "random" is finally getting in order in my head. What a trip this was! What makes sense to me is that there are really three concepts of "random", and people use the same word for completely different things:

(a) One is a well-structured, axiomatic, abstract mathematical structure that defines and studies "probability". It is not based on any actual dice rolling or some ghost taking cards out of a deck; it is instead a logical system built around abstract concepts like a probability space and a measure space. As such, it's rather beautiful. Here "random" doesn't really mean anything, as it's axiomatic, and we could just as well say "bananility" instead of "probability" and the mathematical structure would be exactly the same.

(b) Another is the mysterious realm of quantum mechanics, where for some crazy odd reason real objects do seem to exactly follow laws derived from the abstractions above. Here "random" really means random; there's no other way to describe it. Why quantum objects behave this way is a mind-blowing question, and I suspect it's one of the greatest mysteries of physics, but thankfully everyday dudes like me don't have to worry about it and have no use for it; we can just have faith in physicists to get stuff to work by using those rules, and hopefully not blow the planet to pieces while doing it.

(c) Another is the macroscopic realm that we all handle every day. Here "random" really means unknown. There's no real randomness at the macroscopic scale. We think of the cards in the deck as random just because they are turned face down, and if we could calculate exactly all the forces acting on the dice we could deterministically predict which number would be rolled. One could despair over the unknown, but by making assumptions (like the deck not missing any cards and the cards being equally probable) and by applying that mathematical framework, we can make guesses and estimates of outcomes which, if we assumed right, will over large numbers of trials be close to the mathematical predictions.

What's awesome is that (c) is routinely used by billions of people. It's actually very amazing if one thinks about it -- regular joes use it every day, for example saying "wow, what a hail-mary pass -- he'll never be able to repeat that!" to express how unlikely ##p(x)^2## is, without really knowing why that's correct. We joes don't care about, and do not use, anything about tensors or Hilbert matrices or the 350-millionth digit of pi, but we use probability as commonly as we use algebra to check the bills.

For that reason I now have renewed respect for probability theory, due to its widespread use, and now I think that field is one of the Titans of mathematics, with the same practical utility as algebra and geometry! Once again, thanks all for this very inspiring discussion!
 
  • Like
Likes Auto-Didact and FactChecker
  • #33
That is a good overview of the situation. One comment I would make is that the different ways of looking at it are really all compatible and complement each other. The axiomatic view really does describe the view of (c), even though the translation between the two may appear difficult. In a famous disagreement between Einstein and Niels Bohr, Einstein contended that (c) was the only situation and that there was always a hidden, unknown cause for every "random" event. (I hope that I am not butchering this.) A famous Einstein quote is "God does not play dice." At the quantum level, Bohr is considered the winner of that disagreement, because of the theorem of John Stewart Bell and the experiments it inspired.

I highly recommend that you look at the volumes "An Introduction to Probability Theory and its Applications" by Feller. They may be expensive and hard to find, but they are classics.

In the axiomatic view, it is irrelevant whether there is fundamental randomness or just unknown deterministic causes. The mathematics doesn't care. But a great strength of mathematics is that its logic and validity hold in many different applications.
 
Last edited:
  • Like
Likes StoneTemplePython and fbs7
  • #34
fbs7 said:
I was just rolling in my bed, unable to sleep, and now I know the reason for that -- it's because the concept of "random" is finally getting in order in my head. What a trip this was! What makes sense to me is that there are really three concepts of "random", and people use the same word for completely different things:

(a) One is a well-structured, axiomatic, abstract mathematical structure that defines and studies "probability". It is not based on any actual dice rolling or some ghost taking cards out of a deck; it is instead a logical system built around abstract concepts like a probability space and a measure space. As such, it's rather beautiful. Here "random" doesn't really mean anything, as it's axiomatic, and we could just as well say "bananility" instead of "probability" and the mathematical structure would be exactly the same.

(b) Another is the mysterious realm of quantum mechanics, where for some crazy odd reason real objects do seem to exactly follow laws derived from the abstractions above. Here "random" really means random; there's no other way to describe it. Why quantum objects behave this way is a mind-blowing question, and I suspect it's one of the greatest mysteries of physics, but thankfully everyday dudes like me don't have to worry about it and have no use for it; we can just have faith in physicists to get stuff to work by using those rules, and hopefully not blow the planet to pieces while doing it.

(c) Another is the macroscopic realm that we all handle every day. Here "random" really means unknown. There's no real randomness at the macroscopic scale. We think of the cards in the deck as random just because they are turned face down, and if we could calculate exactly all the forces acting on the dice we could deterministically predict which number would be rolled. One could despair over the unknown, but by making assumptions (like the deck not missing any cards and the cards being equally probable) and by applying that mathematical framework, we can make guesses and estimates of outcomes which, if we assumed right, will over large numbers of trials be close to the mathematical predictions.

What's awesome is that (c) is routinely used by billions of people. It's actually very amazing if one thinks about it -- regular joes use it every day, for example saying "wow, what a hail-mary pass -- he'll never be able to repeat that!" to express how unlikely ##p(x)^2## is, without really knowing why that's correct. We joes don't care about, and do not use, anything about tensors or Hilbert matrices or the 350-millionth digit of pi, but we use probability as commonly as we use algebra to check the bills.

For that reason I now have renewed respect for probability theory, due to its widespread use, and now I think that field is one of the Titans of mathematics, with the same practical utility as algebra and geometry! Once again, thanks all for this very inspiring discussion!

The late physicist E. T. Jaynes wrote a provocative book "Probability Theory: the Logic of Science", Cambridge University Press, 2003, in which he essentially rejects the very idea of "randomness". That's right, a large probability book by somebody who does not believe in randomness! For Jaynes (and several others---maybe mostly physicists), probability is associated with a "degree of plausibility". He shows that using some reasonable axioms about how plausibilities combine, you can end up with multiplication laws like P(A & B) = P(A) P(B|A), etc. His book essentially tries to stay away from the whole "Kolmogorov" measure-theoretic way of doing probability, and so can only treat problems that do not involve things like ##P(\lim_{n \to \infty} A_n)## (but can certainly deal with things like ##\lim_{n \to \infty} P(A_n)##).
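For concreteness, here is a minimal numerical check of that multiplication law on a toy sample space (a sketch only; the event names and helpers are mine, not Jaynes's):

Code:
from itertools import product

# Toy sample space: two fair coin tosses, all four outcomes equally plausible.
omega = list(product("HT", repeat=2))
P = lambda event: sum(1 for w in omega if event(w)) / len(omega)

A = lambda w: w[0] == "H"                 # first toss is heads
B = lambda w: w.count("H") == 2           # both tosses are heads

# P(B|A) computed directly by restricting the sample space to A.
P_B_given_A = sum(1 for w in omega if A(w) and B(w)) / sum(1 for w in omega if A(w))
assert P(lambda w: A(w) and B(w)) == P(A) * P_B_given_A   # 0.25 == 0.5 * 0.5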

In his third chapter entitled "Elementary Sampling Theory", he says on pp. 73-74 (after developing the basic probability distributions):
"In the case of sampling with replacement, we apply this strategy as follows.
(1) Suppose that, after tossing the ball in, we shake up the urn. However complicated the problem was initially, it now becomes many orders of magnitude more complicated, because the solution now depends on every detail of the precise way we shake it, in addition to all the factors mentioned above.
(2) We now assert that the shaking has somehow made all these details irrelevant, so that the problem reverts back to the simple one where the Bernoulli urn rule applies.
(3) We invent the dignified-sounding word randomization to describe what we have done. This term is, evidently, a euphemism whose real meaning is: deliberately throwing away relevant information when it becomes too complicated for us to handle."

"We have described this procedure in laconic terms, because an antidote is needed for the impression created by some writers on probability theory, who attach a kind of mystical significance to it. For some, declaring a problem to be "randomized" is an incantation with the same purpose and effect as those uttered by an exorcist to drive out evil spirits; i.e., it cleanses the subsequent calculations and renders them immune to criticism. We Agnostics often envy the True Believer, who thus acquires so easily that sense of security which is forever denied to us."

Jaynes goes on some more about this issue, often revisiting it in subsequent chapters. Lest you think that his book is just "hand-waving", be assured that it is satisfyingly technical, presenting most of the usual equations that you will find in other books at the senior undergraduate and perhaps beginning graduate level (at least in "applied" courses). The man is highly opinionated and I do not subscribe to all he posits, but I find the approach interesting and refreshing, even though it is one I, personally, would not embrace. He does end the book with a long appendix outlining other approaches to probability, including the usual measure-theoretic edifice.
 
Last edited:
  • Like
Likes Auto-Didact, fbs7, jim mcnamara and 1 other person
  • #35
fbs7 said:
If I say x ∈ X, how do I know if this is a random variable or not?
Hi fbs:

This seems like a very strange question to me. If you say x ∈ X, you know something about x and X. Presumably you would know that x is a random variable if someone you believe to be knowledgeable tells you it is. What that knowledgeable person needs to know is whether the process for obtaining values of x is a random process. So the randomness of a variable is determined by whether the process for obtaining its values is a random process.

I am guessing you have some uncertainty about what it means for a process to be random. A random process is a process for which it is impossible, by any means, to know in advance what a particular value will be. This is the distinction between a random process and a pseudo-random process: if the process is pseudo-random, and you know the nature of the process and its initial conditions, you can in principle calculate the next value it will generate.
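To make that distinction concrete, a minimal sketch using Python's random module (a pseudo-random generator): given the algorithm and the seed, every "unpredictable" value is in fact determined in advance.

Code:
import random

# Two pseudo-random generators seeded identically produce identical output:
# knowing the algorithm and its initial conditions, every value is predictable.
g1 = random.Random(42)
g2 = random.Random(42)
assert [g1.randint(1, 6) for _ in range(10)] == [g2.randint(1, 6) for _ in range(10)]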

I hope this is helpful.

Regards,
Buzz
 
  • #36
Buzz Bloom said:
Hi fbs:

This seems like a very strange question to me. If you say x ∈ X, you know something about x and X. Presumably you would know that x is a random variable if someone you believe to be knowledgeable tells you it is. What that knowledgeable person needs to know is whether the process for obtaining values of x is a random process. So the randomness of a variable is determined by whether the process for obtaining its values is a random process.

I am guessing you have some uncertainty about what it means for a process to be random. A random process is a process for which it is impossible, by any means, to know in advance what a particular value will be. This is the distinction between a random process and a pseudo-random process: if the process is pseudo-random, and you know the nature of the process and its initial conditions, you can in principle calculate the next value it will generate.

I hope this is helpful.

Regards,
Buzz

Appreciate it. I was stuck on the fact that logic is completely deterministic. If you have propositions A, B, C that are either true or false, then you'll always get other propositions D, E, F that are true or false as consequences. No changes, ever. So if ##f(x) = x^2## and ##x = 2##, then always ##f(x) = 4##. If so, then how could a "random" value ever be the result of a logical sequence of true/false propositions?

The Cox formulation untied that knot for me, through the abstract concept called "plausibility", which isn't mathematically defined -- it's axiomatic. From what I understood, you don't have to define the process of rolling a die; you just assume that plausibility(rolled-a-3) exists and is in [0,1]. Similarly, with Cox you don't need to define a process through which a ghostly hand will "choose" a fruit from a bag of fruits -- what's the hand? what's choosing? There's no need for that; you just assume that ∃ picked-an-orange and that plausibility(picked-an-orange) = plausibility(picked-an-apple) = plausibility(picked-a-lemon), and you get all kinds of useful calculations from that. There's no violation of the determinism of logic that way.

I'm probably murdering poor Cox here, but that's how I untangled that knot, in my mind :biggrin:
 
  • #37
Ray Vickson said:
The late physicist E. T. Jaynes wrote a provocative book "Probability Theory: the Logic of Science", Cambridge University Press, 2003, in which he essentially rejects the very idea of "randomness". That's right, a large probability book by somebody who does not believe in randomness! For Jaynes (and several others---maybe mostly physicists), probability is associated with a "degree of plausibility". He shows that using some reasonable axioms about how plausibilities combine, you can end up with multiplication laws like P(A & B) = P(A) P(B|A), etc.

Yay! Gotta love Cox & Jaynes! Hooray to them! I somehow suspect that Kolmogorov and Cox/Jaynes are equivalent, as they seem to come to the same conclusions from different axiomatic starting points, but I did find Cox infinitely easier to grasp.
 
  • #38
fresh_42 said:
You might want to look for the alternative. It requires Cauchy sequences and equivalence classes. At least those are useful anyway, whereas Dedekind cuts are just this. Google "real numbers as Cauchy limits" or so.

Imo the better approach. It gets the student introduced to sequences, something fundamental in analysis.
 
  • #39
fresh_42 said:
You're right. My only excuse is: far too many COBOL and RPG switches ...
Yes, 1+1=0 is not true in a Boolean algebra. But it is true in the field ##\mathbb Z_2##.
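A quick way to see the difference, sketched in Python with the bitwise operators standing in for the two structures:

Code:
# Addition in the field Z_2 is XOR: 1 + 1 = 0.
assert 1 ^ 1 == 0
# "Plus" in a Boolean algebra is join (OR), so 1 + 1 = 1 there.
assert 1 | 1 == 1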
 
  • #40
fbs7 said:
my mind screws were more in place with the idea that "random" is an interpretation thing
I suggest you hold on to that idea. The meaning of 'random' in the everyday world is a philosophical issue. There have been countless millions of words written in philosophical journals and the like about whether the universe is 'random', but few of them make sense because the definition of 'random' is not specified with sufficient clarity.

Even in mathematics there is no definition of 'random'. The word is only used in conjunction with another word, usually 'variable'. We have 'random variables' and 'stochastic processes' that are precisely defined terms, but there is no adjective 'random' in probability theory.
 
  • Like
Likes fresh_42
  • #41
fbs7 said:
Yay! Gotta love Cox & Jaynes! Hooray to them! I somehow suspect that Kolmogorov and Cox/Jaynes are equivalent, as they seem to come to the same conclusions from different axiomatic starting points, but I did find Cox infinitely easier to grasp.

I think that Cox/Jaynes is "equivalent" to probability as done in volume I of Feller---and that is saying a lot. However, some "deeper" modern results (seem to) need the "measure-theoretic" apparatus, so would essentially be rejected by Jaynes. Certainly, the two approaches would no longer be equivalent in any easily-described sense.

Admittedly, some treatments in the modern way of doing things look like they might just be re-statements of results done in the old-fashioned way, but the resulting statements of the results are more cumbersome in the old way. For example, Feller, Vol. I, proves the so-called Strong Law of Large Numbers without using any measure theory or other more abstract methods. However, the result looks less appealing than the modern statement. The modern statement would amount to ##P(\lim_{n \to \infty} \bar{X}_n = \mu) = 1.## In pre-measure language the same result would say: "For every ##\epsilon > 0##, with probability 1 only finitely many of the events ##|\bar{X}_n - \mu| > \epsilon## occur." How much nicer the first way of saying it is compared with the second!
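A minimal simulation of what the modern statement describes (an illustration, not a proof; the sample sizes are arbitrary): the running mean of uniform draws drifts toward ##\mu = 0.5##.

Code:
import random

# Running mean of uniform [0,1) draws (mu = 0.5); the deviation from mu
# typically shrinks as n grows, which is what the SLLN makes precise.
random.seed(0)
total, n = 0.0, 0
for target in (10, 1_000, 100_000):
    while n < target:
        total += random.random()
        n += 1
    print(n, abs(total / n - 0.5))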
 
  • #42
Ray Vickson said:
I think that Cox/Jaynes is "equivalent" to probability as done in volume I of Feller---and that is saying a lot.

I've long suspected something like this, though I've only read part of Jaynes -- I couldn't shake the feeling while reading him that he was repackaging old ideas as new ones while using Feller as some kind of strawman to attack. There are some good ideas in Jaynes, but I am leery of polemics these days. You're probably the one person on PF who has cited Feller even more than I have, so this seems satisfying.
 
  • #43
I don't think that anybody here came close to the crux of the problem, so let me try to direct the discussion in a different direction.

Are the digits of ##\pi## random?

Intuitively they are not, because there is a deterministic algorithm that determines them, and yet all general tests of randomness suggest that they are random. It is this kind of problem that still seems to lack a satisfactory mathematical and/or philosophical resolution.
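As an illustration of the kind of test the digits keep passing, here is a crude frequency (chi-square) check -- a sketch that assumes the mpmath library is available, and certainly not a proof of randomness:

Code:
from collections import Counter
from mpmath import mp

# Chi-square frequency test on the first 10,000 decimal digits of pi.
mp.dps = 10_005
digits = str(mp.pi)[2:10_002]          # drop "3.", keep 10,000 digits
counts = Counter(digits)

expected = len(digits) / 10            # 1,000 of each digit if uniform
chi2 = sum((counts[d] - expected) ** 2 / expected for d in "0123456789")
print(chi2)  # compare with the chi-square(9) 95% cutoff, roughly 16.9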
 
  • Like
Likes Auto-Didact
  • #44
fbs7 said:
Oh... randomness is interpretation...

The intuitive idea behind probability is that an event can have "tendencies" to occur in different possible ways, but only occurs in one of those ways. Yes, this intuitive idea is NOT implemented in the mathematical axioms of probability theory. So when people apply mathematical probability theory to situations and reason about various outcomes being "possible", but only one outcome being "actual", they are making an interpretation of probability theory that is not present in its mathematical formulation.

So a "random" variable is really just another variable, just like "time" is just a variable without anything different than say a "mass" variable.

No. There is a saying: "A random variable is not random and it is not a variable".

As mentioned above, the mathematical assumptions used in defining a "random variable" do not treat the concept of "random" in the intuitive, common-language meaning of the word "random".

A "random variable" is not a variable in the same sense that a symbol representing time or mass is a variable nor is it a "variable" in the sense used in mathematical logic or in computer languages. The mathematical properties that define a random variable are stated in terms of functions called distributions. Of course, the definition of a function may contain variables (e.g. f(x) = 4x ). But the same function can be defined using different symbols. (e.g. f(x) = 4x and f(w) = 4w are definitions of the same function).

The intuitive idea of a "random variable" is that it is defined by a distribution function that can be used to compute "the fraction of times that particular sets of values of the random variable will occur". The logical problem with that interpretation is that "will occur" is a definite guarantee of something happening. Such a definite guarantee contradicts the intuitive notion of "probability", which is a concept we apply when there are no definite guarantees. In mathematical probability theory, the distribution function can be used to calculate the probability of particular sets of values -- without saying what physical interpretation we assign to "probability". (I.e., there is no mathematical definition of "probability" in terms of a "tendency" or "fraction of times" for an "actual" occurrence.)
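For concreteness, a sketch of the formal picture (the two-dice example is mine): a "random variable" is literally a function on a sample space, and its distribution is simply computed, with nothing "random" anywhere in the code.

Code:
from collections import Counter
from fractions import Fraction
from itertools import product

# A "random variable" as a plain function on a sample space: X = sum of two dice.
omega = list(product(range(1, 7), repeat=2))   # the 36 equally likely outcomes
X = lambda w: w[0] + w[1]
counts = Counter(map(X, omega))
p = {v: Fraction(c, len(omega)) for v, c in counts.items()}
print(p[7])   # 1/6, i.e. P(X = 7) for two fair dice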

Mathematical probability theory is essentially circular. Given certain distributions, it tells how to compute other distributions. Given various probabilities, we can compute other probabilities. There is no breaking down of the concept of "probability" into more detailed concepts.

(It's amusing and understandable that the terminology used in mathematical statistics (e.g. "significance", "confidence", "uncertainty") strongly suggests that mathematical "probability" must have a specific interpretation along the lines of a "tendency" or "fraction of times". Applications of statistics were made before the invention of modern mathematical probability theory, so the terminology developed on the basis of applications, before the foundations of the subject were properly organized.)

Demystifier said:
I don't think that anybody here came close to the crux of the problem,

That depends on how you define "the problem".

If the problem is to state the content of mainstream mathematical probability theory (i.e. measure theory), various posters have done this.

If the problem is to find an alternative mathematical theory that implements the intuitive notion of "randomness", then your question hints that such an approach can be founded on notions of computational complexity.
 
  • Like
Likes Auto-Didact
  • #45
@Demystifier
The original question was how a mathematician deals with "random". That is not as deep a question, because a mathematician only needs to know that randomness is being assumed -- not that the assumption is physically correct. So the mathematician only has to know whether we are assuming that the digits of ##\pi## are random or not.
I think that your question is different, since it asks whether we should accept the assumption that the digits of ##\pi## are random. Suppose there is no statistical test that proves they are not random beyond a reasonable doubt. I would still only consider them pseudo-random, just like the output of any other pseudo-random number generator. A person as great as Einstein was able to retain the belief that there was no true randomness in the universe; his disagreement with Bohr over this became famous. When people as brilliant as they are disagree, I will stay out of the debate.
 
  • Like
Likes Klystron and fresh_42
  • #46
Demystifier said:
I don't think that anybody here came close to the crux of the problem, so let me try to direct the discussion in a different direction.

Are the digits of ##\pi## random?

Intuitively they are not, because there is a deterministic algorithm that determines them, and yet all general tests of randomness suggest that they are random. It is this kind of problem that still seems to lack a satisfactory mathematical and/or philosophical resolution.
I'm not that surprised by that... these "randomness" tests also pass pseudo-random numbers, even though those are fully deterministic... I'd suspect that these "randomness" tests are incomplete, and pass as "random" sequences that are in fact deterministic.
 
  • #47
fbs7 said:
I'm not that surprised by that... these "randomness" tests also pass pseudo-random numbers, even though those are fully deterministic... I'd suspect that these "randomness" tests are incomplete, and pass as "random" sequences that are in fact deterministic.
I would like to point out that any truly random sequence can be turned into a deterministic sequence just by recording the random numbers and replaying them deterministically. Therefore, it is not possible to have a statistical test that would distinguish between the two. There are large tables of "random" numbers available and in use.
 
  • #48
FactChecker said:
I would like to point out that any truly random sequence can be turned into a deterministic sequence just by recording the random numbers and replaying them deterministically. Therefore, it is not possible to have a statistical test that would distinguish between the two. There are large tables of "random" numbers available and in use.
I suppose that one could change the question to asking whether a process generates random numbers. In that case, repeating the process of replaying the table of numbers would produce the exact same sequence of numbers and would easily be identified as deterministic.
 
  • Like
Likes Buzz Bloom
  • #49
fbs7 said:
I'm not that surprised by that... these "randomness" tests also pass pseudo-random numbers, even though those are fully deterministic... I'd suspect that these "randomness" tests are incomplete, and pass as "random" sequences that are in fact deterministic.
One can go down the road of Kolmogorov randomness. I am no expert on it.

Roughly speaking, if you take this approach, the information content of a string is the length of the shortest computer program that can produce the string as output. In the context of infinite strings (such as the digits of pi) one has to get a little fancier and talk about a program that can generate the output stream from some input stream. If no program can produce output bytes at better than a 1-to-1 ratio to input bytes, then the output stream is "random".

The gotcha with this approach is that "finding the best program" is not a feasible problem in general. Kolmogorov randomness can be defined, but it cannot always be determined.

Edit: If you have an output stream produced by a loaded die then the Kolmogorov definition will (with probability 1) match the Shannon notion of information content in the stream.
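A rough way to see that last point empirically (a sketch; compression is only a crude, computable upper bound on Kolmogorov complexity, and the die weights here are made up):

Code:
import math
import random
import zlib

# A loaded die: compare the Shannon entropy rate with zlib's compressed size.
probs = {1: 0.5, 2: 0.1, 3: 0.1, 4: 0.1, 5: 0.1, 6: 0.1}   # made-up weights
H = -sum(p * math.log2(p) for p in probs.values())          # about 2.16 bits/roll

random.seed(0)
rolls = random.choices(list(probs), weights=list(probs.values()), k=100_000)
bits_per_roll = 8 * len(zlib.compress(bytes(rolls), 9)) / len(rolls)
print(H, bits_per_roll)   # entropy vs. a somewhat larger compressed rate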
 
Last edited:
  • #50
FactChecker said:
I suppose that one could change the question to asking whether a process generates random numbers. In that case, repeating the process of replaying the table of numbers would produce the exact same sequence of numbers and would easily be identified as deterministic.
We could say that a process generates sequences of random numbers if there is no statistical test on repetitions of the process that would identify it as deterministic beyond a reasonable doubt. Then we could call any output sequence of the process random. That opens several cans of worms, one of which is that "no statistical test" is poorly defined, and it is conceivable that there will always be some statistical test that a given process fails.
 
  • #51
fbs7 said:
I'm not that surprised with that... these "randomness" tests also pass pseudo-random numbers, even if they are fully deterministic... I'd suspect that these "randomness" tests are incomplete, and pass as random sequences that are deterministic.
Hi fbs:

A randomness test gives a value that estimates how closely a particular pseudo-random number sequence (e.g., the digits of pi) approximates a truly random one. It is not intended to be complete.

Regards,
Buzz
 
Last edited:
  • #52
fbs7 said:
Appreciate it. I was stuck on the fact that logic is completely deterministic.
Hi fbs:

A deterministic process is by definition non-random. It is possible to voice logical statements about randomness, but such statements are not processes for generating random values. What they might be are statements describing requirements a process must satisfy in order to produce random values. For example, it is logical to say, with respect to QM, that measuring the spin of a sequence of non-entangled particles produces a sequence of random values, each of which is either "up" or "down".

Regards,
Buzz
 
  • #53
I don't know if this is too basic, but a variable is random if its outcome cannot be predicted with certainty. At best we can describe the distribution of the outcomes.
 
  • #54
WWGD said:
I don't know if this is too basic, but a variable is random if its outcome cannot be predicted with certainty. At best we can describe the distribution of the outcomes.
That certainly has advantages. For one thing, it is simple and as usable as any other definition. For another, it allows one to discuss the role that lack of available information plays in the inability to predict. One can reasonably think of randomness and probabilities in terms of the ability to guess and predict, given the information available. So the question of whether a result is really physically random or not is no longer the issue -- it becomes a question of whether enough information is known to make the prediction. The second question is easier to agree on.
 
  • #55
FactChecker said:
That certainly has advantages. For one thing, it is simple and as usable as any other definition. For another, it allows one to discuss the role that lack of available information plays in the inability to predict. One can reasonably think of randomness and probabilities in terms of the ability to guess and predict, given the information available. So the question of whether a result is really physically random or not is no longer the issue -- it becomes a question of whether enough information is known to make the prediction. The second question is easier to agree on.
Well, true, I am doing a good amount of assuming/black-boxing. We may need to conduct tests on whether the variable (and not a single output) is random. But at least these are some guideposts/goalposts, and, yes, it gets pretty tricky: what information, if any, would be needed to do a better approximation or estimation of the output? We may also have trouble with (somewhat) pathological cases like Cauchy variables with infinite variance.

I can, though, think of genuinely random variables when, e.g., using a pendulum oscillating between (more than one) magnets and seeing where it settles.
 
  • #56
Wow, many exquisite insights!

Let me ask you mathematicians this question: a pseudo-random sequence will repeat, even if after an absurd amount of time. But the digits of π do not repeat.

We know that neither of them is truly random, given that the n-th number can be calculated exactly, although both cases "seem" random up to some measure.

But, given the fact that the digits of π do not repeat, does that make that sequence a bit more "random-ny" than a pseudo-random generator, if there's such a thing? I fear I'm butchering proper mathematic-inglish again! :biggrin:
 
  • #57
fbs7 said:
Wow, many exquisite insights!

Let me ask you mathematicians this question: a pseudo-random sequence will repeat, even if after an absurd amount of time.
That is a property of some specific methods of generating pseudo-random numbers. It is not a consequence of "pseudo-random" alone. Even very good methods may fail sophisticated statistical tests, but they are adequate for most uses.
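For instance, here is a deliberately tiny linear congruential generator, with a sketch of how to find its period; the constants are toy values of my choosing, and real generators such as the Mersenne Twister have astronomically long (but still finite) periods:

Code:
# A tiny linear congruential generator: x -> (a*x + c) mod m.
# Its state space is finite, so it must eventually cycle; we find the period.
a, c, m = 5, 3, 64
seen, x, step = {}, 1, 0
while x not in seen:
    seen[x] = step
    x = (a * x + c) % m
    step += 1
print(step - seen[x])   # the period: at most m, often much smaller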
 
  • #58
fbs7 said:
But, given the fact that the digits of π do not repeat, does that make that sequence a bit more "random-ny" than a pseudo-random generator, if there's such a thing? I fear I'm butchering proper mathematic-inglish again!
Hi fbs:

I am guessing that your question is whether it is possible for a randomness test method to determine if one pseudo-random number generator is better than another. I am also guessing that the answer is yes, and I am pretty confident this second guess is correct.

You should keep in mind that pseudo-random number generators are used with a specific purpose in mind. It is desired that the pseudo-random number generator used give a result adequately close to what would be obtained using a true random number generator, e.g., something based on QM. If you know theoretically what statistical result to expect from a sequence of random numbers used for a specific purpose, e.g., Monte-Carlo calculations, then the results of performing several Monte-Carlo runs with different pseudo-random number generators can be compared to determine which is best for that particular purpose.
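A sketch of that idea in miniature (estimating ##\pi## by the quarter-circle method; different seeds of the same algorithm stand in for different generators, and the known answer plays the role of the theoretical expectation):

Code:
import math
import random

# Toy Monte-Carlo estimate of pi under two seeds, compared against the
# theoretically expected answer.
def estimate_pi(rng, n=100_000):
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
    return 4 * hits / n

for seed in (1, 2):
    est = estimate_pi(random.Random(seed))
    print(seed, est, abs(est - math.pi))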

Regards,
Buzz
 
  • #59
fbs7 said:
Wow, many exquisite insights!

Let me ask you mathematicians this question: a pseudo-random sequence will repeat, even if after an absurd amount of time. But the digits of π do not repeat.

We know that neither of them is truly random, given that the n-th number can be calculated exactly, although both cases "seem" random up to some measure.

But, given the fact that the digits of π do not repeat, does that make that sequence a bit more "random-ny" than a pseudo-random generator, if there's such a thing? I fear I'm butchering proper mathematic-inglish again! :biggrin:
A pseudo-random sequence of numbers is just a sequence that has no readily apparent pattern but is generated by a finite algorithm that takes no external inputs after the generation of the first M values (M a fixed positive integer). Since the digits of pi are generated by an algorithm whose calculation takes no external inputs, they are a pseudo-random sequence.

So far as I am aware, it is not the case that a pseudo-random sequence must eventually repeat. It is just that non-repeating sequences become slower and slower to compute the longer they continue, so the ones used by computers tend to be repeating, for efficiency reasons. One way that I think will make a sequence non-repeating is as follows:

First, a definition: we say that a sequence of numbers, which may be finite or infinite, is 'repeating with lag L' if, for any index j of the sequence such that j+L is also an index, the j-th and (j+L)-th elements are equal.

The algorithm will use a simple, repeating, pseudo-random generator as its base. It generates positive integers.

Say we have already generated the first n numbers. Then for every k from 2 to (n+1), let m(n,k) be the quotient from dividing (n+1) by k, and let r(n,k) be the quotient from dividing k by 2. For each k, there will be zero or one numbers such that, if that number is chosen, the last ##r(n,k) \cdot m(n,k)## of the first (n+1) sequence elements form a sequence repeating with lag m(n,k). Such a number is a 'forbidden number', as it would create a temporary repeat in a significant tail of the sequence so far. Let F(n) be the set of all forbidden numbers at the step of generating the (n+1)-th number.

Then generate a new number h(n+1) from the simple generator, and choose as the (n+1)-th number the closest positive integer to h(n+1) that is not in F(n).

I believe such a sequence will never repeat. But it will become progressively slower to generate, because the number of checks that must be made for each new number increases as the sequence progresses.

But the sequence is pseudo-random because it satisfies the definition.
 
Last edited:
  • Like
Likes Klystron
  • #60
Hmm... true point! I'm a programmer, so I should have seen that too; I appreciate you pointing it out.

Another example of a generator that will (eventually, and very very very slowly) not repeat is this one, for say 8-bit integers (or whatever size integers we choose) -- sketched here with compressibility as a stand-in for the randomness measure:

Code:
import zlib

def randomness(a):
    # Crude stand-in score: compressibility (patternless bytes compress poorly).
    return len(zlib.compress(bytes(a))) / len(a)

N = 32                       # N 8-bit integers (or whatever size we choose)
a = bytearray(N)             # initialize a[i] = 0 for i in 1..N
target = 1.0                 # the desired-target randomness score
while randomness(a) <= target:
    for i in range(N):
        for j in range(8):
            before = randomness(a)
            a[i] ^= 1 << j                # invert bit j of a[i]
            if randomness(a) < before:    # lower randomness: revert the bit
                a[i] ^= 1 << j

that algorithm (a brute-force search, actually) will eventually generate a non-repeating sequence of whatever length that passes whatever randomness criterion we set... at the cost of taking ages to run... hmm... actually this is probably the slowest random number generator ever... I should get a prize or something for that :biggrin:

hmm... actually I'm a bad programmer... I should check for the thing hitting a hole, where no one-bit change increases the randomness; if one checks for valleys and holes and backtracks, the thing gets even slower...
 
