Definition of randomness (sidebar: truth is random)

The discussion centers on the definition of randomness, particularly in relation to infinite sequences of numbers. A sequence is deemed random if it cannot be defined by a finite formula, with examples illustrating that sequences like {1,1,1,...} are not random due to their definability. The conversation also touches on the implications of Tarski's undefinability of truth theorem, suggesting that the set of all tautologies can be considered random since it is undefinable in certain mathematical structures. The participants argue about whether true randomness exists in nature, with skepticism towards examples like rolling dice, which are viewed as chaotic rather than truly random. Ultimately, the discussion raises questions about the nature of randomness and its relationship to definability and predictability in mathematical and philosophical contexts.
  • #31
Given two infinite strings s and t, s <= t iff there exists some infinite sequence K = {n1, n2, ...} of natural numbers and a constant C such that if n ∈ K, then given the prefix of t of length n, it is possible to compute n - C elements of s along with their positions in s, though these elements need not be consecutive.
There's a problem with this one, though.

Let s = {0, 0, 0, 0, ...} and t = {0, r1, 0, r2, 0, r3, ... } where the ri are a "random" sequence.

s and t would compare equal according to this definition, since I can always compute n bit & position pairs of one string given the first n of the other.
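Hurkyl's counterexample can be made concrete with a small sketch (a hypothetical Python illustration, with finite prefixes standing in for the infinite strings; the names are my own):

```python
import random

# s is all zeros; t interleaves zeros with "random" bits r1, r2, ...
N = 20
s = [0] * N
t = []
for i in range(N // 2):
    t.append(0)
    t.append(random.randint(0, 1))

def known_positions_of_s(prefix_of_t):
    """Given the first n symbols of t, name n (position, bit) pairs of s.

    Since s is identically zero, every position of s is trivially
    computable, so we can always return the first n of them. This is
    exactly why the proposed definition makes s and t compare equal."""
    n = len(prefix_of_t)
    return [(i, 0) for i in range(n)]

pairs = known_positions_of_s(t[:10])
assert all(bit == s[pos] for pos, bit in pairs)
```

The definition never forces the computed bits of s to carry any of the information in t, which is the loophole the counterexample exploits.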
 
  • #32
What brought this on was another thread on another board. I'm looking for a definition of random I can actually apply to individual sequences. He's arguing that the actual definition of random is "unpredictable" or "uncaused" and that the two are equivalent. I pointed out that you can have unpredictable with cause, so they're not equivalent. Uncaused, I went on to say, is hard (for me at least) to define mathematically so I will stick to "unpredictable." Predictable to me means compressibility in your Kolmogorov sense. What I'm ultimately trying to figure out is whether quantum events are truly random, and maybe uncaused. To get anywhere I have to have a definition of random. I'm trying to decide if "God plays dice." Of course one wonders where the "dice" (i.e., quarks and leptons and such) came from...
 
  • #33
There's an entirely different approach you can take to quantum randomness: take modern probability theory seriously. Stop trying to think of experiments as having outcomes: they're merely random variables. The dice never actually took on a value, although we would have facts such as

P((I saw a 1) or (I saw a 2) or ... or (I saw a 6)) = 1

and

P(you saw a 3 | I saw a 3) = 1

This is related to the many worlds interpretation (MWI), but I get the impression that the two aren't synonymous.
 
  • #34
Given two infinite strings s and t, s <= t iff there exists some infinite sequence K = {n1, n2, ...} of natural numbers and a constant C such that if n ∈ K, then given the prefix of t of length n, it is possible to compute n - C elements of s along with their positions in s, though these elements need not be consecutive.

Hurkyl said:
There's a problem with this one, though.

Let s = {0, 0, 0, 0, ...} and t = {0, r1, 0, r2, 0, r3, ... } where the ri are a "random" sequence.

s and t would compare equal according to this definition, since I can always compute n bit & position pairs of one string given the first n of the other.
You're right... it can be patched by adding the condition that, for any given m, it is possible to choose an n large enough from K so that you can compute the first m digits of s given the first n digits of t. But I have no good reason for making that change.

The situation I made that definition to try to deal with was this: let s = {r1, r2, r3, ...} and let u be defined by {r1, r2, r4, r8, r16, ...}. Let v = the sequence of bits in s other than those in u. Now define t by the following:
i is even => t_i = u_(i/2)
i is odd => t_i = v_((i+1)/2)

t and s, it seems to me, should compare equal. But how to define it so it works that way?
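The rearrangement in this construction is a fixed, computable permutation of positions, independent of the bits of s. A minimal sketch of the index map (1-based, all names my own):

```python
def t_index_to_s_index(i):
    """Position map for the fixed rearrangement (1-based indices):

    u_j = bit of s at index 2**(j-1)  (the power-of-two positions),
    v_j = bit of s at the j-th index that is NOT a power of two,
    t_i = u_(i/2) if i is even, v_((i+1)/2) if i is odd.
    """
    def is_pow2(k):
        return k & (k - 1) == 0  # assumes k >= 1

    if i % 2 == 0:
        j = i // 2
        return 2 ** (j - 1)
    else:
        j = (i + 1) // 2
        # find the j-th positive integer that is not a power of two
        count, k = 0, 0
        while count < j:
            k += 1
            if not is_pow2(k):
                count += 1
        return k

# The map hits powers of two on even i and non-powers on odd i, so it is
# a bijection on positions and never looks at the bits of s themselves.
targets = [t_index_to_s_index(i) for i in range(1, 200)]
assert len(set(targets)) == len(targets)  # injective on this range
```

Since the permutation is computable and bijective, n bits of t determine (some) n bits of s and vice versa, which is the intuition behind wanting t and s to compare equal.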


Here is a definition of a totally random string: s is a totally random string iff for every n there exists a Turing machine M with an infinite output tape that can compute the first n bits of s on input 0, but for every such machine M there exists a number m > n such that the m-th letter on the output tape of M on input 0 does not agree with the m-th letter of s.
 
  • #35
Maybe they shouldn't compare equal! You've rearranged the terms, and maybe that rearrangement conveys some information? (or destroys some?)

Maybe it's similar to the problem of the information in concatenated strings: for example, for any scheme K and integer c, there exist finite strings x and y such that K(xy) > K(x) + K(y) + c.

Maybe we should try postulating the properties we would like our notion of complexity to have, and then trying to work out if there's any way to attain it! :smile:
 
  • #36
Ah, I actually didn't say what I meant there; I edited it so it now reads how I intended.

I don't think the rearrangement conveys or destroys any information, because it doesn't depend on the bits in s. If I rearranged the terms to, for example, sort them, then I would be reducing the amount of information; but since the rearrangement here is fixed from the start, and you CAN directly derive n bits of s from n bits of t and vice versa, I think they should be looked at as the same. The bits are totally random, and s and t both have compression ratio 1.
 
  • #37
Here is a definition of a totally random string: s is a totally random string iff for every n there exists a Turing machine M with an infinite output tape that can compute the first n bits of s on input 0, but for every such machine M there exists a number m > n such that the m-th letter on the output tape of M on input 0 does not agree with the m-th letter of s.
This is equivalent to computability!

Theorem: the following are equivalent:
(1) s is not a totally random string.
(2) There exists a Turing machine M with an auxiliary output tape that correctly prints the letters of s in order (on any input).
(3) There exists a Turing machine M such that M(n) is the n-th letter of s.
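One half of this equivalence is worth seeing explicitly: for any string whatsoever, hardcoding a prefix yields, for each n, a machine that gets the first n letters right, so the definition only bites through the clause that every such machine must eventually disagree with s. A toy sketch (hypothetical names, Python functions standing in for machines):

```python
def prefix_machine(prefix):
    """Return a 'machine' (a function of position i) that prints the
    first len(prefix) letters of some string s correctly and then pads
    with 0 forever.

    Such a machine exists for every string s and every n, which is why
    the randomness definition must additionally demand that every such
    machine eventually disagrees with s at some position m > n."""
    def M(i):
        return prefix[i] if i < len(prefix) else 0
    return M

s_prefix = [1, 0, 1, 1, 0]      # assumed first bits of some string s
M = prefix_machine(s_prefix)
assert [M(i) for i in range(5)] == s_prefix
assert M(100) == 0  # beyond the hardcoded prefix the machine just guesses
```

Conversely, a machine as in (3) is correct at every position, so it witnesses failure of the "eventually disagrees" clause, making s not totally random.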



I imagine you're right about rearrangement -- as long as the rearrangement is computable. But to be honest, I would have thought K(xy) = O(K(x) + K(y)) at first too.
 
  • #38
(By the way, after thinking about this for a bit, I think a better name for what I defined is a "random" rather than a "totally random" string, because the compression ratio of such a string can be less than 1.)

No, randomness as I defined it is not equivalent to noncomputability, though. Let t be an infinite binary string and let s = kt, where k is the number, printed in binary, that is the longest string a terminating Turing machine of size 20 can print. (Did I get that right? I mean for k to be a finite noncomputable number.) Then s is noncomputable, but it is also not random as I defined it, because there does not exist any machine that can compute the first |k| symbols of s.
 
  • #39
There is no such thing as a noncomputable integer. For every integer n, there exists a Turing machine M such that M(w) = n for any input w.

It's the function that's noncomputable: there does not exist a Turing machine M such that, for every integer n:

M(n) = the longest string that a terminating turing machine of size n can print
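The distinction is easy to see in code: for each fixed n there is a trivial constant machine that computes it; what cannot exist is a single machine uniform in n. A one-function sketch (names my own):

```python
def constant_machine(n):
    """For any fixed integer n, return a 'machine' M with M(w) = n for
    every input w. Each individual integer is computable this way by a
    (possibly huge but finite) program; noncomputability only arises for
    the function n -> (longest output of a terminating machine of size n),
    taken uniformly over all n."""
    def M(w):
        return n
    return M

M = constant_machine(42)
assert M("anything") == 42 and M(0) == 42
```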
 
  • #40
Is there already a formal concept of a "knowable" infinite string, by which I mean a string for which any prefix can be listed explicitly and a proof exists that the listing is the prefix? I guess that such an idea would have to refer to definitions of strings, rather than strings themselves.
 
  • #41
One thing worth noting is that you've not really shown that the set of tautologies is undefinable. You've given a 1-1 correspondence between tautologies and a set that is undefinable. Actually, I'm not sure you've shown this set to be undefinable; you just say that it is by Tarski's theorem. What exactly does Tarski's theorem say, and what more do you add? Note that since there exists an undefinable subset of the naturals, and since any finite set is definable, there exists an infinite undefinable subset of the naturals. The naturals themselves form a definable infinite set. So the naturals can be put into a 1-1 correspondence with some undefinable set, but this says nothing about the definability of the naturals.

Also, you don't want something like f(n) = 1. A set is definable if there is some wff f(n) of one free variable (n) such that f(a) holds true in the standard model iff a is in the set. So the set {1} can be defined by the wff n=1, or more precisely, n=S0, where S is the successor function in the standard model.
 
  • #42
0rthodontist said:
Is there already a formal concept of a "knowable" infinite string, by which I mean a string for which any prefix can be listed explicitly and a proof exists that the listing is the prefix? I guess that such an idea would have to refer to definitions of strings, rather than strings themselves.
We are starting computability in my theory of computation class, so I was thinking about this again. There are no knowable, noncomputable infinite strings, because every knowable string can be computed by a machine that does a proof search.

Here is what I think is a better definition of randomness:
Let A and B be two sets of axioms, with A a subset of B. A string S is random in A, "given" B, if it is knowable with respect to B but not knowable with respect to A.
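The proof-search machine behind this argument can be sketched as follows, with a toy proof system standing in for a real verifier and proof enumerator (the toy system makes s_n the parity of n; every name here is an assumption of the sketch, not a real prover):

```python
def bit_of_s(n, checker, enumerate_proofs):
    """Compute bit n of a knowable string s by brute-force proof search:
    enumerate candidate proofs until one is verified to establish
    's_n = 0' or 's_n = 1'. The search terminates precisely because s is
    knowable, i.e. a proof of each prefix exists.

    `checker` and `enumerate_proofs` are stand-ins for a real proof
    verifier and proof enumerator over the axiom set."""
    for proof in enumerate_proofs():
        for b in (0, 1):
            if checker(proof, n, b):
                return b

# Toy axiom system (hypothetical): s_n is the parity of n, and the only
# valid proof of "s_n = b" is the canonical certificate string below.
def toy_checker(proof, n, b):
    return proof == f"parity({n})={b}" and b == n % 2

def toy_proofs():
    k = 0
    while True:               # dovetail through all candidate proofs
        for b in (0, 1):
            yield f"parity({k})={b}"
        k += 1

assert [bit_of_s(n, toy_checker, toy_proofs) for n in range(6)] == [0, 1, 0, 1, 0, 1]
```

Since proof checking is mechanical, any string whose prefixes are provable is computable by this loop, which is why knowability collapses into computability and why the relative definition above compares two axiom sets instead.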
 
