entropy1 said:
Is there a definition of "random(ness)"? Is it defined?
WWGD said: As I understand, a random process is one that cannot be predicted but can be described probabilistically.

That's the common understanding of it. The twist comes in the meaning of 'can be predicted'. A process may be unpredictable in one theory but predictable in a more sophisticated theory. There's no way we can know that there isn't some currently unknown, more sophisticated theory that can predict outcomes that currently seem random to us. So what we can say is that a given process is random with respect to theory T. That is, predictability depends on what theory we are using to predict.
entropy1 said: If you can't predict the next outcome, how come you can predict the average outcome?

Are you talking about the population average or a sample average?
andrewkirk said: That's the common understanding of it. The twist comes in the meaning of 'can be predicted'. A process may be unpredictable in one theory but predictable in a more sophisticated theory. There's no way we can know that there isn't some currently unknown, more sophisticated theory that can predict outcomes that currently seem random to us. So what we can say is that a given process is random with respect to theory T. That is, predictability depends on what theory we are using to predict.

True. I should have said: depending on the tools and information available at a given point, and within a theory. A good question is whether there are phenomena that are somehow intrinsically unpredictable, i.e., not predictable within any system.
WWGD said: True. I should have said: depending on the tools and information available at a given point, and within a theory. A good question is whether there are phenomena that are somehow intrinsically unpredictable, i.e., not predictable within any system.

entropy1 said: Could one say that if we have two variables A and B that correlate for, say, 50%, that A (or B) is less random, because knowledge of the outcome of B (or A) increases the likelihood of a correct prediction of the outcome of A (or B)?

That line of thought would open a can of worms. Every coin flip has an outcome that is completely determined by the down-side of the coin. It would not be correct to say that that makes the up-side outcome less random.
FactChecker said: That line of thought would open a can of worms. Every coin flip has an outcome that is completely determined by the down-side of the coin. It would not be correct to say that that makes the up-side outcome less random.

That seems to me comparable to predicting the outcome after observing it, which would not do justice to the notion of prediction.
WWGD said: Would an intrinsically random process necessarily have correlation 0 with any other process?

That's a good question. I think so, yes. If there is dependence (correlation), I would suggest randomness has been limited. That is the issue I was getting at.
entropy1 said: That's a good question. I think so, yes. If there is dependence (correlation), I would suggest randomness has been limited. That is the issue I was getting at.

Wonder how this would pan out mathematically and physically.
entropy1 said: That's a good question. I think so, yes. If there is dependence (correlation), I would suggest randomness has been limited. That is the issue I was getting at.

I think you are taking this in a direction that will not pay off. There are too many things that occur together where you would not want to say that either one makes the other less random.
WWGD said: A good question is whether there are phenomena that are somehow intrinsically unpredictable, i.e., not predictable within any system.

I wrote an essay about this a few years back, which you may find interesting:
FactChecker said: I think you are taking this in a direction that will not pay off. There are too many things that occur together where you would not want to say that either one makes the other less random.

Then, perhaps, the non-random factor lies in picking a person rather than a cat, dog or snake? The properties are inherent to the person. If we take 'properties' as 'outcome', they are correlated. It would be like measuring circles on both sides and finding that they are both round.
Example: Pick a random person out of a crowd. His height is related to the length of his left arm, right arm, left leg, right leg, weight, belt size, sex, age, etc. None of this makes any one of them more or less random.
Yet they are correlated, so knowing one does help to make the others more predictable. But the one you need to know is itself random.
andrewkirk said: I wrote an essay about this a few years back, which you may find interesting:
https://wordpress.com/post/sageandonions.wordpress.com/75
My conclusion was that, unless we put artificial constraints on what counts as a theory, there is no such thing as intrinsic unpredictability, since we can imagine a theory, which I call the 'M-law', that simply lists every event that happens anywhere in spacetime. No event is unpredictable under that theory. Such a theory would be unknowable by humans, but that's beside the point.
entropy1 said: Then, perhaps, the non-random factor lies in picking a person rather than a cat, dog or snake? The properties are inherent to the person. If we take 'properties' as 'outcome', they are correlated. It would be like measuring circles on both sides and finding that they are both round.

Now you are trying to isolate the cause of the random behavior of the selected person's right arm length (for example). That is possible, but it does not change the fact that the selected arm length is random. A non-constant function of a random variable is a random variable.
FactChecker said: Now you are trying to isolate the cause of the random behavior of the selected person's right arm length (for example). That is possible, but it does not change the fact that the selected arm length is random. A non-constant function of a random variable is a random variable.

Sorry, there is a misunderstanding, I see: I was talking about a correlation between two strings of data A and B. I see now that I never introduced that I was talking about that. Sorry.
Similarly, I could attempt to isolate the random behavior of a coin toss to the motion of the hand that flips the coin. That does not make the result of the coin toss any less random.
entropy1 said: Sorry, there is a misunderstanding, I see: I was talking about a correlation between two strings of data A and B.

No, I understood that. I only brought up the example of a function because that can be the strongest correlation possible. The relationship between correlated variables is usually weaker than a functional relationship. If a function of a random variable is random, then we have to conclude that a correlated variable with a weaker relationship than a function is random.
FactChecker said: No, I understood that. I only brought up the example of a function because that can be the strongest correlation possible. The relationship between correlated variables is usually weaker than a functional relationship. If a function of a random variable is random, then we have to conclude that a correlated variable with a weaker relationship than a function is random.

Would you be willing to illustrate that mathematically a bit? I can't seem to see what you mean from text only.
entropy1 said: Would you be willing to illustrate that mathematically a bit? I can't seem to see what you mean from text only.

Suppose we have random variables X, Y = 2X and Z = Y + ε = 2X + ε, where ε is an independent random variable. All three variables X, Y, Z are correlated. Y is a function of X, but is still considered a random variable. Z is related to X, but is not a deterministic function of X. It is still considered a random variable. Its connection (correlation) to X is weaker than Y's is. There is no reason to say that Y or Z is less random than X. We could just as easily have started with Y, with X = 0.5Y and Z = Y + ε.
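FactChecker's example is easy to check numerically. Below is a minimal Python sketch (the sample size and the unit variances are my own choices, not from the thread) that estimates the correlations of X with Y = 2X and with Z = 2X + ε:

```python
import numpy as np

# FactChecker's example: X standard normal, Y = 2X a deterministic
# function of X, and Z = 2X + eps with eps independent noise.
rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
eps = rng.normal(size=100_000)
y = 2 * x
z = 2 * x + eps

# Y is perfectly correlated with X; Z is strongly but not perfectly
# correlated (theoretically corr(X, Z) = 2/sqrt(5) ~ 0.894). All three
# are nonetheless random variables.
print(np.corrcoef(x, y)[0, 1])
print(np.corrcoef(x, z)[0, 1])
```

The weaker correlation of Z reflects exactly the "weaker than a functional relationship" case FactChecker describes.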
PeroK said: A measurement of the spin of an electron will return a value which is not predictable.

It is predictable by the M-law.
PeroK said: I would question the existence of either.

So do I. But the question is not whether there exists a theory or being that could predict the spin of the electron, but rather: is it in principle impossible that such a theory or being could exist? Of course we can't say, even though we may strongly suspect that no such theory or being exists in fact.
andrewkirk said: It is predictable by the M-law.

PeroK said: There is, in fact, an interesting quotation from Griffiths' book on QM: "even God doesn't know which electron is which". If God doesn't know, then neither do your hypothetical 25-dimensional beings.

Indistinguishability of particles has nothing to do with the predictability of future observations.
entropy1 said: Do you mean sample_A = f(human_A) and sample_B = f(human_B), with human_A = g(general_human, random_value) and human_B = g(general_human, random_value)?

Perhaps what you are looking for is this:
entropy1 said: Is it correct that if X and Y are random variables, then X given Y, i.e. P(X=1|Y=1), is random too?

P(X=1|Y=1) is a specific probability number, not a random variable. (If P(Y=1) = 0, then P(X=1|Y=1) is not even a valid probability.)
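The reply can be checked by simulation. In the Python sketch below the joint law of X and Y is an invented example (X copies Y with probability 0.9; none of these numbers come from the thread); the point is that the estimate of P(X=1 | Y=1) settles on one fixed number rather than behaving like a random variable:

```python
import random

random.seed(1)

# Hypothetical setup: Y is a fair coin, and X equals Y with probability
# 0.9, flipped otherwise. Then P(X=1 | Y=1) is the single number 0.9.
def sample():
    y = random.randint(0, 1)
    x = y if random.random() < 0.9 else 1 - y
    return x, y

pairs = [sample() for _ in range(200_000)]
num = sum(1 for x, y in pairs if x == 1 and y == 1)
den = sum(1 for x, y in pairs if y == 1)
print(num / den)  # settles near 0.9, a fixed probability number
```

Re-running with fresh data re-estimates the same number; the randomness is in the samples, not in the conditional probability itself.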
jimfarned said: @nomadreid: sorry, I had not seen yours when I posted concerning algorithmic complexity. Thus, you would hold that Wolfram's Rule #30 is not random? Just askin'.

Excellent question. No, I would not consider it random, but rather deterministic chaos. (However, I would not be too dogmatic about it.) The question forces at least a partial definition of randomness, highlighting the difference between non-predictable deterministic behavior (chaos) and non-deterministic behavior (randomness). Chaos is a state where knowledge of the present determines the future, but knowledge of the approximate present does not. To put it in terms of algorithmic complexity applied to Wolfram's Rule 30: there can exist an algorithmic computer program which can produce the result given the initial conditions, and it will be the same result each time (unlike, say, certain experiments on the quantum level). However, perhaps for a specific computer program we could bring up a concept of "relative randomness" or "randomness for this program" (or "axiom system", to take it back out of IT). Wiser people than I can take it from here...
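The "same result each time" point can be made concrete. Below is a minimal Python sketch of Rule 30 (the rule number and the single-black-cell initial condition are the standard setup; the grid width and step count are arbitrary choices of mine): the centre column looks irregular, yet two runs from the same initial condition agree bit for bit, which is exactly the deterministic-chaos distinction nomadreid draws.

```python
# Wolfram's Rule 30: each cell's next state is the bit of the rule
# number 30 indexed by the 3-bit neighborhood (left, centre, right).
RULE = 30

def step(cells):
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def centre_column(width=101, steps=32):
    cells = [0] * width
    cells[width // 2] = 1  # single black cell in the middle
    out = []
    for _ in range(steps):
        out.append(cells[width // 2])
        cells = step(cells)
    return out

# Irregular-looking, yet perfectly reproducible:
print(''.join(map(str, centre_column())))
print(centre_column() == centre_column())  # True: deterministic
```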
entropy1 said: So I can imagine that outcomes don't necessarily tell us anything about the probability.

Stephen Tashi said: The mathematical theory of probability doesn't make definite predictions about actual outcomes. It only makes statements about the probabilities of actual outcomes. [..] Real-life problems are often approached by assuming data from actual outcomes gives us their probabilities.

So outcomes define probabilities, and probabilities predict (in a way) (averages of) outcomes? And in practice that goes well?
entropy1 said: So outcomes define probabilities,

That is not a conclusion you can draw from the theory of probability. In applications, people often assume the data about outcomes gives their probabilities.

entropy1 said: and probabilities predict (in a way) (averages of) outcomes?

It depends on what you mean by "in a way" and "predict". Probability theory gives the probabilities of outcomes.

entropy1 said: And in practice that goes well?

It "probably" does, but there is no absolute guarantee.
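Stephen Tashi's "probably, but no guarantee" is the law of large numbers in miniature. A short simulation (my own sketch, not from the thread): no single fair-coin flip can be predicted, yet the running average settles near the probability 0.5.

```python
import random

random.seed(42)

# 100,000 fair-coin flips; each flip is unpredictable, the average is not.
flips = [random.randint(0, 1) for _ in range(100_000)]

for n in (10, 1000, 100_000):
    print(n, sum(flips[:n]) / n)  # running average drifts toward 0.5
```

There is still no absolute guarantee: every specific sequence, including all heads, has positive probability; the convergence holds only with probability 1.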
entropy1 said: Suppose that A, B ∈ {0,1}, P(A)=0.5 and P(B)=0.5, while P(A|B)=1. We could write P(A|anything)=0.5 and P(A|B)=1. So perhaps we have (at least) two different values for P(A), depending on some condition.

This is a matter of notation. In the notation you are using, "P(A)" denotes the probability of A without any other conditions. So the fact that P(A|B) = 1 does not show that there are two different values of P(A). The event "A|B" is a different event than the event "A".
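entropy1's numbers can be realised concretely. One joint distribution that produces them (my construction, not from the thread) is to let B be a fair coin and set A = B: then P(A=1) = 1/2 unconditionally while P(A=1 | B=1) = 1, the same event measured under two different conditions, not two conflicting values of "P(A)".

```python
from fractions import Fraction

# Joint law of (A, B) with B a fair coin and A = B.
joint = {(0, 0): Fraction(1, 2), (1, 1): Fraction(1, 2),
         (0, 1): Fraction(0), (1, 0): Fraction(0)}  # (a, b) -> probability

p_a1 = sum(p for (a, b), p in joint.items() if a == 1)
p_b1 = sum(p for (a, b), p in joint.items() if b == 1)
p_a1_given_b1 = joint[(1, 1)] / p_b1  # P(A=1, B=1) / P(B=1)

print(p_a1)           # 1/2  -- unconditional
print(p_b1)           # 1/2
print(p_a1_given_b1)  # 1    -- conditioned on B=1
```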
entropy1 said: So perhaps any probability for a particular variable is variable, depending on how you look at that variable?

You should distinguish between an "event" and a (random) "variable". It is correct that one may write different notations involving a random variable or an event, and these notations may represent different probabilities.
Also, the probability of correlation in this example is 1. So we have a phenomenon, correlation, that has a probability of its own, right?
nomadreid said: As the original question was about the definition (or lack thereof) of randomness, I am surprised no one has mentioned the work of Gregory Chaitin; e.g., Part III of https://www.cs.auckland.ac.nz/~chaitin/ait/index.html

Chaitin's definition is:
andrewkirk said: A process may be unpredictable in one theory but predictable in a more sophisticated theory. There's no way we can know that there isn't some currently unknown, more sophisticated theory that can predict outcomes that currently seem random to us. So what we can say is that a given process is random with respect to theory T. That is, predictability depends on what theory we are using to predict.
PeroK said: A measurement of the spin of an electron will return a value which is not predictable.

Indeed, QM says measurements are random variables. But it is impossible to test that in the real world: I can produce binary sequences that satisfy all known randomness tests yet are produced by an algorithm.
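Zafa Pi's claim can be illustrated with any deterministic generator. The sketch below uses a simple linear congruential generator (the constants are a common textbook choice; the seed, sample size, and the crude frequency check are my own illustrative picks): its output passes a simple bit-frequency check, yet rerunning it reproduces the sequence exactly.

```python
# A deterministic bit source: linear congruential generator mod 2^32.
def lcg_bits(seed, n, a=1664525, c=1013904223, m=2**32):
    state = seed
    out = []
    for _ in range(n):
        state = (a * state + c) % m
        out.append(state >> 31)  # take the top bit of the 32-bit state
    return out

bits = lcg_bits(seed=12345, n=100_000)

# Crude frequency test: the bits look like fair-coin flips...
print(sum(bits) / len(bits))  # close to 0.5

# ...but the same seed reproduces the sequence exactly, so nothing
# about it is unpredictable to someone who knows the algorithm.
print(lcg_bits(12345, 100) == lcg_bits(12345, 100))  # True
```

Real randomness test suites are far more demanding than a frequency count, but well-designed pseudorandom generators pass those too, which is the point.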
PeroK said: There is, in fact, an interesting quotation from Griffiths' book on QM: "even God doesn't know which electron is which".

Like Bohr with Einstein, I told David several times to stop telling God what he can't do.
Zafa Pi said: Indeed, QM says measurements are random variables. It is impossible to test that in the real world. I can produce binary sequences that satisfy all known randomness tests yet are produced by an algorithm.

Tests for randomness are a little strange. If you have some random process for generating a sequence of 0s and 1s, then the sequence

01010101010101010101010101...

is just as possible as the sequence

001001000011111101101010100...

Daryl

That is what I mean.
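stevendaryl's point, in numbers: under an i.i.d. fair-coin model, every specific sequence of a given length has the same probability, 2^-n, however "patterned" or "patternless" it looks. A small sketch (the two strings are the ones from the post, trimmed to a common length of 26 bits):

```python
from fractions import Fraction

def sequence_probability(bits):
    # Probability of observing this exact string from i.i.d. fair flips.
    p = Fraction(1)
    for _ in bits:
        p *= Fraction(1, 2)
    return p

regular = "01010101010101010101010101"[:26]
irregular = "001001000011111101101010100"[:26]

print(sequence_probability(regular))    # 1/2^26
print(sequence_probability(irregular))  # 1/2^26, identical
print(sequence_probability(regular) == sequence_probability(irregular))  # True
```

What randomness tests actually measure is not the probability of one string but whether a string belongs to a statistically "typical" set, which is why a patterned string fails them despite being exactly as probable.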