What is random?

by Jabbu
Tags: random
Jabbu
#1
Aug17-14, 05:58 PM
P: 180
Does "random" have different meaning in classical physic from SR, GR or QM? What is the difference between random, deterministic and probabilistic? Is probabilistic either random-probabilistic or deterministic-probabilistic, or is probabilistic a truly separate category on its own?

If we flip a coin 100,000 times and the number of heads matches the number of tails 50-50% every time, +/- some tiny variation, then how is that a random outcome? Wouldn't it be truly random if we could flip 90% heads in one go and then 20% heads in another go just as easily, and just as easily 40% heads, 1%, or 72%?
Jabbu
#2
Aug17-14, 06:01 PM
P: 180
Is "random" the same thing as "without cause", or can something have a cause and still be random?
Ashiataka
#3
Aug17-14, 06:20 PM
P: 21
One working definition of random is something that can't be predicted. In your case, each flip of the coin gives a random result, but over time lots of random results give a clear picture of the nature of the coin.

Drakkith
#4
Aug17-14, 06:25 PM
Mentor
P: 11,997
What is random?

I'd give this article a read if you haven't already: http://en.wikipedia.org/wiki/Randomness
Drakkith
#5
Aug17-14, 06:29 PM
Mentor
P: 11,997
Also, note that probability is inherently tied to randomness. A single event (like the flip of a coin) may be random in the sense that you don't know with certainty what side it will land on, but you can say that there is some probability for the coin to land on each side. I think the first part of that last sentence is key here. Any single event is random if we can't say for certain what will happen, even though we can define probabilities for each possible outcome.
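To put that in code, here is a minimal Python sketch (a fair six-sided die is my assumption, nothing specific to any real experiment): each individual roll is unpredictable, yet the probability of each outcome is perfectly well defined.

```python
import random
from collections import Counter

# A single roll can't be predicted, but each outcome still has a
# definite probability. Sketch assumes a fair six-sided die.
rolls = [random.randint(1, 6) for _ in range(60_000)]
print("one roll:", random.randint(1, 6))  # unpredictable in advance
for face, count in sorted(Counter(rolls).items()):
    print(face, round(count / len(rolls), 4))  # each close to 1/6 ~ 0.1667
```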
micromass
#6
Aug17-14, 06:31 PM
Mentor
P: 18,346
You touch on many different and interesting concepts here.

First, deterministic means that the outcome of an experiment is fixed before doing the experiment. So even before doing an experiment, there is only one outcome which will happen, and (in principle) we can predict this outcome. All of classical physics is deterministic. For example, when I throw a ball, I can (in principle) calculate exactly where it is going to land and how long it is going to take. When I flip a coin, I can calculate (in principle) which side of the coin is going to be up and which side is going to be down. However, the variables and equations involved are so immensely complicated that we can never do these calculations. Furthermore, our measurements can never be done precisely enough to know exactly which state we are in now. This is where probability theory comes in. While flipping a coin, the outcome is predetermined exactly, but the outcome is unknown to us. Probability theory does give us some way of accessing some information about the coin flips.

As another example, the number 0.1234567891011121314151617... is called the Champernowne constant. It is completely determined: it is clear to everybody how exactly this number continues. However, if I ask you for the 1000th digit, then you would have to do a tedious calculation in order to find out. So again, the number is determined, but this determination is unknown to us. We can again use probability theory to study the number. So we can figure out the chance of getting a 1 as the 1000th digit. This is also the idea behind pseudo-random number generators.
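To make that "tedious calculation" concrete, here is a brute-force Python sketch:

```python
# Brute-force sketch: the nth decimal digit of the Champernowne
# constant 0.123456789101112..., found by concatenating the integers
# 1, 2, 3, ... until position n is reached.
def champernowne_digit(n):
    digits = []
    i = 1
    while len(digits) < n:
        digits.extend(str(i))
        i += 1
    return int(digits[n - 1])

print(champernowne_digit(1000))  # fully determined, yet not obvious without computing
```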

True random processes are very difficult to generate and do not exist in classical physics. However, they do come up in quantum mechanics. Whether the processes which come up are actually deterministic in some sense is currently unknown. Probability theory is currently the only tool available to study these processes.

You also mention cause. Here you must specify what exactly you mean by "cause". The term is rather vague and philosophical. I don't even know if it is a meaningful term in science, but others probably know more.

Quote by Jabbu:
If we flip a coin 100,000 times and the number of heads matches the number of tails 50-50% every time, +/- some tiny variation, then how is that a random outcome? Wouldn't it be truly random if we could flip 90% heads in one go and then 20% heads in another go just as easily, and just as easily 40% heads, 1%, or 72%?
Yes, it is truly amazing that most of the processes we encounter satisfy the Law of Large Numbers. That is, if we repeat the experiment, then the averages will converge to a fixed number. This is still random because we have no way of predicting one specific outcome. Whatever information we have about the probabilities is totally useless in predicting the outcome of the next coin toss. This is what it means to be a random outcome. When you have done sufficiently many experiments, however, some order in the chaos does appear. But this only happens when you talk about the outcomes of many experiments at once.
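Here is a quick Python sketch of that convergence (a fair coin is assumed):

```python
import random

# Law of Large Numbers sketch: no single toss is predictable, but the
# running fraction of heads converges to 0.5 for a fair coin.
heads = 0
for n in range(1, 100_001):
    heads += random.random() < 0.5
    if n in (10, 100, 1_000, 10_000, 100_000):
        print(f"n = {n:6}: fraction of heads = {heads / n:.4f}")
```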

Also, not all processes satisfy the Law of Large Numbers. One famous example is this experiment:
"Choose a number at random from ##(-\pi/2,\pi/2)## (where all numbers are equally likely). Then shine a light on a wall where the angle between the light and the wall is the number you have chosen. The outcome of the experiment is the place on the wall where the light hits"
This experiment has the curious property that the averages do not converge. This means that it does exactly what you described in your post: in one run, the average place where the light hits the wall can be entirely different from the average place where the light hits in the second run. Even if you make the runs long enough, you will see no pattern in the average place where the light hits.

http://www.math.uah.edu/stat/applets...xperiment.html

Luckily for us, these types of situations are very rare.
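If you want to reproduce the applet's behaviour yourself, here is a minimal Python sketch (the wall at unit distance from the source is my assumption, so the hit point is the tangent of the angle, which gives the standard Cauchy distribution):

```python
import math
import random

# Light-on-the-wall sketch: a uniform angle in (-pi/2, pi/2), hit point
# tan(angle). The running average of the hit points never settles down.
total = 0.0
for n in range(1, 1_000_001):
    total += math.tan(random.uniform(-math.pi / 2, math.pi / 2))
    if n % 200_000 == 0:
        print(f"n = {n:9,}: running average = {total / n:+.3f}")
```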
Matterwave
#7
Aug17-14, 06:39 PM
Sci Advisor
P: 2,945
Quote by micromass:
Also, not all processes satisfy the Law of Large Numbers. One famous example is this experiment:
"Choose a number at random from ##(-\pi/2,\pi/2)## (where all numbers are equally likely). Then shine a light on a wall where the angle between the light and the wall is the number you have chosen. The outcome of the experiment is the place on the wall where the light hits"
This experiment has the curious property that the averages do not converge. This means that it does exactly what you described in your post: in one run, the average place where the light hits the wall can be entirely different from the average place where the light hits in the second run. Even if you make the runs long enough, you will see no pattern in the average place where the light hits.

http://www.math.uah.edu/stat/applets...xperiment.html

Luckily for us, these types of situations are very rare.
It looks to me that the expectation value would be 0? Why is it not? D:

The blue curve looks almost like a Gaussian centered at 0.
micromass
#8
Aug17-14, 06:46 PM
Mentor
P: 18,346
Quote by Matterwave:
It looks to me that the expectation value would be 0? Why is it not? D:

The blue curve looks almost like a Gaussian centered at 0.
It indeed looks very much like a Gaussian, but it's not. The tails are much fatter. This means that it is more likely (with respect to the Gaussian) to have an extremely high or low outcome. This screws everything up, and it causes the expectation value not to exist (so it's not 0). If you do the experiment in the applet I linked, you will start off close to 0, and as you do more experiments the average may stay near 0 for a while, but then suddenly there will be an extreme outcome which causes the average to go nuts.

Also, we see that the distribution is symmetric, so if the expectation value were to exist, then it would be 0. But it doesn't exist. In fact, if you try to calculate it, then you will constantly hit ##\infty-\infty## situations which are not well-defined (and which should not be well-defined in this case, since the averages don't converge). So from that point of view, we see that 0 is no more special than any other value. It might be the mode of the distribution and the median, but it is no more special than any other point in terms of expectation value.
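To see those ##\infty-\infty## situations concretely, here is a small Python check using the closed form of the truncated mean integral for the Cauchy density ##f(x) = 1/(\pi(1+x^2))##; the positive half alone grows without bound, and the negative half mirrors it:

```python
import math

# The integral of x*f(x) over [0, L] for the Cauchy density equals
# log(1 + L^2) / (2*pi), which diverges as L -> infinity. The negative
# half diverges the same way, hence "infinity minus infinity".
for L in (10, 1_000, 100_000, 10_000_000):
    print(f"L = {L:>12,}: integral on [0, L] = {math.log(1 + L * L) / (2 * math.pi):.3f}")
```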
Jabbu
#9
Aug17-14, 06:52 PM
P: 180
Quote by Drakkith:
Also, note that probability is inherently tied to randomness. A single event (like the flip of a coin) may be random in the sense that you don't know with certainty what side it will land on, but you can say that there is some probability for the coin to land on each side. I think the first part of that last sentence is key here. Any single event is random if we can't say for certain what will happen, even though we can define probabilities for each possible outcome.
Yes. The key point for me here is to differentiate between "just complex" and "truly random". We can make a completely deterministic computer simulation of a 5-body gravity interaction. Even if it is deterministic and the initial conditions are known, it is still unpredictable, or chaotic. But rather than merely "hard to compute in your head", I'm trying to find out what it takes for something to be truly random and what that is really supposed to mean.
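For example, here's a toy Python sketch of that kind of deterministic chaos (a logistic map standing in for the 5-body simulation):

```python
# Deterministic but unpredictable: the logistic map x -> 4x(1-x) is
# exactly determined at every step, yet two starting points differing
# by 1e-10 soon disagree completely.
x, y = 0.3, 0.3 + 1e-10
for step in range(1, 61):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if step % 15 == 0:
        print(f"step {step:2}: x = {x:.6f}, y = {y:.6f}")
```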
micromass
#10
Aug17-14, 07:04 PM
Mentor
P: 18,346
Quote by Jabbu:
Yes. The key point for me here is to differentiate between "just complex" and "truly random". We can make a completely deterministic computer simulation of a 5-body gravity interaction. Even if it is deterministic and the initial conditions are known, it is still unpredictable, or chaotic. But rather than merely "hard to compute in your head", I'm trying to find out what it takes for something to be truly random and what that is really supposed to mean.
Truly random means that there is no way to predict the outcome of an experiment, even in principle. So before you do the experiment, the outcome is not fixed and can still be anything.

In that sense, tossing a coin is not truly random, it is only pseudo-random; however, our lack of knowledge means that it is truly random for all practical purposes.

I don't think it is currently known whether something truly random exists or not.
AlephZero
#11
Aug17-14, 07:12 PM
Engineering
Sci Advisor
HW Helper
Thanks
P: 7,279
Quote by micromass:
As another example, the number 0.1234567891011121314151617... is called the Champernowne constant. It is completely determined: it is clear to everybody how exactly this number continues. However, if I ask you for the 1000th digit, then you would have to do a tedious calculation in order to find out.
That doesn't seem a particularly tedious or long calculation - unless I'm misunderstanding something, it is straightforward to find where the sequences "10", "100", "1000", "10000", etc. occur and then calculate from those points.

Maybe a better example would be 0.23571113171923293137..., where the digits are the sequence of all prime numbers. There probably isn't a way to find the nth digit of that sequence without tabulating a sufficient number of primes.
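A quick Python sketch of that tabulation:

```python
# The nth decimal digit of 0.235711131719..., built by concatenating
# the primes in order. Unlike the Champernowne case, no shortcut is
# used: we really do tabulate primes until position n is reached.
def is_prime(k):
    if k < 2:
        return False
    if k % 2 == 0:
        return k == 2
    d = 3
    while d * d <= k:
        if k % d == 0:
            return False
        d += 2
    return True

def prime_digit(n):
    digits, k = [], 1
    while len(digits) < n:
        k += 1
        if is_prime(k):
            digits.extend(str(k))
    return int(digits[n - 1])

print(prime_digit(1000))
```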

Quote by Matterwave:
It looks to me that the expectation value would be 0? Why is it not? D:

The blue curve looks almost like a Gaussian centered at 0.
The basic issue is that the Cauchy probability distribution (follow the link on the web page for the app) has mean 0, but infinite variance. This is an elephant trap for people who like fitting distributions to experimental data. If you try to estimate the variance of an unknown distribution from a finite-sized sample by any "common sense" method, the estimate is almost guaranteed to be finite. (And if your sample contained a data point which was wildly different from the rest, you would probably discard it because there was something wrong with it!)

If the underlying distribution were Cauchy, your finite estimate of the variance would always be infinitely wrong.

http://en.wikipedia.org/wiki/Fat-tailed_distribution
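Here is a Python sketch of exactly that trap (Cauchy samples generated as the tangent of a uniform angle, as in the light experiment above):

```python
import math
import random

# The "common sense" variance estimate from a finite Cauchy sample is
# always finite, but it never stabilizes as the sample grows.
random.seed(1)
data = []
for n in (1_000, 100_000, 1_000_000):
    while len(data) < n:
        data.append(math.tan(random.uniform(-math.pi / 2, math.pi / 2)))
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / n
    print(f"n = {n:>9,}: sample variance = {var:,.1f}")
```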
micromass
#12
Aug17-14, 07:15 PM
Mentor
P: 18,346
Quote by AlephZero:
That doesn't seem a particularly tedious or long calculation - unless I'm misunderstanding something, it is straightforward to find where the sequences "10", "100", "1000", "10000", etc. occur and then calculate from those points.
I guess my point was that it is not immediately clear what the digit is without doing calculations.

Quote by AlephZero:
The basic issue is that the Cauchy probability distribution (follow the link on the web page) has mean 0, but infinite variance.
No, the Cauchy distribution does not have a mean. It does not have a variance (so in particular, the variance is not infinite).
Jabbu
#13
Aug17-14, 07:26 PM
P: 180
Quote by micromass:
True random processes are very difficult to generate and do not exist in classical physics. However, they do come up in quantum mechanics. Whether the processes which come up are actually deterministic in some sense is currently unknown. Probability theory is currently the only tool available to study these processes.
Can we really know whether QM interactions are truly random and not just seemingly random like deterministic-chaotic systems?


Quote by micromass:
You also mention cause. Here you must specify what exactly you mean by "cause". The term is rather vague and philosophical. I don't even know if it is a meaningful term in science, but others probably know more.
To me, with or without cause is the most meaningful distinction; I think that would make it clear what is random and what is not. But if random can be both with and without cause, then it seems to me it would be far more difficult, if not impossible, to distinguish one from the other.

I'm not sure how to define "cause", but I'd say it has to do with limits and constraints, some range or degrees of freedom, where things perhaps can be more or less random rather than just random or not. I think if we could find a meaningful and consistent definition of "cause", it would bring us that much closer to some definite answer, even if that answer is that there is no answer.


Quote by micromass:
Also, not all processes satisfy the Law of Large Numbers. One famous example is this experiment:
"Choose a number at random from ##(-\pi/2,\pi/2)## (where all numbers are equally likely). Then shine a light on a wall where the angle between the light and the wall is the number you have chosen. The outcome of the experiment is the place on the wall where the light hits"
This experiment has the curious property that the averages do not converge. This means that it does exactly what you described in your post: in one run, the average place where the light hits the wall can be entirely different from the average place where the light hits in the second run. Even if you make the runs long enough, you will see no pattern in the average place where the light hits.

http://www.math.uah.edu/stat/applets...xperiment.html
There it is, amazing. I kind of expected such a thing to be more common, and yet I find myself surprised by it, as if there is something utterly indescribable about it, something I can't even put a finger on.
Jabbu
#14
Aug17-14, 07:31 PM
P: 180
Quote by micromass:
Truly random means that there is no way to predict the outcome of an experiment even in principle.
Yes, I'd just like something more specific than that, if possible.


Quote by micromass:
I don't think it is currently known whether something truly random exists or not.
I fear that is likely the case, but I still hope something more can be said about the whole thing.
micromass
#15
Aug17-14, 07:46 PM
Mentor
P: 18,346
Quote by Jabbu:
Yes, I'd just like something more specific than that, if possible.
I'm afraid it is too difficult to say more. Even in mathematics, we don't define precisely what random is; we just circumvent the entire thing by giving properties that a random process should satisfy and then treating those as axioms.

There are some interesting proposals of what random means (for example: http://en.wikipedia.org/wiki/Kolmogo...rov_randomness), but I don't really think this is the definition we're looking for.

I personally consider a definition of randomness to be closer to philosophy than to science.
Pythagorean
#16
Aug17-14, 08:24 PM
PF Gold
P: 4,292
I've always taken random to mean that a particle in phase space has more than one possible future, so that solutions are not unique.

It's not necessary that a random distribution be Gaussian. There are lots of different distribution shapes.
Jabbu
#17
Aug17-14, 08:25 PM
P: 180
Quote by micromass:
I'm afraid it is too difficult to say more. Even in mathematics, we don't define precisely what random is; we just circumvent the entire thing by giving properties that a random process should satisfy and then treating those as axioms.

There are some interesting proposals of what random means (for example: http://en.wikipedia.org/wiki/Kolmogo...rov_randomness), but I don't really think this is the definition we're looking for.
Kolmogorov randomness makes sense; I think I see what they are trying to compare there. It reminds me of information compression, where the more we can compress, the less random it is.

That is indeed something more specific to say about randomness, but it is still far from a complete answer. With any given supposedly random sequence, I don't think we can say with certainty that there really does not exist a simple recursive function that would duplicate it.
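For instance, here's a crude Python sketch of that compression idea (zlib standing in for an ideal compressor, which doesn't exist):

```python
import os
import zlib

# A patterned string compresses enormously; bytes from the OS entropy
# source barely compress at all. zlib is only a rough stand-in: true
# Kolmogorov complexity is uncomputable.
patterned = b"0123456789" * 10_000
random_ish = os.urandom(100_000)
print("patterned :", len(zlib.compress(patterned)), "bytes from", len(patterned))
print("random-ish:", len(zlib.compress(random_ish)), "bytes from", len(random_ish))
```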


Quote by micromass:
I personally consider a definition of randomness to be closer to philosophy than to science.
I think we could say the same thing about the whole of mathematics and quantum mechanics. It's really hard for physics not to be philosophical when it is supposed to describe and explain how reality works. Randomness is intrinsically related to the free will debate, but I'm not interested in philosophical musings, only in physical or practical implications, objective rather than subjective.
Matterwave
#18
Aug17-14, 08:33 PM
Sci Advisor
P: 2,945
Quote by micromass:
It indeed looks very much like a Gaussian, but it's not. The tails are much fatter. This means that it is more likely (with respect to the Gaussian) to have an extremely high or low outcome. This screws everything up, and it causes the expectation value not to exist (so it's not 0). If you do the experiment in the applet I linked, you will start off close to 0, and as you do more experiments the average may stay near 0 for a while, but then suddenly there will be an extreme outcome which causes the average to go nuts.

Also, we see that the distribution is symmetric, so if the expectation value were to exist, then it would be 0. But it doesn't exist. In fact, if you try to calculate it, then you will constantly hit ##\infty-\infty## situations which are not well-defined (and which should not be well-defined in this case, since the averages don't converge). So from that point of view, we see that 0 is no more special than any other value. It might be the mode of the distribution and the median, but it is no more special than any other point in terms of expectation value.
I'm at n = 14,000 and so far the mean has remained roughly at 0...? It's slightly skewed right now, to .005, but I did not see it wildly fluctuate to more than about +/- .05... o.o

EDIT: Oh, I saw it go up to 1.2. Maybe I just have to wait longer lol.

