Understanding Randomness: Differences in Classical Physics, SR, GR, and QM

  • Thread starter Jabbu
  • #1
Jabbu
Does "random" have a different meaning in classical physics than in SR, GR, or QM? What is the difference between random, deterministic, and probabilistic? Is "probabilistic" a subcategory of random or of deterministic, or is it a truly separate category of its own?

If we flip a coin 100,000 times and the number of heads matches the number of tails at 50-50% every time, plus or minus some tiny variation, then how is that a random outcome? Wouldn't it be truly random if we could just as easily flip 90% heads in one go, 20% heads in another, 40%, 1%, or 72%?
 
  • #2
Is "random" the same thing as "without cause", or can something have a cause and still be random?
 
  • #3
One working definition of random is something that can't be predicted. In your case, each flip of the coin gives a random result, but over time lots of random results give a clear picture of the nature of the coin.
 
  • #5
Also, note that probability is inherently tied to randomness. A single event (like the flip of a coin) may be random in the sense that you don't know with certainty which side it will land on, but you can say that there is some probability for the coin to land on each side. I think the first part of that last sentence is key here. Any single event is random if we can't say for certain what will happen, even though we can define probabilities for each possible outcome.
 
  • #6
You touch on many different and interesting concepts here.

First, deterministic means that the outcome of an experiment is fixed before doing the experiment. So even before doing an experiment, there is only one outcome which will happen, and (in principle) we can predict this outcome. All of classical physics is deterministic. For example, when I throw a ball, I can (in principle) calculate exactly where it is going to land and how long it is going to take. When I flip a coin, I can calculate (in principle) which side of the coin is going to be up and which side is going to be down. However, the variables and equations involved are so immensely complicated that we can never actually do these calculations. Furthermore, our measurements can never be precise enough to know exactly which state we are in now. This is where probability theory comes in. When flipping a coin, the outcome is predetermined exactly, but that outcome is unknown to us. Probability theory does give us a way of accessing some information about the coin flips.

As another example, the number 0.1234567891011121314151617... is called the Champernowne constant. It is completely determined; it is clear to everybody exactly how this number continues. However, if I ask you for the 1000th digit, you would have to do a tedious calculation to find out. So again, the number is determined, but this determination is unknown to us. We can again use probability theory to study the number, for example to figure out the chance of getting a 1 as the 1000th digit. This is also the idea behind pseudo-random number generators.
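To make that "tedious calculation" concrete, here is a small sketch (Python; the function name is just illustrative) that finds the n-th digit of the Champernowne constant by skipping over the blocks of 1-digit, 2-digit, 3-digit, ... numbers:

```python
def champernowne_digit(n):
    """Return the n-th decimal digit (1-indexed) of 0.123456789101112..."""
    k = 1        # current block: k-digit numbers
    count = 9    # how many k-digit numbers there are
    start = 1    # first k-digit number
    # Skip whole blocks until the n-th digit falls inside one.
    while n > k * count:
        n -= k * count
        k += 1
        count *= 10
        start *= 10
    number = start + (n - 1) // k        # the number containing the digit
    return int(str(number)[(n - 1) % k])

print(champernowne_digit(1000))  # → 3
```

So the 1000th digit is determined, but you only learn it by working through the block structure.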

True random processes are very difficult to generate and do not exist in classical physics. However, they do come up in quantum mechanics. Whether the processes which come up are actually deterministic in some sense is currently unknown. Probability theory is currently the only tool available to study these processes.

You also mention cause. Here you must specify what exactly you mean with cause. The term is rather vague and philosophical. I don't even know if it is a meaningful term in science, but others probably know more.

Jabbu said:
If we flip a coin 100,000 times and the number of heads matches the number of tails at 50-50% every time, plus or minus some tiny variation, then how is that a random outcome? Wouldn't it be truly random if we could just as easily flip 90% heads in one go, 20% heads in another, 40%, 1%, or 72%?

Yes, it is truly amazing that most of the processes we encounter satisfy the Law of Large Numbers. That is, if we repeat the experiment, the averages will converge to a fixed number. This is still random because we have no way of predicting one specific outcome: whatever information we have about the probabilities is totally useless for predicting the outcome of the next coin toss. That is what it means to be a random outcome. When you have done sufficiently many experiments, however, some order in the chaos does appear. But this only happens when you talk about the outcomes of many experiments at once.
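You can watch the Law of Large Numbers kick in with a few lines. A quick sketch (Python, seeded so the run is reproducible; the sample sizes are just illustrative): the fraction of heads gets pinned ever closer to 1/2 as the number of flips grows, even though no single flip is predictable.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(n, heads / n)  # the ratio creeps toward 0.5 as n grows
```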

Also, not all processes satisfy the Law of Large Numbers. One famous example is this experiment:
"Choose a number at random from ##(-\pi/2,\pi/2)## (where all numbers are equally likely). Then shine a light on a wall where the angle between the light and the wall is the number you have chosen. The outcome of the experiment is the place on the wall where the light hits"
This experiment has the curious property that the averages do not converge. This means that it does exactly what you described in your post: in one run, the average place where the light hits the wall can be entirely different from the average place where it hits in the second run. Even if you make the runs long enough, you will see no pattern in the average place where the light hits.

http://www.math.uah.edu/stat/applets/CauchyExperiment.html

Luckily for us, these types of situations are very rare.
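For anyone who wants to try this without the applet: the experiment above amounts to sampling from a Cauchy distribution, since the hit position is the tangent of a uniformly chosen angle. A small sketch (Python, seeded for reproducibility; the sample sizes are just illustrative) of the running average:

```python
import math
import random

random.seed(1)  # fixed seed so the run is reproducible

def wall_hit():
    # Pick a uniform angle in (-pi/2, pi/2); the hit position is its tangent.
    return math.tan(random.uniform(-math.pi / 2, math.pi / 2))

samples = [wall_hit() for _ in range(100_000)]
total = 0.0
for n, x in enumerate(samples, start=1):
    total += x
    if n % 20_000 == 0:
        print(n, total / n)  # the running average keeps jumping, never settles
```

The median of the samples does settle down near 0, but the running mean never does: every so often one enormous outcome drags it somewhere new.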
 
  • #7
micromass said:
Also, not all processes satisfy the Law of Large Numbers. One famous example is this experiment:
"Choose a number at random from ##(-\pi/2,\pi/2)## (where all numbers are equally likely). Then shine a light on a wall where the angle between the light and the wall is the number you have chosen. The outcome of the experiment is the place on the wall where the light hits"
This experiment has the curious property that the averages do not converge. This means that it does exactly what you described in your post: in one run, the average place where the light hits the wall can be entirely different from the average place where it hits in the second run. Even if you make the runs long enough, you will see no pattern in the average place where the light hits.

http://www.math.uah.edu/stat/applets/CauchyExperiment.html

Luckily for us, these types of situations are very rare.

It looks to me that the expectation value would be 0? Why is it not? D:

The blue curve looks almost like a Gaussian centered at 0.
 
  • #8
Matterwave said:
It looks to me that the expectation value would be 0? Why is it not? D:

The blue curve looks almost like a Gaussian centered at 0.

It indeed looks very much like a Gaussian, but it's not. The tails are much fatter, which means that extremely high or low outcomes are much more likely than under a Gaussian. This screws everything up, and it causes the expectation value not to exist (so it's not 0). If you run the experiment in the applet I linked, the average may stay close to 0 for a while, and then suddenly an extreme outcome will send it flying.

Also, the distribution is symmetric, so if the expectation value were to exist, it would have to be 0. But it doesn't exist. In fact, if you try to calculate it, you constantly hit ##\infty-\infty## situations which are not well-defined (and which should not be well-defined in this case, since the averages don't converge). So from that point of view, 0 is no more special than any other value. It may be the mode and the median of the distribution, but it is no more special than any other point in terms of expectation value.
 
  • #9
Drakkith said:
Also, note that probability is inherently tied to randomness. A single event (like the flip of a coin) may be random in the sense that you don't know with certainty which side it will land on, but you can say that there is some probability for the coin to land on each side. I think the first part of that last sentence is key here. Any single event is random if we can't say for certain what will happen, even though we can define probabilities for each possible outcome.

Yes. The key point for me here is to differentiate between "just complex" and "truly random". We can make a completely deterministic computer simulation of a 5-body gravity interaction. Even if it is deterministic and the initial conditions are known, it's still unpredictable, or chaotic. But rather than merely "hard to compute in your head", I'm trying to find out what it takes for something to be truly random and what that is really supposed to mean.
 
  • #10
Jabbu said:
Yes. The key point for me here is to differentiate between "just complex" and "truly random". We can make a completely deterministic computer simulation of a 5-body gravity interaction. Even if it is deterministic and the initial conditions are known, it's still unpredictable, or chaotic. But rather than merely "hard to compute in your head", I'm trying to find out what it takes for something to be truly random and what that is really supposed to mean.

Truly random means that there is no way to predict the outcome of an experiment, even in principle. So before you do the experiment, the outcome is not fixed and could still be anything.

In that sense, tossing a coin is not truly random, only pseudo-random; however, our lack of knowledge means that it is truly random for all practical purposes.

I don't think it is currently known whether something truly random exists or not.
 
  • #11
micromass said:
As another example, the number 0.1234567891011121314151617... is called the Champernowne constant. It is completely determined; it is clear to everybody exactly how this number continues. However, if I ask you for the 1000th digit, you would have to do a tedious calculation to find out.

That doesn't seem a particularly tedious or long calculation - unless I'm misunderstanding something, it is straightforward to find where the sequences "10", "100", "1000", "10000", etc occur and then calculate from those points.

Maybe a better example would be 0.23571113171923293137... where the digits are the sequence of all prime numbers. There probably isn't a way to find the n'th digit of that sequence without tabulating a sufficient number of primes.
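Incidentally, that number (0.235711131719...) is known as the Copeland-Erdős constant. A short sketch (Python; plain trial division, which is fine for the small primes involved) of how its digits can be tabulated:

```python
def is_prime(n):
    """Plain trial division; fine for the small primes needed here."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def copeland_erdos_digits(count):
    """First `count` decimal digits of 0.23571113... (primes concatenated)."""
    digits = []
    n = 2
    while len(digits) < count:
        if is_prime(n):
            digits.extend(int(d) for d in str(n))
        n += 1
    return digits[:count]

print(copeland_erdos_digits(16))  # → [2, 3, 5, 7, 1, 1, 1, 3, 1, 7, 1, 9, 2, 3, 2, 9]
```

As the text says, there is no shortcut to the n-th digit here: you have to tabulate primes until you reach it.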

Matterwave said:
It looks to me that the expectation value would be 0? Why is it not? D:

The blue curve looks almost like a Gaussian centered at 0.

The basic issue is that the Cauchy probability distribution (Follow the link on the web page for the app) has mean 0, but infinite variance. This is an elephant trap for people who like fitting distributions to experimental data. If you try to estimate the variance of an unknown distribution from a finite sized sample by any "common sense" method, the estimate is almost guaranteed to be finite. (And if your sample contained a data point which was wildly different from the rest, you would probably discard it because there was something wrong with it!)

If the underlying distribution were Cauchy, your finite estimate of the variance would always be infinitely wrong :smile:

http://en.wikipedia.org/wiki/Fat-tailed_distribution
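The "elephant trap" is easy to demonstrate: any finite sample of Cauchy-distributed data yields a finite "common sense" variance estimate, but the estimate never settles down as the sample grows. A quick sketch (Python, seeded for reproducibility; the sample sizes are just illustrative):

```python
import math
import random
import statistics

random.seed(2)  # fixed seed so the run is reproducible
# Cauchy samples, generated as the tangent of a uniform angle.
xs = [math.tan(random.uniform(-math.pi / 2, math.pi / 2)) for _ in range(100_000)]
for n in (100, 1_000, 10_000, 100_000):
    # The sample variance is finite for every n, yet it never stabilizes,
    # because the distribution it estimates has no variance at all.
    print(n, statistics.variance(xs[:n]))
```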
 
  • #12
AlephZero said:
That doesn't seem a particularly tedious or long calculation - unless I'm misunderstanding something, it is straightforward to find where the sequences "10", "100", "1000", "10000", etc occur and then calculate from those points.

I guess my point was that it is not immediately clear what the digit is without doing calculations.

The basic issue is that the Cauchy probability distribution (Follow link on the web page) has mean 0, but infinite variance.

No, the Cauchy distribution does not have a mean. It does not have a variance (so in particular, the variance is not infinite).
 
  • #13
micromass said:
True random processes are very difficult to generate and do not exist in classical physics. However, they do come up in quantum mechanics. Whether the processes which come up are actually deterministic in some sense is currently unknown. Probability theory is currently the only tool available to study these processes.

Can we really know whether QM interactions are truly random and not just seemingly random like deterministic-chaotic systems?


You also mention cause. Here you must specify what exactly you mean with cause. The term is rather vague and philosophical. I don't even know if it is a meaningful term in science, but others probably know more.

To me, with or without cause is the most meaningful distinction; I think that would make it clear what is random and what is not. But if something random can be either with or without cause, then it seems to me it would be far more difficult, if not impossible, to distinguish one from the other.

I'm not sure how to define "cause", but I'd say it has to do with limits and constraints, some range or degrees of freedom, where things perhaps can be more or less random rather than just random or not. I think if we could find a meaningful and persistent definition of "cause", it would bring us that much closer to some definite answer, even if that answer is that there is no answer.


Also, not all processes satisfy the Law of Large Numbers. One famous example is this experiment:
"Choose a number at random from ##(-\pi/2,\pi/2)## (where all numbers are equally likely). Then shine a light on a wall where the angle between the light and the wall is the number you have chosen. The outcome of the experiment is the place on the wall where the light hits"
This experiment has the curious property that the averages do not converge. This means that it does exactly what you described in your post: in one run, the average place where the light hits the wall can be entirely different from the average place where it hits in the second run. Even if you make the runs long enough, you will see no pattern in the average place where the light hits.

http://www.math.uah.edu/stat/applets/CauchyExperiment.html

There it is, amazing. I kind of expected such things to be more common, and yet I find myself surprised by it, as if there is something utterly indescribable about it, something I can't even point a finger at.
 
  • #14
micromass said:
Truly random means that there is no way to predict the outcome of an experiment, even in principle.

Yes, I'd just like something more specific than that, if possible.


I don't think it is currently known whether something truly random exists or not.

I fear that is likely the case, but I still hope something more can be said about the whole thing.
 
  • #15
Jabbu said:
Yes, I'd just like something more specific than that, if possible.

I'm afraid it is too difficult to say more. Even in mathematics, we don't define precisely what random is; we circumvent the entire issue by listing the properties a random process should satisfy and then treating those as axioms.

There are some interesting proposals of what random means (for example: http://en.wikipedia.org/wiki/Kolmogorov_randomness#Kolmogorov_randomness), but I don't really think this is the definition we're looking for.

I personally consider a definition of randomness to be closer to philosophy than to science.
 
  • #16
I've always taken random to mean that a phase particle has more than one future, and solutions are not unique.

It's not necessary that a random distribution be Gaussian. There are lots of different distribution shapes.
 
  • #17
micromass said:
I'm afraid it is too difficult to say more. Even in mathematics, we don't define precisely what random is; we circumvent the entire issue by listing the properties a random process should satisfy and then treating those as axioms.

There are some interesting proposals of what random means (for example: http://en.wikipedia.org/wiki/Kolmogorov_randomness#Kolmogorov_randomness), but I don't really think this is the definition we're looking for.

Kolmogorov randomness makes sense; I think I see what they are trying to compare there. It reminds me of information compression, where the more we can compress something, the less random it is.

That is indeed something more specific to say about randomness, but it is still far from apparent. Given any supposedly random sequence, I don't think we can say with certainty that there really does not exist a simple recursive function that would actually duplicate it.
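The compression analogy can be made concrete. Compressed size is a rough, computable stand-in for Kolmogorov complexity: a highly regular string shrinks dramatically, while pseudo-random bytes barely shrink at all. A small sketch using Python's zlib (the specific inputs are just illustrative):

```python
import random
import zlib

random.seed(3)  # fixed seed so the run is reproducible

structured = ("0123456789" * 100).encode()                 # 1000 very regular bytes
noisy = bytes(random.randrange(256) for _ in range(1000))  # 1000 pseudo-random bytes

# Compressed size is a rough, computable stand-in for Kolmogorov complexity:
# the more a string compresses, the less "random" it looks.
print(len(zlib.compress(structured)))  # far below 1000
print(len(zlib.compress(noisy)))       # roughly 1000: essentially incompressible
```

Of course, a compressor failing to shrink a string does not prove no short description exists; it only fails to find one, which echoes the point above.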


I personally consider a definition of randomness to be closer to philosophy than to science.

I think we could say the same thing about the whole of mathematics and quantum mechanics. It's really hard for physics not to be philosophical when it is supposed to describe and explain how reality works. Randomness is intrinsically related to the free will debate, but I'm not interested in philosophical musings, only in physical or practical implications, objective rather than subjective.
 
  • #18
micromass said:
It indeed looks very much like a Gaussian, but it's not. The tails are much fatter. This means that it is more likely (with respect to the Gaussian) to have a extremely high or low outcome. This screw everything up. And this causes the expectation value not to exist (so it's not 0). You can do the experiment in the applet I linked, you will start off close to 0 and the more experiments you do you will not get very close to 0, and then suddenly there will be this extreme outcome which causes the average to go nuts.

Also, we see that the distribution is symmetric, and if the expectation value were to exist, then it would be 0. But it doesn't exist. In fact, if you try to calculate it, then you will constantly hit ##\infty-\infty## situations which are not well-defined (and which should not be well-defined in this case since the averages don't converge). So from that point-of-view, we see that 0 is no more special than any other value. It might be the mode of the distribution and the median, but it is no more special than any other point in terms of expectation value.

I'm at n=14,000 and so far the mean has remained roughly at 0...? Slightly skewed right now to .005, but I did not see it wildly fluctuate to more than like +/- .05... o.o

EDIT: Oh, I saw it go up to 1.2 maybe I just have to wait longer lol.
 
  • #19
Jabbu said:
I think we could say the same thing about the whole of mathematics and quantum mechanics. It's really hard for physics not to be philosophical when it is supposed to describe and explain how reality works.

Quantum mechanics and mathematics are not supposed to describe and explain how reality works; they are supposed to quantify reality. So if we do an experiment, quantum mechanics can be used to find the possible outcomes and their probabilities. It never tells us why things are that way or how things actually work, because that would be outside the realm of science.

The deeper you go into physics or mathematics, the more you find that we care only about calculating certain things, and about whether those calculations match reality. The way we got to the outcome of a calculation might be very far from how reality does things, but we don't care about that.

https://www.youtube.com/watch?v=6TI1M3abAM8
 
  • #20
Matterwave said:
I'm at n=14,000 and so far the mean has remained roughly at 0...? Slightly skewed right now to .005, but I did not see it wildly fluctuate to more than like +/- .05... o.o

EDIT: Oh, I saw it go up to 1.2 maybe I just have to wait longer lol.

Yeah, no two runs of the simulation look alike at all. What you should observe is that the pointer remains fixed for a certain amount of time and then suddenly jumps to another position. If you continue the simulation ad infinitum, you will see that this pattern persists: the average will not get closer to anything, because of the sudden jumps.
 
  • #21
micromass said:
Quantum mechanics and mathematics are not supposed to describe and explain how reality works; they are supposed to quantify reality. So if we do an experiment, quantum mechanics can be used to find the possible outcomes and their probabilities. It never tells us why things are that way or how things actually work, because that would be outside the realm of science.

The deeper you go into physics or mathematics, the more you find that we care only about calculating certain things, and about whether those calculations match reality. The way we got to the outcome of a calculation might be very far from how reality does things, but we don't care about that.

I very much agree, except for the last part. I think what equations practically mean is something to care about, and many people do care, which can be both a good and a bad thing.

Equations can be ambiguous and vague, direct and indirect, but every equation does describe reality, and there is some theory behind every equation. There can be different interpretations, but the correct interpretation is likely to move us forward. That's why I think caring about it is important and can be a good thing, and a bad thing for the same reason, which only makes it all the more important to care about.
 
  • #22
micromass said:
Yeah, no two runs of the simulation look alike at all. What you should observe is that the pointer remains fixed for a certain amount of time and then suddenly jumps to another position. If you continue the simulation ad infinitum, you will see that this pattern persists: the average will not get closer to anything, because of the sudden jumps.

How does that compare to raindrops and locations where the drops hit the ground within specific area, for example? Or roulette?
 
  • #23
Jabbu said:
How does that compare to raindrops and locations where the drops hit the ground within specific area, for example? Or roulette?

Raindrops and roulette can be modeled deterministically as a mechanics problem. It's just particles and forces. Of course, it would be very complicated and so sensitive to initial conditions and external perturbations that it would be impossible to make reliable predictions with.

Of course, the random number generators that computers use for such simulations aren't truly random either and rely on a similarly complicated, but ultimately deterministic, process.
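As an illustration of that last point, here is a minimal linear congruential generator, one of the simplest pseudo-random schemes (the multiplier and increment are the well-known Numerical Recipes constants). The output looks haphazard, yet re-seeding reproduces it exactly, so it is deterministic through and through:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Minimal linear congruential generator: fully deterministic, yet its
    output passes casual inspection as 'random'."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m  # value in [0, 1)

gen = lcg(42)
print([round(next(gen), 4) for _ in range(5)])
# Re-seeding with 42 reproduces exactly the same five numbers.
```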
 

1. What is randomness and how is it different in classical physics and quantum mechanics?

Randomness refers to the unpredictability of a system or phenomenon. In classical physics, randomness is often attributed to unknown or unmeasured factors, while in quantum mechanics, randomness is a fundamental aspect of the behavior of particles at the subatomic level.

2. What is the role of Special and General Relativity in understanding randomness?

Special Relativity (SR) and General Relativity (GR) provide a framework for understanding the behavior of objects in space and time. SR explains how the laws of physics apply in inertial reference frames, while GR explains the effects of gravity on the fabric of spacetime. Both theories play a role in understanding randomness, particularly in the context of predicting and measuring the behavior of objects moving at high speeds or in strong gravitational fields.

3. How does quantum mechanics explain randomness?

Quantum mechanics is a branch of physics that describes the behavior of particles at the subatomic level. In this theory, randomness is a fundamental aspect of the universe: measurement outcomes are inherently probabilistic, and Heisenberg's uncertainty principle further limits how precisely a particle's position and momentum can be known simultaneously.

4. Can randomness be explained by classical physics alone?

No, classical physics alone cannot fully explain randomness. While classical physics can make predictions about the behavior of macroscopic objects, it struggles to explain the behavior of particles at the subatomic level. Quantum mechanics is needed to fully understand the randomness inherent in these particles.

5. How does our understanding of randomness impact our daily lives?

Our understanding of randomness has many practical applications, from predicting weather patterns to creating secure encryption methods. It also plays a role in technologies such as quantum computing and quantum cryptography. Additionally, our understanding of randomness in quantum mechanics has led to advancements in fields such as medicine and materials science.
