Does "random" have a different meaning in classical physics than in SR, GR or QM? What is the difference between random, deterministic and probabilistic? Is probabilistic either random-probabilistic or deterministic-probabilistic, or is probabilistic a truly separate category of its own? If we flip a coin 100,000 times and the number of heads matches the number of tails 50-50 every time, plus or minus some tiny variation, then how is that a random outcome? Wouldn't it be truly random if we could flip 90% heads in one go and then 20% heads in another go, just as easily as 40% heads, 1%, or 72%?
One working definition of random is something that can't be predicted. In your case each flip of the coin gives a random result, but over time lots of random results give a clear picture about the nature of the coin.
Also, note that probability is inherently tied to randomness. A single event (like the flip of a coin) may be random in the sense that you don't know with certainty which side it will land on, but you can still say that there is some probability for the coin to land on each side. I think the first part of that last sentence is key here. Any single event is random if we can't say for certain what will happen, even though we can define probabilities for each possible outcome.
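A quick simulation shows both sides of this at once: each individual flip is unpredictable, yet the overall fraction of heads is pinned near 50%. A Python sketch (the seed and the 100,000-flip count are arbitrary choices made for reproducibility):

```python
import random

# Simulate 100,000 fair coin flips with a fixed seed (arbitrary choice,
# only so the run is reproducible). 1 = heads, 0 = tails.
rng = random.Random(42)
flips = [rng.randrange(2) for _ in range(100_000)]

# No single flip can be predicted, but the running fraction of heads
# settles very close to 0.5 (the Law of Large Numbers).
fraction_heads = sum(flips) / len(flips)
print(fraction_heads)  # typically very close to 0.5
```

The point is that the 50-50 pattern is a statement about many flips at once, not about any one of them.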
You touch on many different and interesting concepts here.

First, deterministic means that the outcome of an experiment is fixed before doing the experiment. So even before doing an experiment, there is only one outcome which will happen and (in principle) we can predict this outcome. All of classical physics is deterministic. For example, when I throw a ball, I can (in principle) calculate exactly where it is going to land and how long it is going to take. When I flip a coin, I can calculate (in principle) which side of the coin is going to be up and which side is going to be down.

However, the variables and equations involved are so immensely complicated that we can never do these calculations. Furthermore, our measurements can never be done precisely enough to know exactly which state we are in now. This is where probability theory comes in. When flipping a coin, the outcome is predetermined exactly, but it is unknown to us. Probability theory does give us some way of accessing some information about the coin flips.

As another example, the number 0.1234567891011121314151617... is called the Champernowne constant. It is completely determined; it is clear to everybody exactly how this number continues. However, if I ask you for the 1000th digit, then you would have to do a tedious calculation in order to find out. So again, the number is determined, but this determination is unknown to us. We can again use probability theory to study the number, so we can figure out the chance of getting a 1 as the 1000th digit. This is also the idea behind pseudo-random number generators.

True random processes are very difficult to generate and do not exist in classical physics. However, they do come up in quantum mechanics. Whether the processes which come up are actually deterministic in some sense is currently unknown. Probability theory is currently the only tool available to study these processes.

You also mention cause.
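(As an aside on the Champernowne example before getting to cause: extracting a given digit is a finite, mechanical computation, even if it is tedious by hand. A minimal Python sketch; the 1-indexing convention for digits after the decimal point is my choice.)

```python
def champernowne_digit(n):
    """Return the nth digit after the decimal point (1-indexed) of
    0.123456789101112..., built by concatenating 1, 2, 3, ..."""
    digits = []
    length = 0
    k = 1
    while length < n:
        d = str(k)      # append the decimal expansion of the next integer
        digits.append(d)
        length += len(d)
        k += 1
    return int("".join(digits)[n - 1])

print(champernowne_digit(1000))  # fully determined, but takes a computation
```

The number is completely determined, yet you only learn the 1000th digit by actually running the procedure.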
Here you must specify what exactly you mean by "cause". The term is rather vague and philosophical. I don't even know if it is a meaningful term in science, but others probably know more.

Yes, it is truly amazing that most of the processes we encounter satisfy the Law of Large Numbers. That is, if we repeat the experiment, then the averages will converge to a fixed number. This is still random because we have no way of predicting one specific outcome. Whatever information we have about the probabilities is totally useless in predicting the outcome of the next coin toss. This is what it means to be a random outcome. When you have done sufficiently many experiments, however, some order in the chaos does appear. But this only happens when you talk about the outcomes of many experiments at once.

Also, not all processes satisfy the Law of Large Numbers. One famous example is this experiment: "Choose a number at random from ##(-\pi/2,\pi/2)## (where all numbers are equally likely). Then shine a light on a wall where the angle between the light and the wall is the number you have chosen. The outcome of the experiment is the place on the wall where the light hits." This experiment has the curious property that the averages do not converge. This means that it does exactly what you described in your post: in one run, the average place where the light hits the wall can be entirely different from the average place where the light hits in the second run. Even if you make the runs long enough, you will see no pattern in the average place where the light hits. http://www.math.uah.edu/stat/applets/CauchyExperiment.html Luckily for us, these types of situations are very rare.
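The light-on-the-wall experiment is easy to simulate yourself. A Python sketch (the seed and sample sizes are arbitrary choices; the hit position is the tangent of the chosen angle, which is Cauchy-distributed):

```python
import math
import random

# Lighthouse experiment: pick an angle uniformly in (-pi/2, pi/2);
# the light hits the wall at tan(angle). Seed is an arbitrary choice.
rng = random.Random(1)
hits = [math.tan(rng.uniform(-math.pi / 2, math.pi / 2))
        for _ in range(100_000)]

# Print the running average at a few checkpoints. Unlike the coin,
# these averages never settle down: rare huge outliers keep wrecking them.
total = 0.0
for i, x in enumerate(hits, start=1):
    total += x
    if i in (1_000, 10_000, 100_000):
        print(i, total / i)
```

Rerunning with different seeds gives wildly different averages even at 100,000 samples, which is exactly the failure of the Law of Large Numbers described above.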
It looks to me like the expectation value would be 0? Why is it not? D: The blue curve looks almost like a Gaussian centered at 0.
It indeed looks very much like a Gaussian, but it's not. The tails are much fatter. This means that it is more likely (with respect to the Gaussian) to have an extremely high or low outcome. This screws everything up, and it causes the expectation value not to exist (so it's not 0). You can do the experiment in the applet I linked: you will start off close to 0, and as you do more experiments the average will seem to settle, but then suddenly there will be an extreme outcome which causes the average to go nuts.

Also, we see that the distribution is symmetric, so if the expectation value were to exist, then it would be 0. But it doesn't exist. In fact, if you try to calculate it, then you will constantly hit ##\infty-\infty## situations which are not well-defined (and which should not be well-defined in this case, since the averages don't converge). So from that point of view, we see that 0 is no more special than any other value. It might be the mode of the distribution and the median, but it is no more special than any other point in terms of expectation value.
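Concretely, the ##\infty-\infty## problem appears as soon as you write down the defining integral for the expectation with the Cauchy density ##\frac{1}{\pi(1+x^2)}##:

$$\mathbb{E}[X] = \int_{-\infty}^{\infty} \frac{x}{\pi(1+x^2)}\,dx = \underbrace{\int_{0}^{\infty} \frac{x}{\pi(1+x^2)}\,dx}_{=\,+\infty} + \underbrace{\int_{-\infty}^{0} \frac{x}{\pi(1+x^2)}\,dx}_{=\,-\infty}.$$

Each half diverges logarithmically, since the integrand behaves like ##1/(\pi x)## for large ##|x|##, so the sum of the two halves is undefined. For a Gaussian the tails decay fast enough that both halves are finite, which is exactly the "fatter tails" difference.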
Yes. The key point for me here is to differentiate between "just complex" and "truly random". We can make a completely deterministic computer simulation of a 5-body gravitational interaction. Even if it is deterministic and the initial conditions are known, it's still unpredictable, or chaotic. But rather than merely "hard to compute in your head", I'm trying to find out what it takes for something to be truly random and what that is really supposed to mean.
Truly random means that there is no way to predict the outcome of an experiment, even in principle. So before you do the experiment, the outcome is not fixed and could still be anything. In that sense, tossing a coin is not truly random, only pseudo-random; however, our lack of knowledge means that it is truly random for all practical purposes. I don't think it is currently known whether something truly random exists or not.
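A pseudo-random number generator makes the "not truly random" point concrete: fix the seed (the initial conditions) and the entire "random" sequence is predetermined. A Python sketch (the seed value is an arbitrary choice):

```python
import random

# Two generators started from the same seed: the "coin flips" they
# produce are completely determined by that seed.
a = random.Random(2024)
b = random.Random(2024)

seq_a = [a.randrange(2) for _ in range(20)]
seq_b = [b.randrange(2) for _ in range(20)]

print(seq_a == seq_b)  # True: same initial conditions, same future
```

The sequence passes statistical tests for randomness, yet anyone who knows the seed can predict every outcome, which is exactly the coin-toss situation in classical physics.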
That doesn't seem a particularly tedious or long calculation - unless I'm misunderstanding something, it is straightforward to find where the sequences "10", "100", "1000", "10000", etc. occur and then calculate from those points. Maybe a better example would be 0.23571113171923293137..., where the digits are the sequence of all prime numbers. There probably isn't a way to find the nth digit of that sequence without tabulating a sufficient number of primes.

The basic issue is that the Cauchy probability distribution (follow the link on the web page for the app) has mean 0, but infinite variance. This is an elephant trap for people who like fitting distributions to experimental data. If you try to estimate the variance of an unknown distribution from a finite-sized sample by any "common sense" method, the estimate is almost guaranteed to be finite. (And if your sample contained a data point which was wildly different from the rest, you would probably discard it because there was something wrong with it!) If the underlying distribution was Cauchy, your finite estimate of the variance will always be infinitely wrong. http://en.wikipedia.org/wiki/Fat-tailed_distribution
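The prime-digit example can be made concrete; the naive method really does tabulate primes until enough digits have accumulated. A Python sketch (trial division and 1-indexing are my choices, fine for small n):

```python
def prime_digit(n):
    """Return the nth digit after the decimal point (1-indexed) of
    0.235711131719..., built by concatenating the primes 2, 3, 5, 7, 11, ..."""
    digits = []
    candidate = 2
    while len(digits) < n:
        # simple trial-division primality check; adequate for small n
        if all(candidate % p for p in range(2, int(candidate ** 0.5) + 1)):
            digits.extend(str(candidate))
        candidate += 1
    return int(digits[n - 1])

first20 = "".join(str(prime_digit(i)) for i in range(1, 21))
print(first20)  # 23571113171923293137
```

No known shortcut skips the tabulation the way place-value arithmetic does for the Champernowne constant.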
I guess my point was that it is not immediately clear what the number is without calculations. No, the Cauchy distribution does not have a mean. It also does not have a variance (so in particular, the variance is not infinite).
Can we really know whether QM interactions are truly random and not just seemingly random, like deterministic-chaotic systems? To me, with or without cause is the most meaningful difference; I think that would make it clear what is random and what is not. But if random can be both with and without cause, then it seems to me it would be far more difficult, if not impossible, to distinguish one from the other. I'm not sure how to define "cause", but I'd say it has to do with limits and constraints, some range or degrees of freedom, where things perhaps can be more or less random rather than just random or not. I think if we could find a meaningful and persistent definition for "cause" it would bring us that much closer to some definite answer, even if that answer is that there is no answer.

There it is, amazing. I kind of expected such a thing to be more common, and yet I find myself surprised by it, as if there is something utterly indescribable about it, something I can't even point a finger at.
Yes, I'd just like something more specific than that, if possible. I fear that is likely the case, but I still hope something more can be said about the whole thing.
I'm afraid it is too difficult to say more. Even in mathematics, we don't define what random is precisely, we just circumvent the entire thing by giving properties of what a random process should satisfy and then treating those as axioms. There are some interesting proposals of what random means (for example: http://en.wikipedia.org/wiki/Kolmogorov_randomness#Kolmogorov_randomness), but I don't really think this is the definition we're looking for. I personally consider a definition of randomness to be closer to philosophy than to science.
I've always taken random to mean that a phase particle has more than one future, and solutions are not unique. It's not necessary that a random distribution be Gaussian. There are lots of different distribution shapes.
Kolmogorov randomness makes sense; I think I see what they are trying to compare there. It reminds me of information compression, where the more we can compress, the less random it is. It is indeed something more specific to say about randomness, but still far from apparent. With any given supposedly random sequence, I don't think we can say with certainty that there really does not exist a simple recursive function that would actually duplicate it.

I think we could say the same thing about the whole of mathematics and quantum mechanics. It's really hard for physics not to be philosophical when it is supposed to describe and explain how reality works. Randomness is intrinsically related to the free will debate, but I'm not interested in philosophical musings, only in physical or practical implications, objective rather than subjective.
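The compression intuition can be tried directly with an off-the-shelf compressor, keeping in mind that a compressor only gives an upper bound on the description length; it can never certify that no shorter program exists, which is exactly the caveat above. A Python sketch (zlib, with arbitrary test data and seed):

```python
import random
import zlib

# A highly regular sequence has a short description and compresses well.
regular = b"01" * 5000  # 10,000 bytes with an obvious pattern

# A pseudo-random sequence has no pattern zlib can find, so it barely
# compresses at all. Seed is an arbitrary choice for reproducibility.
rng = random.Random(7)
noisy = bytes(rng.randrange(256) for _ in range(10_000))

print(len(zlib.compress(regular)))  # tiny compared to 10,000
print(len(zlib.compress(noisy)))    # roughly the original size
```

Note the asymmetry: good compression proves a sequence is *not* Kolmogorov-random, but poor compression proves nothing, since a cleverer program (like the seeded generator itself, here) might still reproduce it.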
I'm at n=14,000 and so far the mean has remained roughly at 0...? Slightly skewed right now to .005, but I did not see it wildly fluctuate to more than like +/- .05... o.o EDIT: Oh, I saw it go up to 1.2, maybe I just have to wait longer lol.
Quantum mechanics and mathematics are not supposed to describe and explain how reality works. They are supposed to quantify reality. So if we do an experiment, quantum mechanics can be used to find out the possible outcomes and the probabilities of those outcomes. It never tells us why things are that way or how it actually works, because that would be outside the realm of science. The deeper you go in physics or mathematics, the more you find out that we only care about calculating certain things. We care about whether these calculations match reality. However, the way we got to the outcome of the calculation might be very far from how reality does things, but we don't care about that. https://www.youtube.com/watch?v=6TI1M3abAM8
Yeah, the various simulations don't look alike at all. What you should observe is that the pointer remains fixed for a certain amount of time and then suddenly jumps to another position. If you continue the simulation ad infinitum, you will see that this pattern persists: it will not get closer to anything, because of the sudden jumps.