Uncertainty of Randomness: Exploring a Phenomenon

In summary: different people mean different things when they use the word "random".
  • #1
sina89
Maybe this is too basic a question, but it is not so clear to me: when we refer to a random experiment, can a phenomenon be absolutely random by itself, or is it our uncertainty about the outcome that makes us call it random?
 
  • #2
That's actually a fairly deep question in philosophy of science. Educated opinions differ on the matter.

Fortunately, the math of probability theory works out the same way in either case, which allows scientists to make progress without necessarily agreeing about what goes on under the hood. :-)
 
  • #3
This is not an easy question at all. I read a story somewhere (I do not have the reference) where astronomers needed to select a random star. First, one astronomer proposed using a computer to generate a random point (right ascension and declination), then looking at a sky map to find the nearest star to that point. One would think that would give a random star. Another astronomer proposed labeling all the stars from 1 to 1 trillion (or so), then having the computer generate a random number; the star corresponding to that number would be the random star. A third astronomer chose some other procedure (I do not remember which).

The point is that these are three different algorithms which will lead to far different results. Another illustration from probability textbooks (e.g. Papoulis, Probability, Random Variables, and Stochastic Processes) is labeled Buffon's needle problem. Three different ways to implement "random" lead to three different probabilities (1/4, 1/3, or 1/2) that a "random" chord generated on a circle is longer than the side of an equilateral triangle inscribed in that circle. All three answers can be interpreted as correct.
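For anyone curious, the three chord constructions are easy to simulate. A rough Monte Carlo sketch in Python (the function names and sample size are my own choices, not from any textbook):

```python
import math
import random

def chord_endpoints():
    # Construction 1: chord through two uniform random points on the circle
    a, b = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return 2 * math.sin(abs(a - b) / 2)

def chord_radial():
    # Construction 2: chord whose midpoint lies at a uniform random
    # distance along a random radius
    d = random.uniform(0, 1)
    return 2 * math.sqrt(1 - d * d)

def chord_midpoint():
    # Construction 3: chord whose midpoint is uniform in the disk
    # (rejection sampling)
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return 2 * math.sqrt(1 - (x * x + y * y))

SIDE = math.sqrt(3)  # side of the equilateral triangle inscribed in a unit circle
N = 100_000
for name, gen in [("endpoints", chord_endpoints),
                  ("radial midpoint", chord_radial),
                  ("uniform midpoint", chord_midpoint)]:
    p = sum(gen() > SIDE for _ in range(N)) / N
    print(f"{name}: P(chord > side) ~ {p:.3f}")
```

Running it gives roughly 1/3, 1/2, and 1/4 respectively: three perfectly consistent answers to three different questions, each hiding behind the same word "random".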
 
  • #4
In mathematics, a random event is defined in the context of a population of events and an assignment of probabilities to events in this population. When people toss off phrases like "Is the weather random?" or "What is the probability that I go to the kitchen", they aren't asking a specific mathematical question because they haven't defined the population of events.

If you want to discuss randomness in a vague, general sense perhaps you should post in the section of the forum called "General Discussions".
 
  • #5
OP: are you asking whether the universe is "ultimately" deterministic vs. stochastic?

If so, there is some interesting discussion on a (closed) thread here:
https://www.physicsforums.com/showthread.php?t=384130
The parts I find most interesting are on Page 2.

I still hold the basic position I explained in that thread; we simply do not know whether the universe is ultimately stochastic or deterministic.
 
  • #6
Thank you all for the replies. I am new here and I didn't know where to post my question. I wanted to know what people exactly mean when they use the word RANDOM in physics.
 
  • #7
The moral of this story: different people mean different things, and if it makes a difference, you should ask them to clarify. :)
 
  • #8
I should correct my earlier post. The problem giving ambiguous answers for the probability of a random chord being longer than a side of the inscribed equilateral triangle is known as Bertrand's paradox, not Buffon's needle problem.
 
  • #9
In most applications of "random", it is theoretically possible, but impractical, to know enough to predict outcomes. Einstein refused to accept that anything was intrinsically random. He said "God doesn't play dice". But he was proven wrong at the quantum level. Information theory tries to reconcile these facts.

Here are three distinct examples.
1) An experiment that has not been done, and whose outcome is unpredictable with current information is "random".
2) An experiment that has already been done but we do not know the result (a coin toss whose result is hidden) is clearly already determined. There is nothing "random" about the result since it is already determined, but there is still uncertainty about what we should guess. This is clearly an information problem. New information can change what we should guess about the result (Bayesian statistics).
3) In quantum theory there are experiments that can quantitatively distinguish between something that is already determined, but we do not know the result, versus something that is really undetermined (still in several states simultaneously). Those experiments have shown that there really are examples that are intrinsically random. It is called a "superposition of states" when something exists but its state is not fixed. (I hope I am not butchering this)
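Point 2 above can be made concrete with a two-line Bayes update. A hypothetical sketch (the witness and its accuracy are my own illustration): the hidden coin's state never changes; only our best guess about it does.

```python
def posterior_heads(prior_heads=0.5, witness_accuracy=0.8):
    """Probability the hidden coin is heads, after a witness (who reports
    the truth with probability `witness_accuracy`) says "heads"."""
    p_heads_and_report = witness_accuracy * prior_heads
    p_tails_and_report = (1 - witness_accuracy) * (1 - prior_heads)
    return p_heads_and_report / (p_heads_and_report + p_tails_and_report)

# The toss is already determined; new information just moves our guess.
print(posterior_heads())  # 0.8, up from the prior of 0.5
```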
 
  • #10
FactChecker said:
It is called a "superposition of states" when something exists but its state is not fixed. (I hope I am not butchering this)

I guess it depends what you mean by "fixed". If you mean it's in a well-defined state, I'd say a superposition of states is a fixed state. It's just a state where your experiment's result can't be predicted ahead of time.

For example: say you're doing an experiment to measure [itex]S_z[/itex], the [itex]z[/itex]-component of spin, for a spin-[itex]\frac{1}{2}[/itex] particle. If your system is in a state [itex]\frac{1}{\sqrt{2}}\left(|z_+\rangle + |z_-\rangle\right)[/itex], you'll get either outcome with 50% probability. But that doesn't mean it doesn't have a state; in fact, its state is [itex]|x_+\rangle[/itex] (for suitable choice of [itex]x[/itex]-axis ;-).

Contrast this with the case of quantum mechanical density matrices, where there really isn't a definite state. In this case, I would say that the actual, physical system does have a definite state; we simply use the density matrix to represent our ignorance of that state.
 
  • #11
chogg said:
Contrast this with the case of quantum mechanical density matrices, where there really isn't a definite state.

I believe that this is what I was trying to remember -- an entangled pair where there isn't a definite state. Isn't this an example of something that is intrinsically random?
 
  • #12
FactChecker said:
I believe that this is what I was trying to remember -- an entangled pair where there isn't a definite state. Isn't this an example of something that is intrinsically random?

An entangled pair is neither more nor less random than a single particle. Entanglement doesn't add "intrinsic randomness". In fact, what makes it so interesting is the very non-random correlations between measurements on the entangled particles!

A density matrix is something else entirely. It has to do with uncertainty about what state a quantum system is in. (By contrast, an entangled state is a definite state of the system as a whole.)

Here's an instructive comparison.

In the first case, imagine a particle in the state [itex]\frac{1}{\sqrt{2}}\left( |z_+\rangle + |z_-\rangle \right)[/itex]. Suppose this corresponds to [itex]|x_+\rangle[/itex] for the coordinates you've chosen. If you take a measurement of [itex]S_x[/itex], you'll get [itex]\frac{1}{2}[/itex] with probability 1. This corresponds to a superposition state.

In the second case, imagine a particle in state [itex]|z_+\rangle[/itex] with probability 0.5, and in state [itex]|z_-\rangle[/itex] with probability 0.5. If you measure [itex]S_x[/itex], you'll get [itex]\frac{-1}{2}[/itex] with probability 0.5, and [itex]\frac{1}{2}[/itex] with probability 0.5. This corresponds to the density matrix case. We're not just uncertain about some measurement outcomes; we're uncertain about the state of the particle itself.
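To make the contrast concrete, here is a small numerical sketch (numpy, in units where [itex]\hbar = 1[/itex]; the setup mirrors the two cases above):

```python
import numpy as np

sx = 0.5 * np.array([[0.0, 1.0], [1.0, 0.0]])  # S_x for spin-1/2 (hbar = 1)

z_plus  = np.array([1.0, 0.0])
z_minus = np.array([0.0, 1.0])
x_plus  = (z_plus + z_minus) / np.sqrt(2)      # the superposition state |x+>

# Case 1: pure superposition |x+> -- a perfectly definite state
rho_pure = np.outer(x_plus, x_plus)

# Case 2: classical 50/50 mixture of |z+> and |z-> -- a density matrix
rho_mixed = 0.5 * np.outer(z_plus, z_plus) + 0.5 * np.outer(z_minus, z_minus)

for name, rho in [("pure |x+>", rho_pure), ("50/50 mixture", rho_mixed)]:
    exp_sx = np.trace(rho @ sx).real        # <S_x> = Tr(rho S_x)
    p_plus = (x_plus @ rho @ x_plus).real   # P(measure S_x = +1/2)
    print(f"{name}: <S_x> = {exp_sx:+.2f}, P(S_x = +1/2) = {p_plus:.2f}")
```

Both states give 50/50 results for an [itex]S_z[/itex] measurement, but an [itex]S_x[/itex] measurement tells them apart: the pure superposition yields [itex]+\frac{1}{2}[/itex] with certainty, while the mixture stays 50/50.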
 
  • #13
@sina89, I hope this is not hijacking your OP, but I think it goes to the heart of your question. There were situations in quantum theory where there was a debate as to whether a system was in an internal "hidden state" that we just could not determine (Einstein believed this), or whether its state was really still undetermined. I have been calling the latter case "intrinsically random", but @chogg has explained that this is a system with a density matrix. The thread he points to is very interesting. But I thought that the issue was settled. Experiments have been done where the probabilities of outcomes differ in the two cases (a hidden state versus a still-undetermined state with a density matrix). The results of the experiments showed that the system was not in a "hidden state". It was in no fixed simple state, but rather had a density matrix. I think that this indicates that there are truly random processes.
 
  • #14
I'm happy that, despite my seemingly general question, the conversation went the way I expected. Actually, this was a confusion for me, since one of my friends was trying to tell me about quantum physics (my major is computer science and I don't have any education in quantum physics, just some hearsay). When he talked about these intrinsically random states it confused me, and I felt the need to ask some people specialized in the subject.
So can I summarize your statements as "there is consensus today among physicists about intrinsically random states, and they all (or mostly) believe that Einstein's point of view is PROVEN wrong"? If quantum physics actually did prove this, I won't allow myself to question you about the details of how it is proved.
 
  • #15
This confusion made me think that maybe I have some misconception in my view of probability, so I put my question here.
 
  • #16
chogg said:
OP: are you asking whether the universe is "ultimately" deterministic vs. stochastic?

If so, there is some interesting discussion on a (closed) thread here:
https://www.physicsforums.com/showthread.php?t=384130
The parts I find most interesting are on Page 2.

I still hold the basic position I explained in that thread; we simply do not know whether the universe is ultimately stochastic or deterministic.

Well, I believe you cannot actually generate a truly random sequence of events, because "random" isn't really definable: an unsolvable problem of infinite regress pops up.
 
  • #17
sina89 said:
So can I summarize your statements as "there is consensus today among physicists about intrinsically random states, and they all (or mostly) believe that Einstein's point of view is PROVEN wrong"? If quantum physics actually did prove this,

There is something going on now that may interest you. It seems that some extremely complicated particle-physics calculations (Feynman diagrams) can be done using relatively simple multi-dimensional geometry, via an object called the "amplituhedron". The amplituhedron approach is so much simpler that it might be a profound breakthrough in our understanding. In the amplituhedron picture, space, time, and probability are not fundamental, and "probabilities" do not always add up to 1. So it might change what probability and "random" mean. Maybe nothing is random, as we think of it now.
 
  • #18
FactChecker said:
@sina89, I hope this is not hijacking your OP, but I think it goes to the heart of your question. There were situations in quantum theory where there was a debate as to whether a system was in an internal "hidden state" that we just could not determine (Einstein believed this), or whether its state was really still undetermined. I have been calling the latter case "intrinsically random", but @chogg has explained that this is a system with a density matrix. The thread he points to is very interesting. But I thought that the issue was settled. Experiments have been done where the probabilities of outcomes differ in the two cases (a hidden state versus a still-undetermined state with a density matrix). The results of the experiments showed that the system was not in a "hidden state". It was in no fixed simple state, but rather had a density matrix. I think that this indicates that there are truly random processes.

Quite so: local hidden variable theories have been ruled out.

The (main) reason I say randomness is an open question is this: the time-dependent Schrodinger equation is deterministic. The quantum state of your system evolves smoothly: if you know it at one time, you know it at all times.

Traditionally, the "randomness" is thought to come in during a measurement, where the smooth Schrodinger time evolution is said not to apply. If the system is not in an eigenstate, the measurement leaves it in one, but we can't say for sure which one; all we can do is give probabilities. The problem with this view is that the measurement device is itself a quantum mechanical system (and if it isn't, I would love to know where you obtained it!). If we consider the Schrodinger equation for the composite system -- measuring device plus original system -- the time evolution should once again be smooth.

This seems to contradict our experience of taking measurements. Everett showed, however, that the resulting total quantum state is a superposition of states in a basis we would find useful: something like [itex]\frac{1}{\sqrt{2}} \left( |M_+\rangle |z_+\rangle + |M_-\rangle |z_-\rangle \right)[/itex], where [itex]|M_+\rangle[/itex] means the measuring device registered a '[itex]+[/itex]'. If you observe spin-up, there is another you who observed a spin-down in the same experiment.

This is commonly known as the (ineptly named) "Many Worlds" interpretation of quantum mechanics. It's not universally accepted, but it explains one way we could have an ultimately deterministic universe with the appearance of randomness.
 
  • #19
Statistically, it is impossible to prove the result of any process is truly random. Even a coin that comes up heads 1000 consecutive times still has a much higher probability of being a 'fair' coin than you might suspect. To achieve a 3 sigma confidence interval on a pass - fail basis, you need nearly 1100 consecutive 'pass' outcomes, and there is still about a 1 in 370 chance it was just blind luck.
 
  • #20
Chronos said:
Statistically, it is impossible to prove the result of any process is truly random. Even a coin that comes up heads 1000 consecutive times still has a much higher probability of being a 'fair' coin than you might suspect. To achieve a 3 sigma confidence interval on a pass - fail basis, you need nearly 1100 consecutive 'pass' outcomes, and there is still about a 1 in 370 chance it was just blind luck.

Could you please clarify what you mean? In particular, I don't understand what you mean by "a 3 sigma confidence interval on a pass - fail basis". I'm also not sure which possibilities you're comparing -- is it only a perfectly fair coin and a perfectly biased coin, or are there other intermediate possibilities?
 
  • #21
chogg said:
Quite so: local hidden variable theories have been ruled out.

The (main) reason I say randomness is an open question is this: the time-dependent Schrodinger equation is deterministic. The quantum state of your system evolves smoothly: if you know it at one time, you know it at all times.

Thanks, @chogg. This is fascinating stuff. I wish I understood more about it.
 
  • #22
You and me both! :-)
 
  • #23
3 sigma corresponds to a 99.73% probability that a process output will fall within a certain range of values. For an analog output [measured numerically], this follows a Gaussian distribution; for a discrete output [pass - fail], a binomial distribution. Sigma is the term used to express the probability that an output will fall within certain limiting values. 3 sigma is generally considered a reasonable confidence level for deciding that a process output is, or is not, random. In some applications, 3 sigma is considered pretty sloppy, and higher confidence levels are demanded [e.g. particle physics]. You can characterize the probability that any particular outcome, or series of outcomes, is or is not random, but never with certainty.
 
  • #24
Chronos said:
3 sigma corresponds to a 99.73% probability that a process output will fall within a certain range of values. For an analog output [measured numerically], this follows a Gaussian distribution; for a discrete output [pass - fail], a binomial distribution. Sigma is the term used to express the probability that an output will fall within certain limiting values. 3 sigma is generally considered a reasonable confidence level for deciding that a process output is, or is not, random. In some applications, 3 sigma is considered pretty sloppy, and higher confidence levels are demanded [e.g. particle physics]. You can characterize the probability that any particular outcome, or series of outcomes, is or is not random, but never with certainty.

I'm sorry, but I still don't think you've defined your terms very well, and I still have a hard time understanding what you're trying to say. It sounds like you're saying a coin showing heads 1100 times in a row only has a 99.73% chance to be non-random. If so, you're right that it's surprising and counter-intuitive, but I think intuition is actually correct here!

Let me explain myself more precisely. (After another couple iterations, I'll probably come to understand what you're saying.)

In my view, we've got two models.
  • [itex]\text{fair}[/itex] means that the coin is fair: when flipped, it gives heads (H) with probability 0.5 and tails (T) with probability 0.5.
  • [itex]\text{biased}[/itex] means that the coin has some other probability, [itex]p[/itex], to yield H. Since we don't know what that probability is a priori, we'll assign p a uniform distribution from 0 to 1.

We go and observe some data, and then we compute the probability for each model. That's going to depend on our prior probabilities for each model, [itex]P(\text{fair})[/itex] and [itex]P(\text{biased})[/itex]. Different people can have different priors, so we'll factor them out for now and focus on the likelihood -- i.e., the probability each model assigns to the data you actually saw. In our case, this data was simply [itex]\text{H}^N[/itex]: we saw heads (H) [itex]N[/itex] times in a row.

For the [itex]\text{fair}[/itex] model, the likelihood is simple:
[tex]
P(\text{H}^N | \text{fair}) = \frac{1}{2^N}.
[/tex]

For the [itex]\text{biased}[/itex] model, we consider the likelihood for a given parameter value [itex]p[/itex]; then we integrate over all possible values of [itex]p[/itex]:
[tex]
\begin{align}
P(\text{H}^N | \text{biased}) &= \int\limits_0^1 p^N \,\,dp \\
&= \frac{1}{N + 1}
\end{align}
[/tex]

Note that [itex]P(\text{H}^N | \text{biased})[/itex] drops very slowly compared to [itex]P(\text{H}^N|\text{fair})[/itex], which sinks like a stone. This means that our belief shifts towards the [itex]\text{biased}[/itex] model very rapidly, as we keep observing heads-and-only-heads.

Now let's set [itex]N=1100[/itex]. The odds that the coin is fair are given by
[tex]
\begin{align}
\text{Odds}(\text{fair} | \text{H}^N) &= \frac{P(\text{fair})}{P(\text{biased})} \frac{P(\text{H}^N | \text{fair})}{P(\text{H}^N | \text{biased})} \\
&= \frac{1 - P(\text{biased})}{P(\text{biased})}\frac{N + 1}{2^N}
\end{align}
[/tex]
When the odds are tiny compared to 1, the probability basically equals the odds. I'm going to go out on a limb and assume that [itex]\text{Odds}(\text{fair} | \text{H}^N)[/itex] is indeed tiny.

To be concrete, let's take equal priors, so that [itex]P(\text{biased}) = 0.5[/itex]. Then we have [itex]P(\text{fair} | \text{H}^{1100}) \approx \frac{1101}{2^{1100}} \approx 8.1 \times 10^{-329}[/itex]: this is basically zero. Even if you think the coin has only [itex]1:10^{100}[/itex] prior odds of being biased -- a prior which strains credibility beyond the breaking point -- you will still have only [itex]P(\text{fair} | \text{H}^{1100}) \approx 8.1 \times 10^{-229}[/itex] -- again, basically zero.
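(The number here is small enough to underflow ordinary floating point -- [itex]2^{1100}[/itex] is far beyond a double -- so a quick sketch that checks it has to work in log space. The function name and defaults are my own:)

```python
import math

def log10_odds_fair(n_heads, prior_biased=0.5):
    """log10 of Odds(fair | n_heads heads in a row), under the two models
    above: a fair coin vs. a biased coin with a uniform prior on p."""
    prior_odds = (1 - prior_biased) / prior_biased
    # likelihood ratio: P(H^N | fair) / P(H^N | biased) = (1/2^N) / (1/(N+1))
    return (math.log10(prior_odds)
            + math.log10(n_heads + 1)
            - n_heads * math.log10(2))

# Equal priors: log10-odds ~ -328, i.e. the fair-coin model is dead.
print(log10_odds_fair(1100))
# Even a 1 : 10^100 prior against bias only buys back 100 orders of magnitude.
print(log10_odds_fair(1100, prior_biased=1e-100))
```

The data overwhelm any remotely reasonable prior.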

This is way beyond 3 sigma.

Have I missed something?
 
  • #25
"Thus quantum mechanics is a statistical theory. It can make definite predictions about ensembles of identical systems, but it can generally tell us nothing definite about an individual system. Where it differs from other statistical theories, such as statistical mechanics, weather forecasting or economics, is that the chance element is inherent in the nature of the quantum system and not merely imposed by our limited grasp of all the variables that affect the system."
-- from Paul Davies's introduction to "Physics and Philosophy"

I take chogg's statement "randomness is an open question" as my answer. I'm quite happy with this, because it's very difficult for me to believe that anyone can assert something about the intrinsic randomness of events.
 
  • #26
Randomness is usually connected to entropy, and the highest-entropy distribution over a finite number of states is the discrete uniform.

Entropy is also a function of information in some basis and if something is random then you typically need the maximum amount of information to define that system.

Note that entropy has a lot of similarities to probability as well.

In a purely random (independent) process, conditional probability gives no extra information. If two things are related, then they are related in probability and have a connection. Correlation and covariance give indicators of this, and both are connected with conditional entropies.

Also note that I mention the word basis - like a vector space basis or a function basis. Entropy also has a basis like a function and it has to be taken into account.

You can have a block of information that has very low entropy in one basis and very high entropy in another. For example, you could look at the digits of pi and conclude they are random as raw digits, but then look at a series (sigma) definition of pi and see a lot of order.

The lesson from this is that entropy and randomness are relative to a basis, and if you don't take that into account, you could conclude things about randomness that may not necessarily hold up.

You can always look at randomness objectively with respect to some fixed basis or bases, but you can't do it in an absolute manner, in much the same way that a vector can't be interpreted without an appropriate frame of reference, like a geometric space definition. All information is like this, regardless of what it conveys.
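The link between randomness, entropy, and the number of states can be illustrated with a tiny sketch (the example distributions are my own): over a fixed number of states, the uniform distribution maximizes Shannon entropy, while a certain outcome has zero entropy.

```python
import math

def shannon_entropy(probs, base=2):
    """Shannon entropy H = -sum p log(p), in bits by default."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]
biased  = [0.7, 0.1, 0.1, 0.1]
certain = [1.0, 0.0, 0.0, 0.0]

print(shannon_entropy(uniform))  # 2.0 bits: the maximum for 4 states
print(shannon_entropy(biased))   # less than 2.0: partial predictability
print(shannon_entropy(certain))  # 0.0: no uncertainty at all
```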
 
  • #28
sina89 said:
Maybe this is too basic a question, but it is not so clear to me: when we refer to a random experiment, can a phenomenon be absolutely random by itself, or is it our uncertainty about the outcome that makes us call it random?
Random just means unpredictable. It is subjective. An event may be random to one observer and not random to another.

Then there is the next question: are there "intrinsically random" types of events? These would be types of events that we are sure will never be predictable.

I think not: that is, we can never be sure that certain types of events will never be predictable. But...who knows? It's just my opinion.
 
  • #29
Hornbein said:
I think not: that is, we can never be sure that certain types of events will never be predictable. But...who knows? It's just my opinion.
Erwin's pet tells another story.
 
  • #30
mpresic said:
This is not an easy question at all. I read a story somewhere (I do not have the reference) where astronomers needed to select a random star. First, one astronomer proposed using a computer to generate a random point (right ascension and declination), then looking at a sky map to find the nearest star to that point. One would think that would give a random star. Another astronomer proposed labeling all the stars from 1 to 1 trillion (or so), then having the computer generate a random number; the star corresponding to that number would be the random star. A third astronomer chose some other procedure (I do not remember which).

The point is that these are three different algorithms which will lead to far different results. Another illustration from probability textbooks (e.g. Papoulis, Probability, Random Variables, and Stochastic Processes) is labeled Buffon's needle problem. Three different ways to implement "random" lead to three different probabilities (1/4, 1/3, or 1/2) that a "random" chord generated on a circle is longer than the side of an equilateral triangle inscribed in that circle. All three answers can be interpreted as correct.

This is the height of insanity. How far will people go for randomness?
 
  • #31
sina89 said:
Thank you all for the replies. I am new here and I didn't know where to post my question. I wanted to know what people exactly mean when they use the word RANDOM in physics.
I think there is one more thing to say about "random" as it is used in physics, and I'll try to describe it as non-technically as possible. I will use a binary measurement as an example: a measurement that results in one of two outcomes. A QM experiment can be set up where, "locally", the result of the measurement is entirely unpredictable, as if a bit of information had been added to the universe, completely unknown and unknowable to that measuring site. However, if the state being measured is entangled, then another QM experimenter some distance away may be discovering that same information.

Now if these two sites compare their results, each will see the other's results as completely predictable: a copy of their own result.
But if they never compare results, each might think they have an original and unique string of random bits.

In general, there is no apparent "entanglement", and so there is no possibility of comparing results with another measuring site, and so QM measurements are commonly taken to be "random" in the "original and unique" sense of the term. Moreover, there are QM experiments that convincingly demonstrate that specific information about the results of future measurements is unavailable until the measurement is made; in that sense, "random". But at this point we still don't know whether QM results are truly random in the sense that they are determined only by "luck", or even ever determined by "luck".
 

