How might determinism affect probability?

In summary, the probability for red to appear at roulette is 18/37 and the probability for non-red to appear is 19/37. In a deterministic universe, the actual probabilities for a spin that is determined to come up red would be 1 for red and 0 for non-red. However, this is a philosophical question, and there are two schools of thought regarding probabilities - the frequentists and the Bayesians. The frequentists view probability as the fraction of results after an infinite number of trials, making it undefined for a single spin. The Bayesians view probability as a measure of belief, and it is well defined for a single event. In terms of the roulette wheel, the frequentists would say that the probability for red to appear is 1 only with respect to an infinite set of trials in which red is determined to appear every time.
  • #36
cliffhanley203 said:
“The mathematical probability of an event has no definite connection with the non-mathematical concepts of "happening" or "not happening".”

Q. What do you mean by no “definite” connection?

Q. What connection does it have with those concepts?
In this context, it is worth thinking about @Dale 's post #31. Suppose that a number has been selected from a continuous distribution. Then something has happened. Call that selected number r. Then r has "happened" but its probability is 0. That exact number, r, will never happen again.
 
Last edited:
  • Like
Likes cliffhanley203
  • #37
FactChecker said:
In this context, it is worth thinking about @Dale 's post #31. Suppose that a number has been selected from a continuous distribution. Then something has happened. Call that selected number r. Then r has "happened" but its probability is 0. That exact number, r, will never happen again.
Also, the complement of r has probability 1.
 
  • Like
Likes cliffhanley203
  • #38
cliffhanley203 said:
Where is my fundamental misunderstanding?
This is what you wrote:
Wouldn’t the P of it not happening be zero if it literally cannot not happen

Yes, your example corrects this statement.
Q. What do you mean by no “definite” connection?

Assigning an event a probability does not permit us to make statements about the event happening or not happening unless those statements also speak of probabilities. If we say Pr("The coin lands heads") = 1/2, this does not imply the statement "The coin will land heads 50 out of 100 times".
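
A quick numerical sketch of that last point (Python; the function name is just illustrative, not from this thread): with a fair coin, Pr("The coin lands heads") = 1/2, yet the chance of seeing exactly 50 heads in 100 tosses is only about 8%.

```python
from math import comb

# Probability of exactly k heads in n tosses of a coin with P(heads) = p (binomial distribution).
def prob_exact_heads(n: int, k: int, p: float = 0.5) -> float:
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Pr(heads) = 1/2 does not imply "50 heads out of 100 tosses":
# the chance of exactly 50 out of 100 is only about 0.0796.
print(f"P(exactly 50 heads in 100 tosses) = {prob_exact_heads(100, 50):.4f}")
```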

As another example, there is a deductive principle in logic known as modus ponens that says:

Given:
If A then B
A is true
Conclude:
B is true

There is no principle in logic that allows the argument:

Given:
If A then B
A has probability 1 of being true
Conclude:
B is true

In applying probability to specific problems, people may assume the latter argument to be valid, but the latter argument is not a general principle of logic. It may be valid in specific situations and specific interpretations of probability models.

Q. What connection does it have with those concepts?

Assigning probabilities to whether events happen allows us to make deductions about the probabilities of other events happening.
 
  • Like
Likes cliffhanley203
  • #39
“Yes, your example corrects this statement.”

So, just to be clear; using the classical interpretation in a deterministic world where it’s been determined that the ball will land in a red pocket the P of red is;

P(r) = 1.

P(r complement) = 1-1=0 ?
 
  • #40
cliffhanley203 said:
So, just to be clear; using the classical interpretation in a deterministic world where it’s been determined that the ball will land in a red pocket the P of red is;

P(r) = 1.
Yes, that is the usual interpretation for the probability of an event that definitely must happen.

P(r complement) = 1-1=0 ?

Yes.
 
  • #41
“Assigning an event a probability does not permit us to make statements about the event happening or not happening unless those statements also speak of probabilities.”

Even if the P is 1? Or if the P is 0? I was under the impression that if the P is 1 it must happen, and if the P is 0 it can’t happen; because if the P is said to be 1 and it doesn’t happen then the assertion that the P was 1 was incorrect; and if the P is said to be zero and it does happen then the assertion that the P was zero was incorrect.

“If we say Pr("The coin lands heads") = 1/2, this does not imply the statement "The coin will land heads 50 out of 100 times".”

I get that. In the short run we could see 60 heads, 40 tails, or 80 heads, 20 tails, or even 100 heads, 0 tails. But the more we toss the coin the closer we will get to the expected value. Which is demonstrated by the profitability of roulette for casinos. I even hear mathematicians say that if you tossed the coin an infinite number of times you would get exactly 50% heads, 50% tails. Or if you spun the roulette wheel an infinite number of times you would get exactly 18/37 red, 19/37 non-red. Although that seems to me to be impossible to know so I don’t know how they would justify that assertion.
 
  • #42
cliffhanley203 said:
Even if the P is 1? Or if the P is 0? I was under the impression that if the P is 1 it must happen, and if the P is 0 it can’t happen; because if the P is said to be 1 and it doesn’t happen then the assertion that the P was 1 was incorrect; and if the P is said to be zero and it does happen then the assertion that the P was zero was incorrect.

If the probability is 1, it means you should definitely bet on it happening, but it doesn't mean that it must happen. Similarly, probability 0 means that it almost surely will not happen, but not that it is impossible to happen.

Think about flipping a coin repeatedly. It's not impossible to get all "heads", but it's vanishingly unlikely.
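
A small numeric sketch of "vanishingly unlikely" (Python; the run lengths are arbitrary choices of mine, assuming a fair coin): the probability of an unbroken run of n heads is 0.5^n, positive for every finite n but shrinking toward zero.

```python
# Probability of n heads in a row with a fair coin: 0.5 ** n.
# It is never exactly zero for finite n, but it tends to zero as n grows.
for n in (10, 50, 100, 1000):
    print(f"P({n} heads in a row) = {0.5 ** n:.3e}")
```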
 
  • Like
Likes cliffhanley203
  • #43
FactChecker said:
In this context, it is worth thinking about @Dale 's post #31. Suppose that a number has been selected from a continuous distribution. Then something has happened. Call that selected number r. Then r has "happened" but its probability is 0. That exact number, r, will never happen again.

But its probability wasn't zero prior to it happening; that would be impossible. If, in a deterministic world, it is determined that red will appear, the P of non-red appearing is (using the classical interpretation) zero. In this case non-red can't happen (if it does, then it wasn't determined that red would appear).

"Then r has "happened" but its probability is 0. That exact number, r, will never happen again"

Shouldn't that be: Then r has happened, and now that it's happened and will never happen again its probability is 0?
 
  • #44
“No, a standard normal random variable has a normal (bell-shaped curve) distribution with mean 0 and standard deviation 1”

Thanks, Dale, but that’s a bit advanced for me at the moment.

“So p=0 doesn't necessarily mean that it cannot happen.”

Is this an instance of the P of something being zero prior to it happening, and then it happening? If so, wouldn’t its happening contradict the earlier assertion that its P was zero?
 
  • #45
stevendaryl said:
If the probability is 1, it means you should definitely bet on it happening, but it doesn't mean that it must happen. Similarly, probability 0 means that it almost surely will not happen, but not that it is impossible to happen.

Think about flipping a coin repeatedly. It's not impossible to get all "heads", but it's vanishingly unlikely.
Thanks, Steven. But I'm talking about a deterministic world where it's been determined that red will win. If, in a deterministic world, it was determined that the next toss of a coin would come up heads, then the P for heads is 1, the P for tails is 0, yes? And heads must happen; tails can't happen (anything else would contradict the fact that it was determined to be that way).
 
  • #46
cliffhanley203 said:
“I even hear mathematicians say that if you tossed the coin an infinite number of times you would get exactly 50% heads, 50% tails.

Let's hope not. It's well known that equalizing (i.e. having exactly ##\text{count} = \text{number of heads} - \text{number of tails}= 0##) will happen with probability 1, and hence infinitely many times over these trials. However the expected amount of time until equalization (renewal) is ##\infty## which immediately tells you that the probability of being equalized at any time tends to zero. Infinity is a delicate business.

So suppose a different interpretation of the statement is needed -- they may have been referring to averaging many different trials' counts -- and that average tends to a count of zero. Or they may have been averaging (read: Cesàro mean) the scores of the trials you are quoting, then applying the strong law of large numbers (I suspect that this is what is being done, though the statement has a different interpretation as a symmetric random walk).
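
A hedged Monte Carlo sketch of both readings (Python; the toss count and seed are arbitrary choices of mine, not from the post): the running fraction of heads settles near 1/2 in line with the strong law of large numbers, the walk does keep equalizing, yet the fraction of time spent exactly equalized is small and shrinks as the run lengthens.

```python
import random

# One long run of fair-coin tosses viewed as a symmetric random walk.
random.seed(0)
N_TOSSES = 1_000_000

heads = 0
walk = 0           # running count = (number of heads - number of tails)
equalizations = 0  # number of times the walk returns to exactly 0

for t in range(1, N_TOSSES + 1):
    if random.random() < 0.5:
        heads += 1
        walk += 1
    else:
        walk -= 1
    if walk == 0:
        equalizations += 1

print(f"fraction of heads        = {heads / N_TOSSES:.4f}   (tends to 0.5 by the SLLN)")
print(f"number of equalizations  = {equalizations}")
print(f"fraction of time at zero = {equalizations / N_TOSSES:.6f}  (tends to 0)")
```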

- - - -
As for your other questions -- impossible events have probability zero. But not all probability zero events are impossible. There are a lot of subtleties related to infinity that keep coming up in these questions, over and over. Bottom line -- wielding infinity properly takes a lot of work and care.

- - - -
Btw, even if you discover the world is 'truly deterministic', I don't think that is going to change how your car insurance, life insurance, etc. is priced.
 
Last edited:
  • Like
Likes cliffhanley203 and Dale
  • #47
cliffhanley203 said:
Thanks, Steven. But I'm talking about a deterministic world where it's been determined that red will win. If, in a deterministic world, it was determined that the next toss of a coin would come up heads, then the P for heads is 1, the P for tails is 0, yes? And heads must happen; tails can't happen (anything else would contradict the fact that it was determined to be that way).

Sorry, I missed the context. It seems to me that there is no point in talking about probability if everything has probability 0 or 1. But yes, if things are deterministic, then an event is either definitely going to happen, with probability 1, or is definitely not going to happen, so it has probability 0.

On the other hand, even in a deterministic universe, we could use probability to quantify our lack of information about the current state of the world, in which case we would introduce probabilities that are neither 0 nor 1, and we would also introduce the possibility of something being given a subjective probability of 0 even though it wasn't actually impossible.
 
  • Like
Likes cliffhanley203
  • #48
cliffhanley203 said:
Even if the P is 1? Or if the P is 0?
You are correct that a common way to apply probability to real life situations is to assign a probability of 1 to events that must happen and a probability of zero to events that cannot happen.

(As I keep saying, the mathematics of probability theory does not deal with the topic of events "actually happening" or "actually not happening". That subject is a matter of interpreting the mathematical theory when applying it to specific situations. By analogy, the mathematical theory of trigonometry does not make specific claims about ladders, distances between cities, heights of trees etc. Those topics involve interpreting the mathematical theory when applying it to specific situations.)

The difficulties in interpreting probabilities of 0 or 1 arise in applying probability theory to outcomes from an infinite set of values. Your roulette examples have only a finite number of outcomes, so interpreting probabilities of 0 or 1 in those examples is not controversial.
 
  • #49
cliffhanley203 said:
So, just to be clear; using the classical interpretation in a deterministic world where it’s been determined that the ball will land in a red pocket the P of red is;
The classical interpretation of P is not defined in such a case. I feel like this is getting annoyingly repetitive.

The classical interpretation of probability is the ratio of successes (red) over the course of an infinite number of trials. It simply is not defined for a sample space consisting of a single unique event. There is no classical interpretation of probability which can apply to this case.

If you wish to extend this to a scenario where the classical interpretation of probability does apply then you will need to extend it beyond a single determined roulette spin to an infinite set of determined spins. Then the question is which infinite set of spins are you considering? Are you considering an infinite set where every spin is determined to be red, or are you considering an infinite set where every spin is determined but what it is determined to be varies? The classical probability does not depend on the determinism, only on the long-run frequency in the infinite set under consideration.
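
As a hedged illustration of that last question (Python; the particular deterministic rule and the pocket labelling are my own assumptions): an infinite set of fully determined spins can still have a long-run red frequency of 18/37, and it is that frequency, not the determinism, that the long-run interpretation tracks.

```python
# A fully determined sequence of European-roulette outcomes:
# spin i deterministically lands in pocket i mod 37, and 18 of the 37 pockets count as "red".
# The long-run frequency of red is 18/37 even though nothing here is random.
RED_POCKETS = set(range(1, 19))   # stand-in labelling: call pockets 1..18 "red"
N_SPINS = 370_000

reds = sum(1 for i in range(N_SPINS) if (i % 37) in RED_POCKETS)
print(f"long-run frequency of red = {reds / N_SPINS:.6f}  (compare 18/37 = {18 / 37:.6f})")
```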

cliffhanley203 said:
I was under the impression that if the P is 1 it must happen, and if the P is 0 it can’t happen;
I gave an explicit counterexample where the thing that happened had a probability of 0 and the thing that had a probability of 1 did not happen.

cliffhanley203 said:
But its probability wasn't zero prior to it happening; that would be impossible.
Yes, it was. That is the whole point.

cliffhanley203 said:
Thanks, Dale, but that’s a bit advanced for me at the moment.
Then maybe you should focus on the basic mechanics of using probability before you get too deep into the philosophy. I can’t help but feel that you are tying yourself in mental knots here.

https://en.m.wikipedia.org/wiki/Normal_distribution

cliffhanley203 said:
Is this an instance of the P of something being zero prior to it happening, and then it happening?
Yes.

cliffhanley203 said:
If so, wouldn’t its happening contradict the earlier assertion that its P was zero?
No. In the frequentist case probability is defined as the ratio over an infinite number of trials, and that ratio would still converge to 0. In the Bayesian case it represents our prior subjective belief. Either way the actual result does not contradict the assigned probability.
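
A minimal sketch of that frequentist reading (Python; floating-point draws only approximate a truly continuous distribution, so this is an illustration rather than a proof): a value r from a continuous uniform distribution "happens", yet the observed frequency of drawing exactly r again stays at zero.

```python
import random

# A number r drawn from (an approximation of) a continuous uniform distribution "happens",
# yet the long-run fraction of later draws that hit exactly r stays at 0.
random.seed(1)
r = random.random()      # the value that actually happened
N_TRIALS = 1_000_000

hits = sum(1 for _ in range(N_TRIALS) if random.random() == r)
print(f"r = {r}")
print(f"fraction of {N_TRIALS} later draws equal to r: {hits / N_TRIALS}")
```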
 
Last edited:
  • Like
Likes cliffhanley203
  • #50
Looking at many comments on this thread it's interesting how deeply ingrained the idea of determinism is in (non QM) physics.

I don't agree with this at all. It seems to ignore the theoretical unknowability of all the information associated with a complex dynamic system, such as weather or human behaviour.

Those who think physics can predict a coin toss seem to assume that they are able to wait until the coin has been tossed. But, you only have to postulate a future coin toss with unknowns - where and precisely when the toss is to take place, the choice of coin, and the choice of who tosses the coin - to rule out the physics solution.

And, if you believe you could use physics to predict all future human behaviour, then how do you gather the information without changing the people involved?

It's the same with weather systems. Whatever equipment you put into the atmosphere to measure things becomes part of the system and itself needs to be measured. Plus, the weather is fundamentally affected by the Sun. So, you'll need probes throughout the Sun to predict the weather on Earth.

And, of course, you cannot have probes everywhere, as then the Sun and the Earth would be literally nothing but those probes.

The absolute determinism postulated in this thread is a theoretical impossibility. Probability remains at the heart of physical phenomena, even without the atomic and subatomic uncertainties modeled by QM.
 
  • #51
PeroK said:
it's interesting how deeply ingrained the idea of determinism is in (non QM) physics.

I don't agree with this at all
I think that the question is still reasonable (even though the answer seems to be ignored). Determinism is certainly part of classical mechanics, and probability is certainly used to model classical systems.
 
  • #52
@PeroK , I would call your examples "practically unknowable" rather than "theoretically unknowable". Whereas there are too many factors to determine practically, any one of them can be determined, in theory. I think that the only "theoretically unknowable" aspects are those that are intrinsically random, like quantum phenomena.

That being said, I like to think that probability is the theory of guessing, given incomplete information. That avoids the issue of why the information is incomplete. It also allows the application of probabilities to things that have already happened and can be determined, but are still unknown (like a coin already tossed whose result is hidden.)
 
  • #53
PeroK said:
Looking at many comments on this thread it's interesting how deeply ingrained the idea of determinism is in (non QM) physics.

I don't agree with this at all. It seems to ignore the theoretical unknowability of all the information associated with a complex dynamic system, such as weather or human behaviour.

Well, those are the two different interpretations of probability at play. A system can be modeled as a stochastic process, where the next state is only probabilistically connected to the current state. Or a system can be modeled as a deterministic process, but may depend on parameters that are not perfectly known. Both ways of modeling the system can give rise to probabilistic predictions. But when people say that a system is deterministic, I think they mean that it is not stochastic. It might still be unpredictable (because of lack of perfect knowledge of the current state), so probabilities may still be appropriate.
 
  • Like
Likes FactChecker
  • #54
Yes, there are difficulties with the idea that a physical system can be deterministic, but compare those to the difficulties with the idea that a system is "probabilistic". It is unclear how to define probability as a physical phenomenon that has an objective existence. And if we resort to the concept that probability is a "measure" of information or ignorance, then how do we define information or ignorance? The way that approach ends up is circular - probability is a measure of information or ignorance, and information or ignorance is something measured by probability.

Perhaps we should discuss this in the thread https://www.physicsforums.com/threads/how-to-better-define-information-in-physics-comments.949993/
 
  • #55
Regardless of the theoretical difficulties involved, we should not give up the ability to treat a future coin toss as a 50/50 probability. We should also not give up the ability to treat a coin toss which has already happened but whose outcome is unknown as a 50/50 probability, even though the outcome is already completely determined. Any satisfactory theory must allow those at least.
 
  • Like
Likes Dale
  • #56
FactChecker said:
We should also not give up the ability to treat a coin toss which has already happened but whose outcome is unknown as a 50/50 probability, even though the outcome is already completely determined.
This is a great point. It shows that determinism does not need to impact probability.

In the frequentist approach this would be considered to be the frequency of an infinite number of coins already tossed but not examined. In the Bayesian approach it would simply be our subjective assessment prior to learning the data.
 
Last edited:
  • Like
Likes FactChecker
  • #57
“Let's hope not [that some mathematicians say that if you tossed the coin an infinite number of times you would get exactly 50% heads, 50% tails].”

Thanks, I will challenge the next one that tells me that.

“It's well known that equalizing (i.e. having exactly count = number of heads − number of tails = 0) will happen with probability 1, and hence infinitely many times over these trials. However the expected amount of time until equalization (renewal) is ∞ which immediately tells you that the probability of being equalized at any time tends to zero. Infinity is a delicate business.”

I Googled equalizing but I wasn’t sure from the results which one applied to what you said. What do you mean here by equalizing?

“So suppose a different interpretation of the statement is needed -- they may have been referring to averaging many different trials' counts -- and that average tends to a count of zero.”

Looking at actual data, from real-life spins of the wheel, or tosses of the coin? Then averaging them? Does ‘tends to a count of zero’ mean gets closer and closer to zero as more and more trials are performed?

“As for your other questions -- impossible events have probability zero.”

So on a European roulette wheel the P of the ball landing in a blue pocket marked 37 is 0?

And in a world where it is determined that the ball will land in a red pocket the P of it landing in a non-red pocket is 0?

“But not all probability zero events are impossible.”

Yes, Dale kindly, and patiently, explained this, but his example was too advanced for me (I’m, as you will probably have gathered, a novice). Can you give a simple example of an event with a P of zero that is possible?

“There are a lot of subtleties related to infinity that keep coming up in these questions, over and over. Bottom line -- wielding infinity properly takes a lot of work and care.”

Yes, I’m coming to realize that. The notion of “approaching infinity” as regards spins of a roulette wheel, for example, seems to me problematic. If we run larger and larger numbers of trials, say a trillion, then a quadrillion, all the way up to the largest numbers that we have names for, then run further trials that require us to come up with new names for the numbers (numbers that dwarf the numbers that we already have names for), we are no closer to infinity than had we run a mere 37 trials. Once we get to a ‘gazillion’ trials, infinity is still infinitely far away. What is infinity minus 1? What is infinity minus a quadrillion?

Also, I’ve read that Cantor showed that there are various infinities, and demonstrated it using pure maths, but it seems to be problematic to say that one infinity can be larger than another. Take the cherries-in-a-bowl example. If you have an infinite number of bowls with 10 cherries in each, do you have more cherries than bowls? It appears that you have 10 times as many cherries as bowls, yet you have an infinite number of bowls, so...(?)
- - - -

“Btw, even if you discover world is 'truly deterministic' I don't think that is going to change how your car insurance, life insurance, etc. is priced.”

:eek:)
 
  • #58
Dale said:
In the Bayesian approach it would simply be our subjective assessment prior to learning the data.
I also think it is intellectually satisfying to think of it as the "theory of guessing, given incomplete information" when Bayes' Rule is used to handle a mixed situation where the outcome is unknown but some information is given (Bayesian updating).
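
A small worked sketch of that kind of Bayesian updating (Python; the witness-report setup and its numbers are my own illustrative assumptions, not from the thread): the coin's outcome is already fixed, yet the probability assigned to "heads" moves away from the 50/50 prior as information arrives.

```python
# Bayesian updating for a coin already tossed but hidden.
# Prior belief: fair coin. Evidence: a witness who peeked and is right 80% of the time says "heads".
prior_heads = 0.5
p_says_heads_given_heads = 0.8   # witness reports "heads" when it really is heads
p_says_heads_given_tails = 0.2   # witness (wrongly) reports "heads" when it is tails

# Bayes' rule: P(heads | report) = P(report | heads) * P(heads) / P(report)
p_report = (p_says_heads_given_heads * prior_heads
            + p_says_heads_given_tails * (1 - prior_heads))
posterior_heads = p_says_heads_given_heads * prior_heads / p_report

print(f"posterior probability of heads given the report: {posterior_heads:.2f}")  # 0.80
```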
 
  • #59
FactChecker said:
I also think it is intellectually satisfying to think of it as the "theory of guessing, given incomplete information" when Bayes' Rule is used to handle a mixed situation where the outcome is unknown but some information is given (Bayesian updating).
Yes, although I think the Bayesians prefer “reasoning under uncertainty” over “guessing, given incomplete information”. At least that is the terminology I have seen used.
 
  • Like
Likes FactChecker
  • #60
cliffhanley203 said:
Yes, Dale kindly, and patiently, explained this, but his example was too advanced for me (I’m, as you will probably have gathered, a novice). Can you give a simple example of an event with a P of zero that is possible?

Suppose you start flipping a coin. It's possible that you will always get "heads", although the probability of that is 0%.
 
  • #61
“It seems to me that there is no point in talking about probability if everything has probability 0 or 1.”

I think I know what you mean but I’ll ask why anyway?

“But yes, if things are deterministic, then an event is either definitely going to happen, with probability 1, or is definitely not going to happen, so it has probability 0.”

That’s what I thought was the case but I’m not so sure anymore given the various interpretations (classical; frequentist; Bayesian).

“On the other hand, even in a deterministic universe, we could [use] probability to quantify our lack of information about the current state of the world, in which case we would introduce probabilities that are neither 0 nor 1...”

But in doing so we would be, technically, incorrect to say that an event had a P of neither 0 nor 1 when it actually does have a P of either 0 or 1 (objectively speaking).

“...and we would also introduce the possibility of something being given a subjective probability of 0 even though it wasn't actually impossible.”

Yes, a subjective P (but the objective P would be fixed as 0 or 1).
 
  • #62
Stephen Tashi said:
You are correct that a common way to apply probability to real life situations is to assign a probability of 1 to events that must happen and a probability of zero to events that cannot happen.

So there are basically two types of probability. The probability of pure maths (theoretical only, like a Euclidean line – infinite in length in both directions). And there’s applied probability (casinos realising that they can’t lose provided they get the punters' money on the table for long enough to see the expected value of their house edge)?

“(As I keep saying, the mathematics of probability theory does not deal with the topic of events "actually happening" or "actually not happening". That subject is a matter of interpreting the mathematical theory when applying it to specific situations. By analogy, the mathematical theory of trigonometry does not make specific claims about ladders, distances between cities, heights of trees etc. Those topics involve interpreting the mathematical theory when applying it to specific situations.)”

But trigonometry works when applied to ladders, distances etc, yes? As does probability applied to roulette, yes?
 
  • #63
cliffhanley203 said:
Can you give a simple example of an event with a P of zero that is possible?

Imagine a continuous roulette wheel where instead of pockets, there is a smooth track of length 37. If we want to say there is an equal probability of the center of the ball stopping anywhere, we can assign a probability of 1/37 to it landing between 6 and 7, a probability of 1/74 of it landing between 6 and 6.5, etc. However, we must assign a probability of zero to the event that it lands exactly on a given point such as 6.5. Yet (in theory) it does land exactly at some point.
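
A short numeric sketch of that model (Python, following the post's uniform distribution on a track of length 37): every interval around the point 6.5 gets probability length/37, and shrinking the interval drives that probability toward 0, even though the ball does stop at some exact point.

```python
# Uniform landing point on a continuous track of length 37:
# P(land in [a, b]) = (b - a) / 37, so nested intervals around 6.5 have probabilities tending to 0.
def p_interval(a: float, b: float, track_length: float = 37.0) -> float:
    return (b - a) / track_length

width = 1.0
while width >= 1e-6:
    a, b = 6.5 - width / 2, 6.5 + width / 2
    print(f"P(land in [{a:.7f}, {b:.7f}]) = {p_interval(a, b):.3e}")
    width /= 10
```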
 
  • #64
cliffhanley203 said:
But trigonometry works when applied to ladders, distances etc, yes? As does probability applied to roulette, yes?

That wouldn't show that the facts of life in roulette determine the facts of life about probability. The fact that theory A empirically works when interpreted as real world situation B does not imply that the two are the same thing - or that situation B is the only way to interpret theory A.

There is an overwhelming desire (even among contributors to this thread who understand the Kolmogorov formulation of probability theory) to cast probability theory as a theory covering topics such as information or guessing or situations where events actually happen or don't happen, or must happen in an infinite number of trials, etc. However, none of these topics are covered by the mathematical theory of probability. All these topics involve interpreting probability in the context of some real or imagined situation.
 
  • Like
Likes Dale
  • #65
Stephen Tashi said:
There is an overwhelming desire (even among contributors to this thread who understand the Kolmogorov formulation of probability theory) to cast probability theory as a theory covering topics such as information or guessing or situations where events actually happen or don't happen, or must happen in an infinite number of trials, etc. However, none of these topics are covered by the mathematical theory of probability.
If you mean that probability theory does not concern itself with several of these issues and is agnostic, then I agree. But if you are saying that probability theory does not apply in these situations, then I'm afraid that I have to disagree.
 
Last edited:
  • #66
Stephen Tashi said:
Imagine a continuous roulette wheel where instead of pockets, there is a smooth track of length 37. If we want to say there is an equal probability of the center of the ball stopping anywhere, we can assign a probability of 1/37 to it landing between 6 and 7, a probability of 1/74 of it landing between 6 and 6.5, etc. However, we must assign a probability of zero to the event that it lands exactly on a given point such as 6.5. Yet (in theory) it does land exactly at some point.

Or, in theory, the ball is never truly at rest. Or, in theory, beyond a certain accuracy the centre of the ball is not physically well-defined.

In a way, QM comes to the rescue. We could reduce the ball to a single elementary particle: it's not possible to know where the particle is without measuring it. And, any measurement has its margin of error.

And, the argument that at some time ##t## the particle was definitely at some physical location ##x## also fails.

If you don't like the idea that everything that is happening has probability zero, then in a way QM comes to the rescue.

In general, you cannot pretend that classical physics ultimately extends to point-like particles and point-like positions. With these arguments you have to take into account a non-classical, sub-atomic theory. Namely, QM.
 
  • #67
FactChecker said:
If you mean that probability theory does not concern itself with several of these issues and is agnostic, then I agree.
That's what I mean.
 
  • Like
Likes FactChecker
  • #68
cliffhanley203 said:
“Let's hope not [that some mathematicians say that if you tossed the coin an infinite number of times you would get exactly 50% heads, 50% tails].”

Thanks, I will challenge the next one that tells me that.

Just ask the person to clarify what they mean -- it can be interpreted a couple of ways (or more).

cliffhanley203 said:
“It's well known that equalizing (i.e. having exactly count = number of heads − number of tails = 0) will happen with probability 1, and hence infinitely many times over these trials. However the expected amount of time until equalization (renewal) is ∞ which immediately tells you that the probability of being equalized at any time tends to zero. Infinity is a delicate business.”

I Googled equalizing but I wasn’t sure from the results which one applied to what you said. What do you mean here by equalizing?...

I defined equalizing as having the number of heads exactly equal to the number of tails. It isn't a technical term per se (renewal would be), so you can call having an equal count something else if you want.
cliffhanley203 said:
Yes, Dale kindly, and patiently, explained this, but his example was too advanced for me (I’m, as you will probably have gathered, a novice). Can you give a simple example of an event with a P of zero that is possible?

@stevendaryl 's coin tossing example seems fine here to me. If you want to unravel this a bit -- suppose you toss a coin with probability ##p## of heads, with ##p \in (0,1)##. Now toss the coin ##n## times. What's the probability that you see no tails? That is, every toss is a "heads": ##p^n##. Now let ##n \to \infty## and you see that the probability of all heads tends to zero.
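
A quick numeric check of that limit (Python; the sample values of ##p## and ##n## are arbitrary): ##p^n## is positive for every finite ##n## but heads toward zero, even for a heavily biased coin.

```python
# p ** n for a few p in (0, 1): positive for every finite n, but tending to 0 as n grows.
for p in (0.5, 0.9):
    for n in (10, 100, 1000):
        print(f"p = {p}: P(all {n} tosses are heads) = {p ** n:.3e}")
```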

I think a lot of people on this forum like Morin's "Probability: For the Enthusiastic Beginner" -- maybe worth looking into getting this -- it's relatively light on math background and has solutions to problems.
 
  • #69
Dale said:
If you wish to extend this to a scenario where the classical interpretation of probability does apply then you will need to extend it beyond a single determined roulette spin to an infinite set of determined spins. Then the question is which infinite set of spins are you considering? Are you considering an infinite set where every spin is determined to be red, or are you considering an infinite set where every spin is determined but what it is determined to be varies?
@cliffhanley203 if you are interested in the classical interpretation of probability for your example then you must answer the above question
 
  • #70
Two cases where, IMO, determinism fails are at opposite extremes: very small (quantum mechanics) and very large (entropy). In QM, Bell's theorem rules out local "hidden variable" theories. And entropy, by its very nature, deals with the probabilities of configurations of a very large number of items. I don't think that there is any way to use "deterministic" analysis to explain either one.
 
