Differences between binomial distribution and forced probability distribution

In summary, the thread discusses the differences between the binomial distribution and a "forced" probability distribution, explained with an example of going to the gym. The question posed is whether the two cases can be distinguished just by observing statistics; the replies point toward Markov chains and stochastic processes as the relevant mathematical tools.
  • #1
Daaavde
Differences between binomial distribution and "forced" probability distribution

Hi everyone.

Yesterday I was thinking about probability in real life, and about the fact that we always expect events to behave according to probability theory.
If we flip a coin and get heads 6 times in a row, we get suspicious.

So what happens if we "force" probabilities whenever we temporarily see them fluctuate, and "make" them behave according to the binomial distribution?

For example, say that in a given month I go to the gym on 50% of the days.
One day I go, the next two I don't, but on the fourth day the chance is no longer 50%: I'm afraid I'm starting to skip the gym, so now it's 60% (I feel guilty and don't want to fall into the habit of skipping).

If the week starts and I don't go on Monday, Tuesday, Wednesday, and Thursday, the chance I go on Friday is now 70%, and so on. But whenever I do go to the gym, the probability resets to 50% (I feel I've done my duty).

It's just a silly example, but I hope you got the idea.

So this is the "forced probability" model:
If there's a failure -> the chance of winning increases by (let's say) 10%
If there's a win -> the chance of winning goes back to 50%

My question is: is it possible to discern between the two cases just by observing statistics?
What happens if, in physics, some phenomenon seems to act like coin flipping while it actually behaves like "forced probability"?

I wrote a program in C and saw empirically that a model with a 36% chance of winning, increased by 20 percentage points after every failure, shows a distribution very similar to the 50%-50% model (mean value: 0.501758, standard deviation: 0.00338905).
An even closer match could probably be obtained using floating-point numbers (for simplicity, the program stores the probability as an integer between 0 and 100; a more refined approach would likely give better results).
If anyone is interested, I can send them the program.

I could have tried a mathematical approach, but honestly I don't feel like wrestling with differential equations right now. That's why I'm posting this here.

So, do you think it's possible that by combining the initial probability and the increase rate, the two distributions can be made indistinguishable by statistical methods?

(I apologize for my poor English; I'm not a native speaker.)
 
  • #2


You haven't described what data is available for statistical testing. For example, is the data one long series that begins at trial 1? Or do you have several series of trials, each series beginning at trial 1?

Statistics doesn't "discern" differences with certainty. Sometimes you can find a test that has a given probability of making the correct decision. In many cases, you don't even know that probability.

Suppose you have several series of coin-flip trials. You can pick out the series that begin with the pattern H H H *, where '*' means either 'H' or 'T' on the next flip. If the trials are independent flips of a fair coin, you might base a test on whether the fraction of those series beginning H H H T is about 1/2. If that fraction is much more or less than 1/2, the test could decide that the flips were not independent. The exact details depend on how you define "discerning" and on what data you have.
 
  • #3


Daaavde said:
So, do you think it's possible that by combining the initial probability and the increase rate, the two distributions can be made indistinguishable by statistical methods?

Sure. This has to do with stochastic processes. It would be easy to tell. It is a bit much to teach here though.
 

1. What is a binomial distribution?

A binomial distribution is a probability distribution that models the number of successes in a fixed number of independent trials. It is characterized by two parameters: the number of trials n and the probability of success p in each trial.

2. How is a binomial distribution different from a forced probability distribution?

A binomial distribution assumes independent trials with a constant success probability. In the "forced" model discussed in this thread, the success probability depends on the history of previous outcomes: it increases after each failure and resets after a success. The trials are therefore not independent, and the process is a Markov chain rather than a sequence of identical Bernoulli trials.

3. What are some real-life examples of binomial distribution?

Examples of the binomial distribution arise when flipping a coin a fixed number of times, counting how often a die shows a six, or running a series of medical tests with a binary outcome (e.g. positive or negative).

4. How are the parameters of a binomial distribution determined?

The parameters of a binomial distribution, the number of trials n and the success probability p, are determined by the specific situation being modeled. For example, in a coin-flipping scenario, n is the number of times the coin is flipped, and p is 0.5 on each trial for a fair coin.

5. Can a forced probability distribution be converted to a binomial distribution?

Not exactly. Because its success probability depends on previous outcomes, the "forced" process is not binomial. However, as the thread shows, its parameters can be tuned so that the long-run frequency of successes closely mimics a binomial model; distinguishing the two then requires examining the dependence between successive trials, not just the overall success rate.
