Daaavde
Differences between binomial distribution and "forced" probability distribution
Hi everyone.
Yesterday I was thinking about probability in real life, and about the fact that we always expect everyday events to behave according to probability theory.
If we flip a coin and get heads six times in a row, we get suspicious.
So what happens if we "force" probabilities whenever we see them temporarily drift under statistical fluctuations, and "make" them behave according to the binomial distribution?
For example, say that, in a given month, I go to the gym on 50% of the days.
One day I go, the next two I don't, but on the fourth day the chance is no longer 50%: I'm afraid I'm starting to skip the gym, so now it's 60% (I feel guilty; I don't want to slip into skipping).
If the week starts and I don't go on Monday, Tuesday, Wednesday, and Thursday, the chance I go on Friday is now 70%, and so on. But as soon as I do go, the probability resets to 50% (I feel like I've done my duty).
It's just a silly example, but I hope you get the idea.
So this is the "forced probability" model:
If there's a failure -> the chance of winning increases by (say) 10%.
If there's a win -> the chance of winning goes back to 50%.
My question is: can the two cases be told apart just by observing the statistics?
What happens if, in physics, some phenomenon seems to act like coin flipping while it actually behaves like "forced probability"?
I wrote a program in C and saw empirically that a model with a 36% base chance of winning, where the chance increases by 20 percentage points after every failure, produces a distribution very similar to the 50%-50% model (mean: 0.501758, standard deviation: 0.00338905).
A closer match could probably be obtained with floating-point numbers (for simplicity, the program stores the probability as an integer between 0 and 100; a more refined approach would likely give better results).
If anyone is interested, I can send them the program.
I could have tried a mathematical approach, but honestly I don't feel like fighting with differential equations right now. That's why I'm posting this here.
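For what it's worth, the long-run mean needs no differential equations: every win restarts the process, so it is a renewal process, and the long-run win fraction is 1/E[L], where L is the number of trials in one cycle that ends with a win. A quick check with the 36% / +20% parameters (my arithmetic, worth double-checking):

```latex
% Win chance after k consecutive failures:
% p_k = \min(0.36 + 0.20k,\, 1), so p_0,\dots,p_4 = 0.36, 0.56, 0.76, 0.96, 1.
P(L = n) = p_{n-1} \prod_{k=0}^{n-2} (1 - p_k)
         = 0.36,\ 0.3584,\ 0.2140,\ 0.0649,\ 0.0027 \quad (n = 1, \dots, 5)
E[L] = \sum_{n=1}^{5} n \, P(L = n) \approx 1.9919
\text{win fraction} = \frac{1}{E[L]} \approx 0.5020
```

That agrees nicely with the simulated mean of 0.501758, so the mean really can be tuned to match a fair coin; any statistical test separating the two models would have to look at the correlations between consecutive trials (e.g. run lengths), not the mean.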
So, do you think that by tuning the initial probability and the increment, the two distributions can be made indistinguishable by statistical methods?
(Apologies for my poor English; I'm not a native speaker.)