Uptime of Successful Event in 60s Experiments

I have the following problem:

A trial of an experiment yields a "successful" event with probability 0.1, and a success lasts for 10 seconds. The experiment is repeated every 3 seconds. If a trial succeeds while the 10 seconds of a previous success haven't yet expired, the timer is simply reset to 10 seconds. The question is: how should the uptime of the successful event over the course of 60 seconds be modeled?

I was thinking the following:

There are 20 experiments over the course of 60 seconds, and since this is in a way a Bernoulli experiment, the mean number of successes is n*p = 20 * 0.1 = 2. At 10 seconds of uptime each, that gives 20/60 = 1/3 of the time.

But then again, I am not sure whether this takes into account the possibility that a success renews the timer of one that occurred less than 10 seconds earlier.

Any ideas how exactly to model the uptime of the successful event from this experiment?
 
What do you need to find, the mean or a distribution model? Do you need a proof or just a number?

A very easy way would be to test it a million times, which gives a distribution but no proof. Another easy way would be to work out the expected uptime each trial contributes: a success is renewed after 3 s (probability 0.1), after 6 s (0.9 * 0.1), after 9 s (0.9^2 * 0.1), or runs its full 10 s (0.9^3), so each trial adds 0.1 * (0.1 * 3 + 0.9 * 0.1 * 6 + 0.9^2 * 0.1 * 9 + 0.9^3 * 10) seconds of uptime on average. That gives a proof of the mean only.
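
Here is a minimal sketch of the million-run test in Python (the trial times t = 0, 3, ..., 57, the clipping at the 60 s boundary, and the function name are my assumptions about the setup):

```python
import random

def uptime_one_window(p=0.1, period=3, duration=10, horizon=60):
    """One 60 s window: length of the union of the [t, t+10) success
    intervals, clipped to [0, 60), with trials at t = 0, 3, ..., 57."""
    up = 0.0
    covered_end = 0.0  # right edge of the time already counted as up
    for t in range(0, horizon, period):
        if random.random() < p:                  # this trial succeeds
            end = min(t + duration, horizon)     # clip at the 60 s mark
            if end > covered_end:
                up += end - max(t, covered_end)  # count only fresh coverage
                covered_end = end
    return up

n = 1_000_000
mean_up = sum(uptime_one_window() for _ in range(n)) / n
print(f"mean uptime fraction over 60 s: {mean_up / 60:.4f}")
```

Because it starts cold at t = 0 and clips runs at t = 60, the mean should land a little below the steady-state figure discussed further down in the thread.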
 
This is a nice problem. I have not worked it out fully, but I can see that your solution is wrong.
Suppose the first experiment is a success and we start counting from there. The probability of a discontinuity within the first 10 seconds is (9/10)^3 = 0.729, and the probability of a discontinuity by second 60 will be higher still, if not much higher.
 
haiha said:
This is a nice problem. I have not worked it out fully, but I can see that your solution is wrong.
Suppose the first experiment is a success and we start counting from there. The probability of a discontinuity within the first 10 seconds is (9/10)^3 = 0.729, and the probability of a discontinuity by second 60 will be higher still, if not much higher.

I calculate the chance of a collision in 60 seconds at 98.784%, and the chance of a collision in the first 10 seconds at 5.23%.
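
A quick check of the 10-second figure in Python, reading "collision" as at least two successes among the four trials that fall in the first 10 seconds (that interpretation is my assumption):

```python
from math import comb

p, n = 0.1, 4  # four trials land in the first 10 s: t = 0, 3, 6, 9

# P(at least 2 successes) = 1 - P(none) - P(exactly one)
p_two_or_more = 1 - comb(n, 0) * (1 - p)**n - comb(n, 1) * p * (1 - p)**(n - 1)
print(f"{p_two_or_more:.4%}")  # 5.2300%
```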
 
It turned out to be simpler than I thought. I just had to find the probability of at least one successful event within the 10 seconds, as you said in the second post: 1 - P(no successful event) = 1 - (1 - 0.1)^(3 + 1/3) (since there are 3 + 1/3 trials in 10 seconds), which is about 0.296, so the uptime is roughly 29.6%.

Two things still bother me about this way of "solving" the problem. First, is there any theoretical basis for looking at the probability of at least one success (and thus the mean) over a window exactly as long as a success lasts? Second, have we made an implicit assumption about the distribution of the experiment? I cannot yet test the empirical distribution (the experiment is an object that produces this effect, and I am not yet in possession of it), but once I get my hands on it, how would I go about finding the theoretical distribution? That is, if I know for sure that the parameter of the distribution is 0.1 and I measure an empirical uptime of 30%, 40%, or 50%, how should I proceed?
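
For what it's worth, the shortcut can be checked against the expected-value formula from the second post (a sketch of my own, assuming steady state, i.e. ignoring both the cold start at t = 0 and the cutoff at t = 60):

```python
# Shortcut: at least one success among the 10/3 "trials" in any 10 s window
approx = 1 - 0.9 ** (10 / 3)                                    # ~0.2962

# Formula from the second post: expected fresh uptime one trial contributes,
# spread over the 3 s between trials
per_trial = 0.1 * (0.1 * 3 + 0.9 * 0.1 * 6 + 0.9**2 * 0.1 * 9 + 0.9**3 * 10)
steady_state = per_trial / 3                                    # ~0.2953

print(f"shortcut = {approx:.4f}, steady state = {steady_state:.4f}")
```

The fractional-exponent shortcut lands within a fraction of a percent of the steady-state rate; the larger discrepancy in an actual 60-second measurement comes from the edges of the window.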
 
If you ran the experiment for 9 seconds before starting the timer and counted results that ran over the period, you could use the formula I gave above, which yields 29.53% uptime. Not running the experiment before the 60 seconds increases this, and not measuring the leftover runtime decreases it, so your figure is believable.
 