Uptime of Successful Event in 60s Experiments

  • Context: Graduate
  • Thread starter: Nasta

Discussion Overview

The discussion revolves around modeling the uptime of a successful event in an experiment run over 60 seconds: a trial occurs every 3 seconds and succeeds with probability 0.1, and each success keeps the event "up" for 10 seconds, with a new success renewing the timer. Participants explore different approaches to calculating the uptime, considering the implications of overlapping successful events and the statistical properties of the experiment.

Discussion Character

  • Exploratory
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant suggests that there are 20 experiments in 60 seconds and calculates the mean number of successes as 2, leading to an uptime of 1/3 of the time, but expresses uncertainty about overlapping successful events.
  • Another participant questions whether the goal is to find a mean or a distribution model and proposes a simulation approach to derive a distribution without proof, as well as a method to calculate the mean uptime based on probabilities.
  • Some participants challenge the initial calculations, suggesting that the probability of discontinuity during the first 10 seconds is significant and may affect the overall uptime calculation.
  • A later reply indicates a simpler approach: finding the probability of at least one successful event in a 10-second span, which leads to an estimated uptime of around 29%, while questioning the theoretical basis for this method and the assumptions it makes about the distribution.
  • Another participant offers a calculation for uptime based on running the experiment for 9 seconds before starting the timer, yielding a slightly different uptime percentage, while noting that variations in experimental setup can affect results.

Areas of Agreement / Disagreement

Participants express differing views on the correct approach to model the uptime, with no consensus reached on the best method or the implications of overlapping successful events. Uncertainty remains regarding the theoretical basis for the calculations and the assumptions made about the distribution of outcomes.

Contextual Notes

Participants highlight potential limitations in their calculations, including assumptions about the independence of trials and the effects of overlapping successful events on uptime. There is also mention of the need for empirical testing to validate theoretical distributions.

Nasta
I have the following problem:

A trial of an experiment yields, with probability 0.1, a "successful" event that lasts for 10 seconds. The experiment is repeated every 3 seconds. If a trial yields a successful event while the 10 seconds of a previous one have not yet expired, the timer is simply reset to 10 seconds. The question is: how should the uptime of the successful event over the course of 60 seconds be modeled?

I was thinking the following:

There are 20 experiments over the course of 60 seconds, and since this is a Bernoulli experiment in a way, the mean number of successes will be n*p = 2. Each success contributes 10 seconds of uptime, so the uptime is 20/60 = 1/3 of the time.

But then again, I am not sure if this takes into consideration the possibility that a successful event will renew the timer of a successful event that happened less than 10 seconds before.

Any ideas how exactly to model the uptime of the successful event from this experiment?
 
What do you need to find, the mean or a distribution model? Do you need a proof or just a number?

A very easy way would be to test it a million times, giving a distribution but no proof. Another easy way would be to compute the expected uptime contributed by each trial: a success contributes 3 seconds if the next trial renews the timer, 6 seconds if the renewal comes two trials later, 9 seconds if it comes three trials later, and the full 10 seconds if the next three trials all fail, i.e. 0.1(0.1 * 3 + 0.9 * 0.1 * 6 + 0.9^2 * 0.1 * 9 + 0.9^3 * 10) per trial. That gives a proof of only the mean.
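
A minimal sketch of the million-run test in Python (my code, not the poster's; the choice of exactly 20 trials at t = 0, 3, ..., 57 inside a single 60-second window is an assumption):

```python
import random

def run_once(n_trials=20, period=3.0, p=0.1, hold=10.0, horizon=60.0):
    """One 60 s run: a trial fires every 3 s and succeeds with
    probability p; a success (re)sets the expiry timer to t + 10.
    Returns the fraction of [0, horizon] during which the event is up."""
    uptime = 0.0
    expiry = 0.0  # end of the current success window
    for i in range(n_trials):
        t = i * period
        if random.random() < p:
            start = max(t, expiry)           # don't double-count overlap
            expiry = min(t + hold, horizon)  # cap at the 60 s horizon
            uptime += expiry - start
    return uptime / horizon

runs = 1_000_000
print(sum(run_once() for _ in range(runs)) / runs)  # simulated mean uptime

# Closed-form per-trial mean from this post, scaled to 20 trials / 60 s:
per_trial = 0.1 * (0.1*3 + 0.9*0.1*6 + 0.9**2*0.1*9 + 0.9**3*10)
print(per_trial * 20 / 60)  # ≈ 0.2953
```

The simulated figure should land a little below the closed-form 0.2953, because the simulation cuts success windows off at the 60-second boundary while the formula credits every success with its full tail; the last reply in the thread touches on exactly this edge effect.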
 
This is a nice problem. I have not solved it fully, but I can see that your solution is wrong.
Suppose the first experiment is a success and we start counting there. The probability of a discontinuity in the first 10 seconds (the next three trials all failing) is (9/10)^3 = 0.729, and the probability of a discontinuity by second 60 will be higher than that, if not much higher.
 
haiha said:
This is a nice problem. I have not solved it fully, but I can see that your solution is wrong.
Suppose the first experiment is a success and we start counting there. The probability of a discontinuity in the first 10 seconds (the next three trials all failing) is (9/10)^3 = 0.729, and the probability of a discontinuity by second 60 will be higher than that, if not much higher.

I calculate the chance of a collision in 60 seconds at 98.784%. I think the chance of collision in the first 10 seconds is 5.23%.
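
For what it's worth, the 5.23% figure is reproduced exactly if "collision" means at least two successes among the four trials that fall in the first 10 seconds (that reading is my assumption):

$$1 - 0.9^4 - 4(0.1)(0.9)^3 = 1 - 0.6561 - 0.2916 \approx 0.0523$$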
 
It turned out to be simpler than I thought, actually... I just had to find the probability of at least one successful event in the 10 seconds, like you said in the 2nd post: that is 1 - P(no successful event) = 1 - (1 - 0.1)^(3 + 1/3) (since there are 3 + 1/3 trials in 10 seconds), which is around 0.29, so the uptime is roughly 29%.

Two things bother me about this way of "solving" the problem. Firstly, is there any theoretical basis for looking at the probability of at least one successful event (and thus the mean) over exactly the length of a success? Secondly, have we made an implicit assumption about the distribution of the experiment this way?

I can't yet test the empirical distribution of the experiment (it is an object that produces this effect, but I am not in possession of it yet). When I do get my hands on it, how can I go about finding the theoretical distribution? That is, if I know for sure that the parameter of the distribution is 0.1 and measure an empirical uptime of 30% or 40% or 50%, how should I proceed?
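
As a numerical check of the expression above (the interpolation comment is my own observation):

$$1 - (1 - 0.1)^{3 + 1/3} = 1 - 0.9^{10/3} \approx 0.2962$$

The fractional exponent interpolates: a 10-second window really contains either 3 or 4 trials, never 3 1/3, which appears to account for the small gap from the 29.53% figure in the next reply.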
 
If you ran the experiment for 9 seconds before starting the timer and counted results that ran over the period, you could use the formula I gave above, which would yield 29.53% uptime. Not running the experiment before the 60 seconds increases this, and not measuring the leftover runtime decreases it, so your figure is believable.
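
One way to recover the 29.53% figure (a sketch under the same assumption, i.e. the experiment has already been running long enough for start-up effects to wash out): an instant is "down" exactly when every trial in the preceding 10 seconds failed, and a 10-second window contains 4 trial times for 1/3 of instants and 3 trial times for the other 2/3, so

$$1 - \left[\tfrac{2}{3}(0.9)^3 + \tfrac{1}{3}(0.9)^4\right] \approx 0.2953$$

This agrees with the per-trial formula from the 2nd post: 0.8859 seconds expected per trial, times 20 trials, over 60 seconds.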
 
