A seemingly simple problem about probability

In summary: In this problem, the short-lived bulbs self-select into the sample of burned-out bulbs, and, being short-lived, they pull the average down. So the answer is lower than what your intuition would give without the self-selection.
  • #1
Mayan Fung
My friend is now taking an introductory course about statistics. The professor raised the following question:

A light bulb has a lifespan with a uniform distribution from 0 to 2/3 years (i.e. with a mean of 1/3 years). You change a light bulb when it burns out. How many light bulbs are expected to burn out in 2 years?

Intuition tells us the answer is 6. However, the professor said that the answer is smaller than 6. I tried a simulation in Python and found that the value is around 5.6-5.7. The professor didn't provide any rigorous proof.

Here's my thought:
Let ##Y## be the number of light bulbs burnt in 2 years
Let ##x_i## be the lifespan of the ##i##-th light bulb
My goal is to find ##P(Y=n)##, i.e. the probability distribution, then find its expectation ##E(Y)##

Exactly ##n## bulbs burnt in 2 years means the lifespan sum of the first ##n## bulbs is ##<2## AND the lifespan sum of the first ##n+1## bulbs is ##>2##, i.e.

##P(Y=n) = P\left( \sum_{i=1}^n x_i < 2 \cap \sum_{i=1}^{n+1} x_i > 2\right)##

I cannot proceed as I don't know how to evaluate the probability. I feel like that the question is about Markov chain in stochastic process but I am not sure.
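A minimal Python sketch of the kind of simulation described (the function name and trial count are illustrative, not from the course) might look like this:

```python
import random

def simulate_burnouts(n_trials=200_000, horizon=2.0, max_life=2/3):
    """Average number of bulbs that fully burn out within `horizon` years."""
    total = 0
    for _ in range(n_trials):
        t, burnouts = 0.0, 0
        while True:
            t += random.uniform(0, max_life)   # lifespan of the next bulb
            if t > horizon:
                break                          # last bulb is still burning at the horizon
            burnouts += 1
        total += burnouts
    return total / n_trials

print(simulate_burnouts())   # lands near 5.6-5.7, as described above
```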
 
Last edited:
  • #2
You left out an important piece of information. How many light bulbs are you starting with?
 
  • #3
magoo said:
You left out an important piece of information. How many light bulbs are you starting with?
Sorry that I was not clear enough. Only one light bulb is operating at a time; when it burns out, we replace it with a new one.
 
  • #4
Chan Pok Fung said:
My friend is now taking an introductory course about statistics. The professor raised the following question:

A light bulb has a lifespan with a uniform distribution from 0 to 2/3 years (i.e. with a mean of 1/3 years). How many light bulbs are expected to burn in 2 years?

Intuition tells us the answer is 6.

The issue is that your intuition is answering a slightly different question than what is being asked -- more on this at the end. If the question were: how long would you expect it to take for 6 light bulbs to burn out? Then 2 years would be right.
Chan Pok Fung said:
However, the professor said that the answer is smaller than 6. I tried a simulation with python and found that the value is around 5.6-5.7.

In general, doing something like this is a smart move, as simulations are good for flushing out bugs in your thinking.
Chan Pok Fung said:
The professor didn't provide any rigorous proof.

Here's my thought:
Let ##Y## be the number of light bulbs burnt in 2 years
Let ##x_i## be the lifespan of the ##i##-th light bulb
My goal is to find ##P(Y=n)##, i.e. the probability distribution, then find its expectation ##E(Y)##

Exactly ##n## bulbs burnt in 2 years means the lifespan sum of the first ##n## bulbs is ##<2## AND the lifespan sum of the first ##n+1## bulbs is ##>2##, i.e.

##P(Y=n) = P\left(\sum_{i=1}^n x_i < 2 \cap \sum_{i=1}^{n+1} x_i > 2\right)##

I cannot proceed as I don't know how to evaluate the probability. I feel like that the question is about Markov chain in stochastic process but I am not sure.

My suggestion is to consider an easier problem first:

Suppose you have an unbounded number of bulbs, each with a lifespan uniform on ##[0,1]## years. How many bulbs do you expect to plug in over 1 year? (Subtract 1 from the plug-in number and you get the burnout number -- i.e. there is zero probability that a bulb burns out exactly when your 1-year threshold hits, so with probability 1 you have a non-burnt-out bulb at expiry.)

There are lots of ways to model this. Here's one:

you plug in one bulb no matter what : 1.

The expected number of bulbs plugged in ##= 1 + \text{ExpectedSuccessfulTurns}##

burnout on 1st turn (a): ##\int_{0}^{1}\, 1\,da = \frac{1}{1!}##

burnout on 2nd turn: ##\int_{0}^{1}\int_{a}^{1} 1\, db\, da = \frac{1}{2!}##

burnout on 3rd turn (c): ##\int_{0}^{1}\int_{a}^{1}\int_{b}^{1} 1\, dc\, db\,da =\frac{1}{3!}##

burnout on 4th turn (d): ##\int_{0}^{1}\int_{a}^{1}\int_{b}^{1}\int_{c}^{1} 1\, dd\, dc\, db\, da = \frac{1}{4!}##

burnout on 5th turn (e): ## \int_{0}^{1}\int_{a}^{1}\int_{b}^{1}\int_{c}^{1}\int_{d}^{1} 1\, de\, dd\, dc\, db\, da = \frac{1}{5!}##

burnout on 6th turn (f): ## \int_{0}^{1}\int_{a}^{1}\int_{b}^{1}\int_{c}^{1}\int_{d}^{1}\int_{e}^{1} 1\, df\, de\, dd\, dc\, db\, da = \frac{1}{6!}##

and so on...

##\text{ExpectedPlugIns} = 1 +\big(\frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!} + \frac{1}{5!} + \frac{1}{6!} + \dots\big)##

##\text{ExpectedPlugIns} = e##

##\text{ExpectedBurnOuts} = e-1##

If you are so inclined, you can work through the "and so on" with

https://en.wikipedia.org/wiki/Cauchy_formula_for_repeated_integration
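As a quick numerical check of the series above, the partial sums ##1 + \sum_{k=1}^{n} \frac{1}{k!}## converge rapidly to ##e##:

```python
import math

partial = 1.0                          # the guaranteed first plug-in
for k in range(1, 13):
    partial += 1 / math.factorial(k)   # contribution of a burnout on turn k

print(partial, math.e)                 # agree to ~10 decimal places by n = 12
```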
- - - -
The above is a famous problem of sorts, as it recovers Euler's number. It was one of FiveThirtyEight's weekly problems (for which I had my solution saved down):

https://fivethirtyeight.com/features/should-you-pay-250-to-play-this-casino-game/

- - - - -
note: each bulb here has a maximum life of 1 year, uniformly distributed, so on average ##\frac{1}{2}## year. Your intuition may indicate 2 bulbs burn out over 1 year, but in fact it's a bit less.
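A short Monte Carlo sketch of this simplified problem (variable names are mine) shows the plug-in count settling near ##e \approx 2.718##, i.e. about ##1.718## burnouts rather than 2:

```python
import random

def expected_plugins(n_trials=200_000):
    """Mean number of uniform-[0,1] bulbs plugged in before 1 year elapses."""
    total = 0
    for _ in range(n_trials):
        t, plug_ins = 0.0, 0
        while t <= 1.0:
            t += random.random()   # lifespan of the next bulb
            plug_ins += 1
        total += plug_ins
    return total / n_trials

est = expected_plugins()
print(est, est - 1)   # roughly e and e - 1
```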

The issue is that your 'drop dead' function is linear, with scaling of ##1##, with respect to time used -- which is always real and non-negative -- up until the drop-dead threshold; the payoff is zero afterward. Hence the underlying idea is concavity, though I'm not happy with my current explanation of why.
 
Last edited:
  • Like
Likes Mayan Fung, NFuller and FactChecker
  • #5
@Chan Pok Fung , I think this is a hint about what your intuition is missing:
By setting a cut-off of 2 years, there will be fewer examples of long-life bulbs that fit into 2 years. There will be more examples of short life bulbs that fit into 2 years. So that puts a bias on the average that shortens it.

This is called a "self selecting sample". Another example that is easier to see is in an experiment where the velocities of gas molecules going past a measurement device are averaged. That average is larger than the average of the gas molecules in the container because more fast molecules "self-select" themselves to go past the measurement device.

The bias of a self-selecting sample is very easy to overlook. It is often the reason that two observers, using perfectly valid methods, may see the same thing differently -- they are in a situation that gives them two different self-selected samples.
 
  • Like
Likes NFuller
  • #6
StoneTemplePython said:
My suggestion is to consider an easier problem first:

...

The issue is that your 'drop dead' function is linear, with scaling of ##1##, with respect to time used -- which is always real and non-negative -- up until the drop-dead threshold; the payoff is zero afterward. Hence the underlying idea is concavity, though I'm not happy with my current explanation of why.
Good idea! So you are assuming the expected lifespan of the bulb to be 1 year each with a uniform distribution, right?
 
  • #7
FactChecker said:
@Chan Pok Fung , I think this is a hint about what your intuition is missing:
By setting a cut-off of 2 years, there will be fewer examples of long-life bulbs that fit into 2 years. There will be more examples of short life bulbs that fit into 2 years. So that puts a bias on the average that shortens it.

This is called a "self selecting sample". Another example that is easier to see is in an experiment where the velocities of gas molecules going past a measurement device are averaged. That average is larger than the average of the gas molecules in the container because more fast molecules "self-select" themselves to go past the measurement device.

The bias of a self-selecting sample is very easy to overlook. It is often the reason that two observers, using perfectly valid methods, may see the same thing differently -- they are in a situation that gives them two different self-selected samples.
If more short life bulbs fit into 2 years, that means the expected number should be >2 instead of <2. And I am trying to derive a precise mathematical form for the general case.
 
  • #8
Chan Pok Fung said:
I feel like that the question is about Markov chain in stochastic process but I am not sure.

The question is treated by "renewal theory". ( https://en.wikipedia.org/wiki/Renewal_theory See the section "Renewal equation") However, there are only a few special cases where the renewal equation can be solved in closed form. The renewal equation deals directly with the expected number of renewals by time t instead of dealing with the probability distribution for the number of renewals before time t.

My guess is that if it were possible to compute a closed form expression for the probability distribution of the number of renewals then this result would be well known and easy to look up. Nevertheless, it would be interesting to attempt to write the pdf and see what happens.
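To illustrate (this is my own discretization sketch, not from the renewal-theory article): for the uniform-lifetime case, the renewal equation ##m(t) = F(t) + \int_0^t m(t-x) f(x)\, dx## can be solved numerically with a simple rectangle rule:

```python
import numpy as np

L = 2/3                         # maximum lifespan (years)
h = 0.0005                      # time step for the discretization
t = np.arange(0, 2 + h, h)
f = np.where(t <= L, 1/L, 0.0)  # uniform density on [0, L]
F = np.clip(t / L, 0.0, 1.0)    # uniform CDF

m = np.zeros_like(t)            # m[i] ~ expected burnouts by time t[i]
for i in range(1, len(t)):
    # rectangle-rule approximation of the convolution integral
    m[i] = F[i] + h * np.dot(m[i - 1::-1], f[1:i + 1])

print(m[-1])                    # close to the simulated 5.6-5.7
```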
 
  • #9
Chan Pok Fung said:
If more short life bulbs fit into 2 years, that means the expected number should be >2 instead of <2. And I am trying to derive a precise mathematical form for the general case.

Edit: I'll leave this post as it is, but I can see now that it is wrong!

Consider the following two, long-term experiments:

a) You simply let each bulb in turn burn out. At the end of ##n## years (where ##n## is large), approximately ##3n## bulbs will have burned out. That's an average of ##3## per year, so the expected number in any two-year period during your experiment is ##6##.

b) You run the experiment for ##2## years and count how many bulbs burn out. Then, you replace the partially-used bulb with a new one and run the experiment for a further ##2## years. And continue like this, counting the number of bulbs that burn out in each 2-year period, then starting again with a fresh bulb for the next two years.

At the end of ##n## years, you will have burned out fewer than ##3n## bulbs, since part of the ##n## years has been lit by bulbs that haven't fully burned out yet.

PS: you can get an approximate answer by assuming that the leftover bulbs have, on average, half their average life left -- in this case ##1/6## of a year.

If you run experiment b) ##n## times, i.e. for ##2n## years, you have ##n## leftover bulbs, with an approximate total remaining life of ##n/6## years (and, likewise, a total life already used of ##n/6## years).

Therefore, only ##2n - n/6 = 11n/6## years have been lit by fully burned-out bulbs. At ##3## bulbs per year, that's ##11n/2## bulbs burned out, which is an average of ##5.5## per experiment.

Working out precisely how much time is left on average for each bulb reduces, I suspect, to the original problem. But, at least this method gives a simple approximation.
 
Last edited:
  • #10
Here's why my post above is wrong. Consider experiment a) and suppose that at some random time you take the current bulb and see how long it has remaining. It's tempting to say that it must be ##1/6## of a year, as it's an "average" bulb, on average halfway through its life. But this is wrong.

There is a bias towards longer living bulbs. For example, if we have only two types of bulb: those that fail immediately and those that last ##2/3## of a year. Now if we choose a random time and look at the bulb, it must be one of the good bulbs. Therefore, it must have a life of ##2/3## of a year and an average time remaining of ##1/3## of a year.

Therefore, the bulbs that are not burned out should have a bias towards longer-life bulbs. For experiment b) this would lead to an average of less than ##6## burning out as follows:

If we assume the last bulb was put in at a random time, and we let ##x## be its longevity, then we have a pdf for ##x## of:

##f(x) = 9x/2, \quad 0 \le x \le 2/3##

I.e. the likelihood of its being selected is proportional to its longevity. And its average time remaining is ##x/2##.

Therefore, the expected time remaining on the bulb (and the expected time already burned) is:

##E = \int_0^{2/3} \left(\frac{x}{2}\right)\left(\frac{9x}{2}\right) dx = \frac{2}{9}##

In experiment b), therefore, there would be a bias towards longer-life bulbs being removed, "half" used and a bias towards shorter-life bulbs being burned out. We could calculate this by imagining burning out the remaining ##n## half-used bulbs. The total time for the experiment would then be:

##2n + \frac{2n}{9}## years

Hence, we must have used ##6n + \frac{2n}{3}## bulbs in total, as the bulbs were simply chosen at random -- how and when we burn out each bulb is irrelevant. Hence, the number of bulbs burned out must be ##6n + \frac{2n}{3} - n = \frac{17n}{3}##, i.e. ##5\frac23## per experiment.

Which looks about right! Or, at least, a better approximation.
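These integrals can be verified with exact rational arithmetic (a small check I added; the variable names are arbitrary):

```python
from fractions import Fraction

# f(x) = 9x/2 on [0, 2/3]; check it integrates to 1
mass = Fraction(9, 4) * Fraction(2, 3) ** 2            # integral of (9/2) x dx
# expected remaining time: integral of (x/2)(9x/2) dx on [0, 2/3]
E_remaining = Fraction(9, 4) * Fraction(2, 3) ** 3 / 3
# total effective time 2 + 2/9 years, at mean life 1/3 per bulb, minus the leftover bulb
burnouts = (2 + E_remaining) / Fraction(1, 3) - 1

print(mass, E_remaining, burnouts)   # 1, 2/9, 17/3 (= 5 2/3)
```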
 
Last edited:
  • #11
Chan Pok Fung said:
If more short life bulbs fit into 2 years, that means the expected number should be >2 instead of <2. And I am trying to derive a precise mathematical form for the general case.
Oh, sorry! I stand corrected. I got confused about the original question and which direction the bias went. In any case, I do not think that there would be a "self selection bias" that I mentioned before, because that is not how the bulbs are picked.

Question -- what do you do about the bulb at the end of the two years that is still burning? Do you count it or not? If you do not count it, then that might explain a smaller average.

PS. I understand your desire to derive a closed-form solution to the problem. But you will find that only the simplest, most benign problems can be solved that way. Simulation is the only way to handle real-world problems.
 
Last edited:
  • #12
FactChecker said:
Oh, sorry! I stand corrected. I got confused about the original question and which direction the bias went. In any case, I do not think that there would be a "self selection bias" that I mentioned before, because that is not how the bulbs are picked.

Question -- what do you do about the bulb at the end of the two years that is still burning? Do you count it or not? If you do not count it, then that might explain a smaller average.

See my posts above. I made a silly error in my second post which I've corrected now.

The last bulb, on average, will be longer lived. If you assume that the last bulb is half-used, on average, then that gives you an expected value of time left in the last bulb. If you add that on, you get the expected time for all the bulbs (imagining that you let the last one burn out).

This gives you an expected value for the total number of bulbs - including the last one - from which you can subtract 1 to get the number that burned out.

My calculations (with this assumption) give ##5\frac23## burned out bulbs, which is close to the answer that others and I have got by simulation.

Note: some of the confusion comes from the fact that this means you need ##6\frac23## bulbs on average (##>6##) to get through the two years. From that perspective, that's why leaving a bulb unfinished leads to more bulbs per two years than in a continuous experiment where every bulb is allowed to burn out.
 
  • Like
Likes FactChecker
  • #13
Chan Pok Fung said:
Good idea! So you are assuming the expected lifespan of the bulb to be 1 year each with a uniform distribution, right?

The lifespan of 1 year simplifies things but technically doesn't matter that much. To be clear, I used the 1-year life in the fully worked simplified problem. However, I am now addressing your original problem, below.

Recall that expectations are linear (and the concave function I mentioned is piecewise linear with payoffs of 0 and 1), so if we wanted to, we could rewrite your problem so that each bulb has a maximum life of 1 year and rescale your drop-dead threshold accordingly. I.e. we could rescale by any positive number we want -- it just happens to be highly preferable, in my view, to rescale so that each bulb has a life in ##[0,1]##. I did not do that in what follows, because I thought it might be confusing with respect to the numbers in the original post, but I really think it should be rescaled to make it interpretable. (Rescaling can also be useful for numerical stability when running simulations, but that's another topic.)

The ideas here are more general than just a uniform distribution.

The issue, when you get right down to it, is that every single trial, with probability one, terminates at your drop-dead threshold with a bulb that is plugged in and has not yet burnt out. The concave function I mentioned has no impact on any of ##X_1, X_2, ..., X_{n-1}## -- only on ##X_n##. Put differently, to think about the result you can radically simplify the problem and just consider the terminal variable ##X_n## for any trial.

These random variables are denominated in continuous time, but there are countably many of them. The issue with your intuitive estimate -- i.e. using linearity of expectations to estimate that 6 random variables are in use -- is that ##X_n##, with probability one, gets only partially counted. I.e. if you ran a simulation or experiment where partial credit is allowed, you get ##\Big(\big(n-1\big) + \delta\Big)## random variables used in any trial, where with probability 1 we have ##0\lt \delta \lt 1## in your problem, but your drop-dead function gives a weighting of zero to ##X_n## and hence counts only ##\big(n-1\big)##.
- - - - -
edit:
we can define

##\delta := F_{X_n}(x_n)##

i.e. ##\delta## is the CDF of your last random variable -- the last arrival before time runs out -- evaluated at its realized value.

There is always some last arrival, and it always has that CDF (though it gets truncated at a random point in time -- equivalently, the clock starts ticking at a random point in time, and thus the time until expiry is itself a random variable).

- - - - -
You could also think of your drop-dead function as a rounding function that always rounds ##\delta## down to zero. (If you were interested in plug-ins and not burnouts, it would be a round-up function -- note that this interpretation is easiest if we had rescaled each random variable to be in ##[0,1]##.)

Another way of looking at it: if you take the expectation first, so that the expectation is 2 years (i.e. 6 bulbs), your drop-dead function has no impact. However, if you apply the drop-dead function first -- i.e. to each random variable, remembering it is neutral to each ##X_i## where ##i\neq n## -- and then apply expectations, you must get a lower number, due to concavity.

Stephen Tashi said:
The question is treated by "renewal theory". ( https://en.wikipedia.org/wiki/Renewal_theory See the section "Renewal equation") However, there are only a few special cases where the renewal equation can be solved in closed form. The renewal equation deals directly with the expected number of renewals by time t instead of dealing with the probability distribution for the number of renewals before time t.

This is how I'd think about it. If you peel back Wald's Equality and how the indicator function ##\mathbb I_{J\geq n}## -- which is in effect your drop dead function -- works, i.e. recalling that ##\mathbb I_{J\geq n}## is positive integer valued but a raw expected value calculation allows 'partial credit' for ##X_n##, then you get the above result.

edit: in terms of notation, I'm actually referencing page 176 (21 of 72 in the pdf file) of this:

https://ocw.mit.edu/courses/electri...ring-2011/course-notes/MIT6_262S11_chap04.pdf
 
Last edited:
  • #14
btw, to reinforce what I tried to say above, I've dropped in my code below. Note to OP and anyone else who did this in Python -- the numba import and decorator can be commented out if you don't have it installed. You really should have it installed though as it was practically designed for numeric simulations like this.

- - - -
Python:
import numpy as np
import numba
# done in python 3.x.  from __future__ imports may be needed in 2.x

@numba.jit(nopython=True)
def run_sum(n_trials, threshold_time, time_per_X):
    """
    where X arrives uniform in [0, time_per_X]
    """
    counter = 0
    # this is the main result we are actually interested in
    alt_counter = 0
    # this alternative counter gives partial credit for the delta's
    for _ in range(n_trials):
        local_counter = 0
        local_alt_counter = 0
        running_sum = 0
        while True:
            arrival = np.random.random()*time_per_X
            running_sum += arrival
            if running_sum >= threshold_time:
                local_alt_counter += (arrival - (running_sum - threshold_time))/time_per_X
                # i.e. incremented by that delta
                # you can ignore the division by 'time_per_X' when it is equal to one,
                # and that is probably the best way to think about this problem
                break
            local_counter += 1
            local_alt_counter += 1
        
        counter += local_counter
        alt_counter += local_alt_counter
    mu = time_per_X / 2
    naive_amount = threshold_time / mu
    return counter / n_trials, naive_amount, alt_counter / n_trials


N_TRIALS = 1000000
actual_amount, naive_amount, other_metric = run_sum(N_TRIALS, threshold_time = 2, time_per_X = 2/3)
print(actual_amount)
print(naive_amount, "\n-------\n")
print(other_metric)

print("rescaled amount is below.  Result is the same, but cleaner to think about")
actual_amount, naive_amount, other_metric = run_sum(N_TRIALS, threshold_time = 3, time_per_X = 1)
print(actual_amount)
print(naive_amount, "\n-------\n")
print(other_metric)
 
Last edited:
  • Like
Likes sysprog and haushofer

FAQ: A seemingly simple problem about probability

1. What is probability and how is it used in science?

Probability is a measure of the likelihood of an event occurring. In science, probability is used to make predictions, analyze data, and test hypotheses. It allows scientists to quantify uncertainty and make informed decisions based on the likelihood of different outcomes.

2. How can a seemingly simple problem about probability be complex?

Some probability problems may seem simple at first, but can become complex when there are multiple variables or conditions involved. Additionally, our intuition and common sense may not always align with the mathematical principles of probability, making some problems more challenging to solve.

3. Can probability be used to prove or disprove a hypothesis?

Probability alone cannot prove or disprove a hypothesis, but it can provide evidence for or against it. By calculating the probability of a certain outcome, scientists can evaluate the likelihood of their hypothesis being true and make conclusions based on the results.

4. What are some common misconceptions about probability?

One common misconception is the gambler's fallacy: the belief that past outcomes can influence future independent events. When events are independent, the outcome of one does not affect the likelihood of another. Another source of confusion is notation: a probability is always a number between 0 and 1, though the same value may equivalently be expressed as a percentage or a fraction.

5. How can understanding probability be beneficial in everyday life?

Understanding probability can help individuals make informed decisions, evaluate risks, and interpret data. It can also help people recognize and avoid common fallacies and biases related to probability, such as the gambler's fallacy or the base rate fallacy.
