
Negative Poisson-Binomial Distribution

  1. Jun 26, 2014 #1
    The name in the title is probably not what this distribution is actually called, but it is so similar that I chose it anyway.

    This is a problem I've been looking into in my spare time and I'm having a difficult time nailing it down. Essentially, it's an "extension" of the negative binomial distribution in the sense that instead of each Bernoulli trial being i.i.d., the trials are independent but not necessarily identically distributed. In other words, the success probabilities of the independent trials form a sequence. When the number of trials is fixed, the number of successes follows a Poisson-Binomial distribution.

    However, in the case of a Negative Poisson-Binomial distribution, one would count the number of trials until the [itex]k[/itex]th success occurs. Hence, in theory, the support of this distribution is the set of all integers greater than or equal to [itex]k[/itex]. It follows that the probability sequence must be defined for all positive integers; to put it another way, the function that the sequence is drawn from must have all positive integers (i.e. the trial indices) as a subset of its domain.

    However, in practice that function may be a black box or may only be feasibly acquired numerically. So how would one go about computing this? Looking at the definition, the desired probability is the likelihood of getting [itex]k-1[/itex] successes out of [itex]n-1[/itex] trials, followed by a success. If [itex]f_{n}(k)[/itex] is the PMF of a Poisson-Binomial distribution with [itex]n[/itex] trials, then the PMF of the Negative Poisson-Binomial distribution is [itex]f_{n-1}(k-1) p_n[/itex], where [itex]p_i[/itex] is the success probability of the [itex]i[/itex]th trial. Keeping [itex]k[/itex] fixed makes this a function of [itex]n[/itex], the total number of trials until the [itex]k[/itex]th success occurs. Given values of [itex]p_i \forall i \in \left \{1, 2, 3, ... , n \right \}[/itex], this can be computed in polynomial time using memoized recursion.
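    As a sketch of that memoized recursion (Python; `npb_pmf` and the helper names are my own, nothing standard), the Poisson-Binomial PMF [itex]f_m(j)[/itex] is built by conditioning on the last trial, and the Negative Poisson-Binomial PMF then follows as [itex]f_{n-1}(k-1) p_n[/itex]:

    ```python
    from functools import lru_cache

    def npb_pmf(p, k):
        """PMF of the trial count N at which the k-th success occurs, given the
        per-trial success probabilities p[0], p[1], ... (finitely many known).
        Pr(N = n) = f_{n-1}(k-1) * p_n, with f_m(j) the Poisson-Binomial PMF."""

        @lru_cache(maxsize=None)
        def f(m, j):
            # Poisson-Binomial recursion, conditioning on the outcome of trial m:
            # f_m(j) = f_{m-1}(j-1) * p_m + f_{m-1}(j) * (1 - p_m)
            if j < 0 or j > m:
                return 0.0
            if m == 0:
                return 1.0
            q = p[m - 1]
            return f(m - 1, j - 1) * q + f(m - 1, j) * (1.0 - q)

        return {n: f(n - 1, k - 1) * p[n - 1] for n in range(k, len(p) + 1)}
    ```

    With a constant success probability this reduces to the ordinary negative binomial PMF, which makes an easy sanity check.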

    Therefore, the PMF is defined for [itex]p_i \in [0,1][/itex]. However, with the preconditions about [itex]p_i[/itex], we only know the PMF on a small subset of its support; in other words, it is truncated. Can we still say anything about statistical quantities such as the mean, variance and median of the full distribution, or is there a way to estimate them from the truncated distribution?

    So far, I've estimated them by approximating the distribution with the truncated distribution, which I think should be valid when the cumulative probability of the right tail of the untruncated distribution is sufficiently small. But then again, I suppose this assumes that the probability sequence is well-behaved, in the sense that its values don't jump too much between extremes or have long runs of extremes. I wonder if all of this is correct and whether there is a better approach.

    An example would be an urn that always contains 30 marbles, each being either black or white. You are also told the proportion of black and white marbles. You pick one marble, then empty the urn and fill it with another 30 marbles, which may not necessarily have the same proportion of black and white marbles as in the previous trial. How many urns would one then need to go through, on average, until the seventh white marble is picked?
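    For what it's worth, the urn example is easy to simulate; here is a rough Monte Carlo sketch (the alternating proportion sequence is just a hypothetical choice of mine, as is all the naming):

    ```python
    import random

    def urns_until_kth_white(white_props, k, rng):
        """One run: draw one marble from each successive urn; white_props[i] is
        the proportion of white marbles in urn i+1. Returns the number of urns
        used, or None if the k-th white marble never shows up."""
        whites = 0
        for i, p in enumerate(white_props, start=1):
            if rng.random() < p:
                whites += 1
                if whites == k:
                    return i
        return None

    rng = random.Random(0)
    # Hypothetical sequence: white proportions alternating between 0.4 and 0.6.
    props = [0.4 if i % 2 == 0 else 0.6 for i in range(200)]
    runs = [urns_until_kth_white(props, 7, rng) for _ in range(20000)]
    hits = [r for r in runs if r is not None]
    avg = sum(hits) / len(hits)  # close to 14, as the average proportion is 0.5
    ```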
     
  4. Jul 4, 2014 #3

    mathman


    I would like to try to help, but when I try to read the post I get an attack of MEGO and I can't finish it.
     
  5. Jul 6, 2014 #4
    Here's the thing: I started thinking about this problem when I wanted to make a good, unbiased estimate of the efficiency of guns in various multiplayer video games. For now I'm only considering the cone-of-fire without the recoil, just to see how well it does.

    For this, a good measure is usually the time-to-kill (TTK), which is the length of the time interval between the first shot being fired and the last shot hitting the target. However, since the cone-of-fire isn't the same for each shot unless you're pacing your shots whilst maintaining your aim on the target, I can't use the Negative Binomial distribution as the model, because the hit probabilities can't be guaranteed to be equal.

    Therefore, I'm trying to apply this Negative Poisson-Binomial distribution, as described in the OP, as the model. I have written two algorithms: a fast one that works only in certain scenarios, and a slower Monte Carlo one for when those scenarios no longer apply. These simulate the cone-of-fire, so all I have to do now is get the distribution right.

    Now, as I've written, I already have a way of calculating the PMF, but since it's recursive and no computer has infinite memory or processing power, I can only calculate it up to a certain point (or in this case, bullet) before it becomes computationally infeasible. Hence I'm truncating the distribution as an approximation.

    The problem is that I don't know how good this approximation will be for the mean, variance, median and mode. For the mode, I have a sufficient condition for when the mode of a truncated, arbitrary discrete distribution is a mode of the original distribution. And for the median, I can check the CDF and maybe estimate the error from there.

    But for the mean and variance, I think I can get lower bounds on the errors based on Markov's inequality, maybe even Chebyshev's inequality. I don't know of a good approach for getting upper bounds. Perhaps even worse: since the Negative Poisson-Binomial distribution only models bullets, transforming to the time domain puts the random variable of the distribution inside a function. That function isn't necessarily a bijection either, and is only linear during fully automatic fire (excluding reloads), so the resulting distribution may change.

    I can understand though if there are no upper bounds since the function may grow faster than the PMF such that the mean and variance are infinite or undefined.
     
    Last edited: Jul 6, 2014
  6. Jul 6, 2014 #5

    Stephen Tashi


    You should state the mathematical problem you are trying to solve. Your previous descriptions are very colorful and they mention mathematical terms. However, you haven't clearly stated what is given or what you are trying to solve for. For example, if each shot has a different probability of kill, then what is the distribution of these different probabilities of kill? What quantity characterizes the "efficiency" of a gun?
     
  7. Jul 9, 2014 #6
    First, the quantity that defines the efficiency of a weapon is the time-to-kill (TTK). The lower the TTK, the better. The TTK is defined as [itex]T = t_n - t_1[/itex], where [itex]t_i[/itex] is the point in time at which the [itex]i[/itex]th bullet hits the target, with [itex]t_1 = 0[/itex] and [itex]t_n[/itex] being the point in time of the last, lethal shot hitting the target. If the travel distance is kept (approximately) constant, then the time sequence may instead be defined by when the bullets were fired, with the final result translated by the travel time.

    Now the weapon model: It is assumed that
    • the target is stationary and simplified to a rectangle with dimensions estimated to the orthogonal size of the player model when viewed from the front
    • the center of the target is at distance d from the apex
    • the number of bullets, k, needed to kill the target is constant for a given d
    • the axis of the cone-of-fire is centered at the center of the target initially
    • recoil is negligible ("disabled" for now, but may be re-enabled later)
    • bullet drop is negligible
    • each weapon has a maximum rate-of-fire
    • each weapon has infinite ammo but cannot fire continuously forever (reload, overheat etc.)
    • reload time is not negligible
    • each weapon has a minimum and maximum cone-of-fire
    • the cone-of-fire starts at minimum
    • for each shot fired, the cone-of-fire expands (blooms)
    • the cone-of-fire retracts by a given rate between shots and fully resets during reload.

    So, with that out of the way, let [itex]p_i[/itex] be the probability of the independent event of the ith bullet hitting the target (Bernoulli trial). [itex]p_i = Pr(\text{Bullet } i \text{ hits}) = \frac{\omega_i}{\Omega_i}[/itex] where [itex]\Omega_i[/itex] is the solid angle of the spherical cap of the cone-of-fire and [itex]\omega_i[/itex] is the solid angle of the portion of the target that lies inside the cone-of-fire. (Shotgun-like weapons may have a slightly different definition depending on the mechanics but it's still based on solid angles.)

    Here, [itex]\Omega_i = 2\pi (1 - \cos \theta_i)[/itex] where [itex]2\theta_i[/itex] is the apex angle of the cone-of-fire for the ith bullet. [itex]\omega_i[/itex] is calculated using two algorithms:
    1. If all four corners of the rectangle are in front of the apex (positive scalar product of the axis of the cone and the position vector of each corner relative to the apex), centrally project that rectangle as seen from the apex at distance d. If at least one corner of the polygon is inside the circle, which is formed by the orthogonal intersection of a plane at distance d with the cone, form new polygons from each corner to the center of the circle. From these polygons, find the intersection areas and calculate the signed solid angles these subtend and add them. This sum is [itex]\omega_i[/itex].

      If all corners lie outside the circle, [itex]p_i = 0[/itex]
    2. If the previous algorithm no longer applies, sample a large number of uniformly distributed directional vectors from inside the cone. Divide the number of lines, formed by these vectors, that intersects the target by the sample number to get [itex]p_i[/itex] directly. (Monte Carlo approach)
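    A minimal sketch of the Monte Carlo step (algorithm 2), assuming the cone axis points along +z and the target is an axis-aligned rectangle centred on the axis in the plane z = d; the function and parameter names are my own:

    ```python
    import math
    import random

    def hit_probability_mc(theta, half_w, half_h, d, n_samples=100000, seed=0):
        """Estimate p_i = omega_i / Omega_i by sampling directions uniformly
        over the spherical cap of half-angle theta and counting the rays that
        hit the rectangle (half-width half_w, half-height half_h) at z = d."""
        rng = random.Random(seed)
        cos_t = math.cos(theta)
        hits = 0
        for _ in range(n_samples):
            # Uniform on the cap: z is uniform on [cos(theta), 1] (Archimedes'
            # hat-box theorem), azimuth is uniform on [0, 2*pi).
            z = rng.uniform(cos_t, 1.0)
            phi = rng.uniform(0.0, 2.0 * math.pi)
            s = math.sqrt(1.0 - z * z)
            x, y = s * math.cos(phi), s * math.sin(phi)
            # Intersect the ray with the target plane z = d.
            t = d / z
            if abs(x * t) <= half_w and abs(y * t) <= half_h:
                hits += 1
        return hits / n_samples
    ```

    When the rectangle covers the whole footprint of the cone the estimate should be exactly 1; shrinking the rectangle below the footprint drives it below 1.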

    Now, define the infinite-dimensional vector [itex]\overline{p} \in [0,1]^{\infty\times 1}[/itex] whose [itex]i[/itex]th component is [itex]p_i[/itex]. The vector is also periodic (components repeating themselves) as a result of the reload. If there is no reload, this still applies if the firing sequence is periodic with enough time between each period to reset the cone-of-fire. Otherwise, it may still be periodic at the tail as the cone-of-fire bounces around its maximum value.

    Let [itex]NPB(\overline{x},r)[/itex] be the Negative Poisson-Binomial distribution with infinite-dimensional vector [itex]\overline{x}[/itex], whose components [itex]x_i[/itex] are probabilities, and where [itex]r[/itex] is a positive integer representing the number of successes needed. The distribution has support on all integers [itex]a \geq r[/itex].

    If the random variable [itex]A \sim NPB(\overline{x},r)[/itex] and [itex]B_j \sim PB([I_j | Z_j] \overline{x})[/itex], where [itex]PB([I_j | Z_j]\overline{x})[/itex] is the Poisson-Binomial distribution for [itex]\overline{x}[/itex] multiplied by an infinite-dimensional block matrix consisting of a finite-dimensional identity matrix [itex]I_j[/itex] and an infinite-dimensional zero matrix [itex]Z_j \in \lbrace 0 \rbrace ^{j \times \infty}[/itex], then the probability mass function is defined as [itex]Pr(A = a) = p_A(a) = p_{B_{a-1}}(r-1) x_a[/itex] when [itex]a \geq r > 0 [/itex].

    So, let therefore [itex]N[/itex] be a random variable representing the number of bullets fired such that [itex]N \sim NPB(\overline{p},k)[/itex]. Let [itex]f(n)[/itex] be a monotonically increasing, surjective function that maps integers n > 0 to a point in time with [itex]f(1) = 0[/itex]. These points in time are the times when shots are fired. Note that multiple shots may be fired at the same time e.g. a shotgun.

    Therefore, the TTK would be [itex]T = f(N)[/itex], now a random variable. Now given the TTK of two weapons, [itex]T_u[/itex] and [itex]T_v[/itex], one needs to determine which is bigger. This is usually done by comparing central tendency, for which I know two that can give meaningful data: mean and median. The mode is optional.

    However, due to computational limitations, the distribution is truncated from the right at the [itex]m[/itex]th point in time (or bullet, if many bullets are fired simultaneously). Let [itex]\widehat{T} \approx T[/itex] be the truncated approximation, whose PMF equals that of [itex]T[/itex] divided by [itex]\alpha[/itex] on the restricted support. Here, [itex]\alpha[/itex] is the normalization factor; if the support is restricted to the [itex]m[/itex]th point in time, then [itex]\alpha = Pr(T \leq t_m)[/itex].

    If the mean and median are then approximated as [itex]\mu_T \approx \mu_{\widehat{T}}~, m_T \approx m_{\widehat{T}}[/itex] respectively, what are the errors of these approximations? Are there any better approximations?
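    As a sketch of what the approximation amounts to in practice (Python, my own naming): renormalize the known PMF mass by [itex]\alpha[/itex] and read the mean and median off the truncated distribution.

    ```python
    def truncated_estimates(values, pmf_vals):
        """Given the known support values (ascending) and their un-normalized
        PMF values (the tail is missing), return the normalization factor alpha
        and the mean and median of the renormalized truncated distribution."""
        alpha = sum(pmf_vals)
        mean = sum(v * p for v, p in zip(values, pmf_vals)) / alpha
        cum, median = 0.0, values[-1]
        for v, p in zip(values, pmf_vals):
            cum += p / alpha
            if cum >= 0.5:  # first value whose renormalized CDF reaches 1/2
                median = v
                break
        return alpha, mean, median
    ```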
     
    Last edited: Jul 9, 2014
  8. Jul 17, 2014 #7
    Alright, I might have something. Instead of dealing with the mean and variance which may or may not be infinite or undefined, I'll just focus on using the median and median absolute deviation. Furthermore, I might be able to use the truncated distribution to my advantage.

    Lemma 1: For a discrete distribution, to find a median you only need to know the first [itex]k[/itex] values from the left such that their cumulative probability is greater than 0.5.

    To show this, recall the definition of a median of a discrete distribution: [itex]x_m[/itex] is a median of the discrete distribution [itex]X[/itex] iff [itex]Pr(X \leq x_m) \geq \frac{1}{2} \wedge Pr(X \geq x_m) \geq \frac{1}{2}[/itex]. This is equivalent to [itex]F_{X}(x_m) \geq \frac{1}{2} \wedge F_X(x_{m-1}) \leq \frac{1}{2}[/itex]. The first condition is satisfied exactly by those [itex]x[/itex]-values whose mapped points lie at or above the line [itex]y = 0.5[/itex]. The second condition is satisfied by all [itex]x[/itex]-values whose mapped points lie at [itex]y = 0.5[/itex], plus the first point from the left that lies above it. The set intersection then proves the lemma. (For the case where the leftmost [itex]x[/itex]-value is a median, note that any point before it has probability zero and thus the second condition is still satisfied.)

    Hence, if the normalization factor [itex]\alpha > 0.5[/itex] for a distribution, which is truncated from above, you will always be able to find the median.
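    Lemma 1 translates directly into a short scan over the known PMF values (a sketch, my own naming; it returns None precisely when the truncation removed too much mass for the median to be pinned down):

    ```python
    def median_from_partial_pmf(values, probs):
        """Scan the known support in increasing order, accumulating probability;
        the first value whose cumulative probability reaches 1/2 is a median of
        the FULL distribution, by Lemma 1. None if the known mass is below 1/2."""
        cum = 0.0
        for v, p in zip(values, probs):
            cum += p
            if cum >= 0.5:
                return v
        return None
    ```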

    Now, for the median absolute deviation (MAD). First, let [itex]Y = |X - x_m|[/itex]. What happens with the MAD when [itex]X[/itex] is truncated from above at [itex]x_b[/itex]? Assume that [itex]x_m \leq x_b[/itex] and [itex]\alpha > 0.5[/itex].

    Well, the sum of all probabilities for [itex]X \leq x_b[/itex] must equal [itex]\alpha[/itex]. So one would think one can always find the median of [itex]Y[/itex], no? Well, that's not true in this case, since [itex]Y[/itex] itself is not necessarily truncated from above. All deviations for [itex]X < x_m[/itex] would be negative, so they are flipped to the positive side by the absolute value function. Hence we have known values mixed with unknown values.

    However, if the leftmost [itex]x[/itex]-value is at a smaller distance from the median than the truncation point, then the distribution is truncated from above as all the unknown values have a greater distance than the known values and thus the values aren't mixed. Thus if [itex]\alpha > 0.5[/itex], the MAD can always be found due to the first lemma.

    What about when that isn't the case? Let's look at the cumulative distribution function (CDF) of [itex]Y[/itex]. The CDF of [itex]Y[/itex] must be the sum of the cumulative function of the known probabilities and the cumulative function of the unknown probabilities. Since all probabilities are non-negative, the known cumulative function is a lower bound of the CDF. It follows, since [itex]\alpha > 0.5[/itex] and cumulative functions of non-negative numbers increase monotonically, that an upper bound of the MAD is the largest median [itex]\hat{y}_m[/itex] obtained from the known values.

    What about a lower bound? The trivial answer would be zero. It is known, however, that the unknown values must have a distance from the median greater than [itex]x_b - x_m[/itex], by virtue of being truncated from above. Therefore, if the values are mixed then this distance is a lower bound for the MAD. Hence, as [itex]x_b - x_m[/itex] approaches [itex]x_m - x_0[/itex] where [itex]x_0[/itex] is the leftmost [itex]x[/itex]-value, the error must approach 0 since the normalization factor is greater than 0.5 and the MAD will then always be found due to the first lemma.

    In conclusion, to feasibly solve the problem:

    1. Generate enough PMF values (truncated from above) such that the normalization factor (>0.5) is large enough. Do not, however, apply the normalization factor. Call this function a pseudo-PMF.
    2. Find the median [itex]x_m[/itex] of this pseudo-PMF (if multiple, I suggest using the average, the middle or the leftmost one).
    3. Generate the pseudo-PMF for the absolute distance from the median using the known values.
    4. If the largest distance left of the [itex]x_m[/itex] is smaller than the largest distance right of the [itex]x_m[/itex] (call this distance [itex]d[/itex]), find the median of this new pseudo-PMF to find the exact MAD.
    5. If not, estimate it as the median of the new pseudo-PMF while using the largest median of this pseudo-PMF as the upper bound and [itex]d[/itex] as the lower bound. Alternatively, use the average value of the error bounds as the estimate with the difference as the error.
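    The five steps above can be sketched as follows (taking the leftmost median in step 2; all names are mine):

    ```python
    def median_and_mad_bounds(values, probs):
        """values: known support (ascending), probs: known un-normalized PMF
        values, truncated from above; sum(probs) must exceed 1/2. Returns
        (median, mad_estimate, mad_lower, mad_upper); the bounds coincide
        when the MAD is exact."""
        assert sum(probs) > 0.5, "normalization factor must exceed 1/2"
        # Step 2: leftmost median of the pseudo-PMF.
        cum, x_m = 0.0, values[-1]
        for v, p in zip(values, probs):
            cum += p
            if cum >= 0.5:
                x_m = v
                break
        # Step 3: pseudo-PMF of the absolute deviation |X - x_m|.
        dev = {}
        for v, p in zip(values, probs):
            dev[abs(v - x_m)] = dev.get(abs(v - x_m), 0.0) + p
        # Median of the deviation pseudo-PMF (its CDF is a lower bound of Y's).
        cum, upper = 0.0, max(dev)
        for y in sorted(dev):
            cum += dev[y]
            if cum >= 0.5:
                upper = y
                break
        left_reach = x_m - values[0]    # largest deviation left of the median
        right_reach = values[-1] - x_m  # distance to the truncation point
        if left_reach < right_reach:
            # Step 4: known deviations aren't mixed with unknown ones -> exact.
            return x_m, upper, upper, upper
        # Step 5: two-sided bound; the estimate is the pseudo-PMF median.
        return x_m, upper, right_reach, upper
    ```

    Running this on the example distribution further down reproduces the numbers quoted there (median 6 with MAD bounds [1, 4] at truncation point 7, and bounds that collapse to 3 at truncation point 9).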

    What do you guys think of this? Any flaws or improvements?

    Example, let X have this distribution, with probabilities randomly selected from a uniform distribution on 0 to 1, rounded to two decimals and then normalized:
    Code (Text):
        1.0000    0.0326
        2.0000    0.1609
        3.0000    0.0631
        4.0000    0.1079
        5.0000    0.0346
        6.0000    0.1222
        7.0000    0.0530
        8.0000    0.1324
        9.0000    0.1405
       10.0000    0.1527
    It has the CDF:
    Code (Text):
        1.0000    0.0326
        2.0000    0.1935
        3.0000    0.2566
        4.0000    0.3646
        5.0000    0.3992
        6.0000    0.5214
        7.0000    0.5743
        8.0000    0.7067
        9.0000    0.8473
       10.0000    1.0000
    Now, we can clearly see that 6 is the median. I also only need to know the 6 first values to determine that, so let's truncate to 7. (Normalization factor is 0.5743)

    In the same order, the probability mass relation for the absolute deviation from the median:
    Code (Text):
        5.0000    0.0326
        4.0000    0.1609
        3.0000    0.0631
        2.0000    0.1079
        1.0000    0.0346
             0    0.1222
        1.0000    0.0530
        2.0000    0.1324
        3.0000    0.1405
        4.0000    0.1527
    And the PMF:
    Code (Text):
             0    0.1222
        1.0000    0.0876
        2.0000    0.2403
        3.0000    0.2037
        4.0000    0.3136
        5.0000    0.0326
    The pseudo-PMF, with the truncation parameter at 7:
    Code (Text):
             0    0.1222
        1.0000    0.0876
        2.0000    0.1079
        3.0000    0.0631
        4.0000    0.1609
        5.0000    0.0326
    The respective cumulative functions:
    Code (Text):
             0    0.1222    0.1222
        1.0000    0.2098    0.2098
        2.0000    0.4501    0.3177
        3.0000    0.6538    0.3809
        4.0000    0.9674    0.5418
        5.0000    1.0000    0.5743
    Clearly, the cumulative function for the pseudo-PMF is a lower bound function. The upper bound of the MAD is therefore 4, as this is the largest median of the lower bound function. The lower bound of the MAD is 1 as this is the distance between the median 6 and the truncation point 7. The actual MAD is 3.

    So, with a normalization factor of 0.5743, we got an exact median of 6 with a MAD estimated at 4, bounded above by 4 and below by 1. Alternatively, MAD is estimated at 2.5 +- 1.5.

    Raising the truncation parameter to 8 (normalization factor 0.7067) gives us the cumulative functions:

    Code (Text):
             0    0.1222    0.1222
        1.0000    0.2098    0.2098
        2.0000    0.4501    0.4501
        3.0000    0.6538    0.5132
        4.0000    0.9674    0.6741
        5.0000    1.0000    0.7067
    Again, median is at 6 with MAD now estimated at 3 +- 1. Raising the truncation parameter to 9 (normalization factor 0.8473) gives us:
    Code (Text):
             0    0.1222    0.1222
        1.0000    0.2098    0.2098
        2.0000    0.4501    0.4501
        3.0000    0.6538    0.6538
        4.0000    0.9674    0.8147
        5.0000    1.0000    0.8473
    Now we get an exact value for the MAD.

    For posterity, the mean is 6.103 and the standard deviation is 2.9503.
     
  9. Jul 18, 2014 #8

    Stephen Tashi


    What is a "Negative Poisson-Binomial distribution with an infinite dimensional vector"? You are inventing terminology and leaving the reader to guess what it means.

    I think you want the efficiency of a weapon defined to be the mean time to kill.

    The general situation you describe is that there is an infinite sequence of probabilities (x[1],x[2],...). A gun takes shots at a target. The k-th shot occurs at time t[k]. The probability the gun hits the target on the k-th shot is x[k]. We know the given integer N, which is the number of shots needed to kill the target. Define the random variable Y to be the time t[k] at which the gun makes its Nth hit on the target.

    You apparently are interested in the mean value of Y. However, you aren't making it clear which variables in this problem have some random variation in them. For example, t[0] is defined to be 0. But is t[1] always the same given value? Or does t[1] vary in some random fashion? Likewise, is x[1] a constant probability ? Or are the t[k] and x[k] also random variables?
     
  10. Jul 18, 2014 #9
    The infinite dimensional (column) vector [itex]\bar x[/itex] is defined as a column vector with the dimension [itex]\infty \times 1[/itex]. In the context of the Negative Poisson-Binomial distribution, [itex]\bar x \in [0,1]^{\infty \times 1}[/itex]. It's there to simplify the representation of the infinite probability sequence. Is there a different term for this I'm not aware of? Or maybe it's just Matlab-speak.

    Quite, or perhaps more generally it's the central tendency. I hadn't quite decided which measure to use at that time since it is rather difficult (for me) to tell what the actual value is (either the exact value or the error bound on an approximation).

    Let's see if I can make this clear.
    Let [itex]k[/itex] be the index for each shot. [itex]k = 1, 2, 3, ...[/itex].
    We have two sequences, one for the probability of the [itex]k[/itex]th shot hitting the target [itex](p_k)^{\infty}_{k=1}[/itex] and one for the time at which that shot was fired [itex](t_k)^{\infty}_{k=1}[/itex].
    These sequences are not random.
    The time sequence is defined as [itex]t_k = f(k)[/itex] where [itex]f(k)[/itex] is a non-negative, monotonically increasing (NB!) function defined for all [itex]k[/itex] with [itex]f(1) = 0[/itex]. This function is bounded below by the equivalent function of the fully automatic firing of the weapon.
    The probability sequence is defined as [itex]p_k = g(k)[/itex] where [itex]g(k) \in [0,1], \forall k[/itex]. This function depends on the time between each shot, the cone-of-fire mechanic, as well as the shape of the target and the distance to it. Again, not random. It is also computationally complex as far as I know.

    Define [itex]Y[/itex] to be the time at which the [itex]n[/itex]th hit occurs. Define [itex]X[/itex] to be the number of shots it took to land [itex]n[/itex] hits with the last shot being a hit. [itex]X[/itex] follows a Negative Poisson-Binomial distribution with the parameters [itex](p_k)^{\infty}_{k=1}[/itex] (the probabilities) and [itex]n[/itex] (the number of successes). When [itex]p_k[/itex] is constant for all [itex]k[/itex], this means [itex]X[/itex] follows the more specific Negative Binomial distribution. In any case, [itex]Y[/itex] can then be written as [itex]Y = f(X)[/itex]. We want to measure the central tendency (choose your pick) of [itex]Y[/itex].

    For instance, the mean would be [itex]\sum^{\infty}_{k=n}f(k) p_X(k)[/itex] where [itex]p_X(k)[/itex] is the probability mass function of [itex]X[/itex]. However, since I don't know [itex]f(k)[/itex] in advance, I have no idea whether or not this sum even converges let alone its value. Hence why I'm thinking about using the median instead, which is also more robust.
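    One crude numerical check: when [itex]f \geq 0[/itex] the partial sums of that series increase monotonically, so each one is a lower bound on the mean (when the mean exists at all). A sketch, with a hypothetical [itex]f[/itex] and PMF (all names mine):

    ```python
    def partial_mean_sums(f, pmf, k_max):
        """Partial sums of E[Y] = sum_k f(k) * p_X(k) up to k_max. With f >= 0
        these are monotone, so each entry is a lower bound on the mean of Y
        whenever that mean exists."""
        total, partials = 0.0, []
        for k in range(1, k_max + 1):
            total += f(k) * pmf.get(k, 0.0)
            partials.append(total)
        return partials
    ```

    In the negative binomial special case with constant [itex]p_k[/itex] and fully automatic fire (linear [itex]f[/itex]), the limit is known in closed form, which makes a handy sanity check.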
     
  11. Jul 19, 2014 #10

    Stephen Tashi


    Unless you put some conditions on the [itex]p_k[/itex]'s, that may not define [itex]Y[/itex] as a real number. For example, if [itex] n = 2 [/itex] and [itex] p_1 = p_2 = 0.5 [/itex] and [itex] p_k = 0 [/itex] for [itex] k > 2 [/itex].

    Now that we have the situation stated clearly, what is your question? Are you asking for a good way to estimate the mean or median of [itex] Y [/itex] from some empirical data?
     
  12. Jul 20, 2014 #11
    I think it still does define it; it's just that [itex]Pr(Y = t_k) = 0 , k > 2[/itex] when [itex]f(k)[/itex] is bijective. I suppose crazy stuff might happen if [itex]p_k = 0 [/itex], such as the PMF of [itex]X[/itex] no longer summing to 1, which your example shows:
    Code (Text):
        k    p
        2    0.25
        3    0
        4    0
        5    0
        .    .
        .    .
        .    .
    I'm not sure if it's okay to normalize it at that point.

    At this point, I'm willing to just work with the median and MAD, since there's so little I can assume about [itex]f(k)[/itex] to get a meaningful value for the mean and variance. But I'm not working with samples. If, in theory, I had infinite time and memory, I could calculate all values of the PMF directly.

    However, since I don't, and the calculations can't be assumed to be computationally feasible for large [itex]k[/itex], I have to estimate them from a truncated distribution, namely by truncating the distribution from above. In one of my previous posts, I showed a possible method of doing this by making sure the normalization factor is greater than one half. But again, I'm not sure if it's fully correct or whether there are any improvements that can be made.
     
  13. Jul 20, 2014 #12

    Stephen Tashi


    The definition of [itex] Y [/itex] must specify an outcome, "the time at which the [itex]n[/itex]th hit occurs". There must be a time when the [itex] n[/itex]th hit occurs. The fact that there are many times when it can't occur, doesn't specify when it does occur.

    But is the question how to estimate the median from empirical data?
     
  14. Jul 20, 2014 #13
    I'm not sure what you mean there. [itex]Y[/itex] can only take on values from [itex]t_k[/itex] (the times at which bullets can hit the target). Whether they have a non-zero chance of occurring or not shouldn't change that. Unless there's a mathematics "fine print" I'm missing.
    If by empirical data you mean actually testing in-game and noting down the times it took to kill one's opponent, then no. This is more akin to having access to the actual PMF of the distribution and working out the statistic (mean, median etc.) from there, except this time you only have access to the first [itex]m[/itex] probabilities due to computational reasons.
     
  15. Jul 20, 2014 #14

    Stephen Tashi


    Look at the PMF in your example. The probabilities sum to 0.25, not 1.0. That illustrates that there is something wrong with the definition of the random variable [itex] Y [/itex]. A specific "realization" of a real valued random variable must always be a specific real number.

    Is it "akin" to that or is the problem exactly that? It's an interesting problem! You want to estimate a population parameter of a random variable from partial information about its PMF. I don't know any pat answer for how to do that. I think the best you can do is get some bounds on the possible values. Should getting bounds be the goal?
     
  16. Jul 20, 2014 #15
    Well let's see. It's meant to behave similarly to the Negative Binomial distribution, except the success probability is no longer constant. So the general formulation of its PMF shouldn't change i.e. [itex]Pr(n\text{th success on the }k\text{th trial}) = Pr(n-1\text{ successes in }k-1 \text{ trials})Pr(\text{The }k\text{th trial is a success})[/itex] with [itex]n[/itex] constant, no? This is what the PMF of the Negative Poisson-Binomial distribution looks like in general and is the PMF of landing the [itex]n[/itex]th hit with the [itex]k[/itex]th bullet aka [itex]X[/itex]. Clearly, the support is [itex]k \geq n[/itex]. Letting [itex]Y = f(X)[/itex] should therefore be valid.

    So is the problem then that [itex]p_k[/itex] lies in a closed interval instead of an open one as in the Negative Binomial? If that's the case, then it's a bit of a downer since there are times when you're guaranteed to land all bullets (point blank range) and times when you can't (recoil pushing your cone-of-fire above the target). Or is there a conditional probability that I've missed?

    Yeah, pretty much. I was thinking that since I only know so many values, trying to calculate the mean with so little information on the function of values is almost pointless. I wouldn't even know if the infinite weighted sum would converge.

    The median and MAD, however, depend much less on those values though and more on the PMF so it would probably make more sense to use them. And as I've stated before, I believe you shouldn't need to know all the values (right of the median) to find the median due to the very definition of the median.

    In any case, in the small-scale tests I've done with that method, I'm almost certain that I can get an exact median (given a criterion) and at least a two-sided bound on the MAD, such that the relative error when estimating the MAD as the average of the bounds appears, numerically at least, to be proportional to the cumulative probability of the complement of the known probabilities.
     
    Last edited: Jul 21, 2014