- #1
kelly0303
Hello! I have two probabilities ##p_1## and ##p_2## which govern the probability of measuring certain events. I measure event 1 N times and get ##N_1## counts, and event 2 N times and get ##N_2## counts. Then I need to build the function ##A = \frac{N_1-N_2}{N_1+N_2}##. I am trying to estimate the uncertainty on ##A## using Monte Carlo, but I can't seem to get it right.

What I am doing now is to generate N events of type 1 and N events of type 2, sampled from binomial distributions with probabilities ##p_1## and ##p_2##, respectively (I am using ##p_1 = 0.06## and ##p_2 = 0.02##). I sum the events for each case to obtain ##N_1## and ##N_2##, then I compute ##A##. I repeat this 10000 times and use the mean and standard deviation of the 10000 values of A as the estimate and uncertainty for A, which should be ##\frac{0.06-0.02}{0.06+0.02} = 0.5##.

However, when I tried N = 100 I obtained ##0.496 \pm 0.328##, for N = 1000 I obtained ##0.499 \pm 0.097##, and for N = 10000 I obtained ##0.500 \pm 0.030## (or, in general, values very close to these numbers for a given N). It seems like the actual estimate of A is much closer to the true value (0.5) than the obtained uncertainty would suggest. Am I doing something wrong? Do I need to divide by ##\sqrt{N}## at some point (the uncertainty already seems to scale as ~##\sqrt{N}##, so I assumed this is not needed)?
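For reference, the procedure I describe above can be sketched as follows (a minimal NumPy version; the variable names and the fixed seed are my own choices, and I exploit the fact that the sum of N Bernoulli trials is a single binomial draw):

```python
import numpy as np

rng = np.random.default_rng(0)

p1, p2 = 0.06, 0.02   # event probabilities
N = 1000              # number of measurements per event type
n_mc = 10000          # number of Monte Carlo repetitions

# Summing N Bernoulli(p) trials is equivalent to one Binomial(N, p) draw,
# so each repetition produces the total counts N1 and N2 directly.
N1 = rng.binomial(N, p1, size=n_mc)
N2 = rng.binomial(N, p2, size=n_mc)

A = (N1 - N2) / (N1 + N2)

# Mean and standard deviation over the 10000 repetitions;
# for N = 1000 this gives roughly 0.5 +/- 0.1, as quoted above.
print(f"A = {A.mean():.3f} +/- {A.std():.3f}")
```

(For very small N there is a chance that ##N_1 + N_2 = 0## and the ratio is undefined; for N = 1000 with these probabilities that essentially never happens.)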