kelly0303 said:
Which one of these is the right way?
It's possible to present information in terms of vague concepts like "errors" or "uncertainties" and follow certain traditional methods for computing such quantities. However, if publishing statistics is going to be a frequent part of your work, I suggest you get a clear idea of the basic scenario for statistical estimation - or use more precise vocabulary if you already have a clear idea.
Numbers that can be calculated from sample data are not the parameters of the distribution from which the sample is taken. Instead, they can only be estimators of those parameters. Estimators are not right or wrong in the same sense that a proposed solution to the equation ##2x + 3 = 5## is right or wrong. For most distributions, the odds are that any estimator will be wrong in that sense. Since samples are random variables, estimators are also random variables, and so estimators have probability distributions. The goodness or badness of an estimator is mathematically defined by the properties of its distribution. The notion that an estimator is "right" may be interpreted as saying it has certain desirable properties (e.g. unbiased, minimum variance, maximum likelihood, etc.) - or it can be interpreted as saying that the person using the word "right" doesn't understand the concept of estimators!
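To make that concrete, here is a minimal simulation sketch (my addition, with an arbitrary illustrative value ##\lambda = 84##, not anything from the thread): a single observed Poisson count, used as an estimate of the Poisson mean, is itself a random variable whose distribution we can examine by repeating the experiment.

```python
# Sketch: a single observed Poisson count as an estimator of the mean lambda.
# Repeating the "experiment" many times reveals the estimator's own
# distribution. (lambda = 84 is an arbitrary illustrative value.)
import numpy as np

rng = np.random.default_rng(0)
lam = 84.0
n_repeats = 100_000

estimates = rng.poisson(lam, size=n_repeats).astype(float)

print("true lambda:          ", lam)
print("mean of the estimator:", estimates.mean())  # ~ lambda: unbiased
print("std of the estimator: ", estimates.std())   # ~ sqrt(lambda) ~ 9.2
```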
I think what @Dale alludes to is the distinction between a Poisson distribution and a Skellam distribution:
https://en.wikipedia.org/wiki/Skellam_distribution
Rephrasing the questions in more precise language:
If I want to subtract the background, I get 84 counts.
Let ##S_b =## the observed counts in the sample of signal+background. Let ##B =## the observed counts in the sample of background-only. You intend to estimate the mean of the signal-only distribution as ##\hat{\mu} = S_b - B##.
##\hat{\mu} = S_b - B## is an intuitively pleasing idea for an estimator. We should check what properties this estimator has. (What would we do if ##S_b - B < 0##? That would illustrate Dale's point.)
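To see what this estimator does, here is a small simulation (my sketch, not from the thread). I assume hypothetical Poisson means consistent with the numbers quoted below: ##\sqrt{4^2 + 10^2}## implies observed counts of ##S_b = 100## and ##B = 16##, so take the true signal mean to be 84 and the background mean to be 16. Note that the difference of two independent Poisson variables is exactly a Skellam random variable.

```python
# Sketch of the estimator mu_hat = S_b - B, assuming (hypothetically)
# Poisson means of 100 for signal+background and 16 for background-only,
# i.e. a true signal mean of 84. S_b - B then follows a Skellam distribution.
import numpy as np

rng = np.random.default_rng(1)
mu_sig, mu_bkg = 84.0, 16.0
n_repeats = 100_000

S_b = rng.poisson(mu_sig + mu_bkg, size=n_repeats).astype(float)
B = rng.poisson(mu_bkg, size=n_repeats).astype(float)
mu_hat = S_b - B

print("mean of mu_hat:", mu_hat.mean())        # ~ 84: unbiased for the signal mean
print("std of mu_hat: ", mu_hat.std())         # ~ sqrt(84 + 2*16) = sqrt(116)
print("P(mu_hat < 0): ", (mu_hat < 0).mean())  # negligible here, but nonzero
                                               # for weak signals (Dale's point)
```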
I am not sure what error to put on this number.
The distribution of ##\hat{\mu}## has a standard deviation ##\sigma_{\hat{\mu}}##. You want a good estimator for it. (I.e., this is not a question about the standard deviation of the signal-only distribution. Instead it's a question about the standard deviation of an estimator for the mean of that distribution.)
If I use Poisson, I would have ##84 \pm \sqrt{84}##.
One possible estimator for the standard deviation of ##\hat{\mu}## is ##\sqrt{|S_b - B|}##.
If I use the error propagation for taking the difference I get an error of ##\sqrt{4^2+10^2}=\sqrt{116}## so I get ##84 \pm \sqrt{116}##.
Another possible estimator for the standard deviation of ##\hat{\mu}## is ##\sqrt{S_b + B}##.
Which one of these is the right way?
Better phrased as: how do the properties of these two estimators compare?
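Here is a Monte Carlo sketch of that comparison, under the same hypothetical means as above (so the true value of ##\sigma_{\hat{\mu}}## is ##\sqrt{116} \approx 10.8##):

```python
# Compare the two proposed estimators of sigma_mu_hat by simulation,
# assuming hypothetical Poisson means of 100 (signal+background) and 16
# (background-only). The true sigma_mu_hat is sqrt(100 + 16) = sqrt(116).
import numpy as np

rng = np.random.default_rng(2)
mu_sig, mu_bkg = 84.0, 16.0
n_repeats = 100_000

S_b = rng.poisson(mu_sig + mu_bkg, size=n_repeats).astype(float)
B = rng.poisson(mu_bkg, size=n_repeats).astype(float)

est1 = np.sqrt(np.abs(S_b - B))  # sqrt(|S_b - B|), the "84 +- sqrt(84)" idea
est2 = np.sqrt(S_b + B)          # sqrt(S_b + B), the error-propagation idea

print("true sigma:    ", np.sqrt(mu_sig + 2 * mu_bkg))  # sqrt(116) ~ 10.8
print("mean of est. 1:", est1.mean())  # near sqrt(84) ~ 9.2: biased low
print("mean of est. 2:", est2.mean())  # near sqrt(116): roughly unbiased
```

In this setup the first estimator is systematically low because it ignores the background fluctuations, which enter ##\hat{\mu}## through both ##S_b## and ##B##.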
----------
Most readers of a published estimate for the standard deviation of ##\hat{\mu}## (or any other statistical estimate!) will interpret the value as if it were an estimate for a normal distribution. So they will think that deviations of one sigma, two sigma, etc. have the probabilities that hold for a normal distribution. Often this way of thinking is approximately correct. However, it's worth checking whether thinking this way is approximately correct for Poisson and Skellam distributions.
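For instance, here is a quick coverage check (my sketch, same hypothetical means as above); the normal-distribution values to compare against are about 0.683 and 0.954.

```python
# Check whether "k sigma" carries its normal-distribution probability for
# the Skellam distribution of mu_hat (hypothetical means as above).
import numpy as np

rng = np.random.default_rng(3)
mu_sig, mu_bkg = 84.0, 16.0
n_repeats = 1_000_000

mu_hat = (rng.poisson(mu_sig + mu_bkg, size=n_repeats)
          - rng.poisson(mu_bkg, size=n_repeats)).astype(float)
sigma = np.sqrt(mu_sig + 2 * mu_bkg)  # true std dev of mu_hat

for k in (1, 2):
    cover = (np.abs(mu_hat - mu_sig) <= k * sigma).mean()
    print(f"P(|mu_hat - mu| <= {k} sigma) = {cover:.4f}")
# With counts this large the Skellam is nearly normal, so the results land
# close to 0.6827 and 0.9545; for small counts they generally would not.
```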