Combining Poisson error

  • #1

Main Question or Discussion Point

I have low counting stats and need to subtract background, account for efficiency, and divide by volume. How do I combine the asymmetrical (Poisson) errors?
 

Answers and Replies

  • #2
Orodruin
Staff Emeritus
Science Advisor
Homework Helper
Insights Author
Gold Member
Exactly what kind of analysis are you looking at performing? Are you planning to test a hypothesis? Are you going to make a confidence interval for some parameter?
 
  • #3
I want to eventually test the hypothesis that one sample is greater than the controls, and whether two samples are different from each other. That part I am OK with, but I have to show all of my calculations for how I can mathematically prove the values are different.

I have small counts in 60 fields of view on a scope and I was propagating error following Gaussian error propagation, which I now know is wrong. But what do I do with these asymmetrical error bars when I want to know sample (+/- error) minus control (+/- error)?
 
  • #4
Stephen Tashi
Science Advisor
I suggest you have another try at stating your question, unless you are writing to someone on the forum who already knows what kind of experiment you are doing.
 
  • #5
How do I subtract a Poisson background from a Poisson sample and propagate the error associated with each?
 
  • #6
Stephen Tashi
Science Advisor
How do I subtract a Poisson background from a Poisson sample and propagate the error associated with each?
That isn't a description of an experiment. As far as I know, it isn't a description of a specific problem in statistics.
 
  • #7
I am counting the number of particles in 60 fields of view on a scope. I count three pieces of a filter for a sample and three pieces of a filter for a control. All of my counts in 60 fields of view are <50 and Poisson distributed.
 
  • #8
Stephen Tashi
Science Advisor
I am counting the number of particles in 60 fields of view on a scope. I count three pieces of a filter for a sample and three pieces of a filter for a control. All of my counts in 60 fields of view are <50 and Poisson distributed.
Estimation and "proving a difference" are technically two different statistical tasks. Statistics doesn't actually "prove" a difference. There are statistical procedures that make a decision about whether a difference between two situations exists, but these procedures are not proofs; they are regarded as evidence, not mathematical proof.

With respect to the task of estimation, are you trying to estimate the parameters of a Poisson distribution that would account for the difference between the counts on the control filters and the counts on the non-control samples?

With respect to the task of giving evidence for a difference (a task called hypothesis testing), how many different situations are there? Are all the non-controls from the same general situation (e.g. from the livers of rats treated with drug X), or are they from different situations (e.g. some from the livers of rats treated with drug X and some from the livers of rats treated with drug Y)?
 
  • #9
chiro
Science Advisor
As others have hinted, you need to specify what you are trying to test in terms of parameters (this is what estimators do: they model the parameters with random variables, and you use these to make inferences), and also state your assumptions and the kind of data you have.

If you are doing a difference of means then you will basically be testing a hypothesis like H0: [itex]\lambda_1 = \lambda_2[/itex] (equivalently [itex]\lambda_1 - \lambda_2 = 0[/itex]) vs. H1: [itex]\lambda_1 > \lambda_2[/itex], [itex]\lambda_1 \neq \lambda_2[/itex], or something else.

To use a normal distribution for the mean you need a large sample size. If you are not confident of that, then you need to derive the distribution of your random variable [itex]\lambda_1 - \lambda_2[/itex], get an interval (using, say, the likelihood ratio test), and use that to test the hypothesis.

You can do this kind of thing in SAS or R if you have it (R is free and open source, and if you've done any statistical or mathematical programming it will be fairly straightforward); you can find the site by searching for "R project" on Google.
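
A minimal sketch of this kind of comparison, assuming Python with scipy rather than the SAS or R mentioned above, and using made-up counts and exposures (k1, t1, k2 and t2 are hypothetical values, not data from this experiment). It uses the standard exact conditional test for two Poisson rates (conditional on the total count, one count is binomial under H0: [itex]\lambda_1 = \lambda_2[/itex]), which is one option when the normal approximation is in doubt; it is not necessarily the likelihood-ratio construction described above.

[code]
# Exact conditional test comparing two Poisson rates (illustrative sketch only).
# Under H0: lambda1 = lambda2, and given the total n = k1 + k2, the sample count
# k1 ~ Binomial(n, t1 / (t1 + t2)), where t1 and t2 are the exposures
# (e.g. the number of fields of view counted). All numbers below are made up.
from scipy.stats import binomtest

k1, t1 = 42, 60   # sample: total particles counted, fields of view (hypothetical)
k2, t2 = 25, 60   # control: total particles counted, fields of view (hypothetical)

n = k1 + k2
p0 = t1 / (t1 + t2)   # expected share of the counts in the sample under H0

# One-sided alternative H1: lambda1 > lambda2
result = binomtest(k1, n, p0, alternative="greater")
print(f"rates per field of view: {k1 / t1:.3f} (sample) vs {k2 / t2:.3f} (control)")
print(f"p-value of the exact conditional test: {result.pvalue:.4f}")
[/code]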
 
  • #10
jim mcnamara
Mentor
What everyone is asking, from a wholly different perspective: you seem to have an XY problem here. You did X and you think Y will solve it. The problem is that you are looking at Y, assuming it will fix things. We all need to get back to X and start there. Please tell us precisely what you did, and what hypothesis you want to test. And importantly: why? There are lots of smart folks here; it is virtually a given that one of them can help.

http://mywiki.wooledge.org/XyProblem
 
  • #11
Svein
Science Advisor
Insights Author
I want to eventually test the hypothesis that one sample is greater than the controls, and whether two samples are different from each other. That part I am OK with, but I have to show all of my calculations for how I can mathematically prove the values are different.

I have small counts in 60 fields of view on a scope and I was propagating error following Gaussian error propagation, which I now know is wrong. But what do I do with these asymmetrical error bars when I want to know sample (+/- error) minus control (+/- error)?
The standard way of testing for significant difference is:
  1. Calculate the mean and standard deviation of both of your samples. Call them [itex]m_1, m_2, s_1[/itex] and [itex]s_2[/itex]. Assume that [itex]m_1[/itex] is the mean of the sample you are interested in.
  2. Then calculate the mean and standard deviation of the total data set (both samples merged). Call them [itex]M[/itex] and [itex]S[/itex].
  3. State the null hypothesis: there is no significant difference.
  4. Then calculate [itex]\frac{M-m_1}{S}[/itex]. This tells you how many standard deviations your sample is from the merged mean.
  5. From that number, you can calculate the probability of the null hypothesis being true.
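
Purely as a numerical illustration of steps 1-4 as written (in Python, with made-up counts standing in for the fields of view; see the caveat about step 5 in the next post):

[code]
# Literal sketch of steps 1-4 above, using hypothetical counts.
import numpy as np

sample  = np.array([3, 5, 2, 4, 6, 3, 1, 4])   # made-up counts per field of view
control = np.array([1, 2, 0, 3, 1, 2, 1, 0])   # made-up counts per field of view

m1, s1 = sample.mean(), sample.std(ddof=1)     # step 1: sample mean and SD
m2, s2 = control.mean(), control.std(ddof=1)
merged = np.concatenate([sample, control])     # step 2: merge both samples
M, S = merged.mean(), merged.std(ddof=1)

# Step 3: null hypothesis -- no significant difference between sample and control.
z = (M - m1) / S                               # step 4: distance from merged mean in SDs
print(f"m1 = {m1:.2f}, M = {M:.2f}, S = {S:.2f}, (M - m1)/S = {z:.2f}")
[/code]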
 
  • #12
Stephen Tashi
Science Advisor
From that number, you can calculate the probability of the null hypothesis being true.
You can't calculate the probability that the null hypothesis is true.

You can only assume the null hypothesis is true and calculate the probability that a number computed from the data is in some subset of the real numbers.
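
In symbols, and with generic notation not used elsewhere in the thread: what a test reports is a quantity of the form [itex]P\left(T(X) \in R \mid H_0\right)[/itex], the probability, computed assuming the null hypothesis, that the test statistic [itex]T(X)[/itex] falls in some region [itex]R[/itex] of the real numbers; it is not [itex]P(H_0 \mid \text{data})[/itex].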
 
  • #13
Svein
Science Advisor
Insights Author
You can't calculate the probability that the null hypothesis is true.
Sorry, sloppy formulation. I was going to be more specific, but I suddenly remembered that the data are assumed to follow a Poisson distribution, and I did not quite remember how to deal with that.
 
  • #14
DrDu
Science Advisor
You could have a look at the ISO 11929 standard, "Determination of the characteristic limits (decision threshold, detection limit and limits of the confidence interval) for measurements of ionizing radiation -- Fundamentals and application". It treats more or less exactly the situation you are describing.
 
