Combining Poisson error

  1. Jan 15, 2015 #1
    I have low counting stats and need to subtract background, account for efficiency, and divide by volume. How do I combine the asymmetrical (Poisson) errors?
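
    (In symbols, the quantity described here is presumably something like [tex]C = \frac{N_s - N_b}{\varepsilon V}[/tex] where [itex]N_s[/itex] is the gross count, [itex]N_b[/itex] the background count, [itex]\varepsilon[/itex] the counting efficiency, and [itex]V[/itex] the volume; these symbols are assumed for illustration and are not given in the post.)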
     
  3. Jan 16, 2015 #2

    Orodruin

    Staff Emeritus
    Science Advisor
    Homework Helper
    Gold Member

    Exactly what kind of analysis are you looking at performing? Are you planning to test a hypothesis? Are you going to make a confidence interval for some parameter?
     
  4. Jan 16, 2015 #3
    I want to eventually test the hypothesis that one sample is greater than the controls, and also whether two samples are different from each other. That part I am OK with, but I have to show all of my calculations for how I can mathematically prove the values are different.

    I have small counts in 60 fields of view on a scope, and I was propagating error following Gaussian error propagation, which I now know is wrong. But what do I do with these asymmetrical error bars when I want to know sample (+/- error) minus control (+/- error)?
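
    For reference, asymmetrical error bars of this kind can be computed as exact (Garwood) Poisson confidence intervals. Below is a minimal R sketch, assuming one total count per sample; the count value 12 is an illustrative placeholder, not a number from this thread:

    [code]
    # Exact (Garwood) confidence interval for a Poisson mean, given a count n.
    # Uses the gamma-distribution form of the chi-squared bounds:
    #   lower = qgamma(alpha/2, n), upper = qgamma(1 - alpha/2, n + 1)
    poisson_ci <- function(n, conf = 0.95) {
      alpha <- 1 - conf
      lower <- if (n == 0) 0 else qgamma(alpha / 2, n)
      upper <- qgamma(1 - alpha / 2, n + 1)
      c(lower = lower, upper = upper)
    }

    n <- 12            # illustrative count, not from the thread
    poisson_ci(n) - n  # the interval is visibly asymmetric about n
    [/code]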
     
  5. Jan 16, 2015 #4

    Stephen Tashi

    Science Advisor

    I suggest you have another try at stating your question, unless you are writing to someone on the forum who already knows what kind of experiment you are doing.
     
  6. Jan 16, 2015 #5
    How do I subtract a Poisson background from a Poisson sample and propagate the error associated with each?
     
  7. Jan 16, 2015 #6

    Stephen Tashi

    Science Advisor

    That isn't a description of an experiment. As far as I know, it isn't a description of a specific problem in statistics.
     
  8. Jan 16, 2015 #7
    I am counting the number of particles in 60 fields of view on a scope. I count three pieces of a filter for a sample and three pieces of a filter for a control. All of my counts in 60 fields of view are <50 and Poisson distributed.
     
  9. Jan 16, 2015 #8

    Stephen Tashi

    Science Advisor

    Estimation and "proving a difference" are technically two different statistical tasks. Statistics doesn't actually "prove" a difference. There are statistical procedures that make a decision about whether a difference between two situations exists, but these procedures are not proofs; their results are regarded as evidence, not mathematical proof.

    With respect to the task of estimation, are you trying to estimate the parameters of a Poisson distribution that would account for the difference between the counts on the control filters and the counts on the non-control samples?

    With respect to the task of giving evidence for a difference (a task called Hypothesis Testing), how many different situations are there? Are all the non-controls from the same general situation (e.g. from the livers of rats treated with drug X) or are they from different situations (e.g. some from the livers of rats treated with drug X and some from the livers of rats treated with drug Y).
     
  10. Jan 21, 2015 #9

    chiro

    Science Advisor

    As others have hinted, you need to specify what you are trying to test in terms of parameters (this is what estimators do: they model the parameters with random variables, and you use these to make inferences), and also state your assumptions and the kind of data you have.

    If you are testing a difference of means, then you will basically be testing a hypothesis of the form [itex]H_0: \lambda_1 = \lambda_2[/itex] (equivalently, [itex]\lambda_1 - \lambda_2 = 0[/itex]) against [itex]H_1: \lambda_1 > \lambda_2[/itex], or [itex]\lambda_1 \neq \lambda_2[/itex], or something else.
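
    As a concrete illustration of such a test (a sketch only; the counts and exposures below are made up, not taken from the thread), base R's poisson.test can compare two Poisson counts directly:

    [code]
    # Exact comparison of two Poisson rates with base R's poisson.test.
    # x = observed total counts, T = exposures (e.g. fields of view counted).
    x <- c(30, 18)   # illustrative totals: sample, control
    T <- c(60, 60)   # 60 fields of view each, as described in the thread
    poisson.test(x, T, alternative = "greater")  # H1: rate1 > rate2
    [/code]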

    To use a normal approximation for the mean you need a large sample size. If you are not confident of that, then you need to work with the joint distribution of your estimator of [itex]\lambda_1 - \lambda_2[/itex] and obtain an interval (using, say, the likelihood ratio test) to test the hypothesis.

    You can do this kind of thing in SAS or R if you have it. R is free and open source, and if you've done any statistical or mathematical programming it will be fairly straightforward; you can find the site by searching for "R project" in Google.
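
    A minimal sketch of the likelihood ratio test mentioned above, in R (the per-field counts are illustrative placeholders, and the chi-squared approximation to the LRT statistic is itself a large-sample result, so treat this as a sketch rather than a recipe):

    [code]
    # Likelihood ratio test of H0: lambda1 = lambda2 for two Poisson samples.
    y1 <- c(3, 5, 2, 4, 6)   # per-field counts, sample (illustrative)
    y2 <- c(1, 2, 3, 1, 2)   # per-field counts, control (illustrative)

    loglik <- function(y, lambda) sum(dpois(y, lambda, log = TRUE))

    # MLEs: separate means under H1, pooled mean under H0
    l1 <- mean(y1); l2 <- mean(y2)
    l0 <- mean(c(y1, y2))

    # LRT statistic: 2 * (max log-likelihood under H1 minus under H0);
    # asymptotically chi-squared with 1 degree of freedom under H0
    lrt <- 2 * (loglik(y1, l1) + loglik(y2, l2) - loglik(c(y1, y2), l0))
    pchisq(lrt, df = 1, lower.tail = FALSE)  # approximate p-value
    [/code]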
     
  11. Jan 23, 2015 #10

    jim mcnamara


    Staff: Mentor

    What everyone is asking, from a wholly different perspective: you seem to have an XY problem here. You did X and you think Y will solve it. The problem is that you are looking at Y assuming it will fix things. We think that all of us need to get back to X and start there. Please tell us precisely what you did, and what hypothesis you want to test. And importantly: why? There are lots of smart folks here; it is virtually a given that one of them can help.

    http://mywiki.wooledge.org/XyProblem
     
  12. Jan 26, 2015 #11

    Svein

    Science Advisor

    The standard way of testing for significant difference (sketched in R below) is:
    1. Calculate the mean and standard deviation of both of your samples. Call them m1, m2, s1, and s2. Assume that m1 is the mean of the sample you are interested in.
    2. Then calculate the mean and standard deviation of the total data set (both samples merged). Call them M and S.
    3. State the null hypothesis: there is no significant difference.
    4. Then calculate [itex]\frac{(M-m_1)}{S}[/itex]. This tells you how many standard deviations your sample mean is from the merged mean.
    5. From that number, you can calculate the probability of the null hypothesis being true.
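
    A sketch of steps 1-4 in R, with placeholder data (note the next post: the number from step 4 is converted to a p-value under the null hypothesis, not to "the probability that the null hypothesis is true"):

    [code]
    # Steps 1-4 above, sketched with illustrative data.
    y1 <- c(3, 5, 2, 4, 6)   # sample (placeholder values)
    y2 <- c(1, 2, 3, 1, 2)   # control (placeholder values)

    m1 <- mean(y1); s1 <- sd(y1)   # step 1
    m2 <- mean(y2); s2 <- sd(y2)
    pooled <- c(y1, y2)
    M <- mean(pooled); S <- sd(pooled)   # step 2
    z <- (M - m1) / S   # step 4: distance in standard deviations
    2 * pnorm(-abs(z))  # step 5, reinterpreted: an approximate two-sided p-value
    [/code]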
     
  13. Jan 26, 2015 #12

    Stephen Tashi

    Science Advisor

    You can't calculate the probability that the null hypothesis is true.

    You can only assume the null hypothesis is true and calculate the probability that a number computed from the data is in some subset of the real numbers.
     
  14. Jan 26, 2015 #13

    Svein

    Science Advisor

    Sorry, sloppy formulation. I was going to be more specific, but I suddenly remembered that the data are assumed to follow a Poisson distribution, and I did not quite remember how to deal with that.
     
  15. Jan 31, 2015 #14

    DrDu

    Science Advisor

    You could have a look at the ISO 11929 standard, "Determination of the characteristic limits (decision threshold, detection limit and limits of the confidence interval) for measurements of ionizing radiation -- Fundamentals and application". It treats more or less exactly the situation you are describing.
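
    For orientation, a simplified version of the standard's basic counting example, with assumed notation (gross counts [itex]n_g[/itex] over time [itex]t_g[/itex], background counts [itex]n_0[/itex] over time [itex]t_0[/itex]): the net count rate and its standard uncertainty are
    [tex]r = \frac{n_g}{t_g} - \frac{n_0}{t_0}, \qquad u(r) = \sqrt{\frac{n_g}{t_g^2} + \frac{n_0}{t_0^2}},[/tex]
    and the decision threshold is a multiple [itex]k_{1-\alpha}[/itex] of the uncertainty evaluated under the assumption of zero net signal:
    [tex]r^* = k_{1-\alpha}\sqrt{\frac{n_0}{t_0}\left(\frac{1}{t_g} + \frac{1}{t_0}\right)}.[/tex]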
     