RufusDawes
Hello,
I have a survey where there will be a known population and a number of people sampled from within the survey population.
What I'd like to know is what sample size I should use to achieve a given margin of error, which will be fixed at 5% with 95% confidence.
I would like to take the size of the population into account. I understand that statistical theory sometimes says the population size is not what matters; rather, the variability is what determines the sample size.
I'm not sure that applies here, since the true proportion is not known and will be estimated from the survey results.
What I'm trying to avoid is taking a sample that is too small, and which could therefore be more variable than predicted by formulas that don't consider the population size.
I was thinking a hypergeometric distribution might be useful, but then I read that it only matters when the sample is a large fraction of the population? Is this correct?
I have seen tables that tell you what sample size to use for a given population, margin of error, and confidence level, but I want to know the underlying equation.
I tried ripping the JavaScript out of a few calculators, but I'm still not sure exactly what theory is behind it.
Thanks.
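Edit: from what I could piece together from those calculators, they seem to use the standard normal-approximation sample size for a proportion plus a finite population correction. A sketch of my reconstruction in Python (function name and defaults are my own, so treat it as a guess at what the calculators do, not their actual code):

```python
import math

def sample_size(N, margin=0.05, z=1.96, p=0.5):
    """Sample size for estimating a proportion, with a
    finite population correction (FPC).

    N      -- population size
    margin -- desired margin of error (CI half-width), e.g. 0.05
    z      -- z-score for the confidence level (1.96 for 95%)
    p      -- assumed true proportion; 0.5 is the worst case
              (it maximizes p*(1-p), hence the required n)
    """
    # Infinite-population sample size: n0 = z^2 * p*(1-p) / e^2
    n0 = z ** 2 * p * (1 - p) / margin ** 2
    # Finite population correction: n = n0 / (1 + (n0 - 1) / N)
    n = n0 / (1 + (n0 - 1) / N)
    return math.ceil(n)

print(sample_size(1000))  # population of 1000
print(sample_size(500))   # population of 500
```

For 5% margin and 95% confidence, n0 works out to about 384, and the correction then shrinks it for small populations (e.g. to 278 for N = 1000), which seems to match the published tables I've seen.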