Probability of type 1/type 2 errors and distribution of p-values

In summary, the conversation discusses a statistics exam that included questions about finding the probabilities of type 1 and type 2 errors. The scenario involved a lie detector test given to 1000 people, 500 of whom lied and 500 of whom told the truth. The poster calculated probabilities of 37% for a type 1 error and 24% for a type 2 error. Another question asked for the distribution of p-values, which the poster answered as N(0,1). The conversation also touches on the terminology for type 1 and type 2 errors, and on what "Negative" and "Positive" mean for a lie detector test. Overall, it highlights the importance of understanding assumptions and terminology in statistics.
  • #1
humantripod
Hi,

I had a statistics exam today and there were three questions about which I felt a little uneasy.

The questions I was unsure about involved finding the probability of type 1 and type 2 errors. The scenario was that a lie detector test was given to 1000 people. Of those 1000 people, 500 lied and 500 told the truth. The lie detector incorrectly reported that 185 of the truth-tellers were actually lying and that 120 of the liars were telling the truth.

I calculated the P(Type 1 error) by simply doing 185/500 = 0.37
I calculated the P(Type 2 error) by simply doing 120/500 = 0.24
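The arithmetic above can be sketched as a 2x2 confusion table. The convention assumed here (not stated in the exam question) is that H0 is "the person is telling the truth":

```python
# Numbers from the post; H0 = "person is telling the truth" (an assumption).
truthful, liars = 500, 500
false_positives = 185   # truth-tellers flagged as lying -> reject H0 when it is true
false_negatives = 120   # liars reported as truthful     -> fail to reject H0 when it is false

p_type1 = false_positives / truthful   # P(reject H0 | H0 true)
p_type2 = false_negatives / liars      # P(accept H0 | H0 false)

print(p_type1, p_type2)  # 0.37 0.24
```

Note that the two probabilities condition on different groups (truth-tellers vs. liars), which is why the denominators differ.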

The other question asked me to give the distribution of p-values. I don't recall the question giving any details about whether it was under the assumption that H0 is true or Ha is true, just simply asking for the distribution. I said N(0,1).

What do you think guys? Any mistakes/fundamental flaws in my working out?
 
  • #2
If P(Type I) = 37% and P(Type II) = 24%, what is the probability that there is no error? What should these sum up to?

Think about what your choice of denominator tells about what assumptions you make.
You may find this helpful: https://nflinjuryanalyticscom.files.wordpress.com/2020/04/diagnostic_testing_characteristics.pdf

They don't specifically use the terminology Type I and II, but rather False Positive and False Negative. As far as p-values go, you should look it up, but a p-value is the probability, assuming the null hypothesis is true, of seeing a result at least as extreme as the one observed. It is not the probability that the null hypothesis is correct.

Again, you should look this up, but I think a "Negative" would mean the detector did not detect anything (so it thinks the person told the truth); a "Positive" would mean the detector detected a lie.

Somebody else may chime in with more insight.
 

What is the difference between type 1 and type 2 errors in probability?

A type 1 error, also known as a false positive, occurs when a null hypothesis is rejected when it is actually true. A type 2 error, or false negative, occurs when a null hypothesis is accepted when it is actually false. Both lead to incorrect conclusions; which error is more serious depends on the consequences in context.

Why is it important to understand the distribution of p-values in probability?

P-values are used to determine the significance of results in a study or experiment. Understanding their distribution helps in interpreting the strength of evidence for or against a hypothesis. A key fact: under a true null hypothesis, with a continuous test statistic, p-values are uniformly distributed on [0, 1]; departures from uniformity indicate real effects or problems with the analysis.
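A quick simulation illustrates the uniformity of p-values under a true null. The setup here (a one-sided z-test on a single N(0,1) draw) is a made-up minimal example, not the exam's scenario:

```python
import random
from statistics import NormalDist

# Simulate p-values under a true null hypothesis.
# The test statistic is N(0,1), but the p-value p = 1 - Phi(z)
# is Uniform(0,1) when H0 holds -- the distinction the thread hinges on.
random.seed(0)
phi = NormalDist().cdf
pvals = [1 - phi(random.gauss(0, 1)) for _ in range(10_000)]

# Roughly 10% of uniform p-values should land below 0.1.
frac_below_0_1 = sum(p < 0.1 for p in pvals) / len(pvals)
print(round(frac_below_0_1, 2))  # close to 0.10
```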

How do type 1 and type 2 errors impact statistical power?

For a fixed sample size, type 1 and type 2 errors trade off against each other: as the probability of a type 1 error decreases, the probability of a type 2 error increases, and vice versa. Since statistical power equals 1 minus the type 2 error probability, making a study very conservative about type 1 errors reduces its power to detect a real effect, unless the sample size is increased.

Can p-values be used to determine the validity of a study?

P-values alone cannot determine the validity of a study. They are just one piece of evidence that can be used to evaluate the strength of results. Other factors such as sample size, study design, and potential biases also need to be considered in determining the validity of a study.

How does the choice of significance level impact the probability of type 1 and type 2 errors?

The significance level, also known as alpha, is the threshold used to determine if a result is statistically significant. A lower significance level decreases the probability of type 1 error but increases the probability of type 2 error. On the other hand, a higher significance level increases the probability of type 1 error but decreases the probability of type 2 error.
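The calibration described above can be checked empirically: when the null is true, the fraction of p-values falling below any threshold alpha is approximately alpha itself. Again a minimal one-sided z-test sketch, with assumed setup:

```python
import random
from statistics import NormalDist

# Under H0 (mean 0), the false-positive rate of a well-calibrated test
# matches the chosen significance level alpha.
random.seed(1)
phi = NormalDist().cdf
trials = 20_000
pvals = [1 - phi(random.gauss(0, 1)) for _ in range(trials)]

for alpha in (0.10, 0.05, 0.01):
    type1_rate = sum(p < alpha for p in pvals) / trials
    print(alpha, round(type1_rate, 3))  # empirical rate hovers near alpha
```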
