Statistics: Probability of False Negative during Measurement

Summary
The discussion focuses on calculating the total probability of a false negative when measuring a normally distributed variable "x" with a measurement device that also has a normal distribution. The main challenge is determining the probability that the measurement output is less than a certain value "a" while the true value of "x" is greater than or equal to "a." Participants suggest using integration over the probability density functions (PDFs) of both distributions to find the desired probability. The conversation highlights the complexity of the problem and the need for numerical methods or specific statistical literature for a solution. Overall, the thread emphasizes the intricacies involved in calculating probabilities in the context of measurement errors.
n00bcake22
Hello Everyone,

My statistics is terribly rusty, so I am turning to all of you for assistance! I am in the process of reviewing my old text, but I figured this may be quite a bit quicker.

Homework Statement


Suppose "x" is normally distributed with "mu_1" and "sigma_1." Now suppose x is measured with a device whose output is also normally distributed where "mu_2" equals the true value of x and has a standard deviation of "sigma_2."

I am trying to figure out how to find the total probability that the measurement device will say x < "a" (some value < mu_1) when in fact x >= a (i.e. a false negative).

If that makes sense...

Homework Equations


The Attempt at a Solution



I know how to determine P(x < a) for the x-distribution alone, and I could determine the probability of a false negative if I were given a particular, known x-value, but I have no idea how to find the TOTAL probability of false negatives when x >= a but not exactly known. This has been driving me crazy all morning.

Thanks in advance Everyone!
 
for a given x value, the probability of a false negative will be the "tail" of the measurement distribution that spreads below a.

So call a measurement y
y = x+e
where
x is the actual quantity to be measured (normally distributed RV, N(mu_1, sigma_1))
e is the measurement error (normally distributed RV, N(0, sigma_2))

so as discussed you should be able to find
P(y<a|x)

now integrate this over all possible x, weighting by its probability distribution
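
A minimal numerical sketch of that last step, assuming SciPy and purely illustrative values for mu_1, sigma_1, sigma_2, and a (none of these numbers come from the thread):

```python
import numpy as np
from scipy import integrate, stats

# Illustrative values only -- not from the problem statement.
mu1, sigma1 = 10.0, 2.0   # true value x ~ N(mu1, sigma1)
sigma2 = 0.5              # measurement error e ~ N(0, sigma2)
a = 8.0                   # threshold, some value below mu1

def p_false_neg_given_x(x):
    # P(y < a | x), where y = x + e and e ~ N(0, sigma2)
    return stats.norm.cdf(a, loc=x, scale=sigma2)

def integrand(x):
    # conditional false-negative probability weighted by the PDF of x
    return p_false_neg_given_x(x) * stats.norm.pdf(x, loc=mu1, scale=sigma1)

# integrating over x >= a only gives the joint probability P(y < a and x >= a)
joint, _ = integrate.quad(integrand, a, np.inf)
print("P(y < a and x >= a) =", joint)
```

Restricting the integration range to x >= a is what turns this into the probability of a false negative rather than simply P(y < a).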
 
I think the probability of the event { 'a' is greater than or equal to mu1 and the reading is less than or equal to mu1 } is:

\int_{\mu_1}^\infty \frac{1}{\sqrt{2\pi}\,\sigma_1} e^{\frac{-(a-\mu_1)^2}{2\sigma_1^2}} \left[ \int_{-\infty}^{\mu_1 - a} \frac{1}{\sqrt{2\pi}\,\sigma_2} e^{\frac{-(x-\mu_2)^2}{2\sigma_2^2}} dx \right] da

I don't recognize this expression as anything you could look up in standard statistical tables. It can be computed numerically. I'd bet there are papers written about this type of problem. We just have to find the right keywords for a search.
 
@Stephen: I think you read my question wrong, as your description doesn't match my statement. Lanedance has the correct idea.

x = population = N(mu1, sig1)
y = measurement of x = x_true + e
e = error = N(0, sig2)
a = some lower bound, a < mu1

I would like to know the total probability of false negatives provided that the true value of x >= a (i.e. what is the total probability that for any x >= a, y < a). I think it would look something like this in statistical syntax (wild guess)...

P((y < a)|(x >= a))

So I can calculate P(y < a) at x = a, x = a + dx, x = a + 2*dx, ..., and sum them all up, but this doesn't seem right. How do I incorporate the PDF of x itself?
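
For what it's worth, a hedged sketch of that discrete sum with the missing piece added: each term P(y < a | x) is weighted by the PDF of x times dx, and the total is divided by P(x >= a) to turn the joint probability into a conditional one. The parameter values are placeholders, not from the thread.

```python
import numpy as np
from scipy import stats

# Placeholder parameter values for illustration only.
mu1, sigma1, sigma2, a = 10.0, 2.0, 0.5, 8.0

dx = 0.001
# grid over x >= a; truncate the upper tail far out in x's distribution
xs = np.arange(a, mu1 + 10.0 * sigma1, dx)

cond = stats.norm.cdf(a, loc=xs, scale=sigma2)            # P(y < a | x) on the grid
weights = stats.norm.pdf(xs, loc=mu1, scale=sigma1) * dx  # f_X(x) dx

joint = np.sum(cond * weights)                    # approx. P(y < a and x >= a)
p_x_ge_a = 1.0 - stats.norm.cdf(a, mu1, sigma1)   # P(x >= a)
print("P(y < a | x >= a) =", joint / p_x_ge_a)
```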
 
I'm not going to call the measurement error 'e' because of the confusion with the number 'e'. I'll call the measurement error 'w'.

Let \sigma_3 = \sqrt{ \sigma_1^2 + \sigma_2^2}

Let \mu_3 =\mu_1

Let C = \frac{1}{\sqrt{2\pi}\,\sigma_3} \int_a^\infty e^{\frac{-(y-\mu_3)^2}{2\sigma_3^2}} dy

p(x \leq a \mid y \geq a) = p(x \leq a \mid x + w \geq a) = p(x \leq a \text{ and } x + w \geq a) / p(x + w \geq a) =

\frac{1}{C} \int_{-\infty}^a \frac{1}{\sqrt{2\pi}\,\sigma_1} e^{\frac{-(x-\mu_1)^2}{2\sigma_1^2}} \left[ \int_{a-x}^{\infty} \frac{1}{\sqrt{2\pi}\,\sigma_2} e^{\frac{-w^2}{2\sigma_2^2}} dw \right] dx
 
I see that I answered the wrong question in my last post.

Let C = \frac{1}{\sqrt{2\pi}\,\sigma_1} \int_a^\infty e^{\frac{-(x-\mu_1)^2}{2\sigma_1^2}} dx

What you asked was:
p(y < a \mid x \geq a) = p(x + w < a \mid x \geq a) = p(x + w < a \text{ and } x \geq a) / p(x \geq a) =

\frac{1}{C} \int_a^\infty \frac{1}{\sqrt{2\pi}\,\sigma_1} e^{\frac{-(x-\mu_1)^2}{2\sigma_1^2}} \left[ \int_{-\infty}^{a-x} \frac{1}{\sqrt{2\pi}\,\sigma_2} e^{\frac{-w^2}{2\sigma_2^2}} dw \right] dx
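
A sketch of how this final expression might be evaluated numerically, together with a Monte Carlo cross-check; the parameter values are assumptions chosen only for illustration:

```python
import numpy as np
from scipy import integrate, stats

# Illustrative parameters (assumptions for demonstration, not from the thread).
mu1, sigma1, sigma2, a = 10.0, 2.0, 0.5, 8.0

# C = P(x >= a)
C = 1.0 - stats.norm.cdf(a, loc=mu1, scale=sigma1)

def integrand(x):
    # f_X(x) * P(w < a - x), with measurement error w ~ N(0, sigma2)
    return stats.norm.pdf(x, mu1, sigma1) * stats.norm.cdf(a - x, loc=0.0, scale=sigma2)

numer, _ = integrate.quad(integrand, a, np.inf)
print("quadrature estimate  =", numer / C)

# Monte Carlo cross-check of the same conditional probability.
rng = np.random.default_rng(0)
x = rng.normal(mu1, sigma1, 2_000_000)
y = x + rng.normal(0.0, sigma2, x.size)
print("Monte Carlo estimate =", np.mean(y[x >= a] < a))
```

The two estimates should agree to within Monte Carlo noise.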
 
