Ray Vickson said:
The question makes no sense. You say that ##\mu## is known, and then you say the hypotheses involve ##\mu##!
It makes sense to test hypotheses about ##\sigma## when ##\mu## is known, or to test hypotheses about ##\mu## when ##\sigma## is known (or even to test hypotheses about ##\mu## or ##\sigma## when neither of these is known).
That's what the problem says, but given your point I think it might have been a typo. It should read: test hypotheses about ##\mu## while ##\sigma## is known.
I'm not sure how I can edit the main post. I don't see an edit button.
StoneTemplePython said:
I'm still not sure what ##\alpha## is, but do you actually need the CDF here in closed-esque form?
Typically the way these problems are set up is that you have a function containing something you want to minimize: the usual recipe is to differentiate once and set the result equal to zero. Once you differentiate, you can of course use the simple pdf for the Gaussian, and so you only ever work abstractly with the CDF of the Gaussian, denoted by "CDF" or ##\Phi## or something like that.
##\alpha## is the critical value, the value we cross when we enter the rejection region.
Alright, let's consider the following: using the error function above, for a normal distribution with mean ##0## and standard deviation ##\sigma## I have ##\operatorname{erf}\!\left(\frac{\alpha}{\sigma\sqrt{2}}\right) = \frac{2}{\sqrt{\pi}}\int_0^{\alpha/(\sigma\sqrt{2})} e^{-t^2}\,dt##.
This gives the probability of falling in ##(-\alpha,\alpha)##, but I am interested in the rejection region, which is ##(-\infty, -\alpha)\cup(\alpha, +\infty)##. Therefore, I think I should consider the complementary error function ##\operatorname{erfc}\!\left(\frac{\alpha}{\sigma\sqrt{2}}\right) = 1-\frac{2}{\sqrt{\pi}}\int_0^{\alpha/(\sigma\sqrt{2})} e^{-t^2}\,dt = \frac{2}{\sqrt{\pi}}\int_{\alpha/(\sigma\sqrt{2})}^{\infty} e^{-t^2}\,dt##.
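As a quick numerical sanity check on that expression (just a minimal sketch using SciPy; the values of ##\sigma## and ##\alpha## below are made up), the erfc form should agree with the two-sided tail probability of a normal distribution with the same ##\sigma##:
[CODE=python]
import numpy as np
from scipy import stats, special

sigma = 2.0   # made-up known standard deviation
alpha = 1.3   # made-up critical value

# P(|X| > alpha) for X ~ N(0, sigma^2), via the complementary error function
tail_erfc = special.erfc(alpha / (sigma * np.sqrt(2)))

# The same probability straight from the normal distribution (sf = 1 - cdf)
tail_norm = 2 * stats.norm.sf(alpha, loc=0, scale=sigma)

print(tail_erfc, tail_norm)   # the two numbers should agree
[/CODE]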
Now I could differentiate and get ##\frac{d}{d\alpha}\operatorname{erfc}\!\left(\frac{\alpha}{\sigma\sqrt{2}}\right) = -\frac{1}{\sigma}\sqrt{\frac{2}{\pi}}\, e^{-\alpha^2/(2\sigma^2)}##. I should set this to ##0## and find ##\alpha##, to "solve" the problem.
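A similar finite-difference check of that derivative (again only a sketch with made-up values for ##\sigma## and ##\alpha##):
[CODE=python]
import numpy as np
from scipy import special

sigma, alpha, h = 2.0, 1.3, 1e-6   # made-up values and a small step size

# Claimed closed form for d/d(alpha) of erfc(alpha / (sigma*sqrt(2)))
closed_form = -(1.0 / sigma) * np.sqrt(2.0 / np.pi) * np.exp(-alpha**2 / (2 * sigma**2))

# Central finite difference of the same function
f = lambda a: special.erfc(a / (sigma * np.sqrt(2)))
numerical = (f(alpha + h) - f(alpha - h)) / (2 * h)

print(closed_form, numerical)   # should agree to many decimal places
[/CODE]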
There are three issues here:
(1) ##e^{-\alpha^2/(2\sigma^2)}## will never be zero for any ##\alpha##.
(2) I haven't actually brought the hypothesis testing into this (see the sketch after this list).
(3) It is not clear what the ##\sigma## in the error function is. The Wikipedia entry linked above says that errors generally have mean zero, but it is possible for them to have any variance. Is the ##\sigma## in the normal distribution the very same ##\sigma## as in the error function?
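Regarding (2), here is a minimal sketch of how the pieces usually fit together in a two-sided test: one fixes a significance level first (the 0.05 below is just a made-up example, not from the problem) and then solves ##\operatorname{erfc}\!\left(\frac{\alpha}{\sigma\sqrt{2}}\right) = \text{level}## for the critical value ##\alpha##, rather than setting a derivative to zero:
[CODE=python]
import numpy as np
from scipy import stats, special

sigma = 2.0    # made-up known standard deviation
level = 0.05   # made-up significance level (not given in the original problem)

# Two-sided critical value alpha with P(|X| > alpha) = level for X ~ N(0, sigma^2):
# once via the inverse normal CDF, once by inverting erfc(alpha / (sigma*sqrt(2))) = level
alpha_from_ppf  = sigma * stats.norm.ppf(1 - level / 2)
alpha_from_erfc = sigma * np.sqrt(2) * special.erfcinv(level)

print(alpha_from_ppf, alpha_from_erfc)   # both should give the same alpha
[/CODE]
If that relation is right, it also touches on (3): the ##\sigma## inside the error function's argument is the same ##\sigma## as in the normal distribution, just rescaled by ##\sqrt{2}##.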