Statistics: describing a best critical region of size alpha.

  • Thread starter: Artusartos
  • Tags: Alpha, Statistics
SUMMARY

This discussion focuses on finding a best critical region of size α for testing the null hypothesis H0: θ = 0 against the alternative H1: θ = 1, using a sample of size 10 from the probability density function f(x, θ) = 2x^θ(1-x)^{1-θ}. The likelihood ratio L(0)/L(1) is first written as ((1-x)/x)^{10}, leading to a condition of the form (1-x)/x ≤ k^{1/10} for some k < 1; the thread then clarifies that the observations must keep their subscripts, so the ratio is really ∏(1-x_i)/∏x_i. The discussion also touches on a Bayesian perspective, in which prior probabilities for θ are assigned and posterior probabilities are derived from the observed data.

PREREQUISITES
  • Understanding of hypothesis testing and critical regions
  • Familiarity with likelihood ratios in statistical inference
  • Knowledge of Bayesian statistics and prior distributions
  • Basic calculus, particularly concerning concavity and convexity of functions
NEXT STEPS
  • Study the derivation of likelihood ratios in hypothesis testing
  • Learn about Bayesian inference and how to compute posterior probabilities
  • Explore the concepts of critical regions and their applications in statistical tests
  • Review properties of concave and convex functions in calculus
USEFUL FOR

Statisticians, data scientists, and researchers involved in hypothesis testing and Bayesian analysis will benefit from this discussion, particularly those working with likelihood ratios and critical regions in statistical inference.

Artusartos

Homework Statement



Suppose a sample of size 10 is drawn from a distribution with probability density function ##f(x, \theta) = 2x^{\theta}(1-x)^{1-\theta}## if ##0<x<1## and ##0## otherwise, where ##\theta \in \{0,1\}##. Describe a best critical region of size ##\alpha## for testing ##H_0 : \theta = 0## against the alternative hypothesis ##H_1 : \theta =1##.

Homework Equations


The Attempt at a Solution



We need ##\frac{L(0)}{L(1)} \leq k## for some ##k < 1##

We find that ##\frac{L(0)}{L(1)} = \frac{2(1-x_1)2(1-x_2)...2(1-x_{10})}{2x_12x_2...2x_{10}} = \frac{(1-x_1)(1-x_2)...(1-x_{10})}{x_1x_2...x_{10}}##. Now I want to simplify ##\frac{(1-x_1)(1-x_2)...(1-x_{10})}{x_1x_2...x_{10}} \leq k## to see if I can turn this into a distribution that looks more familiar. But for some reason I wasn't able to do anything, so I was wondering if somebody would give me a hint so I can continue...
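
For concreteness, here is a quick numerical sketch of that ratio in Python (the sample values are made up, just to see how it behaves; they are not part of the problem):

```python
import numpy as np

# Illustrative sample of size 10, values in (0, 1) chosen arbitrarily
x = np.array([0.12, 0.35, 0.48, 0.61, 0.27, 0.83, 0.55, 0.40, 0.70, 0.22])

# Likelihood under H0 (theta = 0): each observation contributes 2*(1 - x_i)
L0 = np.prod(2 * (1 - x))
# Likelihood under H1 (theta = 1): each observation contributes 2*x_i
L1 = np.prod(2 * x)

# The factors of 2 cancel, leaving prod(1 - x_i) / prod(x_i)
print(L0 / L1)  # reject H0 when this ratio is <= k
```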

Thanks in advance
 
Artusartos said:

We find that ##\frac{L(0)}{L(1)} = \frac{2(1-x_1)2(1-x_2)...2(1-x_{10})}{2x_12x_2...2x_{10}} = \frac{(1-x_1)(1-x_2)...(1-x_{10})}{x_1x_2...x_{10}}##. Now I want to simplify ##\frac{(1-x_1)(1-x_2)...(1-x_{10})}{x_1x_2...x_{10}} \leq k## to see if I can turn this into a distribution that looks more familiar. But for some reason I wasn't able to do anything, so I was wondering if somebody would give me a hint so I can continue...

You should not have ##x_1, x_2, \ldots, x_{10}##. All samples are drawn from the same distribution, with some fixed but unknown ##x##.
 
Ray Vickson said:
You should not have ##x_1, x_2, \ldots, x_{10}##. All samples are drawn from the same distribution, with some fixed but unknown ##x##.

Thanks a lot.

We have,

##\frac{L(0)}{L(1)} = \frac{(1-x)^{10}}{x^{10}} = (\frac{1-x}{x})^{10}##

So,

##(\frac{1-x}{x})^{10} \leq k##

## \implies (\frac{1-x}{x}) \leq k^{1/10}##

##\implies \ln (\frac{1-x}{x}) \leq \frac{1}{10} \ln(k) ##

Let ##k' = \frac{1}{10} \ln(k)##, and let ##g(x) = \ln (\frac{1-x}{x})##. We can see that the second derivative of ##g(x)## is ##\frac{1}{1-x^2} + \frac{1}{x^2}##, which is positive for ##0 < x < 1##. So the function is concave upward.

But then ##g(x) \leq k'## is equivalent to ##c < x < c'## for some ##c## and ##c'## such that, for all ##x \leq c## and ##x \geq c'##, we have ##g(x) > k'##.

But we already have the distribution for ##X##, right?
 
Artusartos said:

##\frac{L(0)}{L(1)} = \frac{(1-x)^{10}}{x^{10}} = (\frac{1-x}{x})^{10}##

Let ##k' = \frac{1}{10} \ln(k)##, and let ##g(x) = \ln (\frac{1-x}{x})##. We can see that the second derivative of ##g(x)## is ##\frac{1}{1-x^2} + \frac{1}{x^2}##, which is positive for ##0 < x < 1##. So the function is concave upward.

You claim that ##g(x) = \ln (\frac{1-x}{x})## is a convex function (I refuse to use the old-fashioned terms concave up or concave down), but that is false: you have computed ##g''(x)## incorrectly. The function g(x) switches from convex to concave as x increases from 0 to 1.
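
As a quick numerical check (a sketch, not from the original posts): the corrected second derivative is ##g''(x) = \frac{1}{x^2} - \frac{1}{(1-x)^2}##, which changes sign at ##x = 1/2##.

```python
import numpy as np

def g(x):
    # g(x) = ln((1 - x) / x) = ln(1 - x) - ln(x)
    return np.log((1 - x) / x)

def g2(x):
    # Second derivative: g''(x) = 1/x^2 - 1/(1 - x)^2
    return 1 / x**2 - 1 / (1 - x)**2

# g'' is positive on (0, 1/2) and negative on (1/2, 1),
# so g is convex on the left half of the interval and concave on the right half.
for x in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(x, g2(x))
```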
 
Ray Vickson said:
You claim that ##g(x) = \ln (\frac{1-x}{x})## is a convex function (I refuse to use the old-fashioned terms concave up or concave down), but that is false: you have computed ##g''(x)## incorrectly. The function g(x) switches from convex to concave as x increases from 0 to 1.

Thanks. But we do know that ##\frac{1-x}{x}## is convex, so we can come up with a similar result without taking the ln of both sides. Is that right?
 
Ray Vickson said:
(I refuse to use the old-fashioned terms concave up or concave down)

Damn! I must have missed when they went out of fashion.
 
Ray Vickson said:
You should not have ##x_1, x_2, \ldots, x_{10}##. All samples are drawn from the same distribution, with some fixed but unknown ##x##.
The samples DO NOT have the same fixed but unknown x.
He should have the subscripts. It is true that the x values are from the same distribution, but calling each one x is not correct. In particular, the ratio of likelihoods at 0 and 1 IS NOT

$$\frac{(1-x)^{10}}{x^{10}}$$

Look, for example, at the relevant sections of an introductory text such as Hogg/Craig or intermediate texts like Bickel and Doksum, or any more recent mathematical stat text.
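
For concreteness, a small sketch (made-up sample values, illustrative only) of the ratio with the subscripts kept, and of the equivalent log form, where the rejection rule becomes a condition on a sum over the observations:

```python
import numpy as np

# Made-up sample of size n = 10 in (0, 1), for illustration only
x = np.array([0.12, 0.35, 0.48, 0.61, 0.27, 0.83, 0.55, 0.40, 0.70, 0.22])

# Ratio of likelihoods at theta = 0 and theta = 1 keeps every x_i
ratio = np.prod(1 - x) / np.prod(x)

# On the log scale the same rule becomes
#   sum_i ln((1 - x_i) / x_i) <= ln(k),
# i.e. the test statistic is a sum over the sample, not a function of a single x
log_stat = np.sum(np.log((1 - x) / x))

print(ratio, np.exp(log_stat))  # the two agree up to rounding
```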
 
LCKurtz said:
Damn! I must have missed when they went out of fashion.


Most modern optimization textbooks (say, written after about the mid 1960s) seem to have abandoned the concave up/down nomenclature. Calculus texts often still use it, though. I guess it has not been standardized.
 
Artusartos said:

##\frac{L(0)}{L(1)} = \frac{(1-x)^{10}}{x^{10}} = (\frac{1-x}{x})^{10}##

But then ##g(x) \leq k'## is equivalent to ##c < x < c'## for some ##c## and ##c'## such that, for all ##x \leq c## and ##x \geq c'##, we have ##g(x) > k'##.

But we already have the distribution for ##X##, right?

Sorry, I take back what I said at first: I guess ##X## is the random variable and ##\theta## the parameter, so you should have ##x_1, x_2, \ldots, x_{10}##! I am going to stop posting very late at night as an insomnia cure.

I think I see your problem, but can only tell you what I would do if I were a Bayesian. As a Bayesian, I would essentially view ##\theta## as a random quantity with prior probabilities
$$P\{\theta = 0\} = p_0, \quad P\{\theta = 1\} = q_0 \equiv 1 - p_0.$$
The posterior probabilities would be
$$P\{\theta = 0 \mid x_1, x_2, \ldots, x_n\} = \frac{p_0\, f(x_1, x_2, \ldots, x_n \mid \theta = 0)}{f(x_1, x_2, \ldots, x_n)}, \text{ where}$$
$$f(x_1, x_2, \ldots, x_n) = p_0\, f(x_1, x_2, \ldots, x_n \mid \theta = 0) + q_0\, f(x_1, x_2, \ldots, x_n \mid \theta = 1), \text{ and}$$
$$f(x_1, x_2, \ldots, x_n \mid \theta = 0) = 2^n (1-x_1)(1-x_2) \cdots (1-x_n), \quad f(x_1, x_2, \ldots, x_n \mid \theta = 1) = 2^n x_1 x_2 \cdots x_n,$$
with ##P\{ \theta = 1 | x_1, x_2 , \ldots, x_n \}= 1-P\{ \theta = 0 | x_1, x_2 , \ldots, x_n \}.##
Thus, if we set ##P = x_1 x_2 \cdots x_n##, ##Q = (1-x_1)(1-x_2) \cdots (1-x_n),## and if we had a uniform prior (##p_0 = q_0 = 1/2##) we would have
$$P\{\theta = 1 \mid x_1, x_2, \ldots, x_n\} = \frac{P}{P+Q}.$$

I'm not sure what you would do if you were not a Bayesian.
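
A small numerical sketch of that posterior (again with made-up data and the uniform prior ##p_0 = q_0 = 1/2##):

```python
import numpy as np

# Made-up sample of size 10 and a uniform prior on theta
x = np.array([0.12, 0.35, 0.48, 0.61, 0.27, 0.83, 0.55, 0.40, 0.70, 0.22])
p0 = q0 = 0.5

P = np.prod(x)      # proportional to f(x_1,...,x_n | theta = 1); the 2^n factor cancels
Q = np.prod(1 - x)  # proportional to f(x_1,...,x_n | theta = 0)

post_theta1 = q0 * P / (p0 * Q + q0 * P)  # equals P / (P + Q) under the uniform prior
post_theta0 = 1 - post_theta1

# A large posterior for theta = 1 corresponds to a small value of
# Q / P = prod(1 - x_i) / prod(x_i), matching the likelihood-ratio rejection rule above.
print(post_theta0, post_theta1)
```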
 
