A simple Bayesian question on likelihood

  • Thread starter: ghostyc
  • Tags: Likelihood
ghostyc
Hi all,

[Attachment 6.jpg: problem statement, transcribed in the reply below]


In this question, I found that

\Pr(X_i = x_i\mid\theta)=\frac{\exp(\theta x_i)}{1+\exp(\theta)} \quad (x_i \in \{0,1\})

and I carry on with the likelihood being \frac{\exp(\theta \sum x_i)}{(1+\exp(\theta))^n}

and so S=\sum x_i = Tn, i.e. T = \frac{1}{n}\sum x_i.
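
Writing the product step out explicitly (using the conditional independence of the X_i given \theta, and noting the single formula covers both cases: x_i = 1 gives e^\theta/(1+e^\theta) and x_i = 0 gives 1/(1+e^\theta)):

l(\theta)=\prod_{i=1}^n \Pr(X_i = x_i \mid \theta)=\prod_{i=1}^n \frac{\exp(\theta x_i)}{1+\exp(\theta)}=\frac{\exp\!\left(\theta \sum_{i=1}^n x_i\right)}{(1+\exp(\theta))^n},

which matches the given form with S=\sum_{i=1}^n X_i and T=S/n.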

I need some help with part (c).

========================================
[Attachment 3.jpg: second problem statement, transcribed in a reply below]



How do I start with this question then?

I couldn't get a general expression for \Pr(X_i|\theta).


How do I generally deal with these "piecewise" probability density functions?

Thanks!
 

Suppose that X_1,...,X_n are independent and identically distributed conditional on a random parameter \theta > 0 such that
P(X_i = 1|\theta) = \frac{e^\theta}{1+e^\theta} and P(X_i = 0| \theta) = 1 - P(X_i=1| \theta).

A normal prior for \theta is taken with mean \mu and variance \sigma^2.

(a) Show that the likelihood function has the form

l(\theta) = \frac{e^{\theta S}}{(1 + e^\theta)^n} = \left(\frac{e^{\theta T}}{1 + e^\theta}\right)^n and find S and T in terms of \{X_1,X_2,\ldots,X_n\}.

(b) Hence, write down the posterior distribution for \theta.

(c) For any 0 < t < 1 and \theta > 0, show that

\frac{e^{\theta t }}{1 + e^\theta} < t^t(1-t)^{1-t} .


One thought that comes to mind is to use calculus to see if the function
f(\theta) = \theta^S (1 - \theta)^{n-S} takes its maximum value when \theta = S/n. If so, then (f(\theta))^{1/n} also takes its maximum value there.
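
To connect this with part (c), here is a sketch of one possible route (not necessarily the intended one): substitute p = \frac{e^\theta}{1+e^\theta}, which lies in (1/2, 1) for \theta > 0. Then

\frac{e^{\theta t}}{1+e^\theta} = \left(\frac{e^\theta}{1+e^\theta}\right)^t \left(\frac{1}{1+e^\theta}\right)^{1-t} = p^t (1-p)^{1-t}.

For g(p) = p^t(1-p)^{1-t} on (0,1), \frac{d}{dp}\log g(p) = \frac{t}{p} - \frac{1-t}{1-p} vanishes at p = t, and \frac{d^2}{dp^2}\log g(p) = -\frac{t}{p^2} - \frac{1-t}{(1-p)^2} < 0, so g attains its maximum t^t(1-t)^{1-t} at p = t. Hence

\frac{e^{\theta t}}{1+e^\theta} \le t^t(1-t)^{1-t},

with equality exactly when p = t. In particular the inequality is strict whenever t \le 1/2 (since p > 1/2 there), while for t > 1/2 it is strict except at the single point where e^\theta/(1+e^\theta) = t.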
 
---------------
Suppose that X_1,...,X_n are independent and identically distributed conditional on a random parameter \theta > 0 such that
for each i = 1,2,...n

P(X_i = -1|\theta) = \theta and P(X_i = 1|\theta) = 1 - \theta.

(a) Show that the likelihood function is given by

l(\theta) = \theta^{n/2 - S} (1-\theta)^{n/2 + S} and find S in terms of \{X_1,X_2,\ldots,X_n\}.
---------------

Let k be the number of the X_i that are +1, so the likelihood is \theta^{n-k}(1-\theta)^{k}; matching this with the stated form gives S = k - n/2.

Isn't "discrete" a better term than "piecewise"?
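
To make the general recipe explicit (a sketch): for a two-valued variable you can write the probability mass function as a single expression, with exponents acting as indicators. Here, for x_i \in \{-1,+1\},

P(X_i = x_i \mid \theta) = \theta^{(1-x_i)/2}\,(1-\theta)^{(1+x_i)/2},

since (1-x_i)/2 equals 1 when x_i = -1 and 0 when x_i = +1. Multiplying over the sample,

l(\theta) = \theta^{(n-\sum_i x_i)/2}\,(1-\theta)^{(n+\sum_i x_i)/2},

which matches the stated form with S = \frac{1}{2}\sum_{i=1}^n X_i = k - n/2.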
 