Pattern Recognition and Machine Learning problem 2.7

SUMMARY

The discussion focuses on proving that a binomial random variable x, with a prior distribution for μ defined by a beta distribution, has a posterior mean that lies between the prior mean and the maximum likelihood estimate for μ. The key equations presented are eq. 1, which establishes the relationship between the prior mean, posterior mean, and maximum likelihood estimate, and eq. 2, which provides a method to express the posterior mean as a weighted average of the prior mean and the maximum likelihood estimate. The user seeks clarification on the legality of transitioning from eq. 1 to eq. 2 and the implications of the parameter λ in this context.

PREREQUISITES
  • Understanding of binomial random variables
  • Familiarity with beta distributions
  • Knowledge of maximum likelihood estimation (MLE)
  • Basic concepts of Bayesian statistics
NEXT STEPS
  • Study the properties of binomial random variables and their distributions
  • Learn about beta distribution and its role in Bayesian inference
  • Research maximum likelihood estimation techniques
  • Explore the concept of weighted averages in statistical contexts
USEFUL FOR

Students and practitioners in statistics, data science, and machine learning who are working on Bayesian inference and pattern recognition problems.

karse
I'm working my way through Pattern Recognition and Machine Learning, using this course page http://www.cs.pitt.edu/~milos/courses/cs2750/ as a guide.

Homework Statement


We have to prove that for a binomial random variable x, with a prior distribution for [itex]\mu[/itex] given by a beta distribution, the posterior mean of [itex]\mu[/itex] lies between the prior mean and the maximum likelihood estimate of [itex]\mu[/itex].

[itex]\underbrace{\frac{a}{a+b}}_{\text{prior mean}} < \underbrace{\frac{m+a}{m+a+l+b}}_{\text{posterior mean}} < \underbrace{\frac{m}{m+l}}_{\text{ML estimate of } \mu} \quad \text{(eq. 1)}[/itex]

A hint in the book states that this is equivalent to solving:

[itex]\frac{m+a}{m+a+l+b} = \lambda\cdot\frac{a}{a+b} + (1-\lambda)\cdot\frac{m}{m+l}, \quad 0 \le \lambda \le 1 \quad \text{(eq. 2)}[/itex]

m and l are the numbers of observed values where x=1 and x=0, respectively; a and b specify our prior belief via the beta distribution.

My question is about the hint: how do I get from eq. 1 to eq. 2? Is it always "legal" to solve eq. 2 instead of eq. 1? (I'm not looking for a solution to the original problem. :) )
 
karse said:
How do I get from eq. 1 to eq. 2? Is it always "legal" to solve eq. 2 instead of eq. 1?

Look at ##\lambda## = 0 and ##\lambda## = 1.
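The logic behind the hint: any number of the form ##\lambda p + (1-\lambda) q## with ##0 \le \lambda \le 1## lies between p and q (the endpoints ##\lambda = 0## and ##\lambda = 1## give q and p themselves), so exhibiting such a ##\lambda## is exactly what eq. 1 asks for. This can be sanity-checked numerically; the sketch below uses exact rational arithmetic and the candidate weight ##\lambda = (a+b)/(a+b+m+l)##, which is an assumption made here for illustration rather than something stated in the thread.

```python
from fractions import Fraction

def check(a, b, m, l):
    """Verify eq. 2 for a Beta(a, b) prior with m observations of x=1
    and l observations of x=0, using exact rational arithmetic."""
    prior = Fraction(a, a + b)          # prior mean a/(a+b)
    mle = Fraction(m, m + l)            # ML estimate m/(m+l)
    post = Fraction(m + a, m + a + l + b)  # posterior mean (m+a)/(m+a+l+b)
    # Candidate mixing weight (an assumption, chosen to make eq. 2 balance):
    lam = Fraction(a + b, a + b + m + l)
    assert 0 <= lam <= 1
    # Eq. 2: posterior mean equals a convex combination of prior mean and MLE.
    assert post == lam * prior + (1 - lam) * mle
    return lam

print(check(2, 3, 7, 5))  # prints 5/17
```

Because the check holds for arbitrary positive a, b, m, l (the assertion is exact, not floating-point), the posterior mean is always a convex combination of the prior mean and the MLE, which is the content of eq. 1.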
 
Thanks ;)
 
