Maximum A Posteriori: Countable Hypotheses

  • Thread starter: OhMyMarkov
  • Tags: Maximum
SUMMARY

The discussion focuses on maximum a posteriori (MAP) estimation with countably many hypotheses, specifically the decision rule $m_0 = \operatorname{arg\,max}_m p(x|H_m)$. Participants explore the implications of having infinitely many hypotheses, concluding that the same principles govern finite and countably infinite hypothesis sets. The conversation emphasizes that the hypotheses must carry some logical ordering for the maximum likelihood estimate of the unobserved parameter $\theta_m$ to be found.

PREREQUISITES
  • Understanding of Bayesian inference and posterior estimation
  • Familiarity with likelihood functions and their maximization
  • Knowledge of normal distributions and their properties
  • Concept of countable versus uncountable sets in probability theory
NEXT STEPS
  • Study Bayesian inference techniques for infinite hypothesis spaces
  • Learn about the properties of normal distributions and their applications in hypothesis testing
  • Explore the concept of likelihood maximization in statistical modeling
  • Investigate the implications of countable versus uncountable hypotheses in statistical analysis
USEFUL FOR

Statisticians, data scientists, and researchers involved in Bayesian analysis and hypothesis testing, particularly those dealing with complex models involving infinite parameter spaces.

OhMyMarkov
Hello everyone!

Suppose we have multiple hypotheses, $H_1, H_2, \dots, H_N$, of equal prior probability, and we wish to choose the unobserved parameter $\theta_m$ according to the following decision rule: $m_0 = \operatorname{arg\,max}_m p(x|H_m)$.

What if there are infinitely many hypotheses (countably infinite)?
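
To make the finite rule concrete, here is a minimal Python sketch, assuming Gaussian noise so that $p(x|H_m)$ is a normal density centered at $\theta_m$; the particular $\theta_m$ values, noise level $\sigma$, and observation $x$ are illustrative assumptions, not values from the thread.

```python
# Minimal sketch of m0 = argmax_m p(x | H_m) for finitely many hypotheses,
# assuming under H_m that X = theta_m + N with N ~ Normal(0, sigma^2).
# thetas, sigma, and x are illustrative assumptions, not values from the thread.
import numpy as np
from scipy.stats import norm

thetas = np.array([-2.0, 0.0, 1.5, 4.0])  # candidate parameters theta_m
sigma = 0.5                                # noise standard deviation
x = 1.2                                    # observed value

likelihoods = norm.pdf(x, loc=thetas, scale=sigma)  # p(x | H_m) for each m

# With equal priors, the MAP decision coincides with maximum likelihood.
m0 = int(np.argmax(likelihoods))
print(f"Selected hypothesis H_{m0} with theta = {thetas[m0]}")
```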
 
OhMyMarkov said:
Hello everyone!

Suppose we have multiple hypotheses, $H_1, H_2, \dots, H_N$, of equal prior probability, and we wish to choose the unobserved parameter $\theta_m$ according to the following decision rule: $m_0 = \operatorname{arg\,max}_m p(x|H_m)$.

What if there are infinitely many hypotheses (countably infinite)?

In principle there is no difference; if you want to know more, you will need to be more specific.

CB
 
Hello CaptainBlack,

Let's start with two hypotheses of equal prior probability (a "flat" prior):

$H_0: X = \theta _0 + N$
$H_1: X = \theta _1 + N$

where $N$ is a normal random variable (let's say with variance $\ll \frac{a+b}{2}$)

then the solution is $\operatorname{arg\, max}_m p(x|H_m)$.

But what if there were infinitely many hypotheses, i.e., $\theta$ is a real variable? How do we estimate $\theta$?
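
Regarding the continuous-$\theta$ question, here is a minimal sketch under assumptions added for illustration: a flat (improper) prior on $\theta$ and Gaussian noise, in which case maximizing $p(x|\theta)$ over the real line gives the sample mean of the observations.

```python
# Sketch of the continuous case: X = theta + N, N ~ Normal(0, sigma^2),
# flat (improper) prior on theta, so MAP reduces to maximum likelihood.
# The Gaussian log-likelihood sum_i -(x_i - theta)^2 / (2 sigma^2) is
# maximized in closed form at theta_hat = mean(x). Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
theta_true, sigma = 2.0, 0.5
x = theta_true + sigma * rng.standard_normal(100)  # 100 noisy observations

theta_hat = x.mean()  # closed-form maximizer of the likelihood over the real line
print(f"ML/MAP estimate of theta: {theta_hat:.3f}")
```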
 
OhMyMarkov said:
Hello CaptainBlack,

Let's start with two hypotheses of equal prior probability (a "flat" prior):

$H_0: X = \theta _0 + N$
$H_1: X = \theta _1 + N$

where $N$ is a normal random variable (let's say with variance $\ll \frac{a+b}{2}$)

then the solution is $\operatorname{arg\, max}_m p(x|H_m)$.

But what if there were infinitely many hypotheses, i.e., $\theta$ is a real variable? How do we estimate $\theta$?

In principle I see no difference between a finite and a countably infinite number of hypotheses, other than that you can no longer simply pick the required hypothesis out of a finite list of likelihoods.

But you cannot have a completely disordered collection of hypotheses: there must be some logic to their ordering, so there will be some corresponding structure in the likelihoods, and it is that structure that allows you to find the hypothesis of maximum likelihood.

CB
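
A hedged illustration of this ordering argument, with an assumed integer grid $\theta_m = m$ that is not from the thread: the Gaussian likelihood is unimodal in $m$, so its maximizer can be located directly rather than by scanning an infinite list.

```python
# Illustration of exploiting order in a countably infinite hypothesis family.
# Assumed example (not from the thread): theta_m = m for every integer m,
# with X = theta_m + N, N ~ Normal(0, sigma^2). The likelihood decays
# monotonically as m moves away from x, so the maximizer is simply the
# integer nearest to x; no infinite scan over hypotheses is needed.
import numpy as np
from scipy.stats import norm

x, sigma = 3.7, 1.0
m0 = int(np.round(x))  # nearest integer maximizes the Gaussian likelihood

# Sanity check against a brute-force scan over a finite window around m0.
window = np.arange(m0 - 10, m0 + 11)
assert window[np.argmax(norm.pdf(x, loc=window, scale=sigma))] == m0
print(f"Selected hypothesis H_{m0} (theta = {m0})")
```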
 
