Maximum A Posteriori: Countable Hypotheses

  • Context: MHB
  • Thread starter: OhMyMarkov
  • Tags: Maximum

Discussion Overview

The discussion revolves around the decision-making process in the context of maximizing posterior probabilities when dealing with multiple hypotheses, specifically focusing on scenarios with countably infinite hypotheses. Participants explore the implications of having infinitely many hypotheses on estimating an unobserved parameter.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • Some participants propose that the decision rule $m_0 = \arg \max_m p(x|H_m)$ can be applied similarly whether there are finitely many or infinitely many hypotheses.
  • Others argue that while the principle remains the same, the practical implications of estimating $\theta$ differ when considering infinitely many hypotheses.
  • A participant suggests that there must be some logical ordering to the hypotheses, which could influence the likelihoods and the ability to determine the maximum likelihood hypothesis.

Areas of Agreement / Disagreement

Participants express differing views on whether there is a significant difference between finite and countably infinite hypotheses, indicating that the discussion remains unresolved regarding the implications of infinite hypotheses on the decision-making process.

Contextual Notes

There are assumptions regarding the nature of the hypotheses and the distribution of likelihoods that remain unaddressed, particularly concerning the implications of having a disordered collection of hypotheses.

OhMyMarkov
Hello everyone!

Suppose we have multiple hypotheses, $H_1, H_2, \dots, H_N$, of equal prior probability, and we wish to choose the unobserved parameter $\theta_m$ according to the following decision rule: $m_0 = \arg \max_m p(x|H_m)$.

What if there are infinitely many hypotheses? (The case I have in mind is countable but infinite.)
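As a minimal sketch of the finite case (my own illustration, not from the thread): with equally likely hypotheses $H_m : X = \theta_m + N$ and Gaussian noise, the rule $m_0 = \arg \max_m p(x|H_m)$ just evaluates each likelihood and takes the largest. The Gaussian model and the `sigma` default are assumptions for the example.

```python
import numpy as np

def map_decision(x, thetas, sigma=1.0):
    """Return the index m maximizing p(x | H_m) under assumed Gaussian noise."""
    thetas = np.asarray(thetas, dtype=float)
    # Gaussian log-likelihoods; the normalizing constant cancels in the argmax.
    log_lik = -((x - thetas) ** 2) / (2.0 * sigma ** 2)
    return int(np.argmax(log_lik))

# With equal priors, the MAP decision reduces to maximum likelihood,
# which here picks the theta_m nearest the observation x.
print(map_decision(2.1, [0.0, 1.0, 2.0, 3.0]))  # -> 2
```

With equal priors the posterior is proportional to the likelihood, which is why the rule above never needs the priors at all.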
 
OhMyMarkov said:
Hello everyone!

Suppose we have multiple hypotheses, $H_1, H_2, \dots, H_N$, of equal prior probability, and we wish to choose the unobserved parameter $\theta_m$ according to the following decision rule: $m_0 = \arg \max_m p(x|H_m)$.

What if there are infinitely many hypotheses? (The case I have in mind is countable but infinite.)

In principle there is no difference, if you want to know more you will need to be more specific.

CB
 
Hello CaptainBlack,

Let's start with two hypotheses of equal prior probability (a "flat" prior):

$H_0: X = \theta _0 + N$
$H_1: X = \theta _1 + N$

where $N$ is a normal random variable (let's say with variance $\ll \frac{a+b}{2}$)

then the solution is $\operatorname{arg\, max}_m p(x|H_m)$.

But what if there were infinitely many hypotheses, i.e. $\theta$ is a real variable? How do we estimate $\theta$?
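One way to see the continuous case (again an assumed sketch, using the same additive-Gaussian model): when $\theta$ ranges over the reals, the arg max over $m$ becomes an arg max over $\theta$, and for $X = \theta + N$ with $N \sim \mathcal{N}(0, \sigma^2)$ the likelihood $p(x|\theta)$ peaks at $\theta = x$, so the maximum-likelihood estimate is the observation itself. A dense grid stands in for the real line here.

```python
import numpy as np

def log_likelihood(theta, x, sigma=1.0):
    # Gaussian log-likelihood up to an additive constant (assumed model).
    return -((x - theta) ** 2) / (2.0 * sigma ** 2)

x_obs = 1.7
grid = np.linspace(-5.0, 5.0, 10001)  # dense grid standing in for the reals
theta_hat = grid[np.argmax(log_likelihood(grid, x_obs))]
print(theta_hat)  # -> 1.7, i.e. theta_hat = x
```

The grid search is only illustrative; for this model the arg max can be read off analytically as $\hat{\theta} = x$.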
 
OhMyMarkov said:
Hello CaptainBlack,

Let's start with two hypotheses of equal prior probability (a "flat" prior):

$H_0: X = \theta _0 + N$
$H_1: X = \theta _1 + N$

where $N$ is a normal random variable (let's say with variance $\ll \frac{a+b}{2}$)

then the solution is $\operatorname{arg\, max}_m p(x|H_m)$.

But what if there were infinitely many hypotheses, i.e. $\theta$ is a real variable? How do we estimate $\theta$?

In principle I see no difference between a finite and a countably infinite number of hypotheses, other than that you can no longer simply pick the required hypothesis out of a finite list of likelihoods.

But you cannot have a completely disordered collection of hypotheses; there must be some logic to their ordering, and so there will be some structure to the likelihoods, and it is that structure that allows you to find the hypothesis with the maximum likelihood.

CB
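CB's point about ordering can be illustrated with an assumed example (the model is my own, not from the thread): take $H_m : X = m + N$ with $N \sim \mathcal{N}(0, 1)$ for $m = 0, 1, 2, \dots$. The log-likelihood $-(x - m)^2/2$ is unimodal in $m$, and that structure lets a finite search locate the maximizer over the infinite list.

```python
def best_hypothesis(x):
    """Walk m = 0, 1, 2, ... and stop once the likelihood starts to fall."""
    def log_lik(m):
        # Gaussian log-likelihood of x under H_m, up to a constant (assumed model).
        return -((x - m) ** 2) / 2.0

    m = 0
    # Unimodality of log_lik in m justifies stopping at the first decrease;
    # without some such ordering we could never rule out the unexamined tail.
    while log_lik(m + 1) > log_lik(m):
        m += 1
    return m

print(best_hypothesis(3.2))  # -> 3, the integer nearest 3.2
```

The early stop is exactly the "logic to the order of the likelihoods" the reply refers to: a completely disordered likelihood sequence would leave no finite procedure for certifying the maximum.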
 
