MHB Maximize A Posteriori: Countable Hypotheses

  • Thread starter: OhMyMarkov
  • Tags: Maximum
OhMyMarkov
Hello everyone!

Suppose we have multiple hypotheses, $H_1, H_2,\dots ,H_N$, with equal prior probabilities, and we wish to choose the unobserved parameter $\theta _m$ according to the following decision rule: $m _0 = \operatorname{arg\, max} _m p(x|H_m)$.

What if there are infinitely many hypotheses (say, countably infinitely many)?
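
As an editorial sketch (not part of the original thread): assuming Gaussian noise, so that $H_m$ says $X = \theta_m + N$ with $N \sim \mathcal{N}(0, \sigma^2)$, the finite-$N$ decision rule above can be implemented directly. The parameter values, $\sigma$, and the observation below are made-up illustration values.

```python
import math

def gaussian_likelihood(x, mean, sigma):
    """Density of Normal(mean, sigma^2) evaluated at x."""
    return math.exp(-(x - mean) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def map_decision(x, thetas, sigma=1.0):
    """With equal priors, MAP reduces to maximum likelihood:
    return m0 = argmax_m p(x | H_m) over a finite list of thetas."""
    return max(range(len(thetas)), key=lambda m: gaussian_likelihood(x, thetas[m], sigma))

# Observation x = 2.9 is closest to theta_1 = 3.0, so m0 = 1.
print(map_decision(2.9, [0.0, 3.0, 10.0]))  # → 1
```

With equal priors the posterior $p(H_m \mid x)$ is proportional to the likelihood $p(x \mid H_m)$, which is why the MAP rule reduces to picking the largest likelihood.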
 
OhMyMarkov said:
Hello everyone!

Suppose we have multiple hypotheses, $H_1, H_2,\dots ,H_N$, with equal prior probabilities, and we wish to choose the unobserved parameter $\theta _m$ according to the following decision rule: $m _0 = \operatorname{arg\, max} _m p(x|H_m)$.

What if there are infinitely many hypotheses (say, countably infinitely many)?

In principle there is no difference; if you want to know more, you will need to be more specific.

CB
 
Hello CaptainBlack,

Let's start with two hypotheses of equal prior probability (a "flat" prior):

$H_0: X = \theta _0 + N$
$H_1: X = \theta _1 + N$

where $N$ is a normal random variable (let's say with variance $\ll \frac{a+b}{2}$),

then the solution is $\operatorname{arg\, max}_m p(x|H_m)$.

But what if there were infinitely many hypotheses, i.e. $\theta$ is a real variable? How do we estimate $\theta$?
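
An editorial note on the continuous case (assuming zero-mean Gaussian noise $N \sim \mathcal{N}(0, \sigma^2)$, which the thread does not state explicitly): with a flat prior over $\theta$, the MAP estimate becomes the maximum-likelihood estimate, and for $X = \theta + N$ it has a closed form:

```latex
\hat{\theta} = \operatorname{arg\,max}_{\theta} p(x\mid\theta)
             = \operatorname{arg\,max}_{\theta} \frac{1}{\sqrt{2\pi\sigma^2}}
               \exp\!\left(-\frac{(x-\theta)^2}{2\sigma^2}\right)
             = \operatorname{arg\,min}_{\theta} (x-\theta)^2
             = x.
```

That is, maximizing the Gaussian likelihood over a continuum of hypotheses reduces to minimizing the squared error, which the observation itself achieves.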
 
OhMyMarkov said:
Hello CaptainBlack,

Let's start with two hypotheses of equal prior probability (a "flat" prior):

$H_0: X = \theta _0 + N$
$H_1: X = \theta _1 + N$

where $N$ is a normal random variable (let's say with variance $\ll \frac{a+b}{2}$),

then the solution is $\operatorname{arg\, max}_m p(x|H_m)$.

But what if there were infinitely many hypotheses, i.e. $\theta$ is a real variable? How do we estimate $\theta$?

I see no difference in principle between a finite and a countably infinite number of hypotheses, other than that you can no longer simply pick the required hypothesis out of a finite list of likelihoods.

But you cannot have a completely disordered collection of hypotheses: there must be some logic to their ordering, so there will be some corresponding structure in the likelihoods, and it is that structure that will allow you to find the hypothesis with the maximum likelihood.

CB
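
An editorial illustration of CB's point about exploiting the ordering (the indexed family $\theta_m = m$ below is a made-up example, not from the thread): for Gaussian noise, $p(x\mid H_m)$ decreases as $|x-\theta_m|$ grows, so the maximizer over the infinite index set is simply the nearest index to $x$, with no infinite search needed.

```python
import math

# Hypothetical countable family: theta_m = m for m = 0, 1, 2, ...
# With Gaussian noise, p(x | H_m) is a decreasing function of |x - m|,
# so the "logic of the order" gives the maximizer in closed form.

def map_countable(x):
    """argmax_m p(x | H_m) over m in {0, 1, 2, ...} when theta_m = m."""
    return max(round(x), 0)  # nearest integer, clamped to the index set

def lik(x, m, sigma=1.0):
    """Gaussian likelihood of x under H_m (up to a constant factor)."""
    return math.exp(-(x - m) ** 2 / (2 * sigma ** 2))

# Brute-force maximization over a large finite prefix agrees:
x = 4.7
brute = max(range(1000), key=lambda m: lik(x, m))
print(map_countable(x), brute)  # → 5 5
```

The closed form is what makes the countable case tractable: any ordered family whose likelihood is unimodal in the index admits a similar shortcut.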
 