An optimal strategy to blend two probability estimates

broccoli7
Two mariners report to the skipper of a ship that they are distances d1 and d2 from the shore. The skipper knows from historical data that mariners A and B make errors that are normally distributed with standard deviations s1 and s2, respectively. What should the skipper do to arrive at the best estimate of how far the ship is from the shore?

Spoiler http://bayesianthink.blogspot.com/2013/02/the-case-of-two-mariners.html
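For reference, the textbook combination here (assuming independent, unbiased Gaussian errors, which is what the problem statement implies) is the inverse-variance weighted average. A minimal sketch, with hypothetical numbers:

```python
import math

def blend(d1, s1, d2, s2):
    """Inverse-variance weighted average of two independent, unbiased
    Gaussian estimates; among linear unbiased combinations this one
    has the smallest variance."""
    w1, w2 = 1.0 / s1**2, 1.0 / s2**2
    d = (w1 * d1 + w2 * d2) / (w1 + w2)
    s = 1.0 / math.sqrt(w1 + w2)  # std dev of the blended estimate
    return d, s

# Hypothetical numbers: A reports 10 (sigma 2), B reports 14 (sigma 4)
d, s = blend(10.0, 2.0, 14.0, 4.0)
print(d, s)  # 10.8, about 1.79 -- tighter than either mariner alone
```

Note the blended standard deviation is smaller than both s1 and s2, so the skipper always gains by combining rather than trusting the better mariner alone.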
 
broccoli7 said:
What should the skipper do to arrive at the best estimate of how far the ship is from the shore?

He should first define what he means by "best".
 
broccoli7 said:
Two mariners report to the skipper of a ship that they are distances d1 and d2 from the shore. The skipper knows from historical data that mariners A and B make errors that are normally distributed with standard deviations s1 and s2, respectively. What should the skipper do to arrive at the best estimate of how far the ship is from the shore?

Spoiler http://bayesianthink.blogspot.com/2013/02/the-case-of-two-mariners.html

The answer given in the spoiler is sub-optimal.

The skipper can arrive at a [STRIKE]better[/STRIKE] closer estimate by ignoring both mariners whenever both estimates are negative.
 
jbriggs444 said:
The answer given in the spoiler is sub-optimal.

The skipper can arrive at a [STRIKE]better[/STRIKE] closer estimate by ignoring both mariners whenever both estimates are negative.

If making a negative error means causing a collision, that is certainly the case. But it's not possible to say what is optimal until the captain defines what quantity he is trying to optimize. For example, does he want an estimator with the minimum expected squared error or the minimum expected absolute error? Or does he want a maximum likelihood estimator, etc.?

An error of -0.3 is smaller when squared than an error of +0.5.
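The clamping point can be checked directly: since the true distance is nonnegative, replacing a negative blended estimate by zero moves it closer to the truth in every single case, so it can never increase the mean squared error. A quick simulation sketch (the uniform prior and the sigmas are arbitrary choices for illustration):

```python
import random

random.seed(0)
s1, s2 = 2.0, 3.0
w1, w2 = 1 / s1**2, 1 / s2**2

sq_raw = sq_clamped = 0.0
n = 100_000
for _ in range(n):
    d = random.uniform(0.0, 5.0)           # true distance (arbitrary prior)
    d1 = random.gauss(d, s1)               # mariner A's report
    d2 = random.gauss(d, s2)               # mariner B's report
    est = (w1 * d1 + w2 * d2) / (w1 + w2)  # inverse-variance blend
    sq_raw += (est - d) ** 2
    sq_clamped += (max(est, 0.0) - d) ** 2  # never report a negative distance

# Clamping only changes cases where est < 0, and there it moves the
# estimate toward the (nonnegative) truth, so this always holds:
print(sq_clamped <= sq_raw)  # True
```

This only shows the plain weighted average is beatable under squared error; it says nothing about which estimator is "best" until that word is defined.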
 
I may be babbling a bit here, but...

From a Bayesian perspective the skipper has some unspecified prior distribution in mind for the ship's possible distance from shore. The figures reported by the two mariners are (hopefully independent!) pieces of evidence that may lead him to revise that initial estimate. If the skipper's prior assigns zero probability to the ship being anywhere on the landward side of the shoreline, then no evidence will ever put posterior probability there.

The problem appears to ask for a single parameter that is related to this distribution and is optimal, without having specified the Bayesian prior and without, as you have pointed out, having specified the criterion for optimality.
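To make the prior-dependence concrete, here is one possible choice worked out: a flat prior on [0, ∞) (an arbitrary assumption, which is exactly the thread's point). The posterior is then a Gaussian truncated at zero, and its mean has a closed form:

```python
import math

def posterior_mean(d1, s1, d2, s2):
    """Posterior mean of the distance under a flat prior on [0, inf)
    and independent Gaussian errors.  The posterior is the blended
    Gaussian N(mu, sigma) truncated to nonnegative distances."""
    w1, w2 = 1 / s1**2, 1 / s2**2
    mu = (w1 * d1 + w2 * d2) / (w1 + w2)   # untruncated posterior mean
    sigma = 1 / math.sqrt(w1 + w2)
    a = -mu / sigma                        # truncation point in std units
    pdf = math.exp(-a * a / 2) / math.sqrt(2 * math.pi)
    sf = 0.5 * math.erfc(a / math.sqrt(2))  # P(standard normal > a)
    return mu + sigma * pdf / sf            # mean of N(mu, sigma) given d >= 0

# Far from shore the truncation is negligible and this reproduces the
# plain weighted average; with reports near (or below) zero it pulls
# the estimate back to the seaward side:
print(posterior_mean(10.0, 2.0, 14.0, 4.0))   # ~10.8
print(posterior_mean(-1.0, 2.0, -2.0, 4.0))   # positive, despite both reports
```

A different prior (say, one concentrated around the charted position) would give a different answer, which is why no single estimate can be called optimal without stating the prior and the loss function.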
 