- #1
bobby2k
- 127
- 2
Hello, I am taking an introductory class in statistics. In an earlier exam exercise they chose to estimate the answer in a way that seems odd to me. To keep my question concise I will abstract the problem and not go into the details, but I can give them if you'd like as well.
In the problem we have two jointly distributed variables X and Y. They then draw a value of Y, and they want to estimate the X value that goes with it.
The way they do this is by first finding the distribution of X given Y. That is
[itex]P(X=x|Y=y^{*})=f_{1}(x,y^{*})[/itex]
They then find the expected value of X given Y:
[itex]E(X|Y=y^{*})=f_{2}(y^{*})[/itex]
When they then draw the value of Y, they just plug it into the function [itex]f_{2}(y^{*})[/itex] and say that this is an estimate for x. Why does this work? When we estimate the mean of a normal distribution, for instance, we do not compute an expected value in order to estimate it.
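To make the first approach concrete, here is a small simulation sketch I put together. It assumes a bivariate normal joint distribution with zero means, unit variances, and correlation rho, which is purely an illustrative choice, not the distribution from the actual exam problem. For that joint distribution the conditional expectation is [itex]E(X|Y=y)=\rho y[/itex], so the plug-in estimate is [itex]f_{2}(y)=\rho y[/itex]:

```python
# Sketch: plugging the observed y into f2(y) = E[X | Y = y].
# Assumes a bivariate normal (X, Y): zero means, unit variances,
# correlation rho.  This distribution is a hypothetical stand-in
# for the actual exam problem.
import math
import random

random.seed(0)
rho = 0.8
n = 100_000

mse_cond = 0.0   # squared error of the estimate f2(y) = rho * y
mse_mean = 0.0   # squared error of ignoring y and using E[X] = 0

for _ in range(n):
    # Draw (X, Y) jointly via two independent standard normals.
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x = z1
    y = rho * z1 + math.sqrt(1 - rho**2) * z2
    mse_cond += (x - rho * y) ** 2
    mse_mean += (x - 0.0) ** 2

mse_cond /= n
mse_mean /= n
# Theory for this setup: E[(X - E[X|Y])^2] = 1 - rho^2, while Var(X) = 1,
# so using the observed y should shrink the mean squared error.
print(mse_cond, mse_mean)
```

Running this, the conditional-expectation estimate has a clearly smaller mean squared error than ignoring y, which at least shows the plug-in rule is doing something useful in this hypothetical case.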
I am wondering whether this other way of estimating X is better or worse (or even correct):
We can get the reverse conditional distribution, that is, Y given X:
[itex]P(Y=y|X=x^{*})=f_{3}(y,x^{*})[/itex]
From this we get an estimator [itex]\hat{X}^{*}(y)[/itex], and if this estimator is unbiased, we plug the observed value of Y into it and say that this is the estimate for x. This way looks more like the "normal" way, like when we estimate the mean of a normal distribution.
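And here is a sketch of what I mean by the second approach, under the same hypothetical bivariate normal assumption (zero means, unit variances, correlation rho). There [itex]E(Y|X=x)=\rho x[/itex], so one candidate [itex]\hat{X}^{*}(y)[/itex] is obtained by solving [itex]\rho\hat{x}=y[/itex], giving [itex]\hat{X}^{*}(y)=y/\rho[/itex], which is conditionally unbiased:

```python
# Sketch of the second approach: build an estimator from P(Y | X = x).
# Same illustrative bivariate normal assumption as before (zero means,
# unit variances, correlation rho) -- not the actual exam distribution.
# Since E[Y | X = x] = rho * x, solving rho * xhat = y gives
# xhat(y) = y / rho, and E[xhat(Y) | X = x] = x (unbiased given X).
import math
import random

random.seed(1)
rho = 0.8
n = 100_000

x_true = 0.5   # hypothetical fixed X, to check conditional unbiasedness
bias_sum = 0.0

for _ in range(n):
    # Draw Y from its conditional distribution given X = x_true.
    z = random.gauss(0, 1)
    y = rho * x_true + math.sqrt(1 - rho**2) * z
    bias_sum += (y / rho) - x_true

print(bias_sum / n)   # average estimation error; should be near 0
```

The average error comes out near zero, so the estimator is unbiased in this hypothetical setup, but I have no feeling for whether that makes it better or worse than the conditional-expectation approach above.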
What do you think? Is this explained somewhere?