Apteronotus said:
I guess I would define it as follows:
Which quantity is smaller:

\left\{E[g(X,Y)] - g(x,y)\right\}^2

or

\left\{g(E[X], E[Y]) - g(x,y)\right\}^2
You can only answer such questions if you know E(X), E(Y), and E[g(X,Y)]. If you already know those quantities, what statistical problem are you trying to solve?
If lower case "x" and "y" denote random variables in those expressions, then the expressions themselves take on random values, so you can't claim anything about which one of them is smaller.
In your next post, you seem to say E[X] = x. That would imply x is a constant. So what is "x"? Is it a constant, or is it a random variable?
A typical scenario in statistics would be that we are trying to estimate E[g(X,Y)] from a sample. We define some function W of the sample data. This function is an "estimator". I think what you want to ask is which is the "best" estimator for E[g(X,Y)]. Is it the function W1 = the mean value of g(x_i, y_i) taken over all data points (x_i, y_i)? Or is it the function W2 = g(x_bar, y_bar), where x_bar and y_bar are the means of the samples x_1, x_2, ... and y_1, y_2, ... respectively?
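As a concrete illustration (my own sketch, not part of your question: the choice of g and of the distributions below is purely an assumption for the example), here are the two candidate estimators computed on one simulated sample:

import numpy as np

rng = np.random.default_rng(0)

def g(x, y):
    # an arbitrary nonlinear function, chosen only for illustration
    return x * y + x**2

# one sample of n paired observations (the distributions are assumptions)
n = 50
x = rng.exponential(scale=2.0, size=n)
y = rng.normal(loc=1.0, scale=1.0, size=n)

W1 = np.mean(g(x, y))           # mean of g(x_i, y_i) over the data points
W2 = g(np.mean(x), np.mean(y))  # g evaluated at the sample means

print("W1 =", W1, " W2 =", W2)  # the two estimators generally differ

For a nonlinear g the two numbers will usually not agree, which is exactly why the question of which one is "better" needs a precise criterion.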
One way to define a "best" estimator W(x_1, x_2, ..., y_1, y_2, ...) is to say that it minimizes the expected square error,
i.e. it minimizes E[ ( W(x_1, x_2, ..., y_1, y_2, ...) - E[g(X,Y)] )^2 ].
Note that there have to be two expectations in this expression. If you leave off the outer one, the expression is a random quantity that varies with the data.
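Continuing the earlier sketch (same assumed g and distributions, again my own illustration rather than anything from your post), the expected square error of each estimator can be approximated by Monte Carlo: repeat the sampling many times and average the squared distance from the true E[g(X,Y)].

import numpy as np

rng = np.random.default_rng(0)
n = 50

def g(x, y):
    # same arbitrary nonlinear function as in the earlier sketch
    return x * y + x**2

# With X ~ Exponential(mean 2) and Y ~ Normal(1, 1) independent,
# E[g(X,Y)] = E[X]E[Y] + E[X^2] = 2*1 + (4 + 4) = 10.
true_value = 10.0
reps = 20000

err1 = np.empty(reps)  # squared error of W1 = mean of g(x_i, y_i)
err2 = np.empty(reps)  # squared error of W2 = g(x_bar, y_bar)
for r in range(reps):
    x = rng.exponential(scale=2.0, size=n)
    y = rng.normal(loc=1.0, scale=1.0, size=n)
    err1[r] = (np.mean(g(x, y)) - true_value) ** 2
    err2[r] = (g(np.mean(x), np.mean(y)) - true_value) ** 2

print("estimated MSE of W1:", err1.mean())
print("estimated MSE of W2:", err2.mean())

The averages of err1 and err2 approximate the two expected square errors, so they give a basis for calling one estimator "better" than the other under this criterion.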
I think the language you are using in your thoughts fails to distinguish among the following different concepts.
1) The mean of a distribution
2) The mean of a sample that is drawn from that distribution
3) An estimator for the mean of the distribution
Similar distinctions hold for other statistical quantities, such as the standard deviation, the variance, the mode, etc.
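A small sketch may help keep the three apart (the Normal(5, 2) population here is just an assumed example, not anything from your posts):

import numpy as np

rng = np.random.default_rng(1)

# 1) the mean of the distribution: a fixed (usually unknown) parameter
mu = 5.0                              # assumed Normal(5, 2) population

# 2) the mean of a sample drawn from that distribution: a random quantity
sample = rng.normal(loc=mu, scale=2.0, size=30)
sample_mean = sample.mean()           # varies from sample to sample

# 3) an estimator for the mean of the distribution: a function of the data
estimator = np.mean                   # here, the sample-mean rule itself

print(mu, sample_mean, estimator(sample))

The first is a number that belongs to the distribution, the second is a number computed from one particular data set, and the third is a rule that can be applied to any data set.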