Prediction error in a random sample

Summary
The exercise asks for the expectation, expected square, and variance of the prediction error when the mean of a first sample is used to predict the mean of a second sample, both drawn from a population with known variance 490. To solve it, one must know how sample means are distributed: a sample mean is normally distributed if the population is normal, and approximately so for large samples by the Central Limit Theorem. The variance of the prediction error follows from the variances of the two sample means, keeping in mind that the variance of a sample mean is the population variance divided by the sample size, not the population variance itself. Approximating the probability that the prediction error is less than 14 in absolute value then only requires the (approximate) distribution of the prediction error.
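To make the variance step in the summary explicit, here is the general relationship it alludes to, assuming the two samples are drawn independently:

$$\operatorname{Var}(\bar{Y}_1 - \bar{Y}_2) = \operatorname{Var}(\bar{Y}_1) + \operatorname{Var}(\bar{Y}_2) = \frac{\sigma^2}{n} + \frac{\sigma^2}{n} = \frac{2\sigma^2}{n}.$$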
Charlotte87
I have an exercise that I do not understand how to solve (statistics and probability are really my weak point...). The exercise goes as follows:

In a certain population, the random variable Y has variance equal to 490. Two independent random samples, each of size 20, are drawn. The mean of the first sample is used as the predictor of the second sample's mean.

a) Calculate the expectation, expected square, and variance of the prediction error.
b) Approximate the probability that the prediction error is less than 14 in absolute value.

Any clues on how I can start?
 
You need to look at how sample means are distributed.
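To make that concrete, here is the standard result, assuming the observations are i.i.d. and that either the population is normal or the sample size is large enough for the Central Limit Theorem to apply: for a sample of size ##n## from a population with mean ##\mu## and variance ##\sigma^2##,

$$E(\bar{Y}) = \mu, \qquad \operatorname{Var}(\bar{Y}) = \frac{\sigma^2}{n},$$

so in this exercise each sample mean has variance ##490/20 = 24.5## and is approximately normal.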
 
But how do I know how it is distributed? There is no information about that in the exercise.
 
The information should be in your course notes, probably in the part that discusses combining two samples. You will also see that the variance of a sample mean is not the same as the variance of the population: it is the population variance divided by the sample size.

The distribution of the means of successive samples is related to the distribution of the population.
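Putting the hints together, here is a sketch of the calculation, assuming the two sample means ##\bar{Y}_1## and ##\bar{Y}_2## are independent and approximately normal by the Central Limit Theorem. The prediction error is ##D = \bar{Y}_1 - \bar{Y}_2##, so

$$E(D) = \mu - \mu = 0, \qquad \operatorname{Var}(D) = \frac{\sigma^2}{n} + \frac{\sigma^2}{n} = \frac{2(490)}{20} = 49, \qquad E(D^2) = \operatorname{Var}(D) + [E(D)]^2 = 49.$$

For part (b), ##D## is approximately ##N(0, 49)##, so

$$P(|D| < 14) = P\!\left(|Z| < \frac{14}{7}\right) = P(|Z| < 2) \approx 0.954.$$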
 
