1. The problem statement, all variables and given/known data

An engineer is measuring a quantity q. It is assumed that there is a random error in each measurement, so the engineer takes n measurements and reports the average of the measurements as the estimated value of q. Specifically, if Yi is the value obtained in the i-th measurement, we assume that Yi = q + Xi, where Xi is the error in the i-th measurement. We assume the Xi are i.i.d. with E[Xi] = 0 and Var(Xi) = 4 units. The engineer reports the average of the measurements:

Mn = (Y1 + ... + Yn) / n

How many measurements does the engineer need to make until he is 95% sure that the final error is less than 0.1 units? In other words, what should the value of n be such that

P(q − 0.1 ≤ Mn ≤ q + 0.1) ≥ 0.95?

2. Relevant equations

The Central Limit Theorem: for large n,

Zn = (Mn − μ) / (σ/√n)

is approximately standard normal.

3. The attempt at a solution

This is the formula I chose to use. It seems like a simple variable swap, but my problem is pulling n out:

P(y1 ≤ Y ≤ y2) = P( (y1 − nμ)/(σ√n) ≤ ((X1 + ... + Xn) − nμ)/(σ√n) ≤ (y2 − nμ)/(σ√n) )

Then this would give

(q + 0.1)/(2√n) − (q − 0.1)/(2√n) = Φ⁻¹(0.95) = 1.64

Combining this gives

q/(2√n) + 0.1/(2√n) − q/(2√n) + 0.1/(2√n) = 0.2/(2√n) = 1/√n = 1.64 ⇒ n = (1/1.64)² = 0.37

Now... this is not an integer, and it makes absolutely no sense. I am semi-confident in my process, but I think I may have made too many assumptions about the central limit theorem.
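
To sanity-check whatever n the algebra produces, here is a minimal Monte Carlo sketch in Python. The problem only gives E[Xi] = 0 and Var(Xi) = 4, so treating the errors as normal with σ = 2, and the placeholder value q = 10.0, are my own assumptions; the coverage probability should not depend on q, since Mn − q is just the mean of the errors.

```python
import numpy as np

rng = np.random.default_rng(0)

def coverage(n, q=10.0, trials=100_000, sigma=2.0, tol=0.1):
    """Estimate P(|Mn - q| <= tol) for a candidate n by simulation.

    Assumption (not stated in the problem): the errors Xi are drawn
    as N(0, sigma^2) with sigma = 2; only E[Xi] = 0 and Var(Xi) = 4
    are actually given. q = 10.0 is an arbitrary placeholder; the
    result does not depend on it, since Mn - q = mean of the errors.
    """
    errors = rng.normal(0.0, sigma, size=(trials, n))  # Xi for each trial
    Mn = q + errors.mean(axis=1)                       # Mn = q + mean(Xi)
    return np.mean(np.abs(Mn - q) <= tol)

# Check a few candidate values of n.
for n in (100, 500, 1000, 1500, 2000):
    print(n, coverage(n))
```

If my algebraic value of n is right, coverage(n) should come out close to 0.95 for that n, so this at least tells me whether an answer is in the right ballpark.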