# Standard Deviation (Radiological Physics, Attix)

1. Sep 8, 2012

### azaharak

In Frank Attix's book on Radiological Physics:

σ = √E ≈ √μ, where σ is the standard deviation of a single random measurement.

Here E is the expectation value of a stochastic process, which approaches μ (the average of the measured values) as the number of measurements becomes very large (→ ∞).

I agree with how the mean and expectation value approach each other, but I do not see how the standard deviation is the square root of the expectation value.

Isn't the standard deviation σ = √[E(x²) − E(x)²] = √(E[(x − μ)²])?

He continues with the following example:

A detector makes 10 measurements, and the average number of rays detected (counts) per measurement is 10^5.

He writes that the standard deviation of the mean is √[E(x)/n] ≈ √(μ/n) = √[(10^5)/10] = 100,

where n is the number of measurements.

I agree that the standard deviation of the mean is related to the standard deviation by σ'=σ/√(n), but once again I'm not sure how he gets the standard deviation itself.

Help!

2. Sep 8, 2012

### chiro

Hey azaharak.

The short answer is that you want to calculate Var[X_bar] where X_bar is the sample mean (sum everything up and divide by the number of total samples).

Basically, if you do this for a random sample (i.e. you get n samples that are all independent draws from a fixed population), then the variance can be calculated as Var[X_bar] = (1/n²)·(Var[X1] + Var[X2] + ... + Var[Xn]) = (n/n²)·Var[X] = Var[X]/n = σ²/n.

This means the standard deviation of the estimator of the mean is σ/√n.
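As a quick check (a simulation sketch of my own, not from the textbook; the population parameters are arbitrary), repeating the experiment many times shows the empirical standard deviation of the sample mean matching σ/√n:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0          # population standard deviation (chosen for the demo)
n = 50               # samples per experiment
trials = 200_000     # number of repeated experiments

# Draw `trials` independent samples of size n and take each sample mean.
samples = rng.normal(loc=10.0, scale=sigma, size=(trials, n))
means = samples.mean(axis=1)

# Empirical std of the sample mean vs. the theoretical sigma/sqrt(n).
print(means.std(), sigma / np.sqrt(n))
```

The two printed numbers agree to a few decimal places, which is the Var[X_bar] = σ²/n result in action.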

If we know the population variance, we can use it directly; otherwise we need to estimate it from the sample, which is where your formula comes in.

There are theoretical reasons why we can do this, especially when the estimator for the mean is the sample mean, which happens to be a maximum likelihood estimator (MLE). If you are really keen, you can look into how this is used in the invariance principle, in finding estimates for transformations of parameters, and in getting distributions of general estimators.

3. Sep 8, 2012

### azaharak

I understand why the standard deviation of the mean is related to the standard deviation by a factor of √(n). I have no issue with this.

What I don't understand is how you can estimate the standard deviation, by taking the square root of the expectation value. That is where my confusion lies.

____

Here's a thought experiment. Assume a normal distribution of times obtained from an experiment where we measure how long something takes to fall. Suppose the average fall time is around 0.8 seconds, and my times scatter from each other on the order of 0.1 s.

The above formula would give a standard deviation of √0.8 ≈ 0.9 s, which is larger than the 0.8 s mean itself. The standard deviation should be on the order of the scatter (generally speaking).

I can't see how this formula or relation stands true.
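To make the objection concrete (my own sketch, using the thought-experiment numbers above), sampling normally distributed fall times with mean 0.8 s and scatter 0.1 s gives a sample standard deviation near 0.1 s, nowhere near √0.8:

```python
import numpy as np

rng = np.random.default_rng(2)
# Fall times: mean 0.8 s, scatter 0.1 s (the thought-experiment numbers).
times = rng.normal(loc=0.8, scale=0.1, size=100_000)

# The sample std reflects the scatter, not the square root of the mean.
print(times.std(), np.sqrt(times.mean()))
```

So for general (non-counting) data, σ = √μ clearly cannot hold.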

Last edited: Sep 8, 2012
4. Sep 8, 2012

### chiro

I'm pretty sure this model is based on a Poisson distribution. In this distribution the mean and the variance are the same.
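A quick Python sketch (an illustration I'm adding, not from Attix) drawing Poisson counts with the book's mean of 10^5 shows the sample standard deviation landing right on √μ:

```python
import numpy as np

rng = np.random.default_rng(1)
mu = 1e5                                 # Attix's 10^5 counts per measurement
counts = rng.poisson(mu, size=100_000)   # many simulated measurements

# For Poisson data the variance equals the mean, so the sample standard
# deviation should come out close to sqrt(mu), about 316 counts.
print(counts.std(), np.sqrt(mu))
```

That also recovers the book's standard deviation of the mean: √(μ/n) = √(10^5/10) = 100 counts for 10 measurements.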

5. Sep 8, 2012

### azaharak

Thank you, it now makes sense.

For a large mean, a Poisson distribution looks Gaussian.

Thanks!

6. Sep 8, 2012

### Stephen Tashi

The equation does look crazy, but we can probably answer the question if you state the claim precisely.

You (or Frank) haven't stated the context of the equation clearly.

You didn't define the stochastic process. A stochastic process is an indexed collection of random variables. What is the index being used? If the index is k and the random variable is X[k], what does this random variable represent?

What is meant by the "the expectation value" of the process? Over what set of events are you taking an expectation?

Over what set of events are you taking an expectation in order to compute $\sigma$? For that matter, what random variable is $\sigma$ supposed to be the standard deviation of?