Standard Deviation (Radiological Physics, Attix)

azaharak
In Frank Attix's book on Radiological Physics,

Equation 1.2a reads

σ = √(E) ≈ √(μ)    (σ = standard deviation of a single random measurement)

where E is the expectation value of a stochastic process, which approaches μ (the average of the measured values) as the number of measurements becomes very large (→ ∞).


I agree with how the mean and expectation value approach each other, but I do not see how the standard deviation is the square root of the expectation value.


Isn't the standard deviation σ = √[E(x^2) − E(x)^2] = √(E[(x − μ)^2])?


He continues with the following example:

A detector makes 10 measurements; the average number of rays detected (counts) per measurement is 10^5.

He writes that the standard deviation of the mean is √[E(x)/n] ≈ √(μ/n) = √(10^5/10) = 100,

where n is the number of measurements.

I agree that the standard deviation of the mean is related to the standard deviation by σ'=σ/√(n), but once again I'm not sure how he gets the standard deviation itself.

Help!
 
Hey azaharak.

The short answer is that you want to calculate Var[X_bar], where X_bar is the sample mean (sum everything up and divide by the total number of samples).

Basically, if you do this for a random sample (i.e. you take n independent samples from a fixed population), then the variance can be calculated as Var[X_bar] = (1/n^2){Var[X_1] + Var[X_2] + ... + Var[X_n]} = (n/n^2)Var[X] = Var[X]/n = σ^2/n.

This means our standard deviation of the estimator of the mean is given by sigma/SQRT(n).
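A quick numerical check of this (a sketch of my own, not from the book — the exponential population and all parameter values here are arbitrary choices for illustration):

```python
import random
import statistics

# Sketch: empirically check that the standard deviation of the sample mean
# is sigma / sqrt(n).  Population: exponential with rate 1, so sigma = 1
# (an arbitrary choice for illustration).
random.seed(0)

n = 100        # samples per experiment
trials = 5000  # number of repeated experiments

means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
         for _ in range(trials)]

observed = statistics.pstdev(means)
predicted = 1.0 / n ** 0.5  # sigma / sqrt(n), with sigma = 1

print(f"observed std of the mean: {observed:.4f}")
print(f"predicted sigma/sqrt(n):  {predicted:.4f}")
```

The two numbers agree closely, which is the Var[X_bar] = σ²/n result above seen empirically.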

If we know the population variance, we can use it directly; otherwise we need to estimate it from the sample, which is where your formula comes in.

There are theoretical reasons why we can do this, especially when the estimator for the mean is the sample mean and happens to be a maximum likelihood estimator (MLE). If you are really keen, you can look into how this is used in the invariance principle, in finding estimates for transformations of parameters, and in deriving distributions of general estimators.
 
I understand why the standard deviation of the mean is related to the standard deviation by a factor of √(n). I have no issue with this.

What I don't understand is how you can estimate the standard deviation by taking the square root of the expectation value. That is where my confusion lies.

____

Here's a thought experiment. Assume a normal distribution of times obtained from an experiment where we measure the time it takes something to fall. Say the average fall time is around 0.8 seconds, and my times scatter from each other on the order of 0.1 s.

The above formula would imply that the standard deviation could be taken as √(0.8) ≈ 0.89 s, which is larger than 0.8 s itself. The standard deviation should be on the order of the scatter (generally speaking).

I can't see how this formula or relation stands true.
 
I'm pretty sure this model is based on a Poisson distribution. In this distribution the mean and the variance are the same.
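That's the key fact: for a Poisson random variable the variance equals the mean, so σ = √(variance) = √(mean) ≈ √μ. A small stdlib-only simulation illustrating this (the mean of 50 and the sampler are my own illustrative choices, not from the book):

```python
import math
import random
import statistics

def poisson(lam):
    """Draw one Poisson(lam) sample via Knuth's multiplication method."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

random.seed(0)
lam = 50.0  # mean counts per measurement (illustrative)
samples = [poisson(lam) for _ in range(20000)]

m = statistics.fmean(samples)
v = statistics.pvariance(samples)
print(f"mean       = {m:.2f}")
print(f"variance   = {v:.2f}")
print(f"sqrt(mean) = {m ** 0.5:.2f}, stdev = {v ** 0.5:.2f}")
```

The sample variance comes out close to the sample mean, so √μ is a legitimate estimate of σ here — but only because the counts are Poisson-distributed, which is why the fall-time example above doesn't work that way.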
 
Thank you, it now makes sense.


For a large mean, the Poisson distribution looks Gaussian.


thanks
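To see that last point concretely, here is a stdlib-only comparison of the Poisson pmf with a Gaussian pdf of the same mean μ and standard deviation √μ (μ = 100 is an arbitrary illustrative value, not from the book):

```python
import math

mu = 100.0  # Poisson mean (illustrative)
sigma = math.sqrt(mu)

def poisson_pmf(k, lam):
    # exp(k*ln(lam) - lam - ln(k!)); lgamma avoids overflow for large k
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

def normal_pdf(x, m, s):
    return math.exp(-((x - m) ** 2) / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

for k in (80, 90, 100, 110, 120):
    print(f"k={k:3d}  Poisson={poisson_pmf(k, mu):.5f}  "
          f"Gaussian={normal_pdf(k, mu, sigma):.5f}")
```

Near the mean the two columns agree to within a fraction of a percent, which is why treating large-count data as Gaussian with σ = √μ works in practice.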
 
The equation does look crazy, but we can probably answer the question if you state the claim precisely.

azaharak said:
In Frank Attix's book on Radiological Physics,

Equation 1.2a reads

σ = √(E) ≈ √(μ)    (σ = standard deviation of a single random measurement)

Where E is the expectation value of a stochastic process, which approaches μ (the average of the measured values) as the number of measurements becomes very large (→ ∞).

You (or Frank) haven't stated the context of the equation clearly.

You didn't define the stochastic process. A stochastic process is an indexed collection of random variables. What is the index being used? If the index is k and the random variable is X[k], what does this random variable represent?

What is meant by the "the expectation value" of the process? Over what set of events are you taking an expectation?

Over what set of events are you taking an expectation in order to compute σ? For that matter, what random variable is σ supposed to be the standard deviation of?
 