- #1
chemaddict
So I have a situation that I keep confusing myself with:
I am running a computer simulation that, after a set number of iterations (a block), outputs the running average (mean) of a property and the standard error based on the previous blocks. As the simulation runs, these standard errors tend to shrink because the simulation settles into its "equilibrium".
The problem is, I have 6 of these simulations (all identical replicates), each with its own mean and standard error for each block of the simulation. I could easily find the mean of the 6 means and the standard error of those means, but what about the standard error reported by each replicate? Is there any way to legitimately average those?
Thanks for any help.