RobbieM.
I need some help finding an appropriate statistical model for some experimental data. Thanks in advance for any suggestions.
I am trying to compare simulated results from a code that models nuclide concentrations in spent nuclear fuel to experimental data. These concentrations have complicated dependencies on starting concentrations, reactor conditions, fuel design, etc.
I have a set of experimental data (and associated standard errors) representing fuel spanning a wide variety of the conditions listed above.
For each experimental data point I have a simulated result. The simulated result has no given error.
I am taking the ratio of measured value to calculated value (M/C) for a variety of nuclides to determine how well the simulation works and to conservatively correct future calculated values. If the simulation were perfect (and the measurements were perfect), all of the M/C values would be 1.0. However, I don't think I can really combine the data points as if each point were a measurement of the same value... because each is based on a different set of dependencies.
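For concreteness, here is roughly how I am forming the ratios (a minimal Python sketch; the concentrations and errors below are made up for illustration):

```python
import numpy as np

# Hypothetical example data: measured concentrations (with standard errors)
# and the corresponding simulated (calculated) values for a few nuclides.
measured = np.array([1.23e-3, 4.56e-4, 7.89e-5])   # made-up measured values
measured_se = np.array([2.0e-5, 1.5e-5, 3.0e-6])   # made-up standard errors
calculated = np.array([1.30e-3, 4.40e-4, 8.10e-5]) # simulated values, no stated error

# Measured-to-calculated ratios; a perfect code (and perfect measurements)
# would give 1.0 for every entry.
mc = measured / calculated

# Propagate the measurement standard error through the ratio,
# treating the calculated value as exact since no error is given for it.
mc_se = measured_se / calculated

print(mc, mc_se)
```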
Previous work has treated the data as normally distributed... but I think that is a flawed approach. So how can I collapse my data set into a single value that will bound some known percentage of results while accounting for the experimental uncertainty? At this point I am considering the mean deviation about the mean, computed from the experimental data points plus their respective errors.
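To make what I'm considering concrete, here is a rough Python sketch (it reuses the made-up M/C values from the snippet above); how I widen each point by its error, and what multiplier would actually guarantee coverage of a known percentage of results, are exactly the parts I'm unsure about:

```python
import numpy as np

# M/C ratios and their propagated standard errors (made-up numbers,
# repeated here so this snippet stands alone).
mc = np.array([0.946, 1.036, 0.974])
mc_se = np.array([0.015, 0.034, 0.037])

# Widen each ratio by its own experimental standard error, then take the
# mean deviation about the mean of the widened values.
mc_plus_err = mc + mc_se
mean_mc = mc_plus_err.mean()
mean_abs_dev = np.abs(mc_plus_err - mean_mc).mean()

# A tentative bounding correction factor built from that spread; the
# multiplier k is a placeholder, since how to obtain a known coverage
# percentage is the open question.
k = 2.0
bound = mean_mc + k * mean_abs_dev
print(mean_mc, mean_abs_dev, bound)
```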