How Can I Quantitatively Validate My Numerical Model Against Experimental Data?

Thread starter: py_engineer
Tags: Model, Numerical
Hi,

I have developed a numerical model that solves a set of PDEs, which allows me to simulate an imaging detector with different parameters.

Now, I would like to compare my model with a particular case where experimental data has been obtained. I made a very simple plot to explain what I want to do:

http://www-personal.umich.edu/~pyemelie/plot.bmp

The variable in the experimental data is 'RA' and is plotted versus temperature T.

Using my numerical model, I can simulate the detector under the conditions of the experimental case and compute 'RA' versus temperature as well (at the same temperature points as the experimental data, as shown in the little figure).

If I plot the experimental 'RA' and my simulated 'RA' versus T on the same graph, that should give something similar to what you can see in the image.

Now, I would like to find some theoretical parameter, correlation coefficient, or something similar (I am not really good with statistics) in order to give a quantitative measure of how close my simulated data set is to the experimental one. Could anyone give me some advice or recommendations on this?

I don't want to go too deep into model validation. I think a graphical analysis (just plotting the simulated and experimental data on the same graph) plus a single quantitative parameter would do. But I would still like to use some kind of parameter with a theoretical background (meaning, I don't just want to make up my own parameter).

I want to point out, in case it's relevant, that (as you can see in the plot) 'RA' versus T is not linear, and the 'RA' data points can vary by several orders of magnitude.

Thanks a lot!
 
It's not clear to me whether you are more interested in RA or log(RA), but I don't see why a simple mean absolute error or mean absolute percentage error couldn't be used to characterize your model's error, though it should be calculated on observations not used to fit the model function.
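For example, with NumPy the two metrics (and a log-scale variant, which may suit data spanning several orders of magnitude) could be computed like this. A minimal sketch: the 'RA' arrays below are invented placeholder values, not real data.

```python
import numpy as np

# Hypothetical data: experimental and simulated 'RA' at the same
# temperature points (values invented for illustration only).
ra_exp = np.array([1.2e3, 4.5e4, 2.0e6, 8.1e7])
ra_sim = np.array([1.0e3, 5.0e4, 1.8e6, 9.0e7])

# Mean absolute error: dominated by the largest-magnitude points
# when the data span several orders of magnitude.
mae = np.mean(np.abs(ra_sim - ra_exp))

# Mean absolute percentage error: scale-free, so every temperature
# point contributes comparably regardless of magnitude.
mape = 100.0 * np.mean(np.abs((ra_sim - ra_exp) / ra_exp))

# The same idea applied to log10(RA): an average error in decades,
# often more meaningful when 'RA' varies over orders of magnitude.
mae_log = np.mean(np.abs(np.log10(ra_sim) - np.log10(ra_exp)))

print(f"MAE = {mae:.3g}, MAPE = {mape:.1f}%, MAE(log10 RA) = {mae_log:.3g}")
```

Here MAPE and the log-scale MAE tell a similar story (each point weighted equally), while the plain MAE is driven almost entirely by the highest-'RA' points.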


-Will Dwinnell
http://matlabdatamining.blogspot.com/
 