omega_minus
- TL;DR Summary
- I have recorded time-domain data and a computer model (finite-difference time-domain, FDTD). I want to know the best method to quantify how well the model fits the recorded data.
Hello,
I have recorded time-domain data from an experiment, and I am using a finite-difference time-domain (FDTD) algorithm to simulate it. The model has been tuned to resemble the data fairly well, but I'd like to quantify the fit. Most of what I find online is about random data and quantifying its randomness; my data is not random, it's deterministic. Other than pointwise subtraction, is there a standard approach to comparing the two?
One issue I see: the small-amplitude signals appear to disagree with the model more than the large ones do (the fit looks better at large amplitudes). But plain subtraction gives bigger differences for the large-amplitude signals simply because of their size, even though they "appear" to fit better, so the raw residual seems like the wrong measure.
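To make the scaling issue concrete, here is a minimal sketch (with made-up synthetic signals, not my actual data) comparing plain pointwise RMSE against an RMSE normalized by the recorded signal's RMS. Both signal pairs below have the same 5% relative error, yet the raw RMSE differs by a factor of 100 while the normalized version treats them equally:

```python
import numpy as np

# Hypothetical "recorded vs. modeled" signal pairs with the same
# *relative* error (5%) but very different amplitudes.
t = np.linspace(0.0, 1.0, 1000)
big_data    = 10.0 * np.sin(2 * np.pi * 5 * t)
big_model   = 1.05 * big_data
small_data  = 0.1 * np.sin(2 * np.pi * 5 * t)
small_model = 1.05 * small_data

def rmse(a, b):
    """Plain pointwise root-mean-square error (amplitude-dependent)."""
    return np.sqrt(np.mean((a - b) ** 2))

def nrmse(a, b):
    """RMSE normalized by the RMS of the recorded signal (scale-free)."""
    return rmse(a, b) / np.sqrt(np.mean(a ** 2))

print(rmse(big_data, big_model), rmse(small_data, small_model))    # differ ~100x
print(nrmse(big_data, big_model), nrmse(small_data, small_model))  # both 0.05
```

Is something like this normalized RMSE (or a correlation-type figure of merit) the standard way people handle this, or is there a better-established metric for deterministic waveform comparison?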
Any help is appreciated!