# Metric for rating function accuracy from R^m to R

I'm writing a program that generates pseudo-random expressions in the hope of finding one that closely approximates a given data set. The functions map from R^m (an m-tuple of reals) to a real. What I need is a way to rate the functions by their accuracy. Are there any known methods for doing this? Maybe something from statistics.

Ideally this would be a method that takes a list of expected outputs and a list of actual outputs and returns a real number. A uniformly distributed score would be good, but isn't required.

Office_Shredder
A standard thing is to do something like
$$\sum_{j} \left( y_j - z_j \right)^2$$

where the $y_j$ are the actual data that you measured and the $z_j$ are the predictions that you made (this is called the sum of squared errors). It's used mostly because sums of squares are easy to work with in analytical calculations (like when you try to minimize the error). If you're doing a numerical method, people will often use the sum of the absolute values of the errors instead, especially if they consider the squared error's property of "making sure there are no significant outliers, at the cost of being off by slightly more on average" to be a negative quality.
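For example, both metrics are one-liners; here is a minimal sketch in Python, assuming `y` and `z` are equal-length sequences of reals:

```python
def sum_squared_errors(y, z):
    # Penalizes large errors heavily: a single outlier dominates the score.
    return sum((yj - zj) ** 2 for yj, zj in zip(y, z))

def sum_absolute_errors(y, z):
    # More tolerant of outliers: each error contributes linearly.
    return sum(abs(yj - zj) for yj, zj in zip(y, z))
```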

I went with the absolute-value one because, since the expressions are all generated randomly, there are going to be a lot of bad solutions, so I figured a metric that doesn't understate how bad they are would be better. Thanks.
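For what it's worth, a sketch of how that might look when ranking random candidates (the toy data set and candidate functions here are made up for illustration):

```python
def mean_abs_error(f, data):
    # Average absolute error of candidate f over (input, expected) pairs.
    return sum(abs(f(x) - y) for x, y in data) / len(data)

# Toy data set sampled from f(x1, x2) = x1 + x2, and two candidate guesses:
data = [((1.0, 2.0), 3.0), ((2.0, 5.0), 7.0)]
candidates = [lambda x: x[0] * x[1], lambda x: x[0] + x[1]]

# Keep whichever randomly generated expression scores lowest.
best = min(candidates, key=lambda f: mean_abs_error(f, data))
```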
