Looking for empirical equation in experimental data

SUMMARY

The discussion centers on finding an empirical equation for a variable G dependent on three variables A, B, and C in nuclear research experiments. The user initially proposes a linear model G = aA + bB + cC + d but realizes that the slopes of the functions f(A), f(B), and f(C) are not constant. Suggestions include using interaction terms in the model, such as z = a*x + b*y + c*x*y, and employing least-squares methods to approximate the relationship. The user finds improved correlation with a more complex model, f(x,y,z) ≈ axyz + bxy + cxz + dyz + ex + fy + gz + h, indicating the necessity of including higher-order terms for better accuracy.

PREREQUISITES
  • Understanding of empirical modeling techniques
  • Familiarity with linear algebra concepts, particularly least-squares solutions
  • Knowledge of interaction terms in regression analysis
  • Basic grasp of polynomial functions and their applications in data fitting
NEXT STEPS
  • Study the Design of Experiments to understand modeling techniques
  • Learn about least-squares regression and its applications in empirical data analysis
  • Explore polynomial regression and its effectiveness in fitting complex data
  • Investigate the use of interaction terms in multiple regression models
USEFUL FOR

Engineering students, data analysts, and researchers in fields requiring empirical modeling and data fitting techniques.

Curran919
I am an engineering student working in nuclear research. I am performing some experiments looking for an empirical equation to apply to results in a test section, but am having trouble making a mental leap. Here is the core of the problem with all of the engineering 'fat' trimmed off:

I have a variable with three dependents:
G = f(A,B,C)

I have shown that G is more or less linear with respect to each variable, for multiple values of the other variables (sorry, I'm an undergrad engineer, so my mathematical notation is lacking):
G = f(A) is first order in A for every fixed B, C
G = f(B) is first order in B for every fixed A, C
G = f(C) is first order in C for every fixed A, B


I would like to say that because of this,
G = f(A)+g(B)+h(C)
or even,
G = aA+bB+cC+d where a,b,c,d are constants

but this would only be true if the slope of f(A) were constant regardless of B and C (and likewise for f(B) and f(C)). Of course, it isn't. Is what I've said correct, and if so, is there an alternative conclusion I can draw?

G = (A-a)(B-b)(C-c)?
 
A common model for empirical work is the linear model. Where things don't fit so well, the experimenter can include interaction terms. Thus, the model might be (for two independent variables x and y)

z = a*x + b*y + c*x*y

The constants a, b, and c need to be fitted to the data. You might check some books on Design of Experiments, as such modeling is often done in that context.
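As a minimal sketch of fitting such an interaction model by least squares (the data below are synthetic placeholders with known coefficients; substitute your own measured arrays):

```python
import numpy as np

# Synthetic data drawn from z = 2x + 3y + 0.5*x*y plus small noise
# (illustrative only -- replace with your measured x, y, z arrays).
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, 50)
y = rng.uniform(0.0, 10.0, 50)
z = 2.0 * x + 3.0 * y + 0.5 * x * y + rng.normal(0.0, 0.1, 50)

# Design matrix: one column per term, including the interaction x*y
M = np.column_stack([x, y, x * y])
coeffs, *_ = np.linalg.lstsq(M, z, rcond=None)
a, b, c = coeffs
```

The fitted a, b, c should land close to the generating values 2, 3, 0.5.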
 
Consider the function f(A,B,C) = AB+AC+BC. This is linear in each variable separately, but not globally approximated by anything of the form aA+bB+cC for constants a, b and c. If you are only interested in local behavior, you can restrict the ranges of A, B and C; over a small enough region a linear form may be a good approximation.

If you know some linear algebra, you could find the linear expression that is "closest" to your set of data points, in the sense that the sum of squares of the differences between the data points and the values of the linear expression is minimized. If you have values for f(x,y,z) at (x_1,y_1,z_1), (x_2,y_2,z_2),...,(x_n,y_n,z_n), solve for the least-squares solution to Mx=b, where
M = \begin{bmatrix} x_1 & y_1 & z_1 & 1 \\ x_2 & y_2 & z_2 & 1 \\ \vdots & \vdots & \vdots & \vdots \\ x_n & y_n & z_n & 1 \end{bmatrix}

and

b = \begin{bmatrix} f(x_1,y_1,z_1) \\ f(x_2, y_2, z_2) \\ \vdots \\ f(x_n , y_n , z_n) \end{bmatrix}.

I.e. solve the normal equations M^TMx = M^Tb.



Then the solution x = \begin{bmatrix} a \\ b \\ c \\ d \end{bmatrix} gives an approximation f(x,y,z) \approx ax+by+cz+d on these data points. The more "linearly" your function behaves, the better the approximation.

If you suspect it is of another form, such as a higher-degree polynomial or a linear combination of entirely different functions, this can also be done similarly. If you think the function is approximately a linear combination of the functions g_1(x,y,z),...,g_k(x,y,z), substitute \begin{bmatrix} g_1(x_i,y_i,z_i) & \ldots & g_k(x_i,y_i,z_i) \end{bmatrix} for the i'th row of the matrix M, and solve for some k-vector x, which will hold the coefficients of the functions. The linear form corresponds to the case where g_1(x,y,z) = x, g_2(x,y,z) = y, g_3(x,y,z) = z, and g_4(x,y,z) = 1.

Often M^TM will be invertible, giving the unique solution (M^TM)^{-1}M^Tb, and inverting is not very difficult since M^TM is a k x k matrix, where k is the number of functions you are considering. Just make sure your data are assembled consistently so the matrix products are well-defined. Hope this helps, good luck.
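The normal-equations solve above can be sketched in a few lines of NumPy; the data here are synthetic placeholders with known coefficients, standing in for real measurements:

```python
import numpy as np

# Fit f(x,y,z) ~ a*x + b*y + c*z + d by solving the normal
# equations M^T M x = M^T b, following the construction above.
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 5.0, (40, 3))            # rows (x_i, y_i, z_i)
b_vec = pts @ np.array([1.5, -2.0, 0.7]) + 4.0 + rng.normal(0.0, 0.05, 40)

M = np.column_stack([pts, np.ones(len(pts))])   # columns x, y, z, 1
coeffs = np.linalg.solve(M.T @ M, M.T @ b_vec)  # normal equations
a, b, c, d = coeffs
```

In practice `np.linalg.lstsq` is preferred over forming M^TM explicitly, since it is more numerically stable when the columns of M are nearly collinear, but the two agree for well-conditioned problems like this one.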
 
Thanks Jarle, very helpful.

Indeed, using f(x,y,z) \approx ax+by+cz+d gave a poor correlation between the estimated and the measured readings. I tried:

f(x,y,z) \approx axyz+bxy+cxz+dyz+ex+fy+gz+h

and the correlation appears much better. I think I have some outliers in the measurement data, so I will remove a few instances and see what happens. Is there an underlying explanation for the terms that I used, or is it just a mathematical catch-all (more terms \approx less error)? I tried nixing the terms that seemed to have low correlation, which was okay for axyz, but removing any of the second-order terms introduced considerable error.
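For what it's worth, the 8-term form is not arbitrary: it is the most general function that is linear in each variable separately (with the other two held fixed), which matches the observed behavior. A sketch of fitting it, using synthetic data built from the AB+AC+BC example mentioned earlier (real measurements would replace these arrays):

```python
import numpy as np

# Fit the full multilinear model
#   f ~ a*xyz + b*xy + c*xz + d*yz + e*x + f*y + g*z + h,
# the general form that is linear in each variable separately.
rng = np.random.default_rng(2)
x, y, z = rng.uniform(1.0, 4.0, (3, 60))
# Synthetic G from G = xy + xz + yz plus small noise
G = x * y + x * z + y * z + rng.normal(0.0, 0.02, 60)

M = np.column_stack([x * y * z, x * y, x * z, y * z,
                     x, y, z, np.ones_like(x)])
coeffs, *_ = np.linalg.lstsq(M, G, rcond=None)

# Goodness of fit via the coefficient of determination R^2
pred = M @ coeffs
r2 = 1.0 - np.sum((G - pred) ** 2) / np.sum((G - G.mean()) ** 2)
```

Since the generating function really is multilinear, R^2 comes out very close to 1 here; with real data, comparing R^2 with and without individual columns is a quick way to judge which terms are earning their keep, as you did when nixing axyz.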
 
