Isolate variables in nonlinear equation for regression

This thread discusses a nonlinear equation describing the relationship between tidal parameters and river discharge in tidal rivers. The original poster wants to calibrate the model against contemporary data and asks how to handle the last term in the equation for regression. The recommendation is total least squares regression, which accounts for measurement errors and random fluctuations in all of the variables. If the variables are not independent, correlation and covariance information must also be considered to obtain accurate estimates.
  • #1
edge333
Hi all,

I have a nonlinear equation of the form:

[tex]
\frac{TP_x}{TP_R} = c_0 + c_1 U_R^n + c_2 \frac{T_R^2}{\sqrt{U_R}}
[/tex]

This equation describes the relationship between tidal parameters and river discharge (velocity) in tidal rivers, derived from the 1-D St. Venant equations. ##TP_x## is some tidal property at station x along the river, ##TP_R## is the same tidal property at a coastal reference station, ##U_R## is the river flow (velocity), and ##T_R## is the tidal range at the reference station.

What I am trying to do is calibrate this model using contemporary data where ##TP_x##, ##TP_R##, ##T_R##, and ##U_R## are known. Therefore, I am trying to determine the coefficients (##c_0##, ##c_1##, ##c_2##, and ##n##) of this equation using Matlab's nlinfit function. The problem is that the last term complicates the regression because the ##T_R## variable appears on the right-hand side. Is there a way of separating the independent and dependent variables for regression? Consider any combination of ##TP_R##, ##TP_x##, and ##T_R## to be the independent variable. I should also mention that ##0.5 < n < 1.5##.
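As a sketch of the intended calibration: one can treat the ratio ##TP_x / TP_R## as the response and fit all four coefficients at once, without isolating anything. A minimal Python analogue of the nlinfit setup (using `scipy.optimize.curve_fit`; the data and coefficient values below are synthetic, made up purely for illustration) might look like:

```python
# Fit y = c0 + c1*U^n + c2*T^2/sqrt(U) with y, U, T all observed.
import numpy as np
from scipy.optimize import curve_fit

def model(X, c0, c1, c2, n):
    U, T = X
    return c0 + c1 * U**n + c2 * T**2 / np.sqrt(U)

rng = np.random.default_rng(0)
U = rng.uniform(0.2, 2.0, 200)   # river velocity U_R (made-up units)
T = rng.uniform(0.5, 3.0, 200)   # tidal range T_R at the reference station
true_params = (0.1, 0.8, 0.05, 1.2)          # c0, c1, c2, n (made up)
y = model((U, T), *true_params) + rng.normal(0, 0.01, 200)  # TP_x / TP_R

# Bound n to the stated range 0.5 < n < 1.5; the other
# coefficients are left unconstrained.
popt, pcov = curve_fit(
    model, (U, T), y, p0=(0.0, 1.0, 0.0, 1.0),
    bounds=([-np.inf, -np.inf, -np.inf, 0.5],
            [np.inf, np.inf, np.inf, 1.5]))
print(popt)  # recovered (c0, c1, c2, n)
```

The `bounds` argument enforces the constraint on ##n##; nlinfit itself does not take bounds, but the same constraint can be imposed in MATLAB with lsqcurvefit.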

thanks
 
  • #3
I don't use Matlab, but after some web browsing, I suggest you consider doing total least squares regression (http://www.mathworks.com/matlabcentral/fileexchange/31109-total-least-squares-method) instead of ordinary regression.

In a least square regression fit for Y = F(x1,x2,..) to data, it is assumed that x1,x2,... are measured without "error" and that the object is to find a curve that fits noisy data for Y. So if you one of your variables ##TP_X,TP_R,T_R,U_R## is measured very precisely and free of any conceptual random variations, you could us it for Y. However, if all your variables are on an equal footing as far as measurement errors or random fluctuations go, then total least squares regression would be a better choice.
 
  • #4
Hey edge333.

Building on the post from Stephen Tashi: if the dependent variable spans the real line, then linear regression can be used. If the variables contributing to the dependent variable are independent, that makes the whole process easier.

If not, you will need to use correlation/covariance information to get accurate estimates (basically, the standard errors will be underestimated if things are correlated in some manner).
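To illustrate the point about correlated estimates: the covariance matrix returned by a fitting routine can be turned into standard errors and a parameter correlation matrix. A small Python sketch with made-up straight-line data:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
x = np.linspace(1.0, 10.0, 50)
y = 2.0 + 0.5 * x + rng.normal(0, 0.1, 50)   # made-up data: intercept 2, slope 0.5

popt, pcov = curve_fit(lambda x, a, b: a + b * x, x, y)
se = np.sqrt(np.diag(pcov))        # standard errors of (a, b)
corr = pcov / np.outer(se, se)     # parameter correlation matrix
print(se)
print(corr[0, 1])  # intercept and slope are negatively correlated here, since x > 0
```

Reporting the standard errors alone while ignoring the off-diagonal terms of `pcov` is exactly the underestimation problem described above.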
 

1. What is the purpose of isolating variables in a nonlinear equation for regression?

Isolating variables in a nonlinear equation for regression allows us to determine the relationship between the independent variable(s) and the dependent variable more accurately. It helps us to identify the effect of each independent variable on the dependent variable, while holding all other variables constant.

2. How do you isolate variables in a nonlinear equation for regression?

To isolate variables in a nonlinear equation for regression, we use algebraic techniques to manipulate the equation and rearrange it in a form that allows us to solve for the desired variable. This typically involves taking the natural logarithm of both sides of the equation or using other mathematical operations to isolate the variable of interest.
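As a sketch of the logarithm technique (Python, synthetic data): a single power-law term y = a·xⁿ log-linearizes to log y = log a + n·log x, which an ordinary straight-line fit can handle. Note that the full three-term model in this thread cannot be linearized this way, since the logarithm of a sum does not separate.

```python
import numpy as np

rng = np.random.default_rng(3)
a_true, n_true = 1.5, 0.8                    # made-up parameters
x = rng.uniform(1.0, 5.0, 100)
y = a_true * x**n_true * np.exp(rng.normal(0, 0.01, 100))  # multiplicative noise

# log y = log a + n log x  ->  ordinary straight-line fit
n_hat, log_a_hat = np.polyfit(np.log(x), np.log(y), 1)
a_hat = np.exp(log_a_hat)
print(a_hat, n_hat)
```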

3. Can all variables in a nonlinear equation be isolated for regression?

In most cases, not all variables in a nonlinear equation can be isolated for regression. This is because some equations may have multiple variables that are interdependent and cannot be isolated from one another. In these cases, we may need to use advanced techniques such as partial regression analysis to examine the relationship between the variables.

4. What are the benefits of isolating variables in a nonlinear equation for regression?

Isolating variables in a nonlinear equation for regression allows us to understand the individual effects of each variable on the dependent variable. This can help us to identify which variables are most important in predicting the outcome and make more accurate predictions. It also allows us to control for the effects of other variables, reducing the potential for confounding factors.

5. Are there any limitations to isolating variables in a nonlinear equation for regression?

One limitation of isolating variables in a nonlinear equation for regression is that transformations such as taking logarithms assume the transformed relationship is linear; if it is not, the fitted model may give inaccurate predictions. Additionally, isolating variables may not be possible in some cases due to the complexity of the equation or the interdependence of the variables.
