Hi,

My question is about a common procedure used to find minimum and maximum values of a function. In many problems we find the first derivative of a function and then equate it to zero. I understand the use of this method when one is trying to find the minimum or maximum value of the function.

However, I get confused when I see people using that ‘equating to 0’ assumption as a proof for something else.

To better explain my question, I have attached a file here. The file contains the equations used in deriving the coefficients of a least-squares regression line.

The OLS method starts by taking the partial derivatives of equation 3.1.2, equating them to 0, and solving to get the coefficients. I follow the derivation up to this point.
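To make the setup concrete, here is what I understand the attachment to be doing (I'm assuming equation 3.1.2 is the usual sum of squared residuals; the notation $\hat\beta_0, \hat\beta_1$ is mine and may differ from the file's):

```latex
S(\hat\beta_0, \hat\beta_1) = \sum_{i=1}^{n} \left( Y_i - \hat\beta_0 - \hat\beta_1 X_i \right)^2

\frac{\partial S}{\partial \hat\beta_0}
  = -2 \sum_{i=1}^{n} \left( Y_i - \hat\beta_0 - \hat\beta_1 X_i \right) = 0

\frac{\partial S}{\partial \hat\beta_1}
  = -2 \sum_{i=1}^{n} X_i \left( Y_i - \hat\beta_0 - \hat\beta_1 X_i \right) = 0
```

Solving these two first-order conditions gives the fitted coefficients.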

However, in the last section, to prove that the sum of the residuals is 0, the author reuses one of those first-order conditions (a partial derivative set equal to zero) as the proof.

I don’t understand how an assumption can be used as the proof for something.
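For what it's worth, the residuals of a fitted line do seem to sum to zero in practice. This is a quick numerical sketch with made-up data (my own example, not from the attachment), using NumPy's `polyfit` to do the least-squares fit:

```python
import numpy as np

# Made-up data for illustration only: y = 2 + 3x plus noise
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 3.0 * x + rng.normal(scale=1.5, size=x.size)

# Fit y = b0 + b1*x by ordinary least squares
# (polyfit returns coefficients highest degree first)
b1, b0 = np.polyfit(x, y, deg=1)

# Residuals of the fitted line
residuals = y - (b0 + b1 * x)

# The residuals sum to zero up to floating-point error
print(abs(residuals.sum()) < 1e-8)  # True
```

So the claim itself clearly holds whenever an intercept is included; my confusion is only about the logic of the proof.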

Thanks,

MG.

**Physics Forums - The Fusion of Science and Community**


# OLS regression - using an assumption as the proof?
