Let's say I'm doing an experiment to measure the resistance of a single 200 ohm resistor from its V-vs-I graph, where V is the voltage across the resistor's terminals read from a voltmeter and I is the current in the circuit read from an ammeter. The slope coefficient in ##y=ax+b## should be the resistance.
OK, I'll proceed this way: apply a voltage of 0.5 V and write down what I read on the voltmeter and the ammeter, raise the supply to 0.75 V and record the voltage across the resistor and the current, and so on up to 5 V. By doing this I get a list of data points with uncertainties set by my instruments; let's assume each is 5% of the displayed value. So every data point has a different error associated with it.
Now I plot voltage against current with the respective error bars, and I want to do a linear fit of these data. When I do this in Python I get an array with the coefficients and another array which is the covariance matrix, and I can get the error of each coefficient from this matrix. But the fitting routine never asked for my error bars, so the errors it reports aren't related to the errors of my measurements. So my question is: what is the relation between the error given by the fit and the error of the measurements? Aren't they related? If so, why should I care about the measurement uncertainty, since the fit gives its own error?
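To make the question concrete, here is a minimal sketch of what I mean. The numbers are simulated (true R = 200 ohm, a 5% error on each voltage reading, as assumed above), and it contrasts a plain `np.polyfit` call, which never sees the error bars, with a weighted one that is given ##w = 1/\sigma##:

```python
import numpy as np

# Simulated data for illustration: true resistance R = 200 ohm
rng = np.random.default_rng(0)
V_set = np.arange(0.5, 5.01, 0.25)       # set voltages, 0.5 V to 5 V
I = V_set / 200.0                        # ideal currents (A)
sigma_V = 0.05 * V_set                   # assumed 5% instrument uncertainty
V_meas = V_set + rng.normal(0.0, sigma_V)  # noisy voltmeter readings

# Unweighted fit: the covariance matrix comes only from the
# scatter of the points about the line, not from my error bars
coeffs, cov = np.polyfit(I, V_meas, 1, cov=True)

# Weighted fit: weights w = 1/sigma feed the measurement
# uncertainties into the fit and its covariance matrix
coeffs_w, cov_w = np.polyfit(I, V_meas, 1, w=1.0 / sigma_V, cov=True)

print("unweighted slope:", coeffs[0], "+/-", np.sqrt(cov[0, 0]))
print("weighted slope:  ", coeffs_w[0], "+/-", np.sqrt(cov_w[0, 0]))
```

Both calls return a slope near 200 ohm, but only the weighted fit's covariance depends on the stated instrument uncertainties; the unweighted one estimates the noise from the residuals themselves.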
The Attempt at a Solution
[Sorry, I don't know how to fit this kind of question into this template...]