# Error propagation in an average of two values

I'm writing up an experiment I did for a lab course, and I am calculating the error in a quantity V. I have two runs, and each one gives me a value of V along with an error. That is, I have

V = 0.1145 ± 0.0136 for Run 1
V = 0.1146 ± 0.0134 for Run 2

I got my errors through some tedious propagation which I won't go into. What I'm wondering is: what's the best way to calculate the error in my final V, which will be the average of V over the two runs? I have looked around and can't seem to find anything that gives a straight answer.

Would it be ridiculous to use the fact that V_average = (V1 + V2)/2, propagate the error in (V1 + V2) using the addition/subtraction propagation formula, and then equate that quantity's fractional error to the fractional error in V_average? This seems a little overcomplicated.
Normally I would take the error in an average using the standard deviation, but that doesn't seem appropriate for just two values.

BruceW
Homework Helper
> Would it be ridiculous to use the fact that V_average = (V1 + V2)/2, propagate the error in (V1 + V2) using the addition/subtraction propagation formula, and then equate that quantity's fractional error to the fractional error in V_average? This seems a little overcomplicated.
That is how I would do it. It is just one of the simplest examples of propagation of error.
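To make that concrete, here is a minimal sketch in Python of the calculation being suggested, using the two run values from the original post and assuming the two errors are independent (so they add in quadrature):

```python
import math

# Run values and their uncertainties, taken from the original post.
# Assumption: the two errors are independent, so they add in quadrature.
V1, s1 = 0.1145, 0.0136
V2, s2 = 0.1146, 0.0134

# The average of the two runs
V_avg = (V1 + V2) / 2

# Addition rule: sigma(V1 + V2) = sqrt(s1^2 + s2^2).
# Dividing by the exact constant 2 just scales the error by 1/2,
# so there is no need to go through fractional errors at all.
s_avg = math.sqrt(s1**2 + s2**2) / 2

print(f"V = {V_avg:.5f} +/- {s_avg:.5f}")
```

Note that the combined error (about 0.0095) comes out smaller than either individual error, by roughly a factor of 1/sqrt(2), which is exactly what averaging independent measurements should buy you.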

I find these figures fascinating. Could you explain what you measured and what instruments were used?
I would also like to know about the tedious propagation you used to arrive at the errors.
The explanation may be there.

CWatters