StonieJ
propagation of errors? (lab "phenomenon" question)
In our lab, we are measuring the acceleration due to gravity. We do this by dropping an object through a photogate, which records position vs. time data as the object falls. From this position vs. time data, the computer uses derivatives to calculate velocity vs. time data. So, at the end of one trial run, we have a table of position vs. time data and another of velocity vs. time data. Using this data, we have the computer fit a best-fit curve to each data set. The position vs. time data obviously calls for a quadratic model function, and the velocity vs. time data for a linear one. Therefore, to solve for the acceleration due to gravity, we simply do one of the following:
1) After fitting a quadratic curve to position vs. time data, take the second derivative of the curve.
2) After fitting a linear curve to velocity vs. time data, take the first derivative of the curve.
We continue to get better results using method 2. That is, we get results closer to 9.79 m/s^2, which we are supposed to use as our ideal value. One of the lab questions asks whether one method appears to be better and, if so, why. Our group discussed it some, but the only thing we could really come up with was that propagation of errors played a role somehow. Considering that the position data is the only data actually measured by the equipment (i.e., the computer calculates the velocity data from the position data), I'm having a hard time understanding how the velocity data could be better. In other words, the velocity data seems "second-hand" and should therefore be worse. Anyway, I was just wondering if anyone could help explain why this is so. Thanks.
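
For anyone who wants to experiment with this, here is a minimal numpy sketch of the two methods. The sample times, the ~1 mm position noise, and the use of np.gradient as a stand-in for however the photogate software actually computes velocity (many packages apply a smoothing window rather than a plain central difference) are all assumptions, so this illustrates the comparison rather than reproducing the exact lab setup:

```python
import numpy as np

rng = np.random.default_rng(0)

g = 9.79                                      # "ideal" value from the lab (m/s^2)
t = np.linspace(0.0, 0.5, 26)                 # assumed photogate sample times (s)
x_true = 0.5 * g * t**2                       # free fall from rest: x = (1/2) g t^2
x = x_true + rng.normal(0.0, 1e-3, t.size)    # assumed ~1 mm position noise (m)

# Method 1: fit a quadratic to position vs. time, then take the second
# derivative analytically: x = a2*t^2 + a1*t + a0  ->  x'' = 2*a2
a2, a1, a0 = np.polyfit(t, x, 2)
g_method1 = 2.0 * a2

# Stand-in for the software's velocity table: central finite differences
# of the noisy position data (assumption; real software may smooth more).
v = np.gradient(x, t)

# Method 2: fit a line to velocity vs. time, then take the first
# derivative: v = b1*t + b0  ->  v' = b1 (the slope)
b1, b0 = np.polyfit(t, v, 1)
g_method2 = b1

print(f"method 1 (x'' of quadratic fit): {g_method1:.3f} m/s^2")
print(f"method 2 (slope of linear fit):  {g_method2:.3f} m/s^2")
```

Varying the noise level and sample spacing in the sketch makes it easy to watch how the error propagates through each route from the same underlying position measurements.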