Solving for Velocity and Acceleration Error: A Confusing Task

AI Thread Summary
The discussion revolves around calculating individual errors for a set of time and velocity values, with a focus on understanding how to determine acceleration and its associated error. The user is unsure how to apply the given uniform errors of delta t = 0.2 s and delta v = 3.0 m/s to their data points for graphing. They consider using the slope of a graph to derive acceleration but are cautioned against treating the difference between their calculated and standard values as an error. The confusion stems from the requirement to plot points with error bars while also needing to calculate acceleration accurately. Clarification on these concepts is sought to resolve the user's confusion regarding the assignment.
h6872
Hi everybody!

This might seem like a terribly easy question, but I can't seem to figure it out for the life of me. I've been given a set of values for time and velocity:

t (s)   v (m/s)
  0        0
  3        7
  7       16
 12       33
 18       48
 23       53
 27       67
 34       86

I'm told that the errors, Δt = 0.2 s and Δv = 3.0 m/s, are the same for all experimental points.

How would I go about determining the error for these values individually (as I'll have to graph the points)? The question asks for the acceleration and error, and the method I've been given seems to involve plotting each point with its associated error bars.
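For what it's worth, plotting the points with the stated uniform error bars is straightforward; here is a minimal sketch, assuming NumPy and matplotlib are available (the filename `v_vs_t.png` is just an example):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# Data from the post
t = np.array([0, 3, 7, 12, 18, 23, 27, 34], dtype=float)
v = np.array([0, 7, 16, 33, 48, 53, 67, 86], dtype=float)

# Same uncertainty on every point: Δt = 0.2 s, Δv = 3.0 m/s
plt.errorbar(t, v, xerr=0.2, yerr=3.0, fmt="o", capsize=3)
plt.xlabel("t (s)")
plt.ylabel("v (m/s)")
plt.title("Velocity vs. time")
plt.savefig("v_vs_t.png")
```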

But couldn't I use the slope of my graph (plotting these points without error bars) to determine the acceleration, take the value calculated from 3.0/0.2 as my expected value, and apply the equation (|your value - standard value|)/standard value?

And yet, I've been explicitly told not to consider this difference an error.

Please help! I'm completely confused!
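In case it helps, the slope of the best-fit line (which is the acceleration here) and its statistical uncertainty can be computed with an ordinary unweighted least-squares fit; a sketch assuming NumPy is available:

```python
import numpy as np

# Data from the post
t = np.array([0, 3, 7, 12, 18, 23, 27, 34], dtype=float)
v = np.array([0, 7, 16, 33, 48, 53, 67, 86], dtype=float)

n = len(t)
t_mean, v_mean = t.mean(), v.mean()
Sxx = np.sum((t - t_mean) ** 2)
Sxy = np.sum((t - t_mean) * (v - v_mean))

a = Sxy / Sxx                      # slope of v vs. t = acceleration
b = v_mean - a * t_mean            # intercept

resid = v - (a * t + b)
s2 = np.sum(resid ** 2) / (n - 2)  # residual variance (n - 2 fit parameters)
da = np.sqrt(s2 / Sxx)             # standard error of the slope

print(f"a = {a:.2f} ± {da:.2f} m/s^2")  # a = 2.50 ± 0.08 m/s^2
```

Note this uncertainty comes from the scatter of the points about the line, which is a different (and usually preferred) quantity from the percent difference against a "standard value".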
 
Homework?
 