This is just a question regarding error calculation in physics labs. I've never done any lab work before in my career as a student; this is a first-year undergraduate physics course.
For this lab we were required to create an experiment that would illustrate acceleration due to gravity, using a cart on an air track.
My lab partner and I conducted several runs where we would give the cart an initial push up the air track, gather data with motion sensors, and then import that data into DataStudio.
The runs were very smooth, with no exaggerated spikes in the graphs that, according to our limited information, would constitute an error.
I have position-time and velocity-time graphs of all the runs.
From these we extracted the slope, y-intercept, r, mean squared error, and root MSE for our runs.
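To show what I mean by extracting the slope and fit statistics, here is a minimal sketch of how that could be reproduced from raw velocity-time data (this is my own illustration with made-up numbers, not our actual run data; DataStudio does this fitting internally). The standard error of the slope is one common way to attach an uncertainty to the fitted acceleration:

```python
import numpy as np

def fit_line_with_errors(t, v):
    """Least-squares fit v = m*t + b, returning the slope, y-intercept,
    root MSE of the residuals, and the standard error of the slope."""
    t = np.asarray(t, dtype=float)
    v = np.asarray(v, dtype=float)
    n = len(t)
    m, b = np.polyfit(t, v, 1)              # slope and y-intercept
    residuals = v - (m * t + b)
    rmse = np.sqrt(np.mean(residuals**2))   # root mean squared error
    # Standard error of the slope from the residual variance (n - 2 dof)
    s2 = np.sum(residuals**2) / (n - 2)
    dm = np.sqrt(s2 / np.sum((t - t.mean())**2))
    return m, b, rmse, dm

# Synthetic run: true acceleration 0.50 m/s^2 plus a little sensor noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 50)
v = 0.50 * t + 0.10 + rng.normal(0.0, 0.005, t.size)
m, b, rmse, dm = fit_line_with_errors(t, v)
print(f"a = {m:.4f} +/- {dm:.4f} m/s^2")
```

Since the slope of the velocity-time graph is the acceleration, dm here would serve as the uncertainty on the measured acceleration for that run.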
The expected acceleration along the track is a = g * sin(theta).
Formulas from the front of the lab book:
If y = ax (a a constant): Δy = a Δx
If y = x^n: Δy = n x^(n-1) Δx
If y = x1 ± x2 ± x3 ± ... (any combination of sums and differences), then:
Δy = sqrt(Δx1^2 + Δx2^2 + Δx3^2 + ...)
If y = a*x1 ± b*x2, where a and b are constants:
Δy = sqrt(a^2 Δx1^2 + b^2 Δx2^2)
If y = cos(ax), where a is a constant:
Δy = a sin(ax) Δx, with x expressed in radians
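The last rule is just the derivative rule (Δy = |dy/dx| Δx) applied to a cosine. Applying the same idea to a = g sin(theta) gives Δa = g cos(theta) Δtheta. As a sketch, assuming a hypothetical 5-degree incline known to ±0.1 degrees (these numbers are made up for illustration, not from our setup):

```python
import math

def accel_with_uncertainty(theta, dtheta, g=9.81):
    """a = g*sin(theta); by the derivative rule, da = g*cos(theta)*dtheta.
    Both theta and dtheta must be in radians."""
    a = g * math.sin(theta)
    da = g * math.cos(theta) * abs(dtheta)
    return a, da

# Hypothetical example: track inclined at 5 degrees, measured to +/- 0.1 deg
theta = math.radians(5.0)
dtheta = math.radians(0.1)
a, da = accel_with_uncertainty(theta, dtheta)
print(f"a = {a:.3f} +/- {da:.3f} m/s^2")
```

Comparing this predicted a ± Δa against the fitted slope of the velocity-time graph (with its own uncertainty) is one standard way to check whether the measurement agrees with g sin(theta).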
The Attempt at a Solution
I really have no idea where to begin with the error calculation. It was never discussed in the labs, and we didn't have any lab prep session where it was covered. The front of our lab books gives the ambiguous formulas above, which we've been told to use for error calculation.
This really isn't so much a straightforward homework question as a "what am I supposed to do, because I'm completely lost here" type of situation.
Any help is appreciated.