How to appropriately assign sigma values (uncertainty)

  • Thread starter: NicolaiTheDane
  • Tags: Sigma, Uncertainty
Summary
The discussion focuses on how to properly assign and calculate uncertainty in a lab report involving the moment of inertia of rolling objects on a slope. Participants express confusion about combining statistical and instrumental uncertainties, particularly in the context of using photocell data to measure acceleration. A suggestion is made to utilize regression analysis to better fit the data, accounting for errors in both time and position measurements. The original poster expresses frustration over their limited statistical knowledge and the complexity of the proposed methods, ultimately deciding to simplify their approach as advised by their professor. The conversation highlights the challenges of accurately calculating uncertainties in experimental physics without extensive statistical training.
NicolaiTheDane
First, let me preface this by saying I'm not a native English speaker. I'm not sure "uncertainty" is the word I'm looking for; it might also be "deviation". It is, however, the translation of what it's called in Danish, my native tongue.

I'm doing a lab report about rolling objects on a slope for my course in classical mechanics at the Niels Bohr Institute. We have been told to assign and calculate appropriate "uncertainty" for our results, both experimental and theoretical. We are not at all sure we are doing it right for either.

1. Homework Statement

We don't know how to calculate the uncertainty/deviation of our values for "I" the moment of inertia.

Most importantly though, we don't know what to do, when we have both statistical uncertainty/deviation, as well as instrumental uncertainty/deviation/accuracy.

Homework Equations


The equation we have been told to use, for our theoretical uncertainty:
[attached equation image: upload_2017-12-16_14-8-38.png]

The equation we are using for the moment of inertia:
[attached equation image: upload_2017-12-16_14-8-58.png]

I = moment of inertia, m = mass of the rolling object, r = radius of the object, g = gravitational acceleration, a = tangential acceleration of the rolling object.

The Attempt at a Solution


To keep things simple, we did the following to get an uncertainty of our measured moment of inertia for one of the objects:
[attached equation image: upload_2017-12-16_14-21-36.png]

Where sigma_m is the accuracy of the scale, and sigma_r is the accuracy of my caliper. We aren't sure this is correct, so it would be nice to have it confirmed or denied.
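Since the equation images didn't survive, here is a sketch of the standard quadrature propagation, assuming the theoretical moment of inertia has the form I = beta·m·r² (beta = 1/2 for a solid cylinder). All numbers below are made up for illustration:

```python
import numpy as np

# Sketch of standard quadrature propagation. Assumes the theoretical moment
# of inertia has the form I = beta * m * r^2 (beta = 1/2 for a solid
# cylinder); all numbers below are made up for illustration.
beta = 0.5
m, sigma_m = 0.500, 0.001    # kg; mass and scale accuracy (assumed)
r, sigma_r = 0.030, 0.0001   # m; radius and caliper accuracy (assumed)

I = beta * m * r**2

# Since I is proportional to m * r^2:
# (sigma_I / I)^2 = (sigma_m / m)^2 + (2 * sigma_r / r)^2
rel_sigma_I = np.sqrt((sigma_m / m)**2 + (2 * sigma_r / r)**2)
sigma_I = I * rel_sigma_I
print(I, sigma_I)
```

The factor of 2 on the radius term comes from r appearing squared in the formula; this only holds if the theoretical expression really is proportional to m·r².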

The second and more involved problem is that of our experimental moment of inertia. Our experiment yields a tangential acceleration, which we plug into the equation for I mentioned further up to find the moment of inertia.

The experiment is a slope with 5 photocell detectors wired in series and plugged into an oscilloscope. We calculate the acceleration as follows:

v = Δx/Δt,  a = 2(v₂ − v₁)/(t₃ − t₁)

Where Δx between each photocell is measured using the caliper. This gives us 3 average accelerations per run of the experiment, giving an overall average acceleration per run of a = (a₁ + a₂ + a₃)/3 and a statistical deviation σ = std([a1,a2,a3])/sqrt(3). (We use MATLAB; std() returns the standard deviation of the elements of a vector.)
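As a sanity check, the pipeline above can be sketched as follows (Python here rather than MATLAB, but the translation is direct), using synthetic gate times from a motion with a known acceleration of 2 m/s²:

```python
import numpy as np

# Synthetic data: motion x(t) = t + t^2, so the true acceleration is 2 m/s^2.
t = np.array([0.0, 0.1, 0.2, 0.3, 0.4])   # photocell trigger times (s)
x = t + t**2                               # photocell positions (m)
dx = np.diff(x)                            # gate spacings (the caliper numbers)

v = dx / np.diff(t)                        # mean velocity between adjacent gates
# a_i = 2 * (v_{i+1} - v_i) / (t_{i+2} - t_i), as in the post
a = 2 * np.diff(v) / (t[2:] - t[:-2])
print(a)                                   # → [2. 2. 2.] (up to floating point)
```

Five gates give four mean velocities and hence three accelerations, matching the "3 average accelerations per run" above.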

Now, this is where the problem comes in. We are running the experiment several times per object in order to minimize the statistical deviation. After finding the overall average acceleration and sigma per run, we then do the same to those, to get an overall acceleration and sigma for the object.

Given that we have no statistical background, and thus no clue about how or why sigma is treated the way it is, we don't know whether this is the right way to handle it. Nor do we know how to apply the uncertainty of the caliper, which is used to measure Δx, to our resulting acceleration.
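On the combination question: one common convention (worth confirming with an instructor, as it is not established in this thread) is to treat the statistical standard error of the mean and the propagated instrumental uncertainty as independent and add them in quadrature. A sketch with made-up numbers:

```python
import numpy as np

# Made-up per-run mean accelerations (m/s^2)
a_runs = np.array([1.98, 2.03, 2.01, 1.97, 2.02])

a_mean = a_runs.mean()
# ddof=1 matches MATLAB's std(), which normalises by N-1
sigma_stat = a_runs.std(ddof=1) / np.sqrt(len(a_runs))  # standard error of the mean

sigma_instr = 0.01                               # propagated instrumental part (assumed)
sigma_total = np.hypot(sigma_stat, sigma_instr)  # quadrature: sqrt(stat^2 + instr^2)
print(a_mean, sigma_total)
```

The quadrature sum is always at least as large as the bigger of the two contributions, so neither source of error is silently dropped.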

I'm not sure I have described the problem accurately enough, so if it's just a mess, let me know and I'll try to clarify. Thanks in advance for any and all aid.
 

Your analysis for the theoretical value is fine.
For the experimental, there is a better way to handle the five photocell outputs.
Consider the graph the five (x,t) readings produce. Use regression analysis to find the best fit to the quadratic through them. But there is a catch. Standard regression analysis takes one coordinate as error free (the "X" axis, but that would be time here). In this case you need the variant that allows errors in both coordinates, and weighted according to the relative precisions of the two.
 
haruspex said:
Your analysis for the theoretical value is fine.
For the experimental, there is a better way to handle the five photocell outputs.
Consider the graph the five (x,t) readings produce. Use regression analysis to find the best fit to the quadratic through them. But there is a catch. Standard regression analysis takes one coordinate as error free (the "X" axis, but that would be time here). In this case you need the variant that allows errors in both coordinates, and weighted according to the relative precisions of the two.

The alternate approach is appreciated, though I haven't done regression analysis before. I can tell MATLAB to do a polyfit, but that's about the extent of my current knowledge. Unless you can detail exactly what you want me to do here, I simply don't have the time to consider alternate approaches, as I'm already drowning in work as it is.

I need to know how to handle standard deviation and uncertainty together.
 
NicolaiTheDane said:
Unless you can detail exactly what you want me to do here
We can take the first photocell as defining start time and position, so we have four datapoints relative to that.
Consider the curve x(t) = at² + bt + c. We want to tune a, b and c to give the best fit to these datapoints.

In regular regression analysis, we define the error at point (t_i, x_i) as x(t_i) − x_i and sum the squares of these for the overall error measure. But here we have to allow for errors in both x and t. We need to measure by how much the curve misses the datapoint in a more general sense.
For this purpose, we need to assign weights to the two coordinates of error. E.g. if we think the granularity of the x measurements is 1 mm and that of the timings is 3 μs, then we would assign weights w_x = 1/(1 mm), w_t = 1/(3 μs). These represent our relative trust in the measurements.
The error at a datapoint is then the minimum weighted distance from point to curve, i.e. the square of the error is the minimum with respect to t of ((x(t) − x_i)·w_x)² + ((t − t_i)·w_t)².
The error in the whole curve is then just the sum of those, and we then minimise that with respect to the parameters a, b and c.

I have not been able to find a link that develops this into formulae for finding a, b and c from the datapoints. I will see if I can do that, but it might take me a while.
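For reference, what is described here is essentially orthogonal distance regression, and SciPy has an implementation (scipy.odr) that accepts per-coordinate standard deviations and weights each coordinate by their inverses, much like w_x and w_t above. A sketch on synthetic data; the gate times and granularities are assumptions for illustration, not the thread's actual apparatus:

```python
import numpy as np
from scipy import odr

def quadratic(beta, t):
    # Model x(t) = a t^2 + b t + c with parameters beta = [a, b, c]
    a, b, c = beta
    return a * t**2 + b * t + c

# Synthetic gate data generated from x = 0.5 t^2 + 0.61 t plus small noise
rng = np.random.default_rng(0)
t = np.array([0.0, 0.15, 0.28, 0.40, 0.51])
x = 0.5 * t**2 + 0.61 * t + rng.normal(0.0, 1e-4, t.size)

# sx, sy: assumed granularities of the time and position measurements;
# ODR weights each coordinate by 1/s^2
data = odr.RealData(t, x, sx=3e-6, sy=1e-3)
fit = odr.ODR(data, odr.Model(quadratic), beta0=[1.0, 0.5, 0.0]).run()
print(fit.beta)   # fitted [a, b, c]
```

The output object also carries parameter standard deviations (fit.sd_beta), which is exactly the uncertainty on the acceleration being sought.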
 
haruspex said:
We can take the first photocell as defining start time and position, so we have four datapoints relative to that.
Consider the curve x(t) = at² + bt + c. We want to tune a, b and c to give the best fit to these datapoints.

In regular regression analysis, we define the error at point (t_i, x_i) as x(t_i) − x_i and sum the squares of these for the overall error measure. But here we have to allow for errors in both x and t. We need to measure by how much the curve misses the datapoint in a more general sense.
For this purpose, we need to assign weights to the two coordinates of error. E.g. if we think the granularity of the x measurements is 1 mm and that of the timings is 3 μs, then we would assign weights w_x = 1/(1 mm), w_t = 1/(3 μs). These represent our relative trust in the measurements.
The error at a datapoint is then the minimum weighted distance from point to curve, i.e. the square of the error is the minimum with respect to t of ((x(t) − x_i)·w_x)² + ((t − t_i)·w_t)².
The error in the whole curve is then just the sum of those, and we then minimise that with respect to the parameters a, b and c.

I have not been able to find a link that develops this into formulae for finding a, b and c from the datapoints. I will see if I can do that, but it might take me a while.

I appreciate your help, but I'm not getting any closer to my goal here. I'm only 4 months into my studies, haven't had a statistics course, and none of what you are telling me makes any sense to me. On a superficial level I know what regression is, but I cannot start learning something new (which presumably I'll learn in my second year), redo what I have done, and then instruct my group so they too know what is going on.

I'll assume that because you suggest something which I haven't got the knowledge to do, what I'm attempting is well beyond the intended scope of my report, and therefore I'll simply disregard the standard deviation and calculate the uncertainty of the equation I have used. I greatly appreciate your attempt to enlighten me, though, and the fact that I'm too stressed and too tired to capitalize on your wisdom frustrates me to no end. Nevertheless, that's where I am. I appreciate your attempt, and I'm sorry I'm too dense/stressed/[insert excuse] to follow.
 
Unfortunately, finding the nearest point (weighted or otherwise) on a quadratic involves solving a cubic, so the simplest would be to do the curve fitting via software iteration.
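A numerical sketch of that per-point minimisation, using scipy.optimize.minimize_scalar instead of solving the cubic analytically; the curve parameters, weights and datapoint below are all illustrative:

```python
from scipy.optimize import minimize_scalar

# Curve x(t) = a t^2 + b t + c and weights, all illustrative
# (assuming, say, 1 mm position and 1 ms timing granularities)
a, b, c = 0.5, 0.61, 0.0
w_x, w_t = 1 / 1e-3, 1 / 1e-3

def sq_dist(t, ti, xi):
    """Weighted squared distance from datapoint (ti, xi) to the curve at parameter t."""
    x = a * t**2 + b * t + c
    return ((x - xi) * w_x)**2 + ((t - ti) * w_t)**2

ti, xi = 0.2, 0.15               # a hypothetical datapoint
res = minimize_scalar(sq_dist, args=(ti, xi),
                      bounds=(ti - 0.1, ti + 0.1), method='bounded')
print(res.x, res.fun)            # t of nearest curve point, minimal squared error
```

Summing res.fun over all datapoints and minimising over (a, b, c) with a general-purpose optimiser then completes the fit by iteration, as suggested.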
 
I have written a very crude program to fit a quadratic to four datapoints, allowing for x and y weightings. If you wish me to run it on your data, please supply your photocell data and suitable granularity estimates for the x and t measures.
 
haruspex said:
I have written a very crude program to fit a quadratic to four datapoints, allowing for x and y weightings. If you wish me to run it on your data, please supply your photocell data and suitable granularity estimates for the x and t measures.

We only have a small amount of test data (4 data sets), meant to let us write a MATLAB script to do what we need, which I have done. I have spoken to my professor, who indeed noted that what we were trying to do was beyond the scope of the report, and that I shouldn't be expected to be able to do this (without independent reading) before I have had the course Statistics for Physicists in my 2nd year. He noted that since the measurement uncertainty is the same on all measurements, it wouldn't be directly wrong to simply accumulate the uncertainties (so 1 mm times 4, as there are 4 measured distances), use the propagation law to find the uncertainty for a given measurement, and add that as a systematic uncertainty to the final product. It won't be completely right, because of the way we have chosen to handle the data, but it's still more thorough than he'd expect.

However I'd love to see what you come up with, so if you have it done anyway, here are the test data we have so far:

https://www.dropbox.com/s/2cqji5aldq9ppty/testData.zip?dl=0
 
NicolaiTheDane said:
We only have a small amount of test data (4 data sets), meant to let us write a MATLAB script to do what we need, which I have done. I have spoken to my professor, who indeed noted that what we were trying to do was beyond the scope of the report, and that I shouldn't be expected to be able to do this (without independent reading) before I have had the course Statistics for Physicists in my 2nd year. He noted that since the measurement uncertainty is the same on all measurements, it wouldn't be directly wrong to simply accumulate the uncertainties (so 1 mm times 4, as there are 4 measured distances), use the propagation law to find the uncertainty for a given measurement, and add that as a systematic uncertainty to the final product. It won't be completely right, because of the way we have chosen to handle the data, but it's still more thorough than he'd expect.

However I'd love to see what you come up with, so if you have it done anyway, here are the test data we have so far:

https://www.dropbox.com/s/2cqji5aldq9ppty/testData.zip?dl=0
It would help if you would explain what these data are.
I looked at HCylM2.txt. I see five pairs of peaks. I assume each pair is generated by one sensor, so I only looked at the leading edge of the first peak of each pair, i.e. where it hits 4.9659. E.g. at time 0.00234 it reaches 4.9659 for the first time. For the second pair I took the t=0.14894 reading, etc.
I don't know the spacing between the sensors, so I just assumed they are equally spaced.

Rebasing all the times off the 0.00234 value (so that became zero, etc.) and arbitrarily taking the positions of the sensors as 0, 0.1, 0.2, 0.3, 0.4, I plotted these five points in a spreadsheet. Just playing around by hand I got an excellent fit with x = 0.5t² + 0.61t. The RMS error was only 0.00026.
Unfortunately my C++ program gets nowhere near this, so I must have some bugs. I said it was crude.
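For the curious, the quadratic fit itself is a one-liner once the edge times are extracted, e.g. with NumPy's polyfit (ordinary least squares, i.e. treating the times as error-free). Only the first two edge times below come from the thread; the rest, and the positions, are generated from the hand-fit curve itself, so this only illustrates the mechanics:

```python
import numpy as np

# Edge times: first two from the thread, the rest invented for illustration
t_edges = np.array([0.00234, 0.14894, 0.28, 0.40, 0.51])
t = t_edges - t_edges[0]            # rebase so the first gate is at t = 0
x = 0.5 * t**2 + 0.61 * t           # positions generated from the hand fit

coeffs = np.polyfit(t, x, 2)        # [a, b, c], highest power first
print(coeffs)
```

Since x = ½at² + v₀t for constant acceleration from rest at the first gate, the physical acceleration is twice the leading coefficient.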
 
haruspex said:
It would help if you would explain what these data are.
I looked at HCylM2.txt. I see five pairs of peaks. I assume each pair is generated by one sensor, so I only looked at the leading edge of the first peak of each pair, i.e. where it hits 4.9659. E.g. at time 0.00234 it reaches 4.9659 for the first time. For the second pair I took the t=0.14894 reading, etc.
I don't know the spacing between the sensors, so I just assumed they are equally spaced.

Rebasing all the times off the 0.00234 value (so that became zero, etc.) and arbitrarily taking the positions of the sensors as 0, 0.1, 0.2, 0.3, 0.4, I plotted these five points in a spreadsheet. Just playing around by hand I got an excellent fit with x = 0.5t² + 0.61t. The RMS error was only 0.00026.
Unfortunately my C++ program gets nowhere near this, so I must have some bugs. I said it was crude.

Yeah, sorry, I was in a hurry. I also forgot to give you the delta x's. Basically the name of the file tells you whether it's the same object or not, and the last part, the MX part, tells you which measurement came first (not very relevant).

The delta x's are: 130.86, 129.65, 129.58, 130.71 [mm]
 
