Direct Echo-Based Measurement of the Speed of Sound - Comments

Summary
The discussion centers around the educational value of a direct echo-based measurement of the speed of sound experiment. Participants emphasize the importance of balancing engaging activities with clear learning objectives and accurate scientific methodology. Concerns are raised about the reliance on simulations versus hands-on experiments, with a preference for real data analysis over simulated results. Critiques of the original write-up highlight issues with data presentation, measurement uncertainty, and statistical analysis, suggesting improvements for clarity and educational effectiveness. Overall, the experiment is seen as a valuable teaching tool that requires careful execution to enhance student understanding of scientific principles.
  • #31
Dale said:
Sure, I was recommending reading the literature for you as a teacher, not for your students. You seemed reluctant to accept the validity of my explanation about why retaining the intercept is important, so you should inform yourself of the issue from sources you consider valid. Currently your opinion is not informed by the statistical literature. As a conscientious teacher surely you agree that it is important to make sure that your opinions are well informed.

Once you have established an informed opinion then I am sure that you can use that opinion to guide your lesson development in a way that will not detract from the learning objectives. Personally, I would simply use the default option to include the intercept without making much discussion about it. I would leave the teaching about the statistics to a different class, but I would quietly use valid methods.

My pedagogical disagreement with this is that it trains students to accept terms in physics formulas even when those terms have no clear physical meaning. Back to Einstein and Occam: my clear preference is to train students in science classes to want (even demand) explanations for every term in a physics equation. In a distance vs. time experiment with constant velocity, the physical meaning of the constant term is the position (or distance traveled) at time t = 0. This is problematic from the viewpoint of learning the science: since students are unlikely to grasp the underlying mathematical justification, a term with no clear physical meaning will seem like a fudge factor whose need is asserted by authority. For pedagogical purposes, I expect to continue to teach my students that the meaning of the vertical intercept is the anticipated output for zero input. I value the science more than the math.

Demanding a physical meaning for the vertical intercept has borne much fruit for my students. Several years back, a group of 1st year cadets at the Air Force Academy used this approach to identify the vertical intercept of a bullet energy vs. powder charge line as the work done by friction while the bullet traverses the rifle barrel. This method remains the simplest and one of the most accurate methods for measuring bullet friction at ballistic velocities. See: https://apps.dtic.mil/dtic/tr/fulltext/u2/a568594.pdf When studying Hooke's law for some springs, a non-zero vertical intercept is needed to account for the fact that the coils prevent some springs from stretching until some minimum force is applied. The physical meaning is clear: the vertical intercept when plotting force vs. displacement is the applied force necessary for the spring to begin stretching.

In contrast, the mass vs. volume lab doesn't lend itself to a physical meaning when plotting an experimental mass vs. volume. The mass of a quantity of substance occupying zero volume cannot be positive, and it cannot be negative. It can only be zero. Allowing it to vary presents a problem of giving a physical meaning to the resulting value, because "the expected mass for a volume of zero" does not make any sense. It may be mathematically rigorous, but in a high school science class, it's just silly. I'd rather not send my students the message that it's OK for terms in equations not to have physical meanings if someone mumbles some mathematical mumbo jumbo about how the software works. (Students go into Charlie Brown mode quickly.)

I use Tracker often in the lab for kinematics-type experiments, and we do a lot with the kinematic equations. When fitting position vs. time, it is essential that each term in the fit for x(t) have the same physical meaning as in the kinematic equations. The constant term is the initial position, the linear coefficient is the initial velocity, and the quadratic coefficient is half the acceleration. If the initial position is defined to be zero (as is often the case), then a constant term in the model does not make sense. (Tracker allows t = 0 to be set at any frame, and the origin can usually be placed at any convenient point; often the position of the object at t = 0 is a convenient point.)
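As an illustration, here is a minimal Python sketch of reading kinematic quantities off a quadratic fit. The numbers are made up for the example (not Tracker output), and the coefficient-to-quantity mapping follows x(t) = x0 + v0 t + (1/2) a t^2:

```python
import numpy as np

# Hypothetical free-flight data: x(t) = x0 + v0*t + (1/2)*a*t^2
# (made-up values, not Tracker output)
x0_true, v0_true, a_true = 0.2, 1.5, -9.8
t = np.linspace(0.0, 1.0, 11)
x = x0_true + v0_true * t + 0.5 * a_true * t**2

# Least-squares quadratic fit; np.polyfit returns [c2, c1, c0]
c2, c1, c0 = np.polyfit(t, x, 2)

# Map fit coefficients back to kinematic quantities:
x0_fit = c0        # constant term  -> initial position
v0_fit = c1        # linear term    -> initial velocity
a_fit = 2.0 * c2   # quadratic term is half the acceleration, so double it
print(x0_fit, v0_fit, a_fit)
```

With noiseless data the fit recovers the three parameters essentially exactly, which makes the physical meaning of each term easy for students to check.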
 
  • #32
fizzy said:
A graduated cylinder which is not cylindrical to within the indicated precision seems a little unlikely. It seems far more likely that your spurious attempts to force the fit through zero were leading to an incorrect regression slope which produced increasing residuals at higher volumes. It is hard to say without seeing the data, but it sounds like it did have a finite intercept, but you were in denial about such things, regarding them as "silly".

I expect folks who think it unlikely for high school lab equipment to be outside its indicated precision have not spent sufficient time with high school lab equipment. I teach students how to check and double-check equipment accuracy. What better simple check on the accuracy of a graduated cylinder (accuracy spec 0.2 cc) than an electronic balance (verified accuracy spec 0.01 g)?

Once one accepts the constant density of water, one can use the balance itself as the best available check on the accuracy of the graduated cylinder. About half the measurements with the graduated cylinder were outside its spec. This is not a train wreck for how graduated cylinders are usually used in science labs, but I do encourage students to take note of the limitation.

The resulting density of water without a vertical intercept was 0.9967 g/cc with an R-squared of 0.9999. Adding a vertical intercept puts the R-squared closer to one, but the resulting density of water becomes 1.0045 g/cc, with an intercept suggesting that 0 cc of water has a mass of -0.5467 g. Silly. The known good value for the density of water at 20 deg C is 0.998 g/cc.
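For readers who want to reproduce this kind of comparison, here is a minimal Python sketch contrasting the through-origin slope with the ordinary least-squares fit. The mass-volume numbers are synthetic (generated around a density of 0.998 g/cc), not the data described above:

```python
import numpy as np

# Hypothetical mass (g) vs. volume (cc) readings -- synthetic, not the
# data from the post; true density 0.998 g/cc plus balance noise
rng = np.random.default_rng(0)
V = np.array([10., 20., 30., 40., 50., 60., 70., 80.])
m = 0.998 * V + rng.normal(0.0, 0.2, V.size)

# Through-origin slope minimizes sum((m - s*V)^2), giving s = sum(V*m)/sum(V^2)
s_origin = np.sum(V * m) / np.sum(V * V)

# Ordinary least squares with a free intercept
s_int, b = np.polyfit(V, m, 1)

print(s_origin, s_int, b)
```

Both slopes land near the true density here; the interesting classroom question is what physical meaning, if any, to assign to the fitted intercept b.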

fizzy said:
Did you teach your students how to correctly read the meniscus of the fluid in the measuring cylinder?

Yes, of course.

fizzy said:
That could lead to a finite intercept, if you would allow that possibility to be seen. There clearly was some experimental error which needs to be identified. Had you not expressly removed the constant term, it would have given you some information about the problem. You have neatly demonstrated one reason not to bias the regression by excluding parameters.

I only removed the constant term for the student method after my careful pilot experiment. My pilot included analysis with several possible models: linear with and without a constant term, and quadratic with and without a constant term. I also carefully considered the residuals of the different models for three different liquids with known densities: water, isopropanol, and acetone. The high correlations of the residuals for the different liquids suggest the most likely source of error was the graduated cylinder itself.

fizzy said:
If you suspected the cylinder was not straight, did you at least measure it to attempt to falsify this hypothesis? Apparently not. Did you substitute another cylinder to test the hypothesis?

And with what instrument commonly found in high school labs would you suggest accurately measuring the inner diameter at the bottom of a graduated cylinder? The other available cylinders were from the same manufacturer and demonstrated the same trend. (Adding apparently equal volumes near the top added more mass on the balance.) But the most convincing evidence was seeing the same trend in two additional liquids (isopropanol and acetone). I expect that, as a manufacturing convenience, these plastic graduated cylinders are formed on molds that make them slightly narrower at the bottom than at the top so that they are easier to remove from the molds. Plastic is much more cost-effective and resistant to breakage than glassware, and adequate for many laboratory purposes if the limitations are understood. If need be, a cylinder could be recalibrated with water, but it is easier just to double-check on a balance with liquids of known density.
 
  • #33
Dr. Courtney said:
My pedagogical disagreement with this is it trains students to accept terms in physics formulas in cases where those terms do not have clear physical meanings.
That is fine, but before doing so you should make sure that you have the necessary statistical background knowledge to wisely make that call. You should also realize that it is not clearly the right call and that valid informed objections and differences of opinion are to be expected on this point.

Personally, to me this issue is about understanding the limitations of your tools. A tool can often be used for a task in a way that it is not intended to be used. Sometimes it is ok, but sometimes it is not. If you are going to use a tool in a way it is not intended then you need to understand the likely failure modes and be vigilant.

I have seen other scientists publish papers misusing linear regression this specific way and claiming an effect where none existed due to the biasing. The tool was breaking under misuse. They also had no clear physical interpretation for the intercept and chose, as you did, to remove it on those same grounds. It is not a thing to be done lightly and they suffered for it. At a minimum the intercept can be used to indicate a failure of your experimental setup. If you have no theoretical possibility for an intercept and yet your data shows an intercept then that is an indication that your experiment is not ideal. In your case, your distance measurements and time measurements are not perfect. Perhaps there is a systematic error and not just random errors. A systematic error could lead to a non-zero intercept, which you are artificially suppressing.

Dr. Courtney said:
Back to Einstein and Occam - my clear preference is to train students in science classes to want (even demand) explanations for every term in physics equations.
I don't think that Ockham's razor justifies your approach here. The problem is that by simplifying your effect model you have unknowingly made your error model more complicated. Your errors are no longer modeled as zero mean, and the mean of your residuals is directly related to what would have been your intercept. All you have done is to move the same complexity to a hidden spot where it is easy to ignore. It is still there. You still have the same two parameters, but you have moved one parameter to the residuals and suppressed its output.
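Dale's point about the hidden complexity can be demonstrated in a few lines. A sketch with deliberately contrived numbers (true line y = 2x + 0.5, so the true intercept is 0.5): when the intercept is suppressed, it does not vanish, it reappears as a non-zero mean in the residuals.

```python
import numpy as np

# Contrived noiseless data with a genuine non-zero intercept
x = np.arange(1.0, 11.0)
y = 2.0 * x + 0.5

# Through-origin fit: slope = sum(x*y)/sum(x^2)
s = np.sum(x * y) / np.sum(x * x)
mean_resid_origin = np.mean(y - s * x)   # the suppressed intercept leaks in here

# With the intercept in the model, the residuals average to zero
m, b = np.polyfit(x, y, 1)
mean_resid_full = np.mean(y - (m * x + b))

print(mean_resid_origin, mean_resid_full)
```

The through-origin residuals have a clearly non-zero mean, while the full model's residuals average to zero (to floating-point precision). The complexity moved; it did not disappear.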

Dr. Courtney said:
It is much more cost effective and resistant to breakage than glassware, and adequate for many laboratory purposes if the limitations are understood.
A wise approach. You should treat statistical methods similarly.
 
  • #34
Dale said:
That is fine, but before doing so you should make sure that you have the necessary statistical background knowledge to wisely make that call. You should also realize that it is not clearly the right call and that valid informed objections and differences of opinion are to be expected on this point.

I do. You seem to have wrongly assumed that I do not, and that if I were informed there would be only one right call to make, since you previously wrote:

Currently your opinion is not informed by the statistical literature. As a conscientious teacher surely you agree that it is important to make sure that your opinions are well informed.

Once you have established an informed opinion then I am sure that you can use that opinion to guide your lesson development in a way that will not detract from the learning objectives.

I have thoroughly reviewed the relevant statistics literature. I have authored a widely distributed least-squares fitting software package. I have taught several college level statistics courses. I am aware of the issues. A few quotes from the literature:

In certain circumstances, it is clear, a priori, that the model describing the relationship between the independent variable and the dependent variable(s) should not contain a constant term and, in consequence, the least squares fit needs to be constrained to pass through the origin.
(HA Gordon, The Statistician, Vol 30 No 1, 1981)

There are many practical problems where it is reasonable to assume the relationship of a straight line passing through the origin ... (ME Turner, Biometrics, Vol 16 No 3, 1960)

This article describes situations in which regression through the origin is appropriate, derives the normal equation for such a regression and explains the controversy regarding its evaluative statistics. (JG Eisenhauer, Teaching statistics, Vol 25 No 3 2003)

Dale said:
Personally, to me this issue is about understanding the limitations of your tools. A tool can often be used for a task in a way that it is not intended to be used. Sometimes it is ok, but sometimes it is not. If you are going to use a tool in a way it is not intended then you need to understand the likely failure modes and be vigilant.

Yes, I understand that the R-squared values and other goodness of fit statistics are not comparable with other models. A better way to compare with other models is to compute the variance of the residuals. There are columns in my analysis spreadsheet for my pilot experiments doing just that.
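A sketch of that comparison, with synthetic numbers standing in for the spreadsheet columns. Dividing by degrees of freedom (rather than n) keeps the residual variance comparable between models with different parameter counts:

```python
import numpy as np

# Synthetic data consistent with a line through the origin, plus noise
rng = np.random.default_rng(1)
x = np.linspace(1.0, 10.0, 10)
y = 3.0 * x + rng.normal(0.0, 0.1, x.size)

def resid_var(resid, n_params):
    # Unbiased residual variance: divide by degrees of freedom, not n,
    # so models with different parameter counts are comparable
    return float(np.sum(resid**2) / (resid.size - n_params))

s = np.sum(x * y) / np.sum(x * x)   # through-origin model, 1 parameter
m, b = np.polyfit(x, y, 1)          # intercept model, 2 parameters

var_origin = resid_var(y - s * x, 1)
var_full = resid_var(y - (m * x + b), 2)
print(var_origin, var_full)
```

Both variances come out near the true noise variance here; a large gap between them would be the signal that one model is misbehaving.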

Dale said:
I have seen other scientists publish papers misusing linear regression this specific way and claiming an effect where none existed due to the biasing. The tool was breaking under misuse. They also had no clear physical interpretation for the intercept and chose, as you did, to remove it on those same grounds. It is not a thing to be done lightly and they suffered for it.

And I've seen scientists publish papers with vertical shifts that make no sense. The probability of an effect when the cause is reduced to zero should be exactly zero. (The risk of death from a poison should be zero for zero mass of poison. The probability of a bullet penetrating armor should be exactly zero for a bullet with zero velocity. The weight of a fish with zero length should be exactly zero.) Further, you are creating a strawman by claiming my scientific justification for removing the constant term was the lack of a physical meaning. I justify removing the constant term based on strong physical arguments that for zero input, the output can only be zero. The lack of physical meaning was a pedagogical motive, not a scientific justification.

Dale said:
At a minimum the intercept can be used to indicate a failure of your experimental setup. If you have no theoretical possibility for an intercept and yet your data shows an intercept then that is an indication that your experiment is not ideal. In your case, your distance measurements and time measurements are not perfect. Perhaps there is a systematic error and not just random errors. A systematic error could lead to a non-zero intercept, which you are artificially suppressing.

As explained above, my practice is to try a number of analysis techniques on my pilot data, and then slim down the analysis for students to the one that makes the most sense for the overall context. I have done the echo-based speed of sound experiment many times now. Omitting the extra constant term has never been a problem, and the resulting speed of sound has always been within 1% of the expectation based on the ambient temperature. When the extra parameter is used (by me, not students, but I do re-analyze their data to check for such things), it is invariably close to zero (relative to its error estimate), so one can say it is not significantly different from zero. Some teachers may see the pedagogical benefit of walking students through these steps, but software that provides the error estimates in the slope and vertical intercept tends to be harder for students to use and confusing, so I avoid it for most student uses.

Dale said:
I don't think that Ockham's razor justifies your approach here. The problem is that by simplifying your effect model you have unknowingly made your error model more complicated. Your errors are no longer modeled as zero mean, and the mean of your residuals is directly related to what would have been your intercept. All you have done is to move the same complexity to a hidden spot where it is easy to ignore. It is still there. You still have the same two parameters, but you have moved one parameter to the residuals and suppressed its output.

Occam's razor here is more of a pedagogical motive for keeping the model simple. I know all along that the error model is more complicated, but the students are not usually cognizant of the error model. Much like ignoring air resistance in projectile motion problems, the motive is to keep the model the students see simpler. For published research, I do not doubt the value of trying linear models with a constant term to see whether it is statistically different from zero and whether the slope changes significantly. But having done both, one then faces the challenge of deciding which fit is better. This is well beyond the scope of a high school science class, but it is discussed in Casella, G. (1983). Leverage and regression through the origin. The American Statistician, 37(2), 147-152. Designing labs is about providing students new skills in manageable doses.

Most papers I've read on regression through the origin are not primarily concerned with whether models that go through the origin SHOULD be used in the first place, but rather with how the descriptive statistics are used to assess the goodness of fit. Many of the possible criticisms apply not just to linear least squares, but to most non-linear least squares models that are forced through the origin. There is now wide agreement that these models are appropriate in many areas of science, including weight-length relationships in fish, a multitude of other power law models, probability curves, and a variety of economic models.
 
  • #35
Dr. Courtney said:
You seem to have wrongly assumed that I do not
I apologize for my wrong assumption. Based on your questions it seemed like you did not understand the statistical issues involved as you did not mention any of the relevant statistical issues but only the pedagogical/scientific issues. For me, if I had decided (due to pedagogical or scientific considerations) to use the no-intercept method then I would have gone through a few of the relevant statistical issues, identified them as being immaterial for the data sets in consideration, and only then proceeded with the pedagogical/scientific justification. I mistakenly thought that the absence of any mention of the statistical issues indicated an unfamiliarity with them.

Dr. Courtney said:
Yes, I understand that the R-squared values and other goodness of fit statistics are not comparable with other models.
That is not the only issue, nor even the most important. By far the most important one is the possibility of bias in the slope. It does not appear to be a substantial issue for your data, so that would be the justification I would use were I trying to justify this approach.

Dr. Courtney said:
A better way to compare with other models is to compute the variance of the residuals.
Or in the Bayesian framework you can directly compare the probability of different models.

Dr. Courtney said:
the resulting speed of sound has always been within 1% of the expectation based on the ambient temperature
This would be a good statistical justification. It is not a general justification, because the general rule remains that use of the intercept is preferred. It is a justification specific to this particular experiment that the violation of the usual process does not produce the primary effect of concern: a substantial bias in the other parameter estimates.

Dr. Courtney said:
Occam's Razor here is more of a pedagogical motive for keeping the model simple. I know all along that the error model is more complicated
Then you should know that your Ockham's razor argument is not strong in this case. It is at best neutral.

Dr. Courtney said:
But having done both, one then faces the challenge of deciding which fit is better.
In the Bayesian approach this can be decided formally, and in the frequentist framework this is a no-no which leads to p-value hacking and failure to replicate results.
 
  • #36
Dale said:
I apologize for my wrong assumption. Based on your questions it seemed like you did not understand the statistical issues involved as you did not mention any of the relevant statistical issues but only the pedagogical/scientific issues. For me, if I had decided (due to pedagogical or scientific considerations) to use the no-intercept method then I would have gone through a few of the relevant statistical issues, identified them as being immaterial for the data sets in consideration, and only then proceeded with the pedagogical/scientific justification. I mistakenly thought that the absence of any mention of the statistical issues indicated an unfamiliarity with them.

That is not the only issue, nor even the most important. By far the most important one is the possibility of bias in the slope. It does not appear to be a substantial issue for your data, so that would be the justification I would use were I trying to justify this approach.

Or in the Bayesian framework you can directly compare the probability of different models.

This would be a good statistical justification. It is not a general justification, because the general rule remains that use of the intercept is preferred. It is a justification specific to this particular experiment that the violation of the usual process does not produce the primary effect of concern: a substantial bias in the other parameter estimates.

Then you should know that your Ockham's razor argument is not strong in this case. It is at best neutral.

In the Bayesian approach this can be decided formally, and in the frequentist framework this is a no-no which leads to p-value hacking and failure to replicate results.

These are all considerations from the viewpoint of doing science intended for the mainstream literature. But from the viewpoint of the high school or intro college science classroom, they are largely irrelevant. The papers I cited make a strong case for leaving out the constant term when physical considerations indicate a reasonable physical model will go through the origin, and I think this is sufficient peer-reviewed statistics work to justify widespread use in the classroom in applicable cases. I also pointed out the classroom case of mass vs. volume, where leaving out the constant term consistently provides more accurate estimates of the material density than including it. I've been at this a while and have never seen a problem when the conditions pointed out in the statistics papers I cited are met. You seem to be maintaining a disagreement based on your own authority, without a willingness to cite peer-reviewed support for your position that the favored (or valid) approach is to include a constant term.

I don't regard the Bayesian approach as appropriate for the abilities of high school students I've tended to encounter. In contrast, computing residuals (and their variance) can be useful and instructive and is well within their capabilities once they've grown in their skills through 10 or so quantitative laboratories.

But zooming out, the statistical details of the analysis approach are all less relevant if one has taught the students the effort, means, and care to acquire accurate data for the input and output variables in the first place. It may seem to some that I am cutting corners in teaching analysis due to time and pedagogical constraints. But start with 5-10 data points with all the x and y values measured to 1%, and you can yield better results with simplified analysis than you can with the same number of data points with 5% errors and the most rigorous statistical approach available. Analysis is often the turd-polishing stage of introductory labs. I don't teach turd polishing.
 
  • #37
Dr. Courtney said:
And I've seen scientists publish papers with vertical shifts that make no sense. The probability of an effect when the cause is reduced to zero should be exactly zero.
I disagree emphatically on this. Including a vertical intercept in your regression is always valid (categorically and without reservation). In specific restricted circumstances it may be OK to coerce the intercept to 0, but it is always appropriate to not coerce it.

You may have a theory that says that in your experiment the effect is zero when the cause is zero, but if you artificially coerce that value to be zero then you are ignoring the data.

If the data has a non-zero intercept then either your theory is wrong or your experiment is wrong. Coercing it to zero makes you ignore this red flag from the data.

If your experiment is right and your theory is right then the confidence interval for the intercept will naturally and automatically include zero. Coercing it to zero prevents you from being able to use the data to confirm that aspect of your theory.
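That check can be sketched numerically. Below, hypothetical data whose true line passes through the origin is fitted with a free intercept, and the textbook formula for the intercept's standard error is used to form a rough interval (the data, noise level, and t-multiplier are all assumptions for the example):

```python
import numpy as np

# Hypothetical data whose true line passes through the origin
rng = np.random.default_rng(2)
x = np.linspace(1.0, 10.0, 20)
y = 3.0 * x + rng.normal(0.0, 0.1, x.size)

m, b = np.polyfit(x, y, 1)
resid = y - (m * x + b)
n = x.size
s2 = np.sum(resid**2) / (n - 2)                      # residual variance
Sxx = np.sum((x - x.mean())**2)
se_b = np.sqrt(s2 * (1.0 / n + x.mean()**2 / Sxx))   # std. error of the intercept

# Rough 95% interval (t ~ 2.1 for 18 degrees of freedom); for a correct
# theory and experiment this interval covers zero about 95% of the time
lo, hi = b - 2.1 * se_b, b + 2.1 * se_b
print(lo, hi)
```

An interval that sits well away from zero would be exactly the red flag described above: either the theory or the experiment has a problem.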

Dr. Courtney said:
You seem to be maintaining a disagreement based on your own authority without a willingness to cite peer-reviewed support for your position that the favored (or valid) approach is to include a constant term.
I am perfectly willing to do so, but it will have to wait until tomorrow when I am back in my office.

Dr. Courtney said:
All considerations from the viewpoint of doing science intended for the mainstream literature. But from the viewpoint of the high school or intro college science classroom, largely irrelevant.
Hmm, irrelevant? The goal of the class is to teach them how to do science, is it not?

Dr. Courtney said:
I don't regard the Bayesian approach as appropriate for the abilities of high school students I've tended to encounter.
Agreed. Those comments are for your benefit. You seem to think about statistics in a way that would benefit from Bayesian methods.

Dr. Courtney said:
But start with 5-10 data points with all the x and y values measured to 1% and you can yield better results with simplified analysis than you can with the same number of data points with 5% errors and the most rigorous statistical approach available.
100% agree.

Dr. Courtney said:
It may seem to some that I am cutting corners in teaching analysis due to time and pedagogical constraints.
That is not at all how I see it. I think that you are going out of your way to teach something that is OK in this specific circumstance but is not generally a valid approach. To me, leaving the intercept in without discussing why would be the corner cutting approach (and what I would do in the interest of focusing on the science instead of the statistics).
 
  • #38
Dr. Courtney said:
You seem to be maintaining a disagreement based on your own authority without a willingness to cite peer-reviewed support for your position that the favored (or valid) approach is to include a constant term.
The best reference I have is:
"it is generally a safe practice not to use regression-through-the origin model and instead use the intercept regression model. If the regression line does go through the origin, b0 with the intercept model will differ from 0 only by a small sampling error, and unless the sample size is very small use of the intercept regression model has no disadvantages of any consequence. If the regression line does not go through the origin, use of the intercept regression model will avoid potentially serious difficulties resulting from forcing the regression line through the origin when this is not appropriate." (Kutner, et al. Applied Linear Statistical Models. 2005. McGraw-Hill Irwin). This I think summarizes my view on the topic completely.

Other cautionary notes include:

"Even if the response variable is theoretically zero when the predictor variable is, this does not necessarily mean that the no-intercept model is appropriate" (Gunst. Regression Analysis and its Application: A Data-Oriented Approach. 2018. Routledge)
"It is relatively easy to misuse the no intercept model" (Montgomery, et al. Introduction to Linear Regression. 2015. Wiley)
“regression through the origin will bias the results” (Lefkovitch. The study of population growth in organisms grouped by stages. 1965. Biometrics)
"in the no-intercept model the sum of the residuals is not necessarily zero" (Rawlings. Applied Regression Analysis: A Research Tool. 2001. Springer).
"Caution in the use of the model is advised" (Hahn. Fitting Regression Models with No Intercept Term. 1977. J. Qual. Tech.)

All directly echoing comments I made and issues I raised earlier.
 
  • #39
Fun tip: You can use the sound recorder app on a smartphone to record the bang and the echo. There are some good apps you can then use to display the waveform and measure the delay within the phone, or else you can send the files to a desktop.
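One way to pull the delay out of such a recording on a desktop is autocorrelation: the echo shows up as an off-center peak at the round-trip lag. Here is a Python sketch on a synthetic "bang"; everything in it (sample rate, click shape, the 17 m wall distance) is made up for illustration, not a measured recording:

```python
import numpy as np

fs = 8000                          # sample rate in Hz (assumed)
t = np.arange(0, 0.5, 1.0 / fs)

def click(t0):
    # A short decaying 1 kHz ping starting at time t0
    return np.exp(-200.0 * (t - t0)) * (t >= t0) * np.sin(2 * np.pi * 1000 * (t - t0))

# Bang at 0.05 s, plus a fainter echo 0.1 s later
signal = click(0.05) + 0.4 * click(0.15)

# Autocorrelation; keep non-negative lags only
ac = np.correlate(signal, signal, mode="full")[signal.size - 1:]
ac[: int(0.02 * fs)] = 0.0         # mask out the zero-lag peak
lag = np.argmax(ac) / fs           # echo delay in seconds

distance = 17.0                    # metres to the reflecting wall (made up)
speed = 2 * distance / lag         # sound travels out and back
print(lag, speed)
```

On a real recording the same idea applies, though background noise and overlapping reflections make the peak less clean than in this toy signal.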

I used this approach to measure the muzzle velocity of things that I fired from my homemade blowgun. I recorded the "pop" from the exiting projectile, followed by the sound of said projectile smacking through a sheet of paper pinned to a backdrop.

BTW, making a blowgun is a fun way to learn a lot of physics and of course, to teach "safety first."

 
  • #40
Swamp Thing said:
to teach "safety first."
On that note: The woman in the video, who gets a marshmallow shot into her mouth, should wear safety glasses.
 
  • #41
Dr. Courtney said:
I explain it to students this way: the only possible distance any signal can travel in zero time is zero distance.
You also need to point out to them that there is always a finite (and often significant) offset in measurements and that the line of measured dots will not actually pass through 0,0. The actual values near the origin have the same significance as the rest of them.
 
  • #42
A.T. said:
On that note: The woman in the video, who gets a marshmallow shot into her mouth, should wear safety glasses.
AND she should go for a 1km run to make proper use of the energy consumed.
 
  • #43
sophiecentaur said:
You also need to point out to them that there is always a finite (and often significant) offset in measurements and that the line of measured dots will not actually pass through 0,0. The actual values near the origin have the same significance as the rest of them.

With sufficient experimental care, the line can be made to pass through the origin. Why insist on a vertical offset for a line but not a power law? Would it make sense to add a vertical offset fitting data to Kepler's Third Law? That just adds an unnecessary adjustable parameter.
 
  • #44
Dr. Courtney said:
Would it make sense to add a vertical offset fitting data to Kepler's Third Law?
Yes, it would. For all of the reasons already identified above.
 
  • #45
Dr. Courtney said:
Would it make sense to add a vertical offset fitting data to Kepler's Third Law?
A 'DC' offset would apply to all your measurements. If you were to crowbar your curve fit to a wrong value near the origin, it would result in the parameters of your best-fit curve being corrupted. Remember that all your measurements are subject to all the error sources, and the curve can't know about the law that you are trying to fit them to. The measurements are telling you there is something wrong when they predict that a pair of coordinates near the origin wouldn't sit at 0,0. Any 'theory' you try to apply to a set of measurements has to be consistent (within bounds) with your measurements. It would be like saying "These are Kepler's Laws, but they don't apply at four o'clock on Sunday afternoon".
 
  • Like
Likes Dale
  • #46
sophiecentaur said:
if you were to crowbar your curve fit to a wrong value near the origin, it would result in the parameters of your best fit curve being corrupted
Meaning that the errors in your estimate of any other parameters would no longer have zero mean but would have some non-zero bias.
 
  • Like
Likes sophiecentaur
  • #47
sophiecentaur said:
A 'DC' offset would apply to all your measurements. If you were to crowbar your curve fit to a wrong value near the origin, it would result in the parameters of your best fit curve being corrupted. Remember that all your measurements are subject to all the error sources and the curve can't know about the law that you are trying to fit them to. They are telling you there is something wrong by predicting that a low pair of coordinates wouldn't sit at 0,0. Any 'theory' you try to apply to a set of measurements has to be consistent (within bounds) with your measurements. It would be like saying "These are Kepler's Laws but they don't apply at four o'clock on Sunday afternoon".

Interesting theory, but I was skeptical since forcing the line through the origin has tended to give better agreement with "known good" values over lots and lots of intro physics experiments I've supervised.

So I just completed a numerical experiment. The area of a circle (A) vs. the square of the radius is a straight line through the origin with a slope of pi. Adding Gaussian noise with a defined standard deviation will give values of the slope that differ a bit from pi. If allowing an offset is better, the RMS error of the slope from pi will be SMALLER than the RMS error of the slope without the offset.

I used values of r from 1 to 10 in steps of 1, and standard deviations in the error added to A varying from 0.01 to 1. The RMS errors of the slopes from pi were LARGER in every case when the offset was allowed. For example, for a standard deviation of 0.1 in the error added to A, the RMS error in the slope was 0.0015 allowing the offset and 0.00082 forcing the line through the origin. Repeating the numerical experiment with circumference vs. diameter yields the same result: the RMS error of the slope from the known good value (pi) is always LARGER when the offset is allowed.

Nice theory, but when the value of the output is known to be zero for zero input, forcing the line through the origin provides a more accurate value of the slope.
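
The numerical experiment described above can be sketched in a few lines of Python. This is a minimal reconstruction (the exact random seed and fitting code from the post are not given); it compares the RMS slope error of an ordinary least-squares fit against a fit forced through the origin, for data with no true offset:

```python
import numpy as np

# Monte Carlo version of the experiment: area A vs. r^2, true slope = pi,
# Gaussian noise added to A, and no true offset present.
rng = np.random.default_rng(0)
x = np.arange(1, 11, dtype=float) ** 2   # r = 1..10, fit A against r^2
sigma = 0.1                               # std dev of noise added to A

err_with, err_without = [], []
for _ in range(10000):
    A = np.pi * x + rng.normal(0.0, sigma, size=x.size)
    slope_b, _ = np.polyfit(x, A, 1)      # least squares with an intercept
    slope_0 = (x @ A) / (x @ x)           # least squares forced through the origin
    err_with.append(slope_b - np.pi)
    err_without.append(slope_0 - np.pi)

rms_with = float(np.sqrt(np.mean(np.square(err_with))))
rms_without = float(np.sqrt(np.mean(np.square(err_without))))
# When the model truly passes through the origin, the forced-origin
# fit yields the smaller RMS error, matching the numbers quoted above.
```

This reproduces the qualitative result claimed in the post: with no real offset in the data, dropping the intercept gives a tighter slope estimate.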
 
  • #48
So, imagine your stopwatch had a 0.1s delay before it starts counting and that is the only significant error in the experiment. An excellent straight line through the points would predict a permanent 0.1s offset at t=0. You have found in your experiments, apparently, that your other points ‘predict’ a zero crossing that’s at 0,0. Bells should ring and equipment examined, IMO.
You can either believe that your argument is simply ‘true’ or you can look deeper into your limited data set and find a good flaw in what you have been doing. It is always dangerous to rely on isolated experiments if you’re trying to argue with accepted (well founded) statistical theory.
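
The stopwatch thought experiment can be made concrete with a short sketch. The distances, noise level, and nominal 343 m/s speed of sound here are assumptions for illustration only:

```python
import numpy as np

# A stopwatch that starts 0.1 s late shifts every measured time
# by the same amount, i.e. a systematic offset on the time axis.
rng = np.random.default_rng(1)
v_true = 343.0                         # m/s, nominal speed of sound (assumed)
delay = 0.1                            # s, systematic stopwatch delay
d = np.arange(50.0, 550.0, 50.0)       # hypothetical path lengths in metres
t = d / v_true - delay + rng.normal(0.0, 0.002, size=d.size)  # measured times

slope_b, intercept = np.polyfit(t, d, 1)   # fit with an intercept
slope_0 = (t @ d) / (t @ t)                # fit forced through the origin
# slope_b stays near v_true because the delay lands in the intercept;
# slope_0 absorbs the delay into the slope and overestimates the speed.
```

The fit that retains the intercept flags the equipment problem (a non-zero intercept of roughly v_true × delay), while the forced-origin fit silently corrupts the slope.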
 
  • #49
sophiecentaur said:
So, imagine your stopwatch had a 0.1s delay before it starts counting and that is the only significant error in the experiment. An excellent straight line through the points would predict a permanent 0.1s offset at t=0. You have found in your experiments, apparently, that your other points ‘predict’ a zero crossing that’s at 0,0. Bells should ring and equipment examined, IMO.
You can either believe that your argument is simply ‘true’ or you can look deeper into your limited data set and find a good flaw in what you have been doing. It is always dangerous to rely on isolated experiments if you’re trying to argue with accepted (well founded) statistical theory.

If the theory has not been tested with real data sets, it should not be widely accepted.

As I said above, I've tested lots of real experimental data sets with and without an offset. If the experiment makes sense that the line goes through the origin, doing the fit without an offset most often provides better agreement with the known good value.

Your experiment is the one that is cherry picked, or at least it has little to do with most careful experiments. There was one occasion where we added an offset, but the physical justification was well understood. See: https://arxiv.org/ftp/arxiv/papers/1102/1102.1635.pdf

But back to the original experiment: the measured round-trip time is a time DIFFERENCE. A systematic vertical shift much larger than the random errors introduced by the measurement technique would be unexpected. Finding one would be evidence that the intended procedure was not carried out. The bad experiment should be repeated. Trying to fix it with an offset is just bad science. Can you say "fudge factor"?
 
  • #50
Dr. Courtney said:
So I just completed a numerical experiment. The the Area of a circle (A) vs. the square of the radius is a straight line through the origin with a slope of pi. Adding some Gaussian noise with a defined standard deviation will give values of the slope that differ a bit from pi.
Your numerical experiment doesn’t address the objection. The objection was about bias introduced into the slope when you forced a non-zero intercept to be zero.
 
  • #51
Dale said:
Your numerical experiment doesn’t address the objection. The objection was about bias introduced into the slope when you forced a non-zero intercept to be zero.

My experiment showed that the slopes are more accurate - in better agreement with the known good value.

If the experimental goal is a more accurate determination of the slope, the method works well. The purpose of the original experiment is an accurate determination of the speed of sound as the slope of distance vs time. My experiment supported that usage.
 
  • #52
Dr. Courtney said:
Your experiment is the one that is cherry picked,
Cherry picked, of course, but if you can't argue against it, then your other arguments fail.
Dale said:
when you forced a non-zero intercept to be zero
Forcing a zero is what you are doing - by introducing an extra point at the origin with no evidence. 0,0 has no more significance than any other arbitrary point in a data set.
If you had an experiment that, whatever you did, produced a value bang on the origin, then you would have discovered some non-linear function of your system. That would be fine, but in a simple system like transit times and distances you would have to conclude that something else was going on. The problem with your idea is that you need to ask yourself just how extreme the intercept would have to be before you would realize that things are not as simple as you might have hoped.

We're not really discussing your particular experiments as much as a general principle about the discipline of measurement, and you cannot just stick in numbers out of your head and expect the result to be valid. You cannot 'test' a theory by injecting numbers into your data which happen to follow that theory. That would be really bad science.
 
  • Like
Likes Dale
  • #53
Dr. Courtney said:
My experiment showed that the slopes are more accurate - in better agreement with the known good value.
Sure, but your experiment did not address @sophiecentaur's "interesting theory" that you were skeptical about.

So, I did my own numerical experiment which did address the "interesting theory". I did almost the same thing that you did. I simulated 10000 data sets, each with r incremented from 1 to 10 in steps of 1. I calculated ##r^2## and ##A_i=\pi r^2+b + \epsilon_i## with the "DC offset" ##b=0.1## and random noise ##\epsilon_i \sim \mathcal{N}(0,0.1)##, and I did linear fits both with and without an intercept. I then subtracted ##\pi## from each fitted slope to get the error in the slope and plotted the histograms below. The orange histogram is the error in the slope with the intercept; the blue histogram is the error in the slope without the intercept.

[Attached image: slopeerrors.png — histograms of the slope errors]


Note that there is substantial bias without the intercept. The no-intercept estimate is slightly more precise (as you discovered), but it is less accurate. The fit with the intercept is unbiased, while the no-intercept fit is biased, and the overall RMS deviation is greater for the no-intercept model.

The increased precision of the no-intercept estimate is deceptive: it does not, as you suggested, generally correspond to increased accuracy. Furthermore, because the intercept model is unbiased, the correct slope can be obtained simply by acquiring enough data, whereas (even in the limit of infinite data) the no-intercept model will not converge to the correct estimate.
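
A minimal sketch of this simulation (same parameters as described in the post; the histogram plotting step is omitted here):

```python
import numpy as np

# Reproduction of the simulation described above: A_i = pi*r^2 + b + eps_i
# with "DC offset" b = 0.1 and eps ~ N(0, 0.1); 10000 data sets, r = 1..10.
rng = np.random.default_rng(0)
x = np.arange(1, 11, dtype=float) ** 2
b, sigma = 0.1, 0.1

err_with, err_without = [], []
for _ in range(10000):
    A = np.pi * x + b + rng.normal(0.0, sigma, size=x.size)
    slope_b, _ = np.polyfit(x, A, 1)     # fit with an intercept
    slope_0 = (x @ A) / (x @ x)          # fit forced through the origin
    err_with.append(slope_b - np.pi)
    err_without.append(slope_0 - np.pi)

bias_with = float(np.mean(err_with))       # ~0: the intercept fit is unbiased
bias_without = float(np.mean(err_without)) # ~ b*sum(x)/sum(x^2): biased high
```

With a true offset present, the forced-origin slope is systematically biased away from pi, while the intercept model's slope error averages to zero, which is the point of the histograms.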

@sophiecentaur's concern is valid, as shown in a relevant simulation, and is one of the issues recognized and discussed in the literature I cited earlier.

Dr. Courtney said:
If the experimental goal is a more accurate determination of the slope, the method works well.
The method is not more accurate in general, particularly not in the actual scenario raised by @sophiecentaur.
 
Last edited:
  • Like
  • Informative
Likes davenn and sophiecentaur
  • #54
@Dale it doesn’t surprise me that injecting a data point that follows an accepted law will bias the measured data in that direction. But if there is a set of measurements of a system that, for good reasons, will not follow the law, then forcing a certain data point can fool the experimenter into thinking there is no anomalous behavior. I can’t see the point in that.
You might miss out on a new branch of research by trying to prove that present ideas are correct. It’s the anomalies that reveal new knowledge.
Pluto would not have been spotted if measurements of Neptune’s orbit had been frigged to follow Kepler perfectly.
 
  • Like
Likes davenn
  • #55
sophiecentaur said:
@Dale it doesn’t surprise me that injecting a data point that follows an accepted law will bias the measured data in that direction.

In the context of the original experiment, there is no assumption or injection of a data point assuming the original law. The only assumption is that for a time interval of ZERO, the distance sound travels can only be ZERO. So, Einstein is implicitly assumed - a signal cannot propagate faster than light. But there is no assumption that the relationship between distance and time is linear or that the slope of the line will be a certain value if it is linear.
 
  • #56
Dr. Courtney said:
In the context of the original experiment, there is no assumption or injection of a data point assuming the original law. The only assumption is that for a time interval of ZERO, the distance sound travels can only be ZERO. So, Einstein is implicitly assumed - a signal cannot propagate faster than light. But there is no assumption that the relationship between distance and time is linear or that the slope of the line will be a certain value if it is linear.
You are confusing the Law with the measurement system you have been using. No one doubts that a perfect experiment would give a perfect zero crossing. Would you dream of injecting a theoretical data point elsewhere?
The consequence of your method would be to upset the whole of Science.
Your method, whether numerical or experimental, needs examination to identify any bias before you start changing the textbooks. Where would you stop?
 
  • Like
Likes Dale and davenn
  • #57
@Dr. Courtney Let's be practical here. Where and how were your distances measured? How big was your sound source (I mean treating it as a real wave source with a real extent)? What was the minimum distance that you reckon you could measure? Diffraction effects will be present around the source and detector to upset your zero distance measurement.
Many people post on PF and imply that they have found holes in accepted science. This seems to be just another example. Rule number one is to doubt yourself before you doubt Science. Only after extended work on a topic can anyone be sure that a change of model is justified. Read @Dale 's detailed post (carefully) to see the effect on accuracy when you do your sleight of hand trick.
 
  • Like
Likes davenn
  • #58
sophiecentaur said:
Many people post on PF and imply that they have found holes in accepted science. This seems to be just another example. Rule number one is to doubt yourself before you doubt Science. Only after extended work on a topic can anyone be sure that a change of model is justified.

And yet that is what you suggest for fits to power laws such as Kepler's third law. I have seen many published papers fitting data to power laws, including Kepler's third law. I don't recall any of them adding a third parameter (vertical shift) to their model. Yet you are insisting that a change of model is justified. A change of model is only justified by more accurate results, not by theoretical arguments like those you and Dale are making.

Consider the graph below showing an analysis of Robert Boyle's original data published in support of Boyle's law. Fitting these data to a traditional power law yields better agreement with the "known good" value for the exponent. Adding a third adjustable parameter (the vertical shift) gives a higher r-squared and a lower chi-square, but it gives a less accurate value for the exponent in both the condensation and rarefaction cases. I prefer the method that gives an error of 0.004 over the one with an error of 0.089 (rarefaction case), and the method that gives an error of 0.002 over the one that gives an error of 0.01 (condensation case).
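
The comparison can be sketched numerically. This uses synthetic Boyle-like data (V = a/P plus Gaussian noise), not the original 1662 measurements, and assumes the true vertical shift is zero; the values of a, the pressure range, and the noise level are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def power(x, a, b):
    """Traditional two-parameter power law through the origin."""
    return a * np.power(x, b)

def power_shifted(x, a, b, c):
    """Power law with an added vertical shift."""
    return a * np.power(x, b) + c

rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 20)            # hypothetical pressures (arbitrary units)
a_true, b_true, sigma = 100.0, -1.0, 0.5  # Boyle-like law V = a/P, plus noise

errs2, errs3 = [], []
for _ in range(500):
    y = power(x, a_true, b_true) + rng.normal(0.0, sigma, x.size)
    p2, _ = curve_fit(power, x, y, p0=[100.0, -1.0])
    p3, _ = curve_fit(power_shifted, x, y, p0=[100.0, -1.0, 0.0])
    errs2.append(p2[1] - b_true)
    errs3.append(p3[1] - b_true)

rms2 = float(np.sqrt(np.mean(np.square(errs2))))  # exponent RMS error, no shift
rms3 = float(np.sqrt(np.mean(np.square(errs3))))  # exponent RMS error, with shift
```

When the true shift really is zero, the extra parameter only inflates the scatter in the fitted exponent, consistent with the errors quoted above; as the earlier simulation in this thread shows, that advantage disappears when a real offset is present in the data.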

[Attached image: Boyle 1662.png — fits to Boyle's 1662 data with and without a vertical shift]
 
  • #59
Dr. Courtney said:
Change of models is only justified from more accurate results,
You are confusing precision and accuracy. Removing the intercept is more precise, but it is not generally more accurate, because it can introduce bias when there is an offset. Furthermore, an imprecise but unbiased method is preferable to a precise but biased method, because imprecision can be overcome simply by acquiring more data while bias cannot.
 
  • Informative
Likes davenn
  • #60
Also consider the inaccuracies in the parameters introduced by adding a vertical shift to the power law as applied to Kepler's Third Law. Adding the vertical shift yields less accurate determinations of both the lead coefficient and the exponent, for both Kepler's original data and the modern data. I don't care about theoretical arguments about whether a vertical shift will yield more accurate (or unbiased) parameter values; forcing the power law model through the origin actually does provide more accurate parameter values.

[Attached image: Kepler 3rd.png — power-law fits to Kepler's Third Law data with and without a vertical shift]
 
