Direct Echo-Based Measurement of the Speed of Sound - Comments

  • #76
Dr. Courtney
Education Advisor
Insights Author
Gold Member
3,242
2,381
Sure, but why pick that particular model? How do you decide your model needs exactly one improvement, and that this is it?
The point I doubt is that the direct proportion is unique among power laws in needing (or benefiting from) an added constant. Every power law can be expressed as a direct proportion with a suitable change of variables. Does that mean every such case needs this approach? Is this really how the most accurate masses of the earth and sun need to be determined? Even if it were so, does that make adding the constant appropriate for models in intro physics courses, when testing the hypothesis and accomplishing the learning objectives can be done adequately without it?

Even if it were true that "every direct proportion needs an added constant in the analysis" (which I doubt), certainly this is a refinement that may be ignored for most intro physics labs (kinda like neglecting air resistance and other common simplifications).
 
  • #77
29,946
6,335
Perhaps we should think of the model as representing the physical phenomenon as well as the error mechanisms.
That is a good approach. We discussed this briefly earlier in the context of Occam's razor. It doesn't make a difference for Occam's razor if you have a simple effect model and a complicated error model or a complicated effect model and a simple error model. But if you prefer the complicated error model for other reasons then that is fine.
 
  • #78
29,946
6,335
Sure, but why pick that particular model? How do you decide your model needs exactly one improvement, and that this is it?
Because you can demonstrate that if you don't make that one specific improvement then your other parameter estimates can become biased (among other specific considerations mentioned earlier). Adding other higher-order terms is not similarly justifiable.

Personally, I like Bayesian statistics where this type of concern is directly addressable. Bayesian model comparison allows you to decide naturally between a more or less complicated model. Of course, those techniques are sensitive to your priors, which is not necessarily damning, but does require care.

With frequentist statistics there are other model comparison techniques which can be used. The BIC, in particular, helps guard against overfitting.
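For concreteness, here is a minimal sketch in Python (numpy only, with invented echo-timing numbers rather than anything from this thread) of what such a BIC comparison looks like for a straight-line fit with and without an intercept. The lower BIC is preferred; with only a handful of points the extra parameter has to earn its ln(n) penalty by reducing the residuals enough.

[CODE=python]
import numpy as np

# Hypothetical echo-timing data (round-trip distance in m, time in s); values invented for illustration
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0031, 0.0060, 0.0089, 0.0118, 0.0148])
n = len(x)

def bic(rss, n, k):
    # BIC for a least-squares fit with Gaussian errors, up to an additive constant:
    # n*ln(RSS/n) + k*ln(n), where k is the number of fitted parameters
    return n * np.log(rss / n) + k * np.log(n)

# Model 1: y = m*x (direct proportion, 1 parameter)
m1 = np.linalg.lstsq(x[:, None], y, rcond=None)[0][0]
rss1 = np.sum((y - m1 * x) ** 2)

# Model 2: y = m*x + b (line with intercept, 2 parameters)
m2, b2 = np.linalg.lstsq(np.column_stack([x, np.ones(n)]), y, rcond=None)[0]
rss2 = np.sum((y - (m2 * x + b2)) ** 2)

print(f"no intercept:   BIC = {bic(rss1, n, 1):.2f}")
print(f"with intercept: BIC = {bic(rss2, n, 2):.2f}")
[/CODE]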
 
Last edited:
  • #79
29,946
6,335
this is a refinement that may be ignored for most intro physics labs
I do agree completely with this. It should be handled in a dedicated statistics class, not an intro physics class.

I just disagree that instructing students to click on the “fit with no intercept” button is ignoring the issue. Quietly leaving it at the default value without comment would be ignoring the issue.
 
  • #80
Dr. Courtney
Education Advisor
Insights Author
Gold Member
3,242
2,381
I do agree completely with this. It should be handled in a dedicated statistics class, not an intro physics class.

I just disagree that instructing students to click on the “fit with no intercept” button is ignoring the issue. Quietly leaving it at the default value without comment would be ignoring the issue.
But is it even needed to do a least squares fit? The traditional approach of computing a proportionality constant for each data point (and then averaging them, perhaps computing a standard error if the level of the course warrants it) ALSO completely ignores a potential offset. Is it preferable, for the science or the pedagogy, to abandon this approach in favor of a least squares fit with an offset? I actually had students do it two ways (compute speeds from each trial, average them and compute the SEM) AND compute a linear fit without the offset. Adding the offset increases the estimated uncertainty, and yields a slope value further from the average of the individual trials AND further from the predicted speed of sound based on temperature.

I don't take scientific or teaching advice from default software settings, and to be honest I had forgotten what the default was for the LINEST spreadsheet command until I checked. Since I want students to see the error estimate for the slope, I teach them to type the whole command =LINEST(Y array, X array, 0, 1). One changes the 0 to a 1 to add an offset. To have the software use the default, one types =LINEST(Y array, X array, , 1). It's my preference as a teacher to instruct students in what all the arguments are and to use them in spreadsheet calls. It's also my preference to give due consideration to the available estimates of uncertainty (which are always LARGER when adding the offset to the speed of sound analysis). In an experiment with only five data points, adding a second adjustable parameter almost always significantly increases the uncertainty of the slope when the offset is statistically no different from zero. If there is confidence in the experimental design that there is no significant offset, and if this confidence is supported by trying an offset with data sets from pilot work, then the offset is not only unnecessary, it is a bad idea.
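For readers without a spreadsheet open, here is a rough Python equivalent of those two LINEST calls (a sketch with invented numbers, not the actual class data). It fits the same five points with and without the offset and prints the slope standard error for each, which makes the increase in slope uncertainty from the extra parameter easy to see.

[CODE=python]
import numpy as np

# Hypothetical echo data: x = round-trip distance (m), y = measured time (s); values invented
x = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([0.0060, 0.0119, 0.0177, 0.0234, 0.0292])
n = len(x)

# Fit through the origin, like =LINEST(Y array, X array, 0, 1)
m0 = np.sum(x * y) / np.sum(x * x)
rss0 = np.sum((y - m0 * x) ** 2)
se_m0 = np.sqrt(rss0 / (n - 1) / np.sum(x * x))

# Fit with an offset, like =LINEST(Y array, X array, 1, 1)
m1, b1 = np.polyfit(x, y, 1)
rss1 = np.sum((y - (m1 * x + b1)) ** 2)
sxx = np.sum((x - x.mean()) ** 2)
se_m1 = np.sqrt(rss1 / (n - 2) / sxx)

print(f"no offset:   slope = {m0:.6f} +/- {se_m0:.6f} s/m")
print(f"with offset: slope = {m1:.6f} +/- {se_m1:.6f} s/m, offset = {b1:.6f} s")
[/CODE]

With only five points, the two-parameter fit typically reports a noticeably larger slope uncertainty, which is the effect described above.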
 
  • #81
Vanadium 50
Staff Emeritus
Science Advisor
Education Advisor
2019 Award
25,119
8,233
Because you can demonstrate that if you don't make that one specific improvement then your other parameter estimates can become biased (among other specific considerations mentioned earlier).
Would you make the same argument if this were an Ohm's Law measurement? If you want to make the argument on mathematical grounds, I think you have to. Then you have to decide whether you want a model with current spontaneously flowing at zero voltage or a minimum voltage below which current doesn't flow.

If you're arguing you want a different model in the two cases, then you're agreeing with me: statistics won't tell you the model to use; it will only tell you how well it fits.
 
  • #82
29,946
6,335
But is it even needed to do a least squares fit? The traditional approach of computing a proportionality constant for each data point (and then averaging them, perhaps computing a standard error if the level of the course warrants it) ALSO completely ignores a potential offset.
That would be fine in my opinion. It would defer instruction about statistics to a dedicated statistics class, where the statistical issues can be presented in appropriate depth.
 
  • #83
29,946
6,335
Would you make the same argument if this were an Ohm's Law measurement? If you want to make the argument on mathematical grounds, I think you have to.
Absolutely. As you say, this is a mathematical issue not a physical issue.

If you're arguing you want a different model in the two cases, then you're agreeing with me: statistics won't tell you the model to use; it will only tell you how well it fits.
Well, I am not arguing that I want a different model, but I do agree that statistics won’t tell you the model to use.

It can, however, tell you the possible failure modes of different methods. I would much rather lose a little precision than risk introducing bias. Precision can be “fixed” with additional data, bias cannot.
 
Last edited:
  • #84
Dr. Courtney
Education Advisor
Insights Author
Gold Member
3,242
2,381
That would be fine in my opinion. It would defer instruction about statistics to a dedicated statistics class, where the statistical issues can be presented in appropriate depth.
Not at all. In two semesters of my lab physics courses, there will still be lots of least squares fitting to models that are functions other than direct proportions: over 10 cases. I suspect students would find it odd that we use least-squares fitting so often for other cases but avoid it for direct proportions. The smart ones would want to know why. Further, I've taught both intro and intermediate college statistics courses. Those courses tend to be packed with too much other material to spend much time on the theoretical development of least squares. The question of whether and why direct proportions are unique among functions in needing a vertical offset is far enough into the weeds that most instructors are not going to spend much time on it.

There are an infinite number of cases where fitting without the intercept is OK. There are also an infinite number of cases where fitting without the intercept introduces bias. This is what the "tired theoretical considerations" that you want to ignore say, so your presented data is not evidence contrary to the accepted theory. Your data is a non-random sample from the set of all experiments testing a physical theory having no intercept.

You have an alternative theory that you have not clearly formulated, but it seems to be along the lines of "as long as my physical theory has no intercept, it is preferable for my statistical model to also have no intercept". Your data also supports this theory, but my data posted earlier contradicts it. (Please feel free to express your theory clearly in your own words.)

So, together we have a set of data presented in this thread, yours and mine, that is consistent with the standard and well-known "tired theoretical considerations" you dismiss, but is inconsistent with the idea that it is generally safe to fit statistical models without intercepts given a physical model with no intercept. If you really wish to be scientific and if you really wish to rely on experiment, then the correct conclusion supports the standard statistical theory, which urges caution in fitting no-intercept models and explains possible failure modes and their causes. Your desire to ignore such established knowledge is not scientific at all.
It is misrepresenting my position to assert that I've dismissed the "tired theoretical considerations." But my approach to science is to hold theoretical assertions tentatively and keep on the lookout for real data sets against which to test them. I've given two cases of experiments where a vertical offset was needed and added due to known measurement challenges even though the physical theory goes through the origin.

Your assertion seems to be "one cannot know whether adding a constant will improve accuracy, so it should always be used to avoid the risk of introducing bias."

My assertion is that "the careful experimenter or data analyst can make a carefully considered choice of whether to fit with an added constant (offset), and often achieve a more accurate value for the slope for real experimental data without a significant risk of achieving a less accurate value." - The Dr. Courtney hypothesis.

Thanks to your simulation, I now see that simulated data sets won't do to arbitrate between our positions, since a choice is always made whether to add an offset in generating the data set. Real experimental data is needed that the experimenter or analyst believes does not have systematic errors large enough to require a vertical offset. Simulations might be useful in relating the magnitude of the systematic offset to the random noise, to see where adding the offset begins to be required to recover the more accurate slope.

Arbitrating between our positions with experimental data requires data sets with known good slopes. Inclusion of a data set also requires that the experimenter or analyst understand the physical system (including the measurement system) and determine that the data set is a good choice for fitting without an offset. The crux of using real data to test my hypothesis is how to define "known good slope." So far, my approach has been to accept slopes as known good if they are known with greater accuracy than the analysis of the available data is likely to produce. The density of distilled water meets this criterion, as does pi. In V50's example of Ohm's law, using a resistance determined on a much more accurate ohmmeter would be a "known good." I've got some data sets from four or five different electronic balances. One might use the slope obtained from the most accurate electronic balance as the "known good."

You seem to be open to the idea of also using the average of slopes determined from individual measurements, since I don't recall you claiming that getting the slope this way is "biased." Error estimates from this process are usually comparable with the uncertainty of the best fit slope without the offset. So it depends on whether the selection criterion allows unbiased values of comparable accuracy or whether it demands that the known good value be significantly more accurate. Many more data sets are available if this kind of data is included.
 
Last edited:
  • #85
29,946
6,335
The question of whether and why direct proportions are unique among functions in needing a vertical offset is far enough into the weeds that most instructors are not going to spend much time on it.
They are not. Any OLS linear fit needs the intercept term. I am not sure why you believe that. As far as I know it is not supported in the literature.

My assertion is that "the careful experimenter or data analyst can make a carefully considered choice of whether to fit with an added constant (offset), and often achieve a more accurate value for the slope for real experimental data without a significant risk of achieving a less accurate value." - The Dr. Courtney hypothesis.
How do you know if your "carefully considered choice" is correct? Particularly in the general case without a "gold standard" reference to fall back on.

Real experimental data is needed that the experimenter or analyst believes does not have systematic errors large enough to require a vertical offset.
Here is one such example: https://arc.aiaa.org/doi/10.2514/1.B36120

See in particular their figure 19. The experimenters did not believe that they required a vertical offset. They have good theoretical reasons to believe that 0 power would produce 0 thrust, every bit as valid as your 0 volume is 0 weight and 0 distance is 0 time. But the data clearly should be fit to a model with an intercept, and the no-intercept slope is clearly biased positive.
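For anyone who wants to see the bias mechanism itself rather than that particular data set, here is a small toy simulation in Python (invented numbers, not the thruster data): the true relation has a small positive offset, and the fit forced through the origin systematically overestimates the slope, while the fit with an intercept does not.

[CODE=python]
import numpy as np

rng = np.random.default_rng(0)

# Toy model: true relation y = 2.0*x + 0.5 plus Gaussian noise.
# The 0.5 stands in for an instrumental zero error the experimenter did not expect.
true_slope, true_offset = 2.0, 0.5
x = np.linspace(1.0, 10.0, 10)

slopes_origin, slopes_intercept = [], []
for _ in range(2000):
    y = true_slope * x + true_offset + rng.normal(0, 0.3, size=x.size)
    slopes_origin.append(np.sum(x * y) / np.sum(x * x))  # forced through the origin
    slopes_intercept.append(np.polyfit(x, y, 1)[0])       # ordinary fit with intercept

print("true slope:                 ", true_slope)
print("mean slope, no intercept:   ", np.mean(slopes_origin))     # biased high
print("mean slope, with intercept: ", np.mean(slopes_intercept))  # centered on the true value
[/CODE]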
 
Last edited:
  • #86
Dr. Courtney
Education Advisor
Insights Author
Gold Member
3,242
2,381
They are not. Any OLS linear fit needs the intercept term.
So casting power laws as linear fits with a transformation of variables requires an offset, but performing a NLLS fit on the original functional form does not? Testing Kepler's Third Law as T = k a^1.5 needs an offset if the exponent is fixed at 1.5, but not if it is allowed to vary? The OLS model should be T = k a^1.5 + c, but the NLLS model can be T = k a^n?

How do you know if your "carefully considered choice" is correct? Particularly in the general case without a "gold standard" reference to fall back on.
If having an offset of zero works in most cases of a "carefully considered choice" when there is a known good value, then there is no data to support the hypothesis that it is suddenly going to introduce significant errors in cases without a known good value. In every case, it will be possible to compare the slope of the best fit line with the value obtained from averaging the ratios. In every case it will also be possible to include the offset term and see if a fit yields a value significantly different from zero.
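As a sketch of what that last check could look like in practice (a rough illustration with an invented data set, not a prescription), one can fit with the offset and compare the fitted intercept to its standard error with a t-test:

[CODE=python]
import numpy as np
from scipy import stats

def intercept_check(x, y, alpha=0.05):
    # Fit y = m*x + b and test whether b differs significantly from zero
    n = len(x)
    m, b = np.polyfit(x, y, 1)
    rss = np.sum((y - (m * x + b)) ** 2)
    s2 = rss / (n - 2)                                      # residual variance
    sxx = np.sum((x - x.mean()) ** 2)
    se_b = np.sqrt(s2 * (1.0 / n + x.mean() ** 2 / sxx))    # standard error of the intercept
    p = 2 * stats.t.sf(abs(b / se_b), df=n - 2)
    return m, b, se_b, p, p < alpha

# Invented example data:
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
m, b, se_b, p, significant = intercept_check(x, y)
print(f"slope = {m:.3f}, intercept = {b:.3f} +/- {se_b:.3f}, p = {p:.3f}, significant: {significant}")
[/CODE]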

Here is one such example: https://arc.aiaa.org/doi/10.2514/1.B36120

See in particular their figure 19. They have good theoretical reasons to believe that 0 power would produce 0 thrust, every bit as valid as your 0 volume is 0 weight and 0 distance is 0 time. But the data clearly should be fit to a model with an intercept and the no-intercept slope is clearly biased positive.
Two points: 1) Data with vertical error bars that large is not very useful for testing a hypothesis of direct proportion or obtaining accurate slopes. Intro physics labs with error bars that large are either poorly designed or poorly performed, or both. We need to teach greater experimental care. 2) Most of the time experimenters will realize the weaknesses in their physical system or data and choose the right model.

Consider the paper below that I co-authored. Someone, somewhere may expect that the bullet energy with zero gunpowder would be zero and force a fit through the origin, introducing a bias in the slope. We knew the physical process required energy to be expended overcoming barrel friction, so we allowed a y-intercept to estimate the lost energy. Our measurements were sufficiently accurate for this procedure to work.
https://apps.dtic.mil/dtic/tr/fulltext/u2/a555779.pdf

In any event, the Dr. Courtney hypothesis is testable in a straightforward manner. The open question is whether the average of ratios can reasonably serve as a "known good" value. I'm doing some pilot work that suggests it can. In fact, I'm finding cases where the average of ratios is a much better estimate of the slope than the slope obtained by OLS with or without the offset.

If you are offering your assertion as a "theoretical truth" that is not experimentally testable, then I'll simply point out that this would make it unscientific. If we don't test our assertions against real-world data, we are only doing math.
 
  • #87
Dr. Courtney
Education Advisor
Insights Author
Gold Member
3,242
2,381
[Attached plot: Kepler 3rd Linear.png]


This is one of the cases where the average ratio (A^1.5)/T is much closer to the known good value (1.0000) than the slope obtained by OLS either with or without the offset. I'm expecting a trend where this will usually be true when the data set spans several orders of magnitude (a factor of 1000 in this case), since OLS will tend to weight the higher values more heavily by minimizing the squared error. In contrast, computing the average value of the ratio weights each data point in the set equally. For data sets that cover closer to 1 order of magnitude (a factor of 10), the trend I'm seeing is that the average ratio has accuracy comparable to the best fit slope (without an offset).

For both Kepler's original data and the modern data, OLS without the offset produces slopes closer to the known good value than OLS with the offset. And for both data sets, the offset obtained from the OLS fit is statistically no different from zero.

This raises an interesting question. For hundreds of years before OLS, scientists used ratios and their averages to estimate the leading constant in proportions. Do we need to be concerned that this carries a risk of "bias"? Is it a useful exercise to perform an OLS fit with an added offset as a test for bias, as I have done above for Kepler's law?
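As a rough illustration of the weighting point above (a Python sketch using approximate modern semi-major axes and a few percent of invented relative noise, not the actual data behind the plot above), one can compare the average of the ratios with the OLS slopes when the data span about three orders of magnitude:

[CODE=python]
import numpy as np

rng = np.random.default_rng(1)

# Approximate semi-major axes of the eight planets in AU; in these units the
# periods in years follow T = a**1.5 with the constant equal to 1.
a = np.array([0.39, 0.72, 1.00, 1.52, 5.20, 9.58, 19.2, 30.1])
x = a ** 1.5
T = x * (1 + rng.normal(0, 0.02, size=a.size))  # invented few-percent relative errors

ratio_avg = np.mean(T / x)                       # each point weighted equally
ols_origin = np.sum(x * T) / np.sum(x * x)       # dominated by the outer planets
ols_slope, ols_offset = np.polyfit(x, T, 1)      # fit with an added offset

print("average of ratios: ", ratio_avg)
print("OLS, no offset:    ", ols_origin)
print("OLS, with offset:  ", ols_slope, "(offset:", ols_offset, ")")
[/CODE]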
 
  • #88
Dr. Courtney
Education Advisor
Insights Author
Gold Member
3,242
2,381
I posted these some time ago, but it bears repeating that while the statistics literature recommends due care in doing OLS fits through the origin, the practice is supported in some cases:


In certain circumstances, it is clear, a priori, that the model describing the relationship between the independent variable and the dependent variable(s) should not contain a constant term and, in consequence, the least squares fit needs to be constrained to pass through the origin.
(H. A. Gordon, The Statistician, Vol. 30, No. 1, 1981)

There are many practical problems where it is reasonable to assume the relationship of a straight line passing through the origin ... (M. E. Turner, Biometrics, Vol. 16, No. 3, 1960)

This article describes situations in which regression through the origin is appropriate, derives the normal equation for such a regression and explains the controversy regarding its evaluative statistics. (J. G. Eisenhauer, Teaching Statistics, Vol. 25, No. 3, 2003)
 
