Direct Echo-Based Measurement of the Speed of Sound - Comments

In summary: downloading real data acquired from a third party is a different (better) type of lab than a computer simulation, and the three challenges in typical introductory physics labs are connecting learning objectives with experiments, ensuring accuracy, and keeping the "Gee Whiz" factor high.
  • #1
Dr. Courtney
Greg Bernhardt submitted a new PF Insights post

Direct Echo-Based Measurement of the Speed of Sound
[Image: speedofsound2.png]


Continue reading the Original PF Insights Post.
 

  • #2
That is fun! Not often that you get to set off fireworks for science.
 
  • #3
Dale said:
That is fun! Not often that you get to set off fireworks for science.

Yep. I'm actually going to use "Chemistry of Pyrotechnics" to put together a few labs for next year (supposing the local school is pleased enough to let me coordinate a few labs for them again.)

The challenge with these deals is not making them fun. That's a given. The challenge is connecting the "Gee Whiz" part of it to some interesting science in a way that tests a hypothesis reasonably within the learning objectives and in the Goldilocks zone (not too hard, not too easy, just right.)

It's easy to pretend one is doing science when all the students remember is the "Gee Whiz" and no one remembers the learning objectives.
 
  • #4
Dr. Courtney said:
It's easy to pretend one is doing science when all the students remember is the "Gee Whiz" and no one remembers the learning objectives.
I think that is a succinct summary of the problem with pop-sci presentations. It is good that you are focusing on more than just the fun, but including both fun and learning objectives.
 
  • #5
Dale said:
I think that is a succinct summary of the problem with pop-sci presentations. It is good that you are focusing on more than just the fun, but including both fun and learning objectives.

In a paper coming out this fall in TPT, colleagues and I identified three challenges in the typical introductory physics lab design:

1) simple experiments connected with learning objectives
2) experiments sufficiently accurate that comparisons between theory and measurement do not leave gaps which students ascribe to confounding factors (imperfect simplifying assumptions, measurement uncertainties, and “human error”), and
3) experiments capturing student attention to ensure due diligence in execution and analysis.

So that can be summarized in three goals: 1) learning objectives, 2) accuracy (I like 1%), and 3) Gee Whiz factor. I like the firecracker echo experiment because it has all three (which is rare), plus a 4th that is often a constraint: 4) cheap.

I've been working a lot this past year with a number of resource-constrained schools: home schools, private schools, foreign schools, and public schools in underfunded districts. Sometimes it feels like it comes down to:
A) What interesting things can you do with a microphone as an accurate timer?
B) What interesting kinematics can you catch with an available video camera and analyze in Tracker? (Or otherwise use the camera as a timer to 1/30 sec)
C) What "virtual" labs can you do by downloading historically important or other interesting data (Boyle, Kepler, etc.)?

I've got mixed feelings about calling an analysis activity a real "laboratory" if someone else did the experiment and collected the data. But these can have a hypothesis, a quantitative test of the hypothesis, data analysis, and a traditional lab report. I wouldn't want a lab program to rely too heavily on these, but better than skipping labs completely due to resource constraints.
 
  • #6
A very cheap way that has good accuracy / consistency is to stand at a distance from a large wall and use a hammer to hit a metal object. That much is obvious so far. The clever bit is to strike the metal exactly when you hear the echo, and repeat. You repeat until you are accurately in sync with the echo pulses. Then you measure the time for 10, 20 or more echoes. The accuracy gets better and better with more pulses.
A classic integration method to average out errors; ms timing accuracy is possible with enough pulses.
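To turn the synced-echo idea into a number, here is a minimal sketch of the arithmetic in Python; the distance, pulse count, and stopwatch reading are all invented for illustration:
Code:
# Synced-echo method: all numbers below are assumed, not measured.
distance_to_wall = 42.5   # m, distance from striker to wall (assumed)
n_pulses = 50             # hammer strikes kept in sync with the returning echoes
total_time = 12.4         # s, stopwatch time spanning all n_pulses periods (assumed)

period = total_time / n_pulses                  # time for one echo round trip
speed_of_sound = 2 * distance_to_wall / period  # each round trip covers 2x the distance
print(f"Estimated speed of sound: {speed_of_sound:.1f} m/s")  # ~343 m/s for these numbers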
 
  • #7
Dr. Courtney said:
I've got mixed feelings about calling an analysis activity a real "laboratory" if someone else did the experiment and collected the data.
It really worries me that students seem to confuse simulation with reality all the time. It's the Star Trek effect. They ask why their simulation is not giving the answers they expect. It's GIGO without having any way of chasing the fault in the model. A simulation is so much cheaper than hardware, and you don't need lab space nor need to tidy up for the next class. You can see why 'the system' likes to encourage it.
 
  • #8
sophiecentaur said:
It really worries me that students seem to confuse simulation with reality all the time. It's the Star Trek effect. They ask why their simulation is not giving the answers they expect. It's GIGO without having any way of chasing the fault in the model. A simulation is so much cheaper than hardware, and you don't need lab space nor need to tidy up for the next class. You can see why 'the system' likes to encourage it.

I consider downloading real data acquired from a third party as a different (better) class of lab than computer simulations. For example, last year, I had a physical science class download and analyze both Brahe's original data and modern data for testing Kepler's third law. Later, (for a different lab), I had them download available orbital data for Earth satellites to test Kepler's third law in that system. I had a physics class analyze Robert Boyle's original data (from his historical publication) to test Boyle's law.

In my view, these labs are not as good as real, hands-on experiments where students acquire the data themselves. But they do more accurately represent the scientific method by comparing predictions from proposed models (usually the hypothesis) against _real_ experimental or observational data. There are many historical cases where science really works this way - a model is validated against data acquired by a different party.

In contrast, testing a predictive model or hypothesis against a simulation is not a version of the scientific method that I think we should be teaching in introductory labs. That's not how the scientific method really works, and using simulations for labs runs a significant risk of confusing students about the scientific method itself.
 
  • #9
sophiecentaur said:
A very cheap way that has good accuracy / consistency is to stand at a distance from a large wall and use a hammer to hit a metal object. That much is obvious so far. The clever bit is to strike the metal exactly when you hear the echo, and repeat. You repeat until you are accurately in sync with the echo pulses. Then you measure the time for 10, 20 or more echoes. The accuracy gets better and better with more pulses.
I encountered an equivalent phenomenon several years ago while walking on a local college campus. I passed between a blank wall of a building and a pulsating garden sprinkler. My left ear heard the sprinkler, which produced a psst sound as it spurted about four times a second. My right ear heard the echo off of the building. I was able to position myself so I heard both sounds simultaneously. I realized that I was hearing the direct sound of the nth spurt and the echo of the (n-1)th spurt. Given the period of the sprinkler spurts and the distance from the sprinkler to the wall, I could get the speed of sound.

If I could get my students access to that setup, I'd ask them to predict where the sound and echo are heard simultaneously, and design the experiment to test the prediction.
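A minimal sketch of that estimate, assuming the listener stands on the line between the sprinkler and the wall (so the echo path exceeds the direct path by twice the listener-to-wall distance); the period and position are invented numbers:
Code:
# Sprinkler-echo coincidence: all numbers below are assumed for illustration.
spurt_period = 0.25      # s, ~4 spurts per second (assumed)
listener_to_wall = 43.0  # m, position where direct sound and echo coincide (assumed)

# Hearing the nth direct spurt together with the echo of the (n-1)th spurt means
# the extra path equals the distance sound travels in one period:
# 2 * listener_to_wall = v * spurt_period
v = 2 * listener_to_wall / spurt_period
print(f"Speed of sound ~ {v:.0f} m/s")  # ~344 m/s for these numbers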
 
  • #10
Fun experiment. Sure to get the attention of the kids.

A few criticisms of the write up.
When fitting to a trendline in graph.exe, we were sure to check the box to set the vertical intercept to zero, as the hypothesis predicts not only a linear relationship, but also a vertical intercept of zero (a direct proportionality.)

Inductive thinking. It seems that you have 5 DATA points. The origin is not a data point; it is part of the hypothesis you are supposed to be testing.

You are not "fitting to a trendline"; you are fitting a trendline to the data. The use of the term "trend" is not appropriate either: you are fitting a linear model to the data.

Inspection of Figure 1 shows that the hypothesis was supported.
To a large degree you induced this result. It is not good teaching to suggest this "supported" the hypothesis.

If there were a finite intercept from the experiment, this could then be a point of discussion about why it varied from what was expected. It may even be worth trying to induce this.

I find it odd that there is not a single mention of measurement uncertainty. Distance, time, accuracy of determining the exact time of the two events from the noisy sound recording. How the number of data points affects confidence in the slope.

The statistics of 5 points are not the experimental uncertainty, and the false data point skews the stats.

No mention of how graph.exe fits the "trendline" (OLS, it seems). No mention of dependent and independent variables, nor of the requirement in using OLS that only the dependent variable has significant experimental error.

Since distance is the controlled variable here, it should be plotted on the x-axis, not y, and the least squares is not being correctly applied as done.

The data here are quite tight and it does not induce a large error. However, where data are more spread out ( larger x and y errors ) there is what is called regression dilution and the slope is under-estimated by OLS. This is one reason why there could be a finite intercept when a zero intercept is expected. I have seen a whole room of Maths PhDs spend an afternoon faced with such an issue and not one of them knew where it came from. The slope was visibly wrong but they could not understand why.
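To make the regression-dilution point concrete, here is a minimal sketch with synthetic data (none of it from the experiment under discussion); with error added to the x variable, the OLS slope of y on x comes out systematically below the true value:
Code:
# Regression dilution demo on made-up data: noise in x biases the OLS slope toward zero.
import numpy as np

rng = np.random.default_rng(0)
true_slope = 2.0
x_true = np.linspace(0, 10, 200)
x_obs = x_true + rng.normal(0, 1.0, x_true.size)  # error in the "independent" variable
y_obs = true_slope * x_true + rng.normal(0, 1.0, x_true.size)

slope, intercept = np.polyfit(x_obs, y_obs, 1)    # ordinary least squares of y on x
print(f"true slope {true_slope}, OLS estimate {slope:.2f}")  # estimate < 2 on average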

I hope these comments can be used to improve the presentation and increase its educational value.
 
  • #12
No, but you can do things properly, so that attentive students can pick things up correctly, rather than showing them bad ways of doing stuff. There are several things which need correcting here.

This is not time series data. Time is the dependent variable and should be plotted on the y axis.
The text underlines that care was taken to ensure the software was forced to go through the origin. This is totally wrong. It then incorrectly claims that this "supports the hypothesis" that it should go through the origin.

It would also be good practice to publish a table with the experimental data. That would not take much space in this case.

The idea of this experiment is great from an educational point of view. I hope Dr Courtney will be motivated to improve this write-up a bit.
 
  • #13
fizzy said:
Time is the dependent variable and should be plotted on the y axis.
That is purely a convention; in relativity, time is conventionally the independent variable and is plotted on the vertical axis. There is nothing that requires one axis to be dependent and the other independent.

Plotting it this way makes calculating the speed of sound easier, which was the main point of the lab. So setting the dependent variable on the horizontal axis is in fact a better choice for this experiment than following the arbitrary convention.

fizzy said:
The text underlines that care was taken to ensure the software was forced to go through the origin. This is totally wrong.
I agree with you on this, but teaching the students why belongs in a statistics class. Same with the fact that regression of x vs y is different from y vs x.
 
  • #14
The notion that the trendline goes through the origin is supported in lots of ways without assuming a direct proportionality between distance and time. I explain it to students this way: the only possible distance any signal can travel in zero time is zero distance. If time permits, when we do this experiment in class, I'll also have the students try a power law fit to the data. This also enforces the physical constraint of going through the origin, but the varying power ends up very close to 1. When physical considerations demand that a mathematical relationship goes through the origin, there is no need to add a variable vertical shift artificially.

This lab is designed for students anywhere from 9th grade Physical Science to 1st year college Physics. It's up to the teacher to adapt the details to the available time given the needs and abilities of the students. One can do a lot more in a 3 hour college Physics lab. The version presented in the Insight article was completed in a single hour with a 9th grade Physical Science class with very weak math skills.
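For teachers with a bit more time, a sketch of that power-law check using SciPy; the echo delays, distances, and starting guesses below are hypothetical, not data from the Insight article:
Code:
# Power-law fit forced through the origin: d = a * t**n, so d(0) = 0 automatically.
# An exponent n close to 1 supports the direct proportionality. Data are invented.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0.059, 0.117, 0.175, 0.234, 0.292])  # s, hypothetical echo delays
d = np.array([20.0, 40.0, 60.0, 80.0, 100.0])      # m, round-trip distances (assumed)

def power_law(t, a, n):
    return a * t**n

(a, n), _ = curve_fit(power_law, t, d, p0=(343.0, 1.0))
print(f"a = {a:.1f} m/s, n = {n:.3f}")  # expect a near 343 m/s and n near 1 here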
 
  • #15
I explain it to students this way: the only possible distance any signal can travel in zero time is zero distance.
Scientific method demands that you conduct an experiment and then compare to theory / hypothesis. You do not start inserting assumptions from your hypothesis into your data and then conclude that this "supports the hypothesis".

Again it is not a "trendline". That term belongs to time series analysis and principally comes from economics, as do spreadsheets. What you have here is a linear model you are trying to fit to the data.

If the aim is to examine the experimental relationship between elapsed time and distance traveled, you should be fitting a two-parameter linear model. If your experiment is well designed and there are not any anomalous effects, it should have an intercept very close to zero.

Plotting it this way makes calculating the speed of sound easier, which was the main point of the lab. So setting the dependent variable on the horizontal axis is in fact a better choice for this experiment than following the arbitrary convention.

That convention is not arbitrary. There is very good reason for following that convention if you are going to use standard OLS tools without knowing what you are doing, because they are following that convention too!

It is in no way "better" to invert the axes and then do a totally invalid regression to estimate the principal result of the experiment.

When physical considerations demand that a mathematical relationship goes through the origin, there is no need to add a variable vertical shift artificially.
There is nothing "artificial" about the second parameter, there may be some experimental or physical conditions which produce something a little different from what you expect. You should analyse the data objectively without attempting to force the result you expect. That is the "need". It does not cost anything and if things go as expected you get near zero intersect and say to your students : "this is what we would expect from theory because ... ".
 
  • #16
fizzy said:
Again it is not a "trendline"...
You can take up your trendline debate with those who make spreadsheets and other graphical and data analysis tools that refer to least squares fitting results as trendlines.
 
  • #17
fizzy said:
neither is that convention arbitrary.
I disagree. Like all conventions, it is completely arbitrary. There is no non-arbitrary reason to put the dependent variable on the vertical axis. I challenge you to find a non-arbitrary reason for the vertical dependent axis.

fizzy said:
standard OLS tools ... are "blindly following" that convention too
I am not familiar with the specific tool used in the write up, but I disagree completely that standard OLS tools use that convention. The standard OLS tools that I have used typically have the variables horizontal and the observations vertical. Often even that can be overridden by the user. I don’t even know how the OLS tools could follow that convention in principle.

Perhaps you mean plotting tools instead of OLS tools, or maybe some specific OLS tools that are embedded into a plotting tool.

fizzy said:
If the aim is to examine the experimental relationship between elapsed time and distance traveled, you should be fitting a two-parameter linear model. If your experiment is well designed and there are not any anomalous effects, it should have an intercept very close to zero.
I agree with this point. Fitting a model without an intercept term is rarely advisable.
 
  • #18
For most high school science labs, testing a hypothesis is best understood in the sense of Popper's falsifiability. If the experiment and subsequent analysis have a reasonable possibility of refuting the hypothesis and the experiment is done with adequate care, then one can say that the hypothesis is supported if the data agrees with the hypothesis. One need not usually delve into the formal hypothesis testing of statistics to teach most high school science labs. (In some project-based courses, I do explain and show students how to compute uncertainties and p-values, as appropriate for the project and student capabilities.) I also doubt the wisdom of eschewing least squares fitting in high school science labs simply because one does not have time or inclination to delve into formal statistical hypothesis testing.

The question of whether to include a vertical intercept is more interesting. Certainly a strong case can be made that fitting to a single adjustable parameter (the slope) and the resulting r-squared values make it very reasonable to conclude that the hypothesis is supported. But I suppose support can always be made stronger by showing the direct proportionality works better than other possible models. Several two-parameter models are possible: the standard equation of a line, a parabola with zero constant term, and a power law come to mind. I'm not sure why the standard equation of a line would take priority over the other two. I actually taught a similar experiment recently where students measured mass vs. volume (weighing liquid in a graduated cylinder with the electronic balance zeroed with the graduated cylinder in place). Analysis of the residuals of the fit to a line forced through the origin suggested the small residuals were systematically due to widening of the cylinder at the top. Fitting to a quadratic with zero constant term made a lot more sense (as the two-parameter model) in that case. But this was pretty far into the weeds relative to the initial hypothesis that mass was proportional to volume. A constant term in this case is just silly.
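A sketch of that zero-constant quadratic fit, done with a design matrix that simply omits the intercept column; the volumes and masses below are invented to mimic a cylinder that widens slightly toward the top:
Code:
# Fit m = a*V + b*V^2 with no constant term, via least squares on a two-column design matrix.
import numpy as np

V = np.array([10.0, 20.0, 30.0, 40.0, 50.0])      # volume in cc (assumed)
m = np.array([9.95, 19.95, 30.05, 40.25, 50.55])  # mass in g (assumed, slight widening)

A = np.column_stack([V, V**2])                    # columns for V and V^2, no intercept
(a, b), *_ = np.linalg.lstsq(A, m, rcond=None)

print(f"m = {a:.4f} V + {b:.2e} V^2")
print("residuals:", np.round(m - A @ np.array([a, b]), 3))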

But fitting several different models and analyzing residuals are topics that may be introduced to high school students with available time, but certainly are not necessary. By the time you have good experimental data, supporting the hypothesis and in agreement with the known proportionality constant within 1% in a high school science lab, I think you can rest easy and think you did OK. I certainly would have been content with most students arriving in my college physics labs had they been capable of routinely achieving 1% accuracy.
 
  • #19
Dr. Courtney said:
The question of whether to include a vertical intercept is more interesting.
You should pretty much always include it. The only time you can leave it out is when it is actually 0, not just not significantly different from 0, but exactly 0. And in that case then leaving it in is the same as leaving it out, so you should always leave it in.

First, and most importantly, if you remove it then all of your other parameter estimates become biased. The EmDrive fiasco is a great example of this. This bias occurs even if the intercept is not significantly different from zero.

Second, your residuals will no longer be zero mean. This may be related to your observation.

Third, many software implementations change the meaning of the R^2 value they report when the intercept is removed. So the resulting R^2 cannot be meaningfully compared to other R^2 values nor interpreted in the usual fashion.

Fourth, even if your true intercept is zero, if the function is not exactly linear then your fit can be substantially worse than a linear fit with an intercept.

I’m sure there are other reasons, but basically don’t do it. It is never statistically beneficial (since the only time it is appropriate is when it makes no difference) and it can be quite detrimental. If it makes a difference then you need to leave it in for the reasons above, and if it doesn’t make a difference then it doesn’t hurt to leave it in.

Honestly, with your data the above biases and problems should be minuscule. So this data seems to be on the “it doesn’t make a difference” side of the rule. But I would recommend leaving it in for the future. I wouldn’t proactively give any explanation to the students, but just use the default setting.
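A minimal sketch of the bias in the first point, on made-up data with a small true offset; forcing the fit through the origin shifts the slope estimate away from the true value:
Code:
# Slope bias from omitting the intercept, demonstrated on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(1, 10, 50)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, x.size)  # true slope 3, small true intercept 0.5

slope_no_int = np.sum(x * y) / np.sum(x * x)     # least squares forced through the origin
slope_with_int, intercept = np.polyfit(x, y, 1)  # ordinary two-parameter fit

print(f"through origin: slope {slope_no_int:.3f}")  # biased above 3
print(f"with intercept: slope {slope_with_int:.3f}, intercept {intercept:.3f}")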
 
  • #20
Here is some real meteorological data with significant experimental error in both variables. A linear regression was done both ways, first regressing y on x, then x on y. The two OLS slopes are both invalid because each one ignores the errors in one or the other variable. OLS should never be applied to this kind of data in either direction.

It would be possible to construct data where the true slope lies outside this range but usually the true slope will lie between these two extremes. ( The locus of the points was plotted for other reasons , that is not relevant to this discussion. )

As can be seen, this is not some purist, pedantic point; it can make an enormous difference to the supposed linear relationship between the two variables.

Even if there is not time to go into the details of the maths, it would seem important to at least mention that OLS only minimises y residuals and that the basic criterion for this to work properly is to have very small errors on the x-axis variable. It is only under those conditions that it will produce the "best linear unbiased estimate" of the slope.
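The two-slope effect is easy to reproduce on synthetic data with comparable errors in both variables (everything below is invented, not the meteorological data in the attachment):
Code:
# OLS both ways on data with errors in both variables: the two slopes bracket the truth.
import numpy as np

rng = np.random.default_rng(2)
true_slope = 1.5
u = np.linspace(0, 10, 100)                      # underlying noise-free variable
x = u + rng.normal(0, 1.0, u.size)               # observed x, with error
y = true_slope * u + rng.normal(0, 1.5, u.size)  # observed y, with error

slope_y_on_x = np.polyfit(x, y, 1)[0]        # attenuated (too shallow)
slope_x_on_y = 1.0 / np.polyfit(y, x, 1)[0]  # inverted x-on-y fit (too steep)

print(f"y on x: {slope_y_on_x:.2f}, x on y inverted: {slope_x_on_y:.2f}, true: {true_slope}")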

[Image: ols_scatterplot_regression2.png]
 

  • #21
Analysis of the residuals of the fit to a line forced through the origin suggested the small residuals were systematically due to widening of the cylinder at the top. Fitting to a quadratic with zero constant term made a lot more sense (as the two parameter model) in that case. But this was pretty far into the weeds relative to the initial hypothesis that mass was proportional to volume. A constant term in this case is just silly.

A constant term is not "silly". If the fit evaluates it near zero, it will not cost anything, and that is valuable information in itself, not "silly". Negative results can be as important as positive ones. Blinkering the analysis by trying to coerce the result is not only silly but unscientific.

A graduated cylinder which is not cylindrical to within the indicated precision seems a little unlikely. It seems far more likely that your spurious attempt to force the fit through zero was leading to an incorrect regression slope which produced increasing residuals at higher volumes. It is hard to say without seeing the data, but it sounds like there was a finite intercept and you were in denial about such things, regarding them as "silly".

Did you teach your students how to correctly read the meniscus of the fluid in the measuring cylinder? That could lead to a finite intercept, if you would allow that possibility to be seen. There clearly was some experimental error which needs to be identified. Had you not expressly removed the constant term, it would have given you some information about the problem. You have neatly demonstrated one reason not to bias the regression by excluding parameters.

If you suspected the cylinder was not straight, did you at least measure it to attempt to falsify this hypothesis? Apparently not. Did you substitute another cylinder to test the hypothesis?
 
  • #22
Dale said:
You should pretty much always include it. The only time you can leave it out is when it is actually 0, not just not significantly different from 0, but exactly 0. And in that case then leaving it in is the same as leaving it out, so you should always leave it in.

First, and most importantly, if you remove it then all of your other parameter estimates become biased. The EmDrive fiasco is a great example of this. This bias occurs even if the intercept is not significantly different from zero.

Second, your residuals will no longer be zero mean. This may be related to your observation.

Third, many software implementations change the meaning of the R^2 value they report when the intercept is removed. So the resulting R^2 cannot be meaningfully compared to other R^2 values nor interpreted in the usual fashion.

Fourth, even if your true intercept is zero, if the function is not exactly linear then your fit can be substantially worse than a linear fit with an intercept.

I’m sure there are other reasons, but basically don’t do it. It is never statistically beneficial (since the only time it is appropriate is when it makes no difference) and it can be quite detrimental. If it makes a difference then you need to leave it in for the reasons above, and if it doesn’t make a difference then it doesn’t hurt to leave it in.

Honestly, with your data the above biases and problems should be minuscule. So this data seems to be on the “it doesn’t make a difference” side of the rule. But I would recommend leaving it in for the future. I wouldn’t proactively give any explanation to the students, but just use the default setting.

For now I'm not buying it, and I intend to keep teaching students to set the vertical intercept to zero when the basic science of the experiment suggests the model will go through the origin. Here's why:

1. Of all models of the form f(x) = ax^n, why is n=1 so special that it is better modeled as f(x) = c + ax^n? I've never heard an argument or a need to add a constant term when n = 1/2 (for example, fall time as a function of drop height) or when n = 3/2 (for example Kepler's third law) or when the power is unknown (or treated as unknown) or in any other case except for suspected instrument issues where the instrumental measurement may be adding a constant offset to the measurement.

2. I've always been taught and been convinced that models with fewer adjustable parameters are better. In least squares fitting, a perfect fit can usually be achieved by having as many adjustable parameters as data points. Experiments with small numbers of data points require models with smaller numbers of adjustable parameters to better test a hypothesis. In the extreme, one could never support a direct proportionality using two data points fitting to a line with a constant term, but one can support it fitting to a line forced through the origin.

3. "Things should be as simple as possible, but no simpler." - Einstein. Though not an absolute arbiter, I also think it wise to keep Occam's Razor in mind. Other factors being equal, simpler models are usually better. I have nothing against a bit of exploratory data analysis, but regarding direct proportions, I see no compelling case why adding a constant should be the preferred two parameter model rather than adding a quadratic term or trying a power law.

4. Experience. I've been teaching these labs for a long time. My experience is that when the physics screams that the model must go through the origin, agreement will be closer between the best-fit slope and the known good value by fitting to a one parameter model. I have multiple data sets not just showing this in the case of speed of sound measurements, but also mass vs volume measurements and other high school type experiments. Even adding the second parameter in other ways (quadratic term, power law) yields less accurate results for the slope.

5. Physical meaning. I'm a big fan of teaching the physical meaning of both the slope and intercept when fitting to a linear model. Does the vertical intercept have a physical meaning or is it more of a fudge factor to get a better fit? If it seems to me like more of a fudge factor, it is best to skip it. And to me, it seems like a fudge factor for models of the form f(x) = ax^n. Sure, a constant term in a direct proportionality may have the meaning of a systematic offset in the measurement and is something to keep in mind as a possibility. But due care (such as zeroing the electronic balance with the empty graduated cylinder on it) can pretty much eliminate it. I'm not keen on teaching students to add fudge factors "just in case."
 
  • #23
fizzy said:
Even if there is not time to go into the details of the maths, it would seem important to at least mention that OLS only minimises y residuals and that the basic criterion for this to work properly is to have very small errors on the x-axis variable. It is only under those conditions that it will produce the "best linear unbiased estimate" of the slope.

This is a potential contradiction with your earlier assertion that the vertical axis should always have the dependent variable. Now you are saying the vertical axis should be the variable with the larger errors. Which is preferred in the case where the independent variable is expected to have the larger errors?

In the echo experiment, errors in the timing are on the order of 0.1% due to the sharpness of the sound leading edges and the accuracy of the clock in the sound card. In contrast, errors in the distance measurement arise from students measuring a distance to a wall with a fabric tape measure. Due care can reduce distance measurement errors to near (or slightly below) 1%, but 0.1-0.2% is unlikely with high school students. So you are now saying that plotting the distance on the vertical axis was the right choice because the errors are larger?
 
  • #24
Dr. Courtney said:
Does the vertical intercept have a physical meaning or is it more of a fudge factor to get a better fit?
The practical question is simply which slope gives you the better approximation of the speed of sound. I guess it depends on the type of error you have, and the distribution of the samples.
 
  • #25
Going back and re-reading the experimental setup, we find: "igniting the firecracker a short distance from a microphone".

So the correct mathematical model would not be a linear fit in the first place. Instead, one has a trig problem -- a triangle with two long sides and a short side between. We want to consider the difference between the sum of the lengths of the two "long" sides and the length of the "short" side in the limit as the height of the triangle approaches zero.

Let us simplify the model by assuming that the firecracker and microphone are arranged perpendicular to the wall so that the triangle is isosceles. Ideally we are interested in the difference in path length as a function of the length of the perpendicular bisector of the "short" side (aka the distance to the wall).

Let "s" denote the length of the "short" side -- the short separation between firecracker and microphone.

Let "h" denote the height of the triangle -- the length of the perpendicular bisector/the distance to the wall.

Let "l" denote the length of one "long" side -- the diagonal distance from firecracker to midpoint on wall.

Let "d" denote the delta between the path lengths.

$$d=2l-s$$
$$l=\sqrt{h^2+\frac{s^2}{4}}$$
$$d(h)=2\sqrt{h^2+\frac{s^2}{4}}\ -\ s$$

Let us see what Excel has to say...
Code:
s h    2h correct      delta
1 0    0  0            0
1 0.5  1  0.414213562 -0.585786438
1 1    2  1.236067977 -0.763932023
1 2    4  3.123105626 -0.876894374
1 3    6  5.08276253  -0.91723747
1 4    8  7.062257748 -0.937742252
1 5    10 9.049875621 -0.950124379
1 6    12 11.04159458 -0.958405421
1 7    14 13.03566885 -0.964331152
1 8    16 15.03121954 -0.968780458
1 9    18 17.02775638 -0.972243623
1 10   20 19.02498439 -0.975015605
1 11   22 21.02271555 -0.977284454
1 12   24 23.0208243  -0.979175701
1 13   26 25.01922366 -0.980776337
1 14   28 27.01785145 -0.982148548

It looks like a correct linear fit will have a non-zero intercept.

Edit: Alternatively, one could re-scale the independent variable h to reflect the computed path length difference.

Edit again: Ran a linear regression for a data set with s=1, h=0 through h=50 plus h=0.5. With the intercept nailed at zero, the result was y = 1.94h. With the intercept floating, the result was y = 1.99h - 0.74. The asymptotically correct result would of course be y = 2.00h - 1.

This turned out to be a fun mathematical exercise. I learned how to do linear regression with Excel.

We had an ironclad argument that the path length difference is zero at zero distance to wall. That argument was correct. We had an expectation that the path length difference increases linearly with distance to wall as long as the firecracker to microphone separation is small. That expectation was correct as well. But it turned out that the linear relationship for large distances, when projected back to the y-axis, nonetheless has a non-zero y intercept.
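For anyone without Excel, a sketch of the same exercise in Python; the exact fitted numbers depend on the h grid used, so expect values close to, though not necessarily identical to, those quoted above:
Code:
# Path-length difference d(h) = 2*sqrt(h^2 + s^2/4) - s, fit with and without an intercept.
import numpy as np

s = 1.0
h = np.concatenate(([0.5], np.arange(0.0, 51.0)))  # h = 0 through 50, plus h = 0.5
d = 2 * np.sqrt(h**2 + s**2 / 4) - s               # exact path-length difference

slope_origin = np.sum(h * d) / np.sum(h * h)       # fit forced through the origin
slope, intercept = np.polyfit(h, d, 1)             # two-parameter linear fit

print(f"through origin: d = {slope_origin:.2f} h")
print(f"with intercept: d = {slope:.2f} h + {intercept:.2f}")  # compare with ~2h - 1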
 
  • #26
Dr. Courtney said:
For now I'm not buying it, and I intend to keep teaching students to set the vertical intercept to zero
I would strongly encourage you to do some research into the statistical literature in order to get a better understanding of the issue. It is fine if you do not see me as credible on the topic, but you should make sure that you do some solid research into the statistical issues before you dismiss the suggestion.

Dr. Courtney said:
Of all models of the form f(x) = ax^n, why is n=1 so special that it is better modeled as f(x) = c + ax^n?
It is not special, all least squares linear regression models need the intercept term for the same reasons.

Dr. Courtney said:
I've always been taught and been convinced that models with fewer adjustable parameters are better.
This is a valid point. However, the issue is the statistical method. If you want to use ordinary least squares to do your fitting then you need an intercept term. If you want to do a test with a model that drops the intercept term then there are other methods to do so, but they are far more involved.

Dr. Courtney said:
Other factors being equal, simpler models are usually better.
This is basically a repeat of the previous point. You might find Bayesian statistics to your liking. Bayesian methods naturally include both Popper's falsifiability and Ockham's razor as a fundamental part of the method. It also allows for comparison of non-nested models in a rational way.

Dr. Courtney said:
Does the vertical intercept have a physical meaning or is it more of a fudge factor to get a better fit?
Neither, it is part of the mathematical machinery of minimizing the least squares residuals. One of the assumptions is that the residuals are zero-mean and constant variance. The intercept is what does that. If you eliminate that then you need to carefully consider your error model. It will no longer be zero mean. What does that imply about your measurements? Is that a reasonable error model? Does your new more complicated error model still satisfy Ockham's razor?

Again, don't take my word for it, but also don't simply assume that all is well with your approach either. Do your own research into the statistics literature on the topic and actually learn for yourself about these issues. Gather information you find to be credible and make your opinion an informed opinion, specifically an opinion informed by the statistical literature.
 
  • #27
Dr. Courtney said:
Which is preferred in the case where the independent variable is expected to have the larger errors?
The mathematical assumption is that the independent variable has 0 error. So from a statistics perspective that is what defines independent vs dependent. It is not a physical relationship.
 
  • #28
As often occurs, there is a tension between the better mathematical descriptions and improved statistical approaches on one hand, and the pedagogical simplifications needed given the time constraints and mathematical limitations of real high school classes on the other.

Sure, consulting the literature or an appropriate expert can always suggest a model or statistical approach that is in some sense "better" than a given set of simplifications chosen to deal with the time constraints and math limits of real high school students. But my experience is that from a pedagogical standpoint, it is best to keep the explanations in a zone where the intended audience (high school students) can understand them quickly. High school students can use least squares fitting to understand the most important scientific learning objectives of a laboratory without worrying too much about the advanced statistics behind it. My experience has also been that it works better than expected, given that the assumptions are never really satisfied (measurements never truly have zero error, even for the independent variable.)

For me, it is enough if students learn to make measurements accurate to 1% and analyze them with sufficient rigor to say whether a hypothesis is supported in the sense of high school science rather than rigorous statistical hypothesis testing. My approach to lab science minimizes believing things based on appeal to authority in favor of believing things based on experimental data. It's hard to insist on a constant term based on the statistical arguments, and most high school science courses don't have time or room in the curricula for the more involved treatment of error analysis. My approach is to include a lot of error awareness along the way and point out where possible how to estimate errors (not rigorously) and identify the dominant sources of error in most experiments. But for the most part, getting students to be careful enough to have errors < 1% most of the time is already far superior to the 5-20% errors I see dominating most high school and even intro college science labs.
 
  • #29
Dale said:
The mathematical assumption is that the independent variable has 0 error. So from a statistics perspective that is what defines independent vs dependent. It is not a physical relationship.

I tend to prefer the science understanding of independent and dependent variables in intro science courses (rather than the mathematical definition). The independent variable is usually the thing that is controlled as the hypothetical cause, and the dependent variable is the outcome that is measured as the hypothetical effect. Of course, I cringed a bit plotting Distance vs. time for the echo experiment, because the distance is carefully controlled and the time is measured. But my experience is that if the slope falls out directly from the analysis, a lot more students will get it.

Plotting time vs. distance preserves the independent variable on the horizontal axis, but then the fit yields a slope that is the reciprocal of the speed of sound. Too many students get lost in the extra step to compute the speed of sound.
 
  • #30
Dr. Courtney said:
But my experience is that from a pedagogical standpoint, it is best to keep the explanations in a zone where the intended audience (high school students) can understand them quickly.
Sure, I was recommending reading the literature for you as a teacher, not for your students. You seemed reluctant to accept the validity of my explanation about why retaining the intercept is important, so you should inform yourself of the issue from sources you consider valid. Currently your opinion is not informed by the statistical literature. As a conscientious teacher surely you agree that it is important to make sure that your opinions are well informed.

Once you have established an informed opinion then I am sure that you can use that opinion to guide your lesson development in a way that will not detract from the learning objectives. Personally, I would simply use the default option to include the intercept without making much discussion about it. I would leave the teaching about the statistics to a different class, but I would quietly use valid methods.
 
  • #31
Dale said:
Sure, I was recommending reading the literature for you as a teacher, not for your students. You seemed reluctant to accept the validity of my explanation about why retaining the intercept is important, so you should inform yourself of the issue from sources you consider valid. Currently your opinion is not informed by the statistical literature. As a conscientious teacher surely you agree that it is important to make sure that your opinions are well informed.

Once you have established an informed opinion then I am sure that you can use that opinion to guide your lesson development in a way that will not detract from the learning objectives. Personally, I would simply use the default option to include the intercept without making much discussion about it. I would leave the teaching about the statistics to a different class, but I would quietly use valid methods.

My pedagogical disagreement with this is that it trains students to accept terms in physics formulas in cases where those terms do not have clear physical meanings. Back to Einstein and Occam - my clear preference is to train students in science classes to want (even demand) explanations for every term in physics equations. In a distance vs. time relationship with constant velocity, the physical meaning of the constant term is the position (or distance traveled) at time t = 0. This is problematic from the viewpoint of learning the science: since students are unlikely to grasp the underlying mathematical justification, in the absence of a clear physical meaning the term will seem like a fudge factor whose need is asserted by authority. For pedagogical purposes, I expect to continue to teach my students that the meaning of the vertical intercept is the anticipated output for zero input. I value the science more than the math.

Demanding a physical meaning for the vertical intercept has borne much fruit for my students. Several years back, a group of 1st year cadets at the Air Force Academy used this approach to identify the vertical intercept of the bullet energy vs. powder charge line as the work done by friction while the bullet traverses the rifle barrel. This method remains the simplest and one of the most accurate methods for measuring bullet friction at ballistic velocities. See: https://apps.dtic.mil/dtic/tr/fulltext/u2/a568594.pdf When studying Hooke's law for some springs, a non-zero vertical intercept is needed to account for the fact that the coils prevent some springs from stretching until some minimum force is applied. The physical meaning is clear: the vertical intercept when plotting Force vs. Displacement is the applied force necessary for the spring to begin stretching.

In contrast, the mass vs. volume lab doesn't lend itself to a physical meaning when plotting an experimental mass vs. volume. The mass of a quantity of substance occupying zero volume cannot be positive, and it cannot be negative. It can only be zero. Allowing it to vary presents a problem of giving a physical meaning to the resulting value, because "the expected mass for a volume of zero" does not make any sense. It may be mathematically rigorous, but in a high school science class, it's just silly. I'd rather not send my students the message that it's OK for terms in equations not to have physical meanings if someone mumbles some mathematical mumbo jumbo about how the software works. (Students go into Charlie Brown mode quickly.)

I use Tracker often in the lab for kinematics types of experiments, and we do a lot with the kinematic equations. When fitting position vs. time, it is essential that each term in the fit for x(t) have the same physical meaning as in the kinematic equations. The constant term is the initial position, the linear coefficient is the initial velocity, and the quadratic coefficient is half the acceleration. If the initial position is defined to be zero (as is often the case), then a constant term in the model does not make sense. (Tracker allows t = 0 to be set at any frame and the origin can usually be placed at any convenient point; often the position of the object at t = 0 is a convenient point.)
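A sketch of reading those kinematic meanings off a quadratic fit, on synthetic projectile-style data (every number below is invented):
Code:
# Quadratic fit to position vs. time: x(t) = x0 + v0*t + (a/2)*t^2.
import numpy as np

t = np.linspace(0, 1, 30)                # s, roughly 30 frames of video (assumed)
x = 0.0 + 2.0 * t + 0.5 * (-9.8) * t**2  # x0 = 0 m, v0 = 2 m/s, a = -9.8 m/s^2
x += np.random.default_rng(3).normal(0, 0.005, t.size)  # small measurement noise

c2, c1, c0 = np.polyfit(t, x, 2)
print(f"initial position ~ {c0:.3f} m")      # constant term
print(f"initial velocity ~ {c1:.2f} m/s")    # linear coefficient
print(f"acceleration ~ {2 * c2:.2f} m/s^2")  # twice the quadratic coefficient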
 
  • #32
fizzy said:
A graduated cylinder which is not cylindrical to within the indicated precision seems a little unlikely. It seems far more likely that your spurious attempt to force the fit through zero was leading to an incorrect regression slope which produced increasing residuals at higher volumes. It is hard to say without seeing the data, but it sounds like there was a finite intercept and you were in denial about such things, regarding them as "silly".

I expect folks who think it is unlikely for high school lab equipment to not be within its indicated precision have not spent sufficient time with high school lab equipment. I teach students how to check and double check equipment accuracy. What better simple check on the accuracy of a graduated cylinder (accuracy spec 0.2 cc) than an electronic balance (verified accuracy spec 0.01 g)?

Once one accepts the constant density of water, one can use the balance itself as the best available check on the accuracy of the graduated cylinder. About half the measurements with the graduated cylinder were outside its spec. This is not a train wreck for how graduated cylinders are usually used in science labs, but I do encourage students to take note of the limitation.

The resulting density of water without a vertical intercept was 0.9967 g/cc with an R-squared of 0.9999. Adding a vertical intercept puts the R-squared closer to one, but the resulting density of water is 1.0045 g/cc, with a vertical intercept suggesting that 0 cc of water has a mass of -0.5467 g. Silly. The known good value for the density of water at 20 deg C is 0.998 g/cc.

fizzy said:
Did you teach your students how to correctly read the meniscus of the fluid in the measuring cylinder?

Yes, of course.

fizzy said:
That could lead to a finite intercept, if you would allow that possibility to be seen. There clearly was some experimental error which needs to be identified. Had you not expressly removed the constant term, it would have given you some information about the problem. You have neatly demonstrated one reason not to bias the regression by excluding parameters.

I only removed the constant term for the student method after my careful pilot experiment. My careful pilot included analysis with several possible models: linear with and without a constant term, and quadratic with and without a constant term. I also carefully considered the residuals of the different models for three different liquids with known densities: water, isopropanol, and acetone. The high correlations of the residuals for different liquids suggest the most likely source of error was the graduated cylinder itself.

fizzy said:
If you suspected the cylinder was not straight, did you at least measure it to attempt to falsify this hypothesis. Apparently not. Did you substitute another cylinder to test the hypothesis.

And with what instrument commonly found in high school labs would you suggest accurately measuring the inner diameter at the bottom of a graduated cylinder? The other available cylinders were from the same manufacturer and demonstrated the same trend. (Adding apparently equal volumes near the top added more mass on the balance.) But the most convincing evidence was seeing the same trend in two additional liquids (isopropanol and acetone.) I expect as a manufacturing convenience these plastic graduated cylinders are formed on molds that make them slightly narrower at the bottom than at the top so that they are easier to remove from the molds. It is much more cost effective and resistant to breakage than glassware, and adequate for many laboratory purposes if the limitations are understood. If need be, a cylinder could be recalibrated with water, but it is easier just to double check on a balance for liquids of known density.
 
  • #33
Dr. Courtney said:
My pedagogical disagreement with this is it trains students to accept terms in physics formulas in cases where those terms do not have clear physical meanings.
That is fine, but before doing so you should make sure that you have the necessary statistical background knowledge to wisely make that call. You should also realize that it is not clearly the right call and that valid informed objections and differences of opinion are to be expected on this point.

Personally, to me this issue is about understanding the limitations of your tools. A tool can often be used for a task in a way that it is not intended to be used. Sometimes it is ok, but sometimes it is not. If you are going to use a tool in a way it is not intended then you need to understand the likely failure modes and be vigilant.

I have seen other scientists publish papers misusing linear regression this specific way and claiming an effect where none existed due to the biasing. The tool was breaking under misuse. They also had no clear physical interpretation for the intercept and chose, as you did, to remove it on those same grounds. It is not a thing to be done lightly and they suffered for it. At a minimum the intercept can be used to indicate a failure of your experimental setup. If you have no theoretical possibility for an intercept and yet your data shows an intercept then that is an indication that your experiment is not ideal. In your case, your distance measurements and time measurements are not perfect. Perhaps there is a systematic error and not just random errors. A systematic error could lead to a non-zero intercept, which you are artificially suppressing.

Dr. Courtney said:
Back to Einstein and Occam - my clear preference is to train students in science classes to want (even demand) explanations for every term in physics equations.
I don't think that Ockham's razor justifies your approach here. The problem is that by simplifying your effect model you have unknowingly made your error model more complicated. Your errors are no longer modeled as zero mean, and the mean of your residuals is directly related to what would have been your intercept. All you have done is to move the same complexity to a hidden spot where it is easy to ignore. It is still there. You still have the same two parameters, but you have moved one parameter to the residuals and suppressed its output.

Dr. Courtney said:
It is much more cost effective and resistant to breakage than glassware, and adequate for many laboratory purposes if the limitations are understood.
A wise approach. You should treat statistical methods similarly.
 
  • #34
Dale said:
That is fine, but before doing so you should make sure that you have the necessary statistical background knowledge to wisely make that call. You should also realize that it is not clearly the right call and that valid informed objections and differences of opinion are to be expected on this point.

I do. You seem to have wrongly assumed that I do not, and that if I had been informed there would be only one right call to make, since you previously wrote:

Currently your opinion is not informed by the statistical literature. As a conscientious teacher surely you agree that it is important to make sure that your opinions are well informed.

Once you have established an informed opinion then I am sure that you can use that opinion to guide your lesson development in a way that will not detract from the learning objectives.

I have thoroughly reviewed the relevant statistics literature. I have authored a widely distributed least-squares fitting software package. I have taught several college level statistics courses. I am aware of the issues. A few quotes from the literature:

In certain circumstances, it is clear, a priori, that the model describing the relationship between the independent variable and the dependent variable(s) should not contain a constant term and, in consequence, the least squares fit needs to be constrained to pass through the origin.
(HA Gordon, The Statistician, Vol 30 No 1, 1981)

There are many practical problems where it is reasonable to assume the relationship of a straight line passing through the origin ... (ME Turner, Biometrics, Vol 16 No 3, 1960)

This article describes situations in which regression through the origin is appropriate, derives the normal equation for such a regression and explains the controversy regarding its evaluative statistics. (JG Eisenhauer, Teaching Statistics, Vol 25 No 3, 2003)

Dale said:
Personally, to me this issue is about understanding the limitations of your tools. A tool can often be used for a task in a way that it is not intended to be used. Sometimes it is ok, but sometimes it is not. If you are going to use a tool in a way it is not intended then you need to understand the likely failure modes and be vigilant.

Yes, I understand that the R-squared values and other goodness of fit statistics are not comparable with other models. A better way to compare with other models is to compute the variance of the residuals. There are columns in my analysis spreadsheet for my pilot experiments doing just that.
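A minimal sketch of that residual-variance comparison, on invented mass vs. volume numbers (not the pilot data discussed here):
Code:
# Compare models by residual variance per degree of freedom rather than by R-squared.
import numpy as np

x = np.array([10.0, 20.0, 30.0, 40.0, 50.0])  # volume in cc (assumed)
y = np.array([9.9, 20.1, 29.8, 40.3, 49.9])   # mass in g (assumed)

def resid_var(y, y_fit, n_params):
    r = y - y_fit
    return np.sum(r**2) / (len(y) - n_params)  # divide by remaining degrees of freedom

a = np.sum(x * y) / np.sum(x * x)              # one-parameter fit through the origin
m, b = np.polyfit(x, y, 1)                     # two-parameter linear fit

print(f"through origin: {resid_var(y, a * x, 1):.4f}")
print(f"with intercept: {resid_var(y, m * x + b, 2):.4f}")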

Dale said:
I have seen other scientists publish papers misusing linear regression this specific way and claiming an effect where none existed due to the biasing. The tool was breaking under misuse. They also had no clear physical interpretation for the intercept and chose, as you did, to remove it on those same grounds. It is not a thing to be done lightly and they suffered for it.

And I've seen scientists publish papers with vertical shifts that make no sense. The probability of an effect when the cause is reduced to zero should be exactly zero. (The risk of death from a poison should be zero for zero mass of poison. The probability of a bullet penetrating armor should be exactly zero for a bullet with zero velocity. The weight of a fish with zero length should be exactly zero.) Further, you are creating a strawman to claim my scientific justification for removing the constant term was the lack of a physical meaning. I justify removing the constant term based on strong physical arguments that for zero input, the output can only be zero. The lack of physical meaning was a pedagogical motive, not a scientific justification.

Dale said:
At a minimum the intercept can be used to indicate a failure of your experimental setup. If you have no theoretical possibility for an intercept and yet your data shows an intercept then that is an indication that your experiment is not ideal. In your case, your distance measurements and time measurements are not perfect. Perhaps there is a systematic error and not just random errors. A systematic error could lead to a non-zero intercept, which you are artificially suppressing.

As explained above, my practice is to try a number of analysis techniques on my pilot data, and then slim down the analysis for students to the one that makes the most sense for the overall context. I've done the echo-based speed of sound experiment lots of times now. There has never been a problem not adding the extra constant term, and the resulting speed of sound has always been within 1% of the expectation based on the ambient temperature. When the extra parameter is used (by me, not students, but I do re-analyze their data to check for such things) it is invariably close to zero (relative to its error estimate), so one can say it is not significantly different from zero. Some teachers may see the pedagogical benefit of walking students through these steps, but software that provides the error estimates in the slope and vertical intercept tends to be harder for students to use and confusing, so I avoid it for most student uses.

Dale said:
I don't think that Ockham's razor justifies your approach here. The problem is that by simplifying your effect model you have unknowingly made your error model more complicated. Your errors are no longer modeled as zero mean, and the mean of your residuals is directly related to what would have been your intercept. All you have done is to move the same complexity to a hidden spot where it is easy to ignore. It is still there. You still have the same two parameters, but you have moved one parameter to the residuals and suppressed its output.

Occam's razor here is more a pedagogical motive for keeping the model simple. I know all along that the error model is more complicated, but the students are not usually cognizant of the error model. Much like ignoring air resistance in projectile motion problems, the motive is to keep the model the students see simple. For published research, I do not doubt the value of fitting linear models with a constant term to see whether it is statistically different from zero and whether the slope changes significantly. But having done both fits, one then faces the challenge of deciding which is better. That is well beyond the scope of a high school science class, but it is discussed in Casella, G. (1983), "Leverage and regression through the origin," The American Statistician, 37(2), 147-152. Designing labs is about providing students new skills in manageable doses.
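For teachers or researchers who do want to run both fits, here is a minimal sketch of checking whether the fitted intercept is statistically distinguishable from zero, using scipy.stats.linregress (the intercept_stderr attribute requires SciPy 1.6 or newer; data invented for illustration):

```python
import numpy as np
from scipy import stats

# Hypothetical echo data: round-trip distance (m) and echo delay (s)
d = np.array([50.0, 100.0, 150.0, 200.0, 250.0])
t = np.array([0.146, 0.291, 0.438, 0.582, 0.728])

res = stats.linregress(t, d)  # fits d = slope * t + intercept
t_stat = res.intercept / res.intercept_stderr

print(f"slope     = {res.slope:.1f} +/- {res.stderr:.1f} m/s")
print(f"intercept = {res.intercept:.3f} +/- {res.intercept_stderr:.3f} m")
print(f"intercept t-statistic = {t_stat:.2f}")
# A |t-statistic| well below ~2 suggests the intercept is not
# significantly different from zero, supporting the simpler model.
```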

Most papers I've read on regression through the origin are not primarily concerned with whether models forced through the origin SHOULD be used in the first place, but rather with how the descriptive statistics should be used to assess goodness of fit. Many of the possible criticisms apply not just to linear least squares but to most non-linear least-squares models forced through the origin. There is now wide agreement that such models are appropriate in many areas of science, including weight-length relationships in fish, a multitude of other power-law models, probability curves, and a variety of economic models.
 
  • #35
Dr. Courtney said:
You seem to have wrongly assumed that I do not
I apologize for my wrong assumption. Based on your posts it seemed like you did not understand the statistical issues involved, since you mentioned only the pedagogical/scientific issues and none of the relevant statistical ones. If I had decided (for pedagogical or scientific reasons) to use the no-intercept method, I would have gone through the relevant statistical issues, identified them as immaterial for the data sets in question, and only then proceeded to the pedagogical/scientific justification. I mistakenly took the absence of any mention of the statistical issues to indicate unfamiliarity with them.

Dr. Courtney said:
Yes, I understand that the R-squared values and other goodness of fit statistics are not comparable with other models.
That is not the only issue, nor even the most important. By far the most important is the possibility of bias in the slope. It does not appear to be a substantial issue for your data, so that is the justification I would use were I trying to defend this approach.

Dr. Courtney said:
A better way to compare with other models is to compute the variance of the residuals.
Or in the Bayesian framework you can directly compare the probability of different models.
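One common large-sample shortcut for that comparison is the Bayesian information criterion (BIC): the difference in BIC between two models approximates twice the log of the Bayes factor. A minimal sketch under Gaussian-error assumptions (data invented for illustration):

```python
import numpy as np

# Hypothetical echo data: round-trip distance (m) and echo delay (s)
d = np.array([50.0, 100.0, 150.0, 200.0, 250.0])
t = np.array([0.146, 0.291, 0.438, 0.582, 0.728])
n = len(d)

def bic(residuals, k):
    """BIC for a least-squares fit with k parameters, assuming Gaussian errors."""
    rss = np.sum(residuals**2)
    return n * np.log(rss / n) + k * np.log(n)

v0 = np.sum(d * t) / np.sum(t * t)   # fit through the origin
v1, b1 = np.polyfit(t, d, 1)         # fit with an intercept

bic0 = bic(d - v0 * t, k=1)
bic1 = bic(d - (v1 * t + b1), k=2)
# The model with the lower BIC is favored; a difference of about 2 or
# more is conventionally read as positive evidence.
print(f"BIC through origin = {bic0:.2f}, BIC with intercept = {bic1:.2f}")
```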

Dr. Courtney said:
the resulting speed of sound has always been within 1% of the expectation based on the ambient temperature
This would be a good statistical justification. It is not a general justification, because the general rule remains that including the intercept is preferred. It is a justification specific to this particular experiment: the violation of the usual process does not produce the primary effect of concern, namely a substantial bias in the other parameter estimates.

Dr. Courtney said:
Occam's Razor here is more of a pedagogical motive for keeping the model simple. I know all along that the error model is more complicated
Then you should know that your Ockham's razor argument is not strong in this case. It is at best neutral.

Dr. Courtney said:
But having done both, one then faces the challenge of deciding which fit is better.
In the Bayesian approach this can be decided formally; in the frequentist framework it is a no-no that leads to p-hacking and failures to replicate.
 

1. What is direct echo-based measurement of the speed of sound?

Direct echo-based measurement of the speed of sound is a method for determining the speed of sound in a medium by measuring the time it takes an echo to traverse a known distance. A sound pulse is sent toward a reflective surface, and the time for the echo to return is measured. Because the sound travels to the surface and back, the distance covered is twice the distance to the surface, and the speed of sound follows from speed = distance/time using that round-trip distance.
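For example (numbers invented for illustration): if the reflecting surface is 86 m away and the echo returns 0.50 s after the sound is produced, the round-trip distance is 2 × 86 m = 172 m, giving speed = 172 m / 0.50 s = 344 m/s.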

2. How accurate is direct echo-based measurement of the speed of sound?

The accuracy of direct echo-based measurement of the speed of sound depends on various factors such as the quality of the equipment used, the distance between the sound source and reflective surface, and external factors like temperature and humidity. With proper calibration and high-quality equipment, this method can provide accurate results with an error margin of less than 1%.

3. What are the advantages of using direct echo-based measurement of the speed of sound?

One of the main advantages of this method is its simplicity and ease of use. It does not require complex equipment or extensive training to perform. Additionally, it can be used to measure the speed of sound in different media, making it a versatile tool for scientific research and experimentation.

4. Are there any limitations to direct echo-based measurement of the speed of sound?

While this method is generally accurate, it does have some limitations. The accuracy can be affected by external factors such as temperature and humidity, which can change the speed of sound in a medium. Additionally, the distance between the sound source and reflective surface should be large enough to minimize the effects of reverberations and reflections.

5. How is direct echo-based measurement of the speed of sound used in real-world applications?

Direct echo-based measurement of the speed of sound has various real-world applications, such as in the development and testing of acoustic materials, calibration of musical instruments, and in the field of seismology to measure the speed of seismic waves. It is also used in industries like aerospace and automotive to test the aerodynamics and performance of vehicles.
