Direct Echo-Based Measurement of the Speed of Sound - Comments

Answers and Replies

  • #2
Dale
Mentor
Insights Author
2020 Award
30,852
7,454
That is fun! Not often that you get to set off fireworks for science
 
  • #3
Dr. Courtney
Education Advisor
Insights Author
Gold Member
2020 Award
3,295
2,459
That is fun! Not often that you get to set off fireworks for science
Yep. I'm actually going to use "Chemistry of Pyrotechnics" to put together a few labs for next year (supposing the local school is pleased enough to let me coordinate a few labs for them again.)

The challenge with these deals is not making them fun. That's a given. The challenge is connecting the "Gee Whiz" part of it to some interesting science in a way that tests a hypothesis reasonably within the learning objectives and in the Goldilocks zone (not too hard, not too easy, just right.)

It's easy to pretend one is doing science when all the students remember is the "Gee Whiz" and no one remembers the learning objectives.
 
  • Like
Likes jedishrfu, DaveE, Nugatory and 1 other person
  • #4
Dale
Mentor
Insights Author
2020 Award
30,852
7,454
It's easy to pretend one is doing science when all the students remember is the "Gee Whiz" and no one remembers the learning objectives.
I think that is a succinct summary of the problem with pop-sci presentations. It is good that you are focusing on more than just the fun, but including both fun and learning objectives.
 
  • Like
Likes jedishrfu
  • #5
Dr. Courtney
Education Advisor
Insights Author
Gold Member
2020 Award
3,295
2,459
I think that is a succinct summary of the problem with pop-sci presentations. It is good that you are focusing on more than just the fun, but including both fun and learning objectives.
In a paper coming out this fall in TPT, colleagues and I identified three challenges in the typical introductory physics lab design:

1) simple experiments connected with learning objectives
2) experiments sufficiently accurate that comparisons between theory and measurement do not leave gaps that students ascribe to confounding factors (imperfect simplifying assumptions, measurement uncertainties, and “human error”), and
3) experiments capturing student attention to ensure due diligence in execution and analysis.

So that can be summarized in three goals: 1) learning objectives 2) accuracy (I like 1%) and 3) Gee Whiz factor. I like the firecracker echo experiment, because it has all three (which is rare) plus a 4th that is often a constraint 4) Cheap.

I've been working a lot this past year with a number of resource-constrained schools: home schools, private schools, foreign schools, and public schools in underfunded districts. Sometimes it feels like it comes down to:
A) What interesting things can you do with a microphone as an accurate timer?
B) What interesting kinematics can you catch with an available video camera and analyze in Tracker? (Or otherwise use the camera as a timer to 1/30 sec)
C) What "virtual" labs can you do by downloading historically important or other interesting data (Boyle, Kepler, etc.)?

I've got mixed feelings about calling an analysis activity a real "laboratory" if someone else did the experiment and collected the data. But these can have a hypothesis, a quantitative test of the hypothesis, data analysis, and a traditional lab report. I wouldn't want a lab program to rely too heavily on these, but better than skipping labs completely due to resource constraints.
 
  • Like
Likes jedishrfu
  • #6
sophiecentaur
Science Advisor
Gold Member
2020 Award
25,486
5,004
A very cheap way that has good accuracy / consistency is to stand a distance from a large wall and use a hammer to hit a metal object. That much is obvious so far. The clever bit is to strike the metal exactly when you hear the echo, and repeat. You repeat until you are accurately in sync with the echo pulses. Then you measure the time for 10, 20 or more echoes. The accuracy gets better and better with more pulses.
A classic integration method to average out errors; millisecond timing accuracy is possible with enough pulses.
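For concreteness, here is a minimal sketch of that arithmetic in Python; the distance, pulse count, and total stopwatch time below are invented for illustration, not measured values.
Code:
# Sketch of the echo-sync timing arithmetic (hypothetical numbers).
# Stand a distance d from a large wall, strike in time with the echoes,
# and time N consecutive echo intervals with a stopwatch.

d = 50.0        # distance to the wall in metres (assumed)
N = 20          # number of echo intervals timed
T_total = 5.85  # total stopwatch time in seconds (assumed)

period = T_total / N            # time per round trip to the wall and back
speed_of_sound = 2 * d / period
print(f"estimated speed of sound: {speed_of_sound:.1f} m/s")  # ~342 m/s here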
 
  • Like
Likes jedishrfu and Dr. Courtney
  • #7
sophiecentaur
Science Advisor
Gold Member
2020 Award
25,486
5,004
I've got mixed feelings about calling an analysis activity a real "laboratory" if someone else did the experiment and collected the data.
It really worries me that students seem to confuse simulation with reality all the time. It's the Star Trek effect. They ask why their simulation is not giving the answers they expect. It's GIGO without having any way of chasing the fault in the model. A simulation is so much cheaper than hardware, and you don't need lab space nor need to tidy up for the next class. You can see why 'the system' likes to encourage it.
 
  • Like
Likes jedishrfu, zoki85 and Asymptotic
  • #8
Dr. Courtney
Education Advisor
Insights Author
Gold Member
2020 Award
3,295
2,459
It really worries me that students seem to confuse simulation with reality all the time. It's the Star Trek effect. They ask why their simulation is not giving the answers they expect. It's GIGO without having any way of chasing the fault in the model. A simulation is so much cheaper than hardware, and you don't need lab space nor need to tidy up for the next class. You can see why 'the system' likes to encourage it.
I consider downloading real data acquired from a third party as a different (better) class of lab than computer simulations. For example, last year, I had a physical science class download and analyze both Brahe's original data and modern data for testing Kepler's third law. Later, (for a different lab), I had them download available orbital data for earth satellites to test Kepler's third law in that system. I had a physics class analyze Robert Boyle's original data (from his historical publication) to test Boyle's law.
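For readers who want to try the Kepler exercise without tracking down the original data sets, here is a minimal sketch using well-known modern values for the planets (these are not the Brahe or satellite data the class used):
Code:
# Quick check of Kepler's third law, T^2 proportional to a^3, using
# modern textbook values for the planets (semi-major axis in AU, period in years).
planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.203, 11.86),
    "Saturn":  (9.537, 29.46),
}

for name, (a, T) in planets.items():
    print(f"{name:8s}  T^2/a^3 = {T**2 / a**3:.3f}")  # should all be close to 1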

In my view, these labs are not as good as real, hands on experiments where students acquire the data themselves. But they do more accurately represent the scientific method by comparing predictions from proposed models (usually the hypothesis) against _real_ experimental or observational data. There are many historical cases where science really works this way - a model is validated against data acquired by a different party.

In contrast, testing a predictive model or hypothesis against a simulation is not a version of the scientific method that I think we should be teaching in introductory labs. That's not how the scientific method really works, and using simulations for labs runs a significant risk of confusing students about the scientific method itself.
 
  • Like
Likes marcusl, Asymptotic and sophiecentaur
  • #9
403
36
A very cheap way that has good accuracy / consistency is to stand a distance from a large wall and use a hammer to hit a metal object. That much is obvious so far. The clever bit is to strike the metal exactly when you hear the echo, and repeat. You repeat until you are accurately in sync with the echo pulses. Then you measure the time for 10, 20 or more echoes. The accuracy gets better and better with more pulses.
I encountered an equivalent phenomenon several years ago while walking on a local college campus. I passed between a blank wall of a building and a pulsating garden sprinkler. My left ear heard the sprinkler, which produced a psst sound as it spurted about four times a second. My right ear heard the echo off of the building. I was able to position myself so I heard both sounds simultaneously. I realized that I was hearing the direct sound of the nth spurt and the echo of the (n-1)th spurt. Given the period of the sprinkler spurts and the distance from the sprinkler to the wall I could get the speed of sound.

If I could get my students access to that setup, I'd ask them to predict where the sound and echo are heard simultaneously, and design the experiment to test the prediction.
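A rough prediction of where that coincidence should occur is straightforward: the echo of the previous spurt travels an extra path equal to twice the listener's distance from the wall, and that extra path must take exactly one spurt period. A minimal sketch, with an assumed spurt period and a textbook speed of sound:
Code:
# Predicting where the direct nth spurt and the echo of the (n-1)th spurt
# are heard simultaneously (assumed numbers, for illustration only).
v = 343.0  # speed of sound in m/s (room-temperature textbook value)
T = 0.25   # spurt period in seconds ("about four times a second")

# Coincidence requires the extra echo path, 2 * (distance to wall), to
# take exactly one period: 2 * d_wall / v = T.
d_wall = v * T / 2
print(f"stand about {d_wall:.0f} m from the wall")  # roughly 43 m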
 
Last edited:
  • Like
Likes jedishrfu, Swamp Thing and Dr. Courtney
  • #10
fizzy
193
17
Fun experiment. Sure to get the attention of the kids.

A few criticisms of the write up.
When fitting to a trendline in graph.exe, we were sure to check the box to set the vertical intercept to zero, as the hypothesis predicts not only a linear relationship, but also a vertical intercept of zero (a direct proportionality.)
Inductive thinking. It seems that you have 5 DATA points. The origin is not a data point, it is part of the hypothesis you are supposed to be testing.

You are not fitting to a trendline; you are fitting a trendline to the data. The use of the term "trend" is not appropriate either; you are fitting a linear model to the data.

Inspection of Figure 1 shows that the hypothesis was supported.
To a large degree you induced this result. Not good teaching to suggest this "supported" the hypothesis.

If there was a finite intercept from the experiment, this could then be a point of discussion about why it varied from what was expected. It may even be worth trying to induce this.

I find it odd that there is not a single mention of measurement uncertainty: distance, time, the accuracy of determining the exact time of the two events from the noisy sound recording, or how the number of data points affects confidence in the slope.

Statistics of 5 points is not the experimental uncertainty, plus the false data point skews the stats.

No mention of how graph.exe fits the "trendline" (OLS, it seems). No mention of dependent and independent variables, nor of the requirement in using OLS that only the dependent variable has significant experimental error.

Since distance is the controlled variable here, it should be plotted on the x axis, not y, and the least squares fit is not correctly applied as done.

The data here are quite tight and it does not induce a large error. However, where data are more spread out (larger x and y errors) there is what is called regression dilution, and the slope is under-estimated by OLS. This is one reason why there could be a finite intercept when a zero intercept is expected. I have seen a whole room of Maths PhDs spend an afternoon faced with such an issue and not one of them knew where it came from. The slope was visibly wrong but they could not understand why.
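A minimal simulation of that effect (synthetic numbers, not the meteorological data in the attached figure) shows the attenuation directly: when the x variable carries error, OLS of y on x understates the slope, and OLS of x on y, inverted, overstates it.
Code:
# Regression dilution demo: errors in the regressor bias the OLS slope.
import numpy as np

rng = np.random.default_rng(0)
n = 500
true_slope = 2.0
x_true = rng.uniform(0, 10, n)
y = true_slope * x_true + rng.normal(0, 1.0, n)  # error in y
x = x_true + rng.normal(0, 1.5, n)               # error in x as well

slope_y_on_x = np.polyfit(x, y, 1)[0]            # attenuated toward zero
slope_x_on_y = 1.0 / np.polyfit(y, x, 1)[0]      # comes out too high
print(f"true slope       : {true_slope}")
print(f"OLS y on x       : {slope_y_on_x:.2f}")
print(f"1 / (OLS x on y) : {slope_x_on_y:.2f}")  # true slope lies between the two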

I hope these comments can be used to improve the presentation and increase its educational value.
 
Last edited by a moderator:
  • #11
Dale
Mentor
Insights Author
2020 Award
30,852
7,454
@fizzy you can’t teach everything in one lab
 
  • Like
Likes russ_watters
  • #12
fizzy
193
17
No, but you can do things properly, so that attentive students can pick things up correctly, rather than showing them bad ways of doing stuff. There are several things which need correcting here.

This is not time series data. Time is the dependent variable and should be plotted on the y axis.
The text underlines that care was taken to ensure the software was forced to go through the origin. This is totally wrong. It then incorrectly claims that this "supports the hypothesis" that it should go through the origin.

It would also be good practice to publish a table with the experimental data. That would not take much space in this case.

The idea of this experiment is great from an educational point of view. I hope Dr Courtney will be motivated to improve this write-up a bit.
 
Last edited by a moderator:
  • #13
Dale
Mentor
Insights Author
2020 Award
30,852
7,454
Time is the dependent variable and should be plotted on the y axis.
That is purely a convention; in relativity, time is conventionally the independent variable and is plotted on the vertical axis. There is nothing that requires one axis to be dependent and the other independent.

Plotting it this way makes calculating the speed of sound easier, which was the main point of the lab. So setting the dependent variable on the horizontal axis is in fact a better choice for this experiment than following the arbitrary convention.

The text underlines that care was taken to ensure the software was forced to go through the origin. This is totally wrong.
I agree with you on this, but teaching the students why belongs to a statistics class. Same with the fact that regression of x vs y is different than y vs x.
 
Last edited:
  • #14
Dr. Courtney
Education Advisor
Insights Author
Gold Member
2020 Award
3,295
2,459
The notion that the trendline goes through the origin is supported in lots of ways without assuming a direct proportionality between distance and time. I explain it to students this way: the only possible distance any signal can travel in zero time is zero distance. If time permits, when we do this experiment in class, I'll also have the students try a power law fit to the data. This also enforces the physical constraint of going through the origin, but the varying power ends up very close to 1. When physical considerations demand that a mathematical relationship goes through the origin, there is no need to add a variable vertical shift artificially.
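As one simple way to do that power-law check, here is a sketch on synthetic echo-style data (invented numbers, not the class's measurements): fitting path length = a * t^n by least squares on the logs gives an exponent very close to 1 and a coefficient close to the speed of sound.
Code:
# Power-law fit d = a * t^n on synthetic echo data.
import numpy as np

v = 343.0                                        # assumed true speed, m/s
path = np.array([40., 80., 120., 160., 200.])    # extra path lengths 2*d, metres
rng = np.random.default_rng(1)
t = path / v + rng.normal(0, 0.0005, path.size)  # delays with ~0.5 ms noise

# log(path) = log(a) + n * log(t): an ordinary least squares fit in the logs
n_exp, log_a = np.polyfit(np.log(t), np.log(path), 1)
print(f"fitted exponent n = {n_exp:.3f}   (expect close to 1)")
print(f"fitted coefficient a = {np.exp(log_a):.0f}   (expect close to {v:.0f} m/s)")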

This lab is designed for students anywhere from 9th grade Physical Science to 1st year college Physics. It's up to the teacher to adapt the details to the available time given the needs and abilities of the students. One can do a lot more in a 3 hour college Physics lab. The version presented in the Insight article was completed in a single hour with a 9th grade Physical Science class with very weak math skills.
 
Last edited by a moderator:
  • Like
Likes russ_watters and Dale
  • #15
fizzy
193
17
I explain it to students this way: the only possible distance any signal can travel in zero time is zero distance.
Scientific method demands that you conduct an experiment and then compare to theory / hypothesis. You do not start inserting assumptions from your hypothesis into your data and then conclude that this "supports the hypothesis".

Again it is not a "trendline". That term belongs to time series analysis and principally comes from economics, as do spreadsheets. What you have here is a linear model you are trying to fit to the data.

If the aim is to examine the experimental relationship between elapsed time and distance traveled, you should be fitting a two parameter linear model. If your experiment is well designed and there are not any anomalous effects, it should have an intercept very close to zero.

Plotting it this way makes calculating the speed of sound easier, which was the main point of the lab. So setting the dependent variable on the horizontal axis is in fact a better choice for this experiment than following the arbitrary convention.
That convention is not arbitrary. There is very good reason for following that convention if you are going to use standard OLS tools without knowing what you are doing, because they are following that convention too!

It is in no way "better" to invert the axes and then do a totally invalid regression to estimate the principal result of the experiment.

When physical considerations demand that a mathematical relationship goes through the origin, there is no need to add a variable vertical shift artificially.
There is nothing "artificial" about the second parameter; there may be some experimental or physical conditions which produce something a little different from what you expect. You should analyse the data objectively without attempting to force the result you expect. That is the "need". It does not cost anything, and if things go as expected you get a near-zero intercept and say to your students: "this is what we would expect from theory because .... ".
 
Last edited by a moderator:
  • Like
Likes Dale
  • #16
Dr. Courtney
Education Advisor
Insights Author
Gold Member
2020 Award
3,295
2,459
Again it is not a "trendline"...
You can take up your trendline debate with those who make spreadsheets and other graphical and data analysis tools that refer to least squares fitting results as trendlines.
 
Last edited by a moderator:
  • Like
Likes berkeman
  • #17
Dale
Mentor
Insights Author
2020 Award
30,852
7,454
neither is that convention arbitrary.
I disagree. Like all conventions, it is completely arbitrary. There is no non-arbitrary reason to put the dependent variable on the vertical axis. I challenge you to find a non-arbitrary reason for the vertical dependent axis.

standard OLS tools ... are "blindly following" that convention too
I am not familiar with the specific tool used in the write up, but I disagree completely that standard OLS tools use that convention. The standard OLS tools that I have used typically have the variables horizontal and the observations vertical. Often even that can be overridden by the user. I don’t even know how the OLS tools could follow that convention in principle.

Perhaps you mean plotting tools instead of OLS tools, or maybe some specific OLS tools that are embedded into a plotting tool.

If the aim is to examine the experimental relationship between elapsed time and distance traveled, you should be fitting a two parameter linear model. If your experiment is well designed and there are not any anomalous effects, it should have an intercept very close to zero.
I agree with this point. Fitting a model without an intercept term is rarely advisable.
 
Last edited:
  • #18
Dr. Courtney
Education Advisor
Insights Author
Gold Member
2020 Award
3,295
2,459
For most high school science labs, testing a hypothesis is best understood in the sense of Popper's falsifiability. If the experiment and subsequent analysis have a reasonable possibility of refuting the hypothesis and the experiment is done with adequate care, then one can say that the hypothesis is supported if the data agrees with the hypothesis. One need not usually delve into the formal hypothesis testing of statistics to teach most high school science labs. (In some project-based courses, I do explain and show students how to compute uncertainties and p-values, as appropriate for the project and student capabilities.) I also doubt the wisdom of eschewing least squares fitting in high school science labs simply because one does not have time or inclination to delve into formal statistical hypothesis testing.

The question of whether to include a vertical intercept is more interesting. Certainly a strong case can be made that fitting to a single adjustable parameter (the slope) and the resulting r-squared values makes it very reasonable to conclude that the hypothesis is supported. But I suppose support can always be made stronger by showing the direct proportionality works better than other possible models. Several two parameter models are possible: the standard equation of a line, a parabola with zero constant term, and a power law come to mind. I'm not sure why the standard equation of a line would take priority over the other two. I actually taught a similar experiment recently where students measured mass vs. volume (weighing liquid in a graduated cylinder with the electronic balance zeroed with the graduated cylinder in place). Analysis of the residuals of the fit to a line forced through the origin suggested the small residuals were systematically due to widening of the cylinder at the top. Fitting to a quadratic with zero constant term made a lot more sense (as the two parameter model) in that case. But this was pretty far into the weeds relative to the initial hypothesis that mass was proportional to volume. A constant term in this case is just silly.
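To make that residual comparison concrete, here is a sketch with synthetic mass-volume numbers (not the class data) containing a small quadratic term: the line forced through the origin leaves a systematic residual pattern, while the quadratic with zero constant term absorbs it.
Code:
# Residual comparison: line through origin vs. a*V + b*V^2 (no constant term).
import numpy as np

V = np.array([10., 20., 30., 40., 50., 60.])  # volume readings, mL (invented)
m = 1.00 * V + 0.0004 * V**2                  # mass with a small systematic term, g

# One-parameter fit m = a*V through the origin
a1 = np.sum(V * m) / np.sum(V * V)
resid_line = m - a1 * V

# Two-parameter fit m = a*V + b*V^2, still through the origin
A = np.column_stack([V, V**2])
coeffs, *_ = np.linalg.lstsq(A, m, rcond=None)
resid_quad = m - A @ coeffs

print("line through origin residuals     :", np.round(resid_line, 3))
print("quadratic through origin residuals:", np.round(resid_quad, 3))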

But fitting several different models and analyzing residuals are topics that may be introduced to high school students with available time, but certainly are not necessary. By the time you have good experimental data, supporting the hypothesis and in agreement with the known proportionality constant within 1% in a high school science lab, I think you can rest easy and think you did OK. I certainly would have been content with most students arriving in my college physics labs had they been capable of routinely achieving 1% accuracy.
 
Last edited:
  • Like
Likes Dale
  • #19
Dale
Mentor
Insights Author
2020 Award
30,852
7,454
The question of whether to include a vertical intercept is more interesting.
You should pretty much always include it. The only time you can leave it out is when it is actually 0, not just not significantly different from 0, but exactly 0. And in that case then leaving it in is the same as leaving it out, so you should always leave it in.

First, and most importantly, if you remove it then all of your other parameter estimates become biased. The EmDrive fiasco is a great example of this. This bias occurs even if the intercept is not significantly different from zero.

Second, your residuals will no longer be zero mean. This may be related to your observation.

Third, many software implementations change the meaning of the R^2 value they report when the intercept is removed. So the resulting R^2 cannot be meaningfully compared to other R^2 values nor interpreted in the usual fashion.

Fourth, even if your true intercept is zero, if the function is not exactly linear then your fit can be substantially worse than a linear fit with an intercept.

I’m sure there are other reasons, but basically don’t do it. It is never statistically beneficial (since the only time it is appropriate is when it makes no difference) and it can be quite detrimental. If it makes a difference then you need to leave it in for the reasons above, and if it doesn’t make a difference then it doesn’t hurt to leave it in.

Honestly, with your data the above biases and problems should be minuscule. So this data seems to be on the “it doesn’t make a difference” side of the rule. But I would recommend leaving it in for the future. I wouldn’t proactively give any explanation to the students, but just use the default setting.
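A small simulation (invented numbers, not the echo data) of the first point: when the true intercept is not zero, suppressing it biases the fitted slope, while the ordinary two-parameter fit recovers it.
Code:
# Slope bias from dropping the intercept when the true intercept is nonzero.
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(1, 10, 20)
true_slope, true_intercept = 3.0, 2.0
y = true_slope * x + true_intercept + rng.normal(0, 0.2, x.size)

slope_forced = np.sum(x * y) / np.sum(x * x)  # least squares through the origin
slope_free, intercept_free = np.polyfit(x, y, 1)

print(f"true slope                  : {true_slope}")
print(f"slope, intercept suppressed : {slope_forced:.2f}")  # biased high here
print(f"slope, intercept included   : {slope_free:.2f}")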
 
Last edited:
  • #20
fizzy
193
17
Here is some real meteorological data with significant experimental error in both variables. A linear regression was done, first regressing on x and then on y. The two OLS slopes are both invalid because each one ignores the errors in one or other variable. OLS should never be applied to this kind of data in either direction.

It would be possible to construct data where the true slope lies outside this range, but usually the true slope will lie between these two extremes. (The locus of the points was plotted for other reasons; that is not relevant to this discussion.)

As can be seen, this is not some purist pedantic point; it can make an enormous difference to the supposed linear relationship between the two variables.

Even if there is not time to go into the details of the maths, it would seem important to at least mention that it only minimises y residuals, and that the basic criterion for this to work properly is to have very small errors on the x axis variable. It is only under those conditions that it will produce the "best linear unbiased estimate" of the slope.

[Attached figure: scatter plot of the meteorological data with the two OLS regression lines, y on x and x on y]
Last edited by a moderator:
  • Like
Likes Dale
  • #21
fizzy
193
17
Analysis of the residuals of the fit to a line forced through the origin suggested the small residuals were systematically due to widening of the cylinder at the top. Fitting to a quadratic with zero constant term made a lot more sense (as the two parameter model) in that case. But this was pretty far into the weeds relative to the initial hypothesis that mass was proportional to volume. A constant term in this case is just silly.
A constant term is not "silly". If the fit evaluates it near zero, it will not cost anything, and that is valuable information in itself, not "silly". Negative results can be as important as positive ones. Blinkering the analysis by trying to coerce the result is not only silly but unscientific.

A graduated cylinder which is not cylindrical to within the indicated precision seems a little unlikely. It seems far more likely that your spurious attempt to force the fit through zero was leading to an incorrect regression slope which produced increasing residuals at higher volumes. It is hard to say without seeing the data, but it sounds like it did have a finite intercept, and you were in denial about such things, regarding them as "silly".

Did you teach your students how to correctly read the meniscus of the fluid in the measuring cylinder? That could lead to a finite intercept, if you would allow that possibility to be seen. There clearly was some experimental error which needs to be identified. Had you not expressly removed the constant term, it would have given you some information about the problem. You have neatly demonstrated one reason not to bias the regression by excluding parameters.

If you suspected the cylinder was not straight, did you at least measure it to attempt to falsify this hypothesis? Apparently not. Did you substitute another cylinder to test the hypothesis?
 
Last edited by a moderator:
  • #22
Dr. Courtney
Education Advisor
Insights Author
Gold Member
2020 Award
3,295
2,459
You should pretty much always include it. The only time you can leave it out is when it is actually 0, not just not significantly different from 0, but exactly 0. And in that case then leaving it in is the same as leaving it out, so you should always leave it in.

First, and most importantly, if you remove it then all of your other parameter estimates become biased. The EmDrive fiasco is a great example of this. This bias occurs even if the intercept is not significantly different from zero.

Second, your residuals will no longer be zero mean. This may be related to your observation.

Third, many software implementations change the meaning of the R^2 value they report when the intercept is removed. So the resulting R^2 cannot be meaningfully compared to other R^2 values nor interpreted in the usual fashion.

Fourth, even if your true intercept is zero, if the function is not exactly linear then your fit can be substantially worse than a linear fit with an intercept.

I’m sure there are other reasons, but basically don’t do it. It is never statistically beneficial (since the only time it is appropriate is when it makes no difference) and it can be quite detrimental. If it makes a difference then you need to leave it in for the reasons above, and if it doesn’t make a difference then it doesn’t hurt to leave it in.

Honestly, with your data the above biases and problems should be minuscule. So this data seems to be on the “it doesn’t make a difference” side of the rule. But I would recommend leaving it in for the future. I wouldn’t proactively give any explanation to the students, but just use the default setting.
For now I'm not buying it, and I intend to keep teaching students to set the vertical intercept to zero when the basic science of the experiment suggests the model will go through the origin. Here's why:

1. Of all models of the form f(x) = ax^n, why is n=1 so special that it is better modeled as f(x) = c + ax^n? I've never heard an argument or a need to add a constant term when n = 1/2 (for example, fall time as a function of drop height) or when n = 3/2 (for example Kepler's third law) or when the power is unknown (or treated as unknown) or in any other case except for suspected instrument issues where the instrumental measurement may be adding a constant offset to the measurement.

2. I've always been taught and been convinced that models with fewer adjustable parameters are better. In least squares fitting, a perfect fit can usually be achieved by having as many adjustable parameters as data points. Experiments with small numbers of data points require models with smaller numbers of adjustable parameters to better test a hypothesis. In the extreme, one could never support a direct proportionality using two data points fitting to a line with a constant term, but one can support it fitting to a line forced through the origin.

3. "Things should be as simple as possible, but no simpler." - Einstein. Though not an absolute arbiter, I also think it wise to keep Occam's Razor in mind. Other factors being equal, simpler models are usually better. I have nothing against a bit of exploratory data analysis, but regarding direct proportions, I see no compelling case why adding a constant should be the preferred two parameter model rather than adding a quadratic term or trying a power law.

4. Experience. I've been teaching these labs for a long time. My experience is that when the physics screams that the model must go through the origin, agreement will be closer between the best-fit slope and the known good value by fitting to a one parameter model. I have multiple data sets not just showing this in the case of speed of sound measurements, but also mass vs volume measurements and other high school type experiments. Even adding the second parameter in other ways (quadratic term, power law) yields less accurate results for the slope.

5. Physical meaning. I'm a big fan of teaching the physical meaning of both the slope and intercept when fitting to a linear model. Does the vertical intercept have a physical meaning or is it more of a fudge factor to get a better fit? If it seems to me like more of a fudge factor, it is best to skip it. And to me, it seems like a fudge factor for models of the form f(x) = ax^n. Sure, a constant term in a direct proportionality may have the meaning of a systematic offset in the measurement and is something to keep in mind as a possibility. But due care (as zeroing the electronic balance with the empty graduated cylinder on it) can pretty much eliminate it. I'm not keen on teaching students to add fudge factors "just in case."
 
  • #23
Dr. Courtney
Education Advisor
Insights Author
Gold Member
2020 Award
3,295
2,459
Even if there is not time to go into the details of the maths, it would seem important to at least mention that it only minimises y residuals, and that the basic criterion for this to work properly is to have very small errors on the x axis variable. It is only under those conditions that it will produce the "best linear unbiased estimate" of the slope.
This is a potential contradiction with your earlier assertion that the vertical axis should always have the dependent variable. Now you are saying the vertical axis should be the variable with the larger errors. Which is preferred in the case where the independent variable is expected to have the larger errors?

In the echo experiment, errors in the timing are on the order of 0.1% due to the sharpness of the sound leading edges and the accuracy of the clock in the sound card. In contrast, errors in the distance measurement arise from students measuring a distance to a wall with a fabric tape measure. Due care can reduce distance measurement errors to near (or slightly below) 1%, but 0.1-0.2% is unlikely with high school students. So you are now saying that plotting the distance on the vertical axis was the right choice because the errors are larger?
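That question can be put to a quick test with a small simulation (invented numbers): timing treated as exact, distance carrying a roughly percent-level error. Averaged over many synthetic runs, regressing distance on time recovers the true speed, while regressing time on distance and inverting the slope comes out slightly high; with errors this small the difference is tiny, but the direction is consistent.
Code:
# Which regression direction handles error in the distance better?
import numpy as np

rng = np.random.default_rng(3)
v_true = 343.0
d_true = np.array([40., 80., 120., 160., 200.])  # path lengths, metres
t = d_true / v_true                              # delays, treated as exact

v_d_on_t, v_t_on_d = [], []
for _ in range(2000):
    d_meas = d_true + rng.normal(0, 2.0, d_true.size)   # distance error
    v_d_on_t.append(np.polyfit(t, d_meas, 1)[0])        # slope is v directly
    v_t_on_d.append(1.0 / np.polyfit(d_meas, t, 1)[0])  # inverted slope

print(f"mean v, distance on time : {np.mean(v_d_on_t):7.2f} m/s")
print(f"mean v, time on distance : {np.mean(v_t_on_d):7.2f} m/s")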
 
  • Like
Likes Dale
  • #24
A.T.
Science Advisor
11,024
2,491
Does the vertical intercept have a physical meaning or is it more of a fudge factor to get a better fit?
The practical question is simply which slope gives you the better approximation of the speed of sound. I guess it depends on the type of error you have, and the distribution of the samples.
 
  • #25
jbriggs444
Science Advisor
Homework Helper
9,580
4,242
Going back and re-reading the experimental setup, we find: "igniting the firecracker a short distance from a microphone".

So the correct mathematical model would not be a linear fit in the first place. Instead, one has a trig problem -- a triangle with two long sides and a short side between. We want to consider the difference between the sum of the lengths of the two "long" sides and the length of the "short" side in the limit as the height of the triangle approaches zero.

Let us simplify the model by assuming that the firecracker and microphone are arranged perpendicular to the wall so that the triangle is isosceles. Ideally we are interested in the difference in path length as a function of the length of the perpendicular bisector of the "short" side (aka the distance to the wall).

Let "s" denote the length of the "short" side -- the short separation between firecracker and microphone.

Let "h" denote the height of the triangle -- the length of the perpendicular bisector/the distance to the wall.

Let "l" denote the length of one "long" side -- the diagonal distance from firecracker to midpoint on wall.

Let "d" denote the delta between the path lengths.

$$d=2l-s$$
$$l=\sqrt{h^2+\frac{s^2}{4}}$$
$$d(h)=2\sqrt{h^2+\frac{s^2}{4}}\ -\ s$$

Let us see what Excel has to say...
Code:
s h    2h correct      delta
1 0    0  0            0
1 0.5  1  0.414213562 -0.585786438
1 1    2  1.236067977 -0.763932023
1 2    4  3.123105626 -0.876894374
1 3    6  5.08276253  -0.91723747
1 4    8  7.062257748 -0.937742252
1 5    10 9.049875621 -0.950124379
1 6    12 11.04159458 -0.958405421
1 7    14 13.03566885 -0.964331152
1 8    16 15.03121954 -0.968780458
1 9    18 17.02775638 -0.972243623
1 10   20 19.02498439 -0.975015605
1 11   22 21.02271555 -0.977284454
1 12   24 23.0208243  -0.979175701
1 13   26 25.01922366 -0.980776337
1 14   28 27.01785145 -0.982148548
It looks like a correct linear fit will have a non-zero intercept.

Edit: Alternately, one could re-scale the independent variable h to reflect the computed path length difference.

Edit again: Ran a linear regression for a data set with s=1, h=0 through h=50, plus h=0.5. With the intercept nailed at zero, the result was y = 1.94h. With the intercept floating, the result was y = 1.99h - 0.74. The asymptotically correct result would of course be y = 2.00h - 1.

This turned out to be a fun mathematical exercise. I learned how to do linear regression with Excel.

We had an ironclad argument that the path length difference is zero at zero distance to wall. That argument was correct. We had an expectation that the path length difference increases linearly with distance to wall as long as firecracker to microphone separation is small. That expectation was correct as well. But it turned out that the linear relationship for large distances, when projected back to the y axis nonetheless has a non-zero y intercept.
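For anyone who prefers to reproduce the exercise outside Excel, here is a sketch of the same computation in Python: the exact path-length difference on the same synthetic grid (s = 1, h = 0.5 and h = 0 through 50), fit once through the origin and once with a free intercept.
Code:
# Path-length difference geometry and the two linear fits.
import numpy as np

s = 1.0
h = np.concatenate(([0.5], np.arange(0.0, 51.0)))
d = 2 * np.sqrt(h**2 + s**2 / 4) - s          # exact path-length difference

slope_origin = np.sum(h * d) / np.sum(h * h)  # intercept nailed at zero
slope_free, intercept_free = np.polyfit(h, d, 1)

print(f"through origin : d = {slope_origin:.2f} h")
print(f"free intercept : d = {slope_free:.2f} h + {intercept_free:.2f}")
# For large h the exact relation approaches d = 2h - s, i.e. slope 2, intercept -1.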
 
Last edited:
