Why is saxophone growling produced by modulation of the sound waves?

AI Thread Summary
Saxophone growling occurs when a musician sings a note while playing, creating a subharmonic that results from amplitude modulation of the sound waves. This effect is linked to the non-linear behavior of the reed, which can produce additional harmonics when driven beyond its normal mechanical limits. The interaction between the vocal sound and the saxophone's fundamental frequency leads to complex sound wave mixing, which can be observed in spectrogram analysis. The discussion highlights that non-linear processes are essential for this mixing to occur, and variations in playing dynamics can influence the presence of the subharmonic. Understanding these principles can enhance the technique of growling on woodwind instruments.
  • #51
Baluncore said:
I disagree, it is a difference frequency if it comes from two inputs.
But don't we have two inputs here? The clarinet (carrier) and the singing tone (modulator)?

Or maybe I'm misreading your comment - are you agreeing that it is a difference frequency, but objecting to calling it a sub-harmonic? I can see that distinction, because as soon as I shift the frequency of either by a bit, it would no longer appear (to a musician) to be a sub-harmonic (it would not be harmonically related).


Daniel Petka said:
I hear what you are saying, you think that the amplitude modulation should work for any wind instrument, not just one that uses a vibrating membrane.
To find the subharmonic on a flute, I used an external sine generator, because singing might give you a false signal - the subharmonic may come from the vocal cords! I turned the volume way up and tried many instruments that are similar to a flute (glass bottle, wine glass, etc. - essentially Helmholtz resonators), but failed to find the subharmonic. ...
We'd need more details of your setup, but I suspect that it was not replicating the effect of the human voice well enough. I don't really understand how the signal generator was being used - you "turned the volume up" - what was the generator connected to, and how?

I think you need the signal amplified, and run to a speaker that is modulating the flow of air to the wind instrument. Just pointing a speaker at it (if that is what you did), might not have the desired effect. I'm thinking something along the lines of a speaker mounted to a closed box, with a controlled source of air running in and out of the box through hoses, to the wind instrument. This, I think, would provide a stream of air that has those 600 Hz (using my earlier examples) variations in pressure, but always positive, so as to keep the air column oscillating.

I'd love to set this up and test it myself, but I have a few other things going on at the moment.
 
  • #52
NTL2009 said:
Or maybe I'm misreading your comment - are you agreeing that it is a difference frequency, but objecting to calling it a sub-harmonic?
I am insisting that it is a difference frequency, and that in a discussion of the physics, it should NOT be called a sub-harmonic.

One input can generate harmonic or sub-harmonic frequencies.
Two inputs can generate sum or difference frequencies.

Daniel Petka said:
You are not wrong, but to a listener, the phase doesn't matter. So a musician would just call it a subharmonic.
This is a physics discussion. The use of the correct physical term avoids confusion, and makes it possible to identify the mechanism of generation.

Daniel Petka said:
It just happens to be a subharmonic whenever the frequencies form a nice ratio (like 3/2, 4/3, and so on)
It is a difference frequency that may have the same frequency as a sub-harmonic of one of the inputs.

Take a 1 kHz master oscillator and distort the output with a diode, then use two resonators to select the 2 kHz and 3 kHz harmonics generated. If you then multiply the 2 kHz and 3 kHz harmonics together, you will get a 1 kHz difference frequency that happens to be phase-locked to the inputs. But that is not a sub-harmonic; it is a difference frequency, and it is exactly equal to the original master oscillator fundamental. To generate a sub-harmonic of that fundamental takes more than amplitude distortion of the signal; it requires energy or information storage.
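If you want to check that numerically, here is a rough Python sketch (the 48 kHz sample rate and the peak threshold are arbitrary choices): multiply the selected 2 kHz and 3 kHz tones and look at the spectrum - the energy lands at the 1 kHz difference and the 5 kHz sum, not at any new sub-harmonic.

```python
import numpy as np

fs = 48_000                        # sample rate (Hz), arbitrary choice
t = np.arange(fs) / fs             # one second of samples
h2 = np.cos(2 * np.pi * 2000 * t)  # the selected 2 kHz harmonic
h3 = np.cos(2 * np.pi * 3000 * t)  # the selected 3 kHz harmonic

product = h2 * h3                  # multiplicative (non-linear) mixing

spectrum = np.abs(np.fft.rfft(product))
freqs = np.fft.rfftfreq(len(product), 1 / fs)
peaks = freqs[spectrum > 0.1 * spectrum.max()]
print(peaks)                       # expect peaks at 1000 Hz (difference) and 5000 Hz (sum)
```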

As an example, digital processors generate EMI across a wide frequency band. The frequencies below the processor clock are sub-harmonics, since any software loop, or digital frequency divider, is a sub-harmonic generator. Those sub-harmonics are rectangular digital waveforms, which have harmonics ranging upwards, well above the processor clock, where they fall in the spectrum between the integer harmonics of the processor clock.
 
  • #53
Baluncore said:
I am insisting that it is a difference frequency, and that in a discussion of the physics, it should NOT be called a sub-harmonic. ...
OK, thanks, we are in agreement on that now. It is a difference frequency that, in some specific cases, can be harmonically related.
 
  • #54
NTL2009 said:
But don't we have two inputs here? The clarinet (carrier) and the singing tone (modulator)?

Or maybe I'm misreading your comment - are you agreeing that it is a difference frequency, but objecting to calling it a sub-harmonic? I can see that distinction, because as soon as I shift the frequency of either by a bit, it would no longer appear (to a musician) to be a sub-harmonic (it would not be harmonically related).



We'd need more details of your setup, but I suspect that it was not replicating the effect of the human voice well enough. I don't really understand how the signal generator was being used - you "turned the volume up" - what was the generator connected to, and how?

I think you need the signal amplified, and run to a speaker that is modulating the flow of air to the wind instrument. Just pointing a speaker at it (if that is what you did), might not have the desired effect. I'm thinking something along the lines of a speaker mounted to a closed box, with a controlled source of air running in and out of the box through hoses, to the wind instrument. This, I think, would provide a stream of air that has those 600 Hz (using my earlier examples) variations in pressure, but always positive, so as to keep the air column oscillating.

I'd love to set this up and test it myself, but I have a few other things going on at the moment.
It's not as complicated as you think; the signal generator is just my phone playing a sine wave from an online tone generator. That's the whole setup: my mouth and my phone near it.

Edit: once again, this setup doesn't include the clarinet. This is what makes it reproducible. While only a few people have a clarinet, all people have vocal cords and a phone.
 
  • #55
Daniel Petka said:
It's not as complicated as you think; the signal generator is just my phone playing a sine wave from an online tone generator. That's the whole setup: my mouth and my phone near it.

Edit: once again, this setup doesn't include the clarinet. This is what makes it reproducible. While only a few people have a clarinet, all people have vocal cords and a phone.
OK, thanks - I got mixed up between the earlier statements about not being able to produce the difference frequency with a flute.

It's not clear to me how a voice near a speaker would end up amplitude modulating that speaker tone - wouldn't they simply add? It's different from the voice being a source of wind for an instrument, where it would affect the amplitude. It's also possible (likely?) that a significant non-linearity exists in the phone's microphone and/or signal chain?
 
  • #56
No, the voice doesn't affect the speaker. That's the joke: the speaker affects the voice and not the other way around. You can call it "nonlinearity", but all it is, is multiplication of the two sound waves.
The vocal cords are like the reed.
 
  • #57
Daniel Petka said:
No, the voice doesn't affect the speaker. That's the joke: the speaker affects the voice and not the other way around. ...
That seems like a theory. I'm not discounting it, it may be correct, but do we really know that's what's happening?

I hope I can get to setting up my own experiment as I described above. I have a soprano and a tenor recorder, an air compressor and amps and speakers, and a workshop to build a box as I described. I suspect modulating the air stream would be enough, that the vocal cords/reeds would not be needed, but I don't know. Or maybe they are acting the same, and the speaker is modulating our non-linear vocal cords?
Daniel Petka said:
... You can call it "nonlinearity", but all it is, is multiplication of the two sound waves.
The vocal cords are like the reed.
Well, my understanding is there must be a non-linearity (and they don't care what/if I call it anything! :) ). In a linear system, the signals add. With non-linearity, there are harmonics and/or sum/difference frequencies.
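To make that concrete with the simplest possible stand-in (a quadratic non-linearity, purely for illustration - not a claim about the actual vocal-tract mechanism): two tones through a linear system just add, while a squared term produces harmonics plus the sum and difference components.

```latex
% Linear system: the two tones simply add, no new frequencies appear
x(t) = \cos(2\pi f_1 t) + \cos(2\pi f_2 t)

% A quadratic term in the response creates harmonics plus sum/difference tones
x(t)^2 = 1 + \tfrac{1}{2}\cos(2\pi\,2f_1 t) + \tfrac{1}{2}\cos(2\pi\,2f_2 t)
           + \cos\!\big(2\pi(f_1+f_2)t\big) + \cos\!\big(2\pi(f_1-f_2)t\big)
```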
 
  • #58
NTL2009 said:
That seems like a theory. I'm not discounting it, it may be correct, but do we really know that's what's happening?

I hope I can get to setting up my own experiment as I described above. I have a soprano and a tenor recorder, an air compressor and amps and speakers, and a workshop to build a box as I described. I suspect modulating the air stream would be enough, that the vocal cords/reeds would not be needed, but I don't know. Or maybe they are acting the same, and the speaker is modulating our non-linear vocal cords?

Well, my understanding is there must be a non-linearity (and they don't care what/if I call it anything! :) ). In a linear system, the signals add. With non-linearity, there are harmonics and/or sum/difference frequencies.
Yes, you are right of course; I just like to use the more precise term whenever possible.
Sure, it is a theory. I'm not gonna repeat myself; this is why I think it happens:
Daniel Petka said:
Ok I might have an explanation for all of this. This comes a bit late because I focused on Uni instead of this peculiar effect.

I tried another experiment: sending a single frequency into my mouth cavity while singing a note (Don't try this at home unless no-one is around lol)
And guess what, this still creates a subharmonic. Doing this with a flute doesn't.
Which leads me to believe that the reason why you hear a subharmonic when singing while playing a flute is that the sound from the flute gets into your vocal cords.
The flute is just a resonator so it's not surprising that a single frequency that's not resonant doesn't do much.
The reed is different.
The nonlinearity or the switching of a diode can be eliminated by singing quietly and putting the sine wave into the vocal cords.
The most simple explanation that I could come up with is that higher air velocity => higher amplitude. Sinusoidally varying air velocity (sound) => sinusoidally varying amplitude, multiplication of the 2 sinusoids.
I tried to simulate this in Python by multiplying a recorded clarinet .wav file by this DC air and AC perturbation, basically
Growl(t) = clarinet(t)*(1+cos(wt))
Where w is the angular frequency one fifth above the clarinet's fundamental. The resulting sound is very close to the real thing.
Edit: I just did it with a subwoofer playing at 800Hz while not even singing directly at it. I doubt that the voice had any effect on the sub
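For anyone who wants to try the same thing, here is a minimal Python version of that multiplication (the file name "clarinet.wav", the 440 Hz fundamental and the exact 3/2 ratio are just placeholders - use whatever note you actually recorded):

```python
import numpy as np
from scipy.io import wavfile

# Load a recorded clarinet note; "clarinet.wav" is a placeholder file name
fs, clarinet = wavfile.read("clarinet.wav")
if clarinet.ndim > 1:                 # keep one channel if the file is stereo
    clarinet = clarinet[:, 0]
clarinet = clarinet.astype(np.float64)

f0 = 440.0                            # assumed clarinet fundamental (Hz)
fm = 1.5 * f0                         # "singing" tone a fifth above the fundamental
t = np.arange(len(clarinet)) / fs

# Growl(t) = clarinet(t) * (1 + cos(w t)): amplitude modulation by the voice tone
growl = clarinet * (1.0 + np.cos(2 * np.pi * fm * t))

# Normalise and write out a 16-bit file for listening
growl /= np.max(np.abs(growl))
wavfile.write("growl.wav", fs, (growl * 32767).astype(np.int16))
```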
 
  • #59
NTL2009 said:
Amplitude modulation
The appropriate term is not Amplitude Modulation; AM is a very specific non-linear process. Every instrument / player combination will have a unique non-linear characteristic with unique spectral products. If you don't have an accurate model then you can say very little about the resulting sounds. You can't even talk about harmonics in describing instruments or vocal systems.

This is even harder in the case of an instrumentalist with an instrument. There is feedback with the person's perception which can easily 'pull' what they do with their vocal system and cause misconceptions of what they hear / feel. The experimenter becomes an unreliable witness. You'd have to invent some sort of double blind test to eliminate this problem.

This doesn't matter but I'm just pointing out that conversations about this sort of topic are unlikely to reach safe conclusions. You would need to build a hardware model - good luck with that.
 
  • #60
sophiecentaur said:
The appropriate term is not Amplitude Modulation; AM is a very specific non-linear process. Every instrument / player combination will have a unique non-linear characteristic with unique spectral products. If you don't have an accurate model then you can say very little about the resulting sounds. You can't even talk about harmonics in describing instruments or vocal systems.

This is even harder in the case of an instrumentalist with an instrument. There is feedback with the person's perception which can easily 'pull' what they do with their vocal system and cause misconceptions of what they hear / feel. The experimenter becomes an unreliable witness. You'd have to invent some sort of double blind test to eliminate this problem.

This doesn't matter but I'm just pointing out that conversations about this sort of topic are unlikely to reach safe conclusions. You would need to build a hardware model - good luck with that.
[Attached image]

Very few things in science reach safe conclusions, but I would say it's safe to call this AM.
I don't quite see why perception matters here - the witness is my phone, not my ear.
 
  • #61
Daniel Petka said:
View attachment 350717
Very few things in science reach safe conclusions, but I would say it's safe to call this AM.
I don't quite see why perception matters here - the witness is my phone, not my ear.
It's true to say that the amplitude varies but are you sure that the envelope shape is identical to (one of) the inputs? Is there no distortion? If you want to call it AM then that would be a 'happens to look like AM in this case'. A lot of the effects you get from interaction between the instrument and the player are not just simple AM. What is your basis for saying it is? What you are getting is intermodulation so why not call it that? Covers all bases and avoids inconsistencies further down the line.

Also, I read the thread in terms of the effect on the player as much as on the listener. There is a lot of subjectivity here.
 
  • #62
sophiecentaur said:
It's true to say that the amplitude varies but are you sure that the envelope shape is identical to (one of) the inputs? Is there no distortion? If you want to call it AM then that would be a 'happens to look like AM in this case'. A lot of the effects you get from interaction between the instrument and the player are not just simple AM. What is your basis for saying it is? What you are getting is intermodulation so why not call it that? Covers all bases and avoids inconsistencies further down the line.

Also, I read the thread in terms of the effect on the player as much as on the listener. There is a lot of subjectivity here.
Yes, the envelope has the same fundamental frequency as one of the inputs. The other frequencies are contained in the spectrogram I posted. The (falsetto) voice does have a strong first harmonic that generates its own sum and difference frequencies. This generation of intermodulation products is inevitably linked to amplitude modulation via the product rule.
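Written out, the product rule I mean is just the standard product-to-sum identity - an amplitude-modulated carrier is the same thing as the carrier plus one sum and one difference component:

```latex
\big(1 + m\cos(\omega_m t)\big)\cos(\omega_c t)
  = \cos(\omega_c t)
  + \tfrac{m}{2}\cos\!\big((\omega_c-\omega_m)t\big)
  + \tfrac{m}{2}\cos\!\big((\omega_c+\omega_m)t\big)
```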

"Intermodulation (IM) or intermodulation distortion (IMD) is the amplitude modulation of signals containing two or more different frequencies, caused by nonlinearities or time variance in a system. " source wikipedia/intermodulation

The nonlinearity that causes it is unknown. Above I speculated that it's due to the Bernoulli principle, and since I'm unlikely to build a machine, I'll probably leave it there.
 
  • #63
Daniel Petka said:
"Intermodulation (IM) or intermodulation distortion (IMD) is the amplitude modulation of signals containing two or more different frequencies, caused by nonlinearities or time variance in a system. " source wikipedia/intermodulation
That's only half a definition but could be applied in some cases. This link is a bit more informative and the sketch diagram shows that we have much more than a carrier plus a pair of sidebands involved.
 
  • #64
Baluncore said:
This is a physics discussion. The use of the correct physical term avoids confusion, and makes it possible to identify the mechanism of generation.
Yes. There is quite a lot of confusion here because the term "subharmonic singing" is a very non-physics term; singers know what they mean, but I can't see anything involving 'harmonics' in such singing. The intermodulation in the vocal cords when there is some forced excitation produces some very low notes.

I am not sure how appropriate it is to call a 'lower sideband' resulting from a non-linear process a 'subharmonic'. There is no multiple of a frequency of the source signals involved. Is this just a matter of usage?
 
  • #65
sophiecentaur said:
Yes. There is quite a lot of confusion here because the term "subharmonic singing" is a very non-physics term; singers know what they mean, but I can't see anything involving 'harmonics' in such singing. The intermodulation in the vocal cords when there is some forced excitation produces some very low notes.

I am not sure how appropriate it is to call a 'lower sideband' resulting from a non-linear process a 'subharmonic'. There is no multiple of a frequency of the source signals involved. Is this just a matter of usage?
Harmonic = multiple of some frequency.
Subharmonic = fraction of some frequency.

For example the first subharmonic is one half of the fundamental frequency.
And I totally get your confusion, because then it in fact becomes the new fundamental frequency with new harmonics (those actually matter; the actual low bass notes are barely audible).
 
  • #66
sophiecentaur said:
Yes. There is quite a lot of confusion here because the term "subharmonic singing" is a very non-physics term; singers know what they mean, but I can't see anything involving 'harmonics' in such singing. The intermodulation in the vocal cords when there is some forced excitation produces some very low notes.

I am not sure how appropriate it is to call a 'lower sideband' resulting from a non-linear process a 'subharmonic'. There is no multiple of a frequency of the source signals involved. Is this just a matter of usage?

sophiecentaur said:
That's only half a definition but could be applied in some cases. This link is a bit more informative and the sketch diagram shows that we have much more than a carrier plus a pair of sidebands involved.
Understood, thanks. So basically it's hard to tell whether a higher-order intermodulation product is there.
 
  • #67
Daniel Petka said:
For example the first subharmonic is one half of the fundamental frequency.
You need to be more careful referring to harmonics as first, second or third.

If the fundamental is f, then the second harmonic is 2f and is an even harmonic, while the third harmonic is 3f and is an odd harmonic. That means the first harmonic is the fundamental f.

Squaring a sinewave doubles the frequency, and is called "second harmonic generation".

The second sub-harmonic is f/2, and the third is f/3. That means the first sub-harmonic is the fundamental f.
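Written out, the squaring statement is just the standard identity:

```latex
\cos^2(\omega t) = \tfrac{1}{2} + \tfrac{1}{2}\cos(2\omega t)
```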
 
  • #68
Baluncore said:
You need to be more careful referring to harmonics as first, second or third.

If the fundamental is f, then the second harmonic is 2f and is an even harmonic, while the third harmonic is 3f and is an odd harmonic. That means the first harmonic is the fundamental f.

Squaring a sinewave doubles the frequency, and is called "second harmonic generation".

The second sub-harmonic is f/2, and the third is f/3. That means the first sub-harmonic is the fundamental f.
This logic is flawed because then the first harmonic would be the fundamental. We agree that only 2f, 3f, 4f... are the harmonics (multiples of a fundamental), so 2f is the first harmonic, not 3f. But I agree that 2f = second harmonic is easier to hold in your head. The word harmonic in this case is like a synonym for frequency. I mean, some folks count them like this, other folks count them like that; I don't think it's reasonable to argue about how something should be called.
 
  • #69
It is the fundamental.

https://en.wikipedia.org/wiki/Harmonic#Terminology
"But more precisely, the term "harmonic" includes all pitches in a harmonic series (including the fundamental frequency) while the term "overtone" only includes pitches above the fundamental."
 
  • #70
Daniel Petka said:
And I totally get your confusion, because then it in fact becomes the new fundamental frequency with new harmonics (those actually matter; the actual low bass notes are barely audible).
I'm glad you get it.
So, if the two intermodulating signals have (say) frequencies which are prime numbers, then the resultant lowest frequency product would be the 'what-th' subharmonic? 19/31th, for instance? This hole just gets deeper and deeper - and all for the sake of insisting we give it a certain name, other than 'lowest product'.
 
  • Like
Likes Daniel Petka
  • #71
Baluncore said:
It is the fundamental.

https://en.wikipedia.org/wiki/Harmonic#Terminology
"But more precisely, the term "harmonic" includes all pitches in a harmonic series (including the fundamental frequency) while the term "overtone" only includes pitches above the fundamental."
Ok fair enough, turns out that most musicians use harmonics and overtones interchangeably. Yeah, it's definitely cleaner to separate them like this.
 
  • Like
Likes sophiecentaur
  • #72
Daniel Petka said:
Ok fair enough, turns out that most musicians use harmonics and overtones interchangeably.
The two numbers differ by a vital one, so never trust a musician. (That can't apply to real musicians, of course. In the same way, sloppy non-engineer talk is used by non-engineers.)
 
  • #73
Baluncore said:
You need to be more careful referring to harmonics as first, second or third.

If the fundamental is f, then the second harmonic is 2f and is an even harmonic, while the third harmonic is 3f and is an odd harmonic. That means the first harmonic is the fundamental f. ...
Yes, it is confusing between physics/engineering and musicians. Musicians sometimes refer to harmonics as "overtones", where the "first overtone" is the "second harmonic". Harmonic numbers are multiples of the fundamental (the first harmonic), but overtones are counted as starting at the first tone above the fundamental. Geez, now I just found some references (wiki!) that refer to the same numbering for each - that's new to me.

Here's a short video (search the name for more) of a singer producing a very low note, apparently by creating a difference frequency from different areas of his vocal cords vibrating at different frequencies. And yes, they refer to them as 'sub-harmonics', but they are musicians. It looks like there is some follow-up with scientists; I'll try checking those later to see if there are technical explanations.

 
  • #74
Daniel Petka said:
This logic is flawed because then the first harmonic would be the fundamental. We agree that only 2f, 3f, 4f... are the harmonics (multiples of a fundamental), so 2f is the first harmonic, not 3f. But I agree that 2f = second harmonic is easier to hold in your head. The word harmonic in this case is like a synonym for frequency. I mean, some folks count them like this, other folks count them like that; I don't think it's reasonable to argue about how something should be called.
No - as pointed out, the first harmonic IS the fundamental. And 2f is the SECOND harmonic, but musicians call it the FIRST overtone. The physics way is clean - everything is a multiple.

The music way is very, very messy for these conversations - if I play a square wave on my synthesizer (and you should know that a square wave has only odd harmonics), guess what the 'proper' musical way to count these is? Well, 3f, being the very first frequency above the fundamental in a square wave, is called the FIRST overtone. Because that's what it is! That's the definition of overtones. And the FIFTH harmonic is the SECOND overtone! See how confusing it gets (though it can be useful in musical terms, but keep it there)?
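For reference, the odd-harmonics claim comes straight from the Fourier series of an ideal square wave:

```latex
\mathrm{square}(t) = \frac{4}{\pi}\Big(\sin(\omega t) + \tfrac{1}{3}\sin(3\omega t)
                   + \tfrac{1}{5}\sin(5\omega t) + \tfrac{1}{7}\sin(7\omega t) + \cdots\Big)
```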

So yes, we should not 'argue' about how something is called - we are having a technical discussion, use the correct technical terms, please!
 
  • Like
Likes sophiecentaur, Daniel Petka and Baluncore
  • #75
Just stumbled across this thread again. It sounds an awful lot like arguing
"How many Angels can dance on the head of a pin?" :oldconfused:
 
  • #76
NTL2009 said:
Musicians sometimes refer to harmonics as "overtones", where the "first overtone" is the "second harmonic".
Musical instruments, not being ideal systems, tend to produce overtones which relate to modes of oscillation. Actual harmonics are in a minority. So the musicians are correct in their context, and maybe the rest of us should stop feeling righteous in our terminology.
 
  • #77
sophiecentaur said:
Musical instruments, not being ideal systems, tend to produce overtones which relate to modes of oscillation. Actual harmonics are in a minority.
I can see how that may be true for percussion, but what then drives those non-harmonic overtones in other instruments?
 
  • #78
Their structure. Even the much-quoted guitar has a non-harmonic spectrum; then there’s the trumpet.
This is even more relevant in the attack phase.
 
  • #79
sophiecentaur said:
This is even more relevant in the attack phase.
The attack mode is percussion. It is excitation by a step function.

But how relevant is this to the generation of undertones, when two signals meet in the vocal cords?
 
  • #80
The trumpet is harmonic, but the individual harmonic amplitude is shaped by the transfer function of the tube and the bell.
http://newt.phys.unsw.edu.au/jw/brassacoustics.html#spectrum
 
  • #81
sophiecentaur said:
Musical instruments, not being ideal systems, tend to produce overtones which relate to modes of oscillation. Actual harmonics are in a minority. So the musicians are correct in their context, and maybe the rest of us should stop feeling righteous in our terminology.
I'm not sure where this "feeling righteous" comment comes from. I think I acknowledged that the musical approach of thinking in terms of overtones can be useful for certain discussions.

But, as I understand it, the OP was looking for the physics behind what produced the lower frequencies. So I think it best to stick to the proper physics terms for that part of the discussion, and avoid confusion between the two approaches.
 
  • Like
Likes sophiecentaur
  • #82
Baluncore said:
The trumpet is harmonic, but the individual harmonic amplitude is shaped by the transfer function of the tube and the bell.
http://newt.phys.unsw.edu.au/jw/brassacoustics.html#spectrum
Did you ever play a trumpet? The bell / conical bore help to get close to harmonic overtones but to play in tune you need to bend / pull the higher notes.
The modes of a real string are affected by the mass of the bridge etc and the shape of the nut and bridge slots. Playing ‘harmonics’ by partial stopping at a point on the string involves choosing a different position than where the fret sits.
My evidence for all my assertions is that Hammond type organs never sound like the name on the tabs. They sound like a Hammond Organ.
If instruments sounded like the model you suggest, musical ensembles would sound boring.
 
  • #83
sophiecentaur said:
...
My evidence for all my assertions is that Hammond type organs never sound like the name on the tabs. They sound like a Hammond Organ.
If instruments sounded like the model you suggest, musical ensembles would sound boring.
I don't think he's saying harmonic content is the whole story to how we identify an instrument. The Hammond organ produces a static harmonic content (with the exception of the "Percussion" tab, which adds a quickly decaying tone). The harmonic content of an acoustic instrument varies over time, and with playing technique.

I agree that, in practice, the harmonics of many (most?) instruments are not exact whole multiples. Guitar strings have a stiffness that moves the effective 'end' of the string further in from where the string is physically stopped. And that stiffness affects higher frequencies more - 'compensated' saddles are a means to, ah, em, 'compensate' for this. The higher/stiffer strings are shortened a bit, relative to the low-frequency strings. You can see this on just about any guitar, at the 'saddle/bridge' end.
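A commonly quoted approximation for this stiff-string effect (B is a small inharmonicity coefficient set by the string's stiffness, length and tension; the exact value depends on the string) is that each partial gets pushed progressively sharp:

```latex
f_n \approx n f_0 \sqrt{1 + B n^2}, \qquad 0 < B \ll 1
```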

But I think that in most cases, these slight variations from exact harmonics are just accepted, and we still call the harmonic by the whole number that it is approximately equal to. Probably in the same way that, when we talk about voltage conversion in a transformer, we don't always add a caveat regarding losses or slight differences in the winding ratio unless that is the focus of the discussion. We just mostly say a 2:1 winding ratio provides a 2:1 voltage ratio.
 
  • #84
The origin of growling will not be identified by a discussion of what makes music sound good. It is clouding the physical issue.

NTL2009 said:
But I think that in most cases, these slight variations from exact harmonics are just accepted, and we still call the harmonic by the whole number that it is approximately equal to.
Musical overtones may have non-integer ratios, but the physical harmonics of a distorted sinewave cannot; they must have integer ratios.

Two signals, being mixed together in the non-linear vocal cords, will transfer energy to the real sum and difference frequencies, with numerically correct frequencies.
 
  • #85
NTL2009 said:
But I think that in most cases, these slight variations from exact harmonics are just accepted,
You seem to be implying that the 'slight variations' are not a good thing. I would say that they are what distinguishes a 'good' from a 'poor' instrument. I would say that they're highly relevant, and making a good violin, for instance, involves being very aware of the non-harmonic nature of the sound it produces. So trying to characterise an instrument using just the term 'harmonic' is rather pointless.

A point that seems to have been missed in this discussion is that most of the passive filtering (wind instrument tubing, string instrument bodies, and even the vocal cords when they are not being excited in their relaxation-oscillator mode) is low Q and will not select within a narrow band of any exciting waveform. A waveform full of strange overtones will not be filtered hard enough to be left with a fundamental, so the spectrum will be more or less maintained. We certainly do not need to talk in terms of somehow changing overtones into pukka harmonics; it can't be done.
 
  • #86
Baluncore said:
Musical overtones may have non-integer ratios, but the physical harmonics of a distorted sinewave cannot; they must have integer ratios.
That assumes the waveform is a distorted sinewave. If you look at the trace (over time) of most single note sounds (from a real instrument), the high frequency parts do not remain stationary relative to the fundamental; they march along, showing that they are at non-harmonic frequencies. So it is definitely not a simple "distorted sinewave".
The same goes for vocal sounds.
 
  • #87
sophiecentaur said:
That assumes the waveform is a distorted sinewave.
Yes, that was the point I was making, but I am interested in clarifying the issue, not clouding the issue.

I want to identify the mechanism where non-linear mixing transfers real energy to the difference frequency.
 
  • #88
Baluncore said:
Yes, that was the point I was making, but I am interested in clarifying the issue, not clouding the issue.

I want to identify the mechanism where non-linear mixing transfers real energy to the difference frequency.
Ok, you don’t want to cloud the issue. So use the term ‘waveform’ and don’t make approximations using a simple continuous waveform. If you restrict your analysis to a continuous waveform repeating at the fundamental rate, you really can’t be sure of the results. Inappropriate windowing is responsible for many mistakes.

In the case of ‘growling’ you don’t have simple mixing of tones in a separate passive filter/mixer. Everything interacts within the same system. When do overtones become straightforward harmonics? Can you say how?
 
  • #89
sophiecentaur said:
When do overtones become straightforward harmonics? Can you say how?
If by "straightforward harmonics", you actually mean integer harmonics, as in the Fourier representation of a regular waveform, then it will not happen.

That does not preclude one of the many tones from mixing with another tone to produce a numerically correct difference frequency. We can ignore all your rogue overtones, and focus only on the sine waves that are present in the analysis and which generate the difference frequency that is the growl.

This thread needs to get back on topic. Would you prefer it was locked?
 
  • #90
If the topic could be discussed using the correct (Physics) terms without confusing shorthand, I’d love to read posts on the topic.

The words ‘sub-harmonic’ and ‘harmonic’ should have been discarded very early in the thread, however, unless someone can seriously justify them (more than to say they are being used very vaguely).

You don’t need to threaten the nuclear option.
 
  • #91
Baluncore said:
integer harmonics,
An interesting term and brought in here to maintain the illusion of correctness, maybe. It could make an interesting (?) topic for another post.
 
  • #92
sophiecentaur said:
NTL2009 said:
But I think that in most cases, these slight variations from exact harmonics are just accepted,
You seem to be implying that the 'slight variations' are not a good thing. I would say that they are what distinguishes a 'good' from a 'poor' instrument. ...

A point that seems to have been missed in this discussion ...
I'm not trying to make any statement at all (in this thread) about what is 'good' or 'bad' (though I do agree with you in the musical realm).

Maybe I'm misreading this whole thread, but I thought OP was just looking for the physics behind what produces a tone that is lower in frequency than either the clarinet tone or the singing tone? Good or bad aren't a part of that.
 
  • #93
NTL2009 said:
Good or bad aren't a part of that.
Of course not. My point was that the model in the attempted explanation is flawed if you assume exact harmonics, and that you can easily hear that departure in the difference between the sound of real instruments and simple 'synthesised' sounds.

There should be serious caveats attached when inappropriate terms are used in an explanation. Approximations can be relevant and need to be justified properly - or at least mentioned.
 