
Modulation vs. Beating Confusion


A long time ago I read a paper in the IEEE Proceedings recounting the history of the superheterodyne receiver. Overall it was a very interesting and informative article, with one exception: in it the author remarked that the modulation (or mixing) principle was really nothing new, being already known to piano tuners who traditionally used a tuning fork to beat against the piano string's vibrations. I did a double-take on this assertion and wrote the IEEE a letter, which was published, explaining why this interpretation was fallacious. Most unexpectedly, the author replied by contesting my explanation, maintaining that the beat frequency was indeed equivalent to the intermediate frequency (IF) formed when the radio frequency (RF) and local oscillator (LO) signals are mixed to form a difference-frequency signal. With no further letters supporting (or opposing) my viewpoint, it was left to the Proceedings readers to decide who was right. (I did get one private letter of support.)

I thought maybe it was time to exhume the argument and share it with those in the PF community who might have an interest in, and/or an opinion on, the subject.

Let me start by stating that modulation (a.k.a. mixing) is a nonlinear process while beating is a linear process. In physics these are distinctly different processes.

The term “mixing” has a very specific meaning in radio parlance. Mixing two signals of differing frequencies f1 and f2 results in sidebands at (f1 + f2) and |f1 – f2|. These are new signals at new frequencies. Depending on the mixing circuit there can be many higher-order products as well, but let’s assume a simple multiplier as the mixer:

Mixed signal = sin(ω1t)·sin(ω2t) = ½ cos(ω1 – ω2)t – ½ cos(ω1 + ω2)t, where ω = 2πf.
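The product identity above is easy to check numerically. The sketch below (plain Python, standard library only; the sample rate and the frequencies f1 = 1000 Hz, f2 = 150 Hz are illustrative choices, and the single-bin DFT probe is just one simple way to measure spectral content) multiplies two sines and confirms that all the energy lands at the sum and difference frequencies, with nothing left at the originals:

```python
import cmath
import math

def tone_power(samples, f, fs):
    """Single-bin DFT: normalized magnitude of `samples` at frequency f (Hz)."""
    acc = sum(s * cmath.exp(-2j * math.pi * f * k / fs)
              for k, s in enumerate(samples))
    return abs(acc) / len(samples)

fs = 8000            # sample rate, Hz (illustrative)
f1, f2 = 1000, 150   # the two input frequencies (illustrative)
t = [k / fs for k in range(fs)]   # one second of sample times

# Multiplying the two sines -- the "mixing" operation:
mixed = [math.sin(2 * math.pi * f1 * x) * math.sin(2 * math.pi * f2 * x) for x in t]

# Energy appears only at the difference and sum frequencies (each 1/2-amplitude
# cosine registers as 0.25 in this one-sided measure):
print(round(tone_power(mixed, f1 - f2, fs), 3))   # 0.25
print(round(tone_power(mixed, f1 + f2, fs), 3))   # 0.25
# ...and the original frequencies are gone:
print(round(tone_power(mixed, f1, fs), 3))        # 0.0
print(round(tone_power(mixed, f2, fs), 3))        # 0.0
```

Because all the frequencies divide evenly into the one-second window, the bin values come out essentially exact.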

Note that two new frequencies are produced. (The original frequencies are lost in this case, but that does not always obtain; cf. below.)

Now consider two signals beating against each other. Let the tuning fork be at f1 and the piano string at f2; then:
Beat signal = sin(ω1t) + sin(ω2t). This can be rewritten as

Beat signal = 2 sin[(ω1 + ω2)t/2] cos[(ω1 – ω2)t/2].

Now, this may look as though two new signals have been generated, at frequencies (ω1 + ω2)/2 and |ω1 – ω2|/2. But that would be wrong. No new frequencies are generated at all. A look at a spectrum analyzer would quickly confirm this. (I am aware that the human ear does produce some distortion-generated higher harmonics, but these are small in a normal ear and certainly not what the piano tuner is listening to.)

The beat signal is just the superposition of two signals of close-together frequencies. Assuming |f1 – f2| << f1, f2, the “carrier” frequency of the beat signal is at (f1 + f2)/2, which approaches f1 = f2 when the piano string is perfectly tuned, and the beat-signal amplitude varies with a frequency of |f1 – f2|. |f1 – f2| can become extremely small before essentially disappearing altogether to the piano tuner, certainly on the order of a fraction of 1 Hz. It should be obvious that no human ear could detect a sound at that low a frequency (the typical human ear's lower cutoff frequency is around 15-20 Hz).
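The spectrum-analyzer claim above can be verified with the same kind of single-bin DFT probe (a sketch in plain Python, standard library only; 440 Hz vs. a slightly sharp 442 Hz is an illustrative stand-in for the fork and the string): the spectrum holds only the two original tones, yet the envelope swells and fades at |f1 – f2| = 2 Hz.

```python
import cmath
import math

def tone_power(samples, f, fs):
    """Single-bin DFT: normalized magnitude of `samples` at frequency f (Hz)."""
    acc = sum(s * cmath.exp(-2j * math.pi * f * k / fs)
              for k, s in enumerate(samples))
    return abs(acc) / len(samples)

fs = 8000
f1, f2 = 440, 442    # tuning fork vs. slightly sharp string (illustrative)
t = [k / fs for k in range(fs)]   # one second of sample times

# Adding (superposing) the two sines -- the "beating" situation:
beat = [math.sin(2 * math.pi * f1 * x) + math.sin(2 * math.pi * f2 * x) for x in t]

# The spectrum contains the two original tones and nothing else:
print(round(tone_power(beat, f1, fs), 3))        # 0.5
print(round(tone_power(beat, f2, fs), 3))        # 0.5
print(round(tone_power(beat, f2 - f1, fs), 3))   # 0.0  (no 2 Hz difference tone)
print(round(tone_power(beat, f1 + f2, fs), 3))   # 0.0  (no 882 Hz sum tone)

# Yet the envelope swells and fades at |f1 - f2| = 2 Hz: near t = 0 the tones
# are in phase (envelope ~ 2), near t = 0.25 s they cancel (envelope ~ 0).
peak_early = max(abs(s) for s in beat[: fs // 100])                    # first 10 ms
peak_mid = max(abs(s) for s in beat[24 * fs // 100 : 26 * fs // 100])  # 240-260 ms
print(peak_early > 1.9, peak_mid < 0.2)   # True True
```

The beat the tuner hears is exactly this amplitude variation, which a perfectly linear instrument can record; no new spectral line at 2 Hz exists.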

This confusion is not helped by other authors of some repute. For example, my Resnick & Halliday introductory physics textbook describes the beating process as follows:

“This phenomenon is a form of amplitude modulation which has a counterpart (side bands) in AM radio receivers”.

Most inapposite in a physics text! Beating produces no sidebands. A 550-1600 kHz AM signal is amplitude-modulated and of the form
[1 + a sin(ωmt)]sin(ωct)

which produces sidebands at (fc + fm) and (fc – fm) in addition to retaining the carrier at fc. Here fc is the carrier frequency (say 1 MHz, “in the middle of your dial”), fm = ωm/2π is the modulating frequency, and a is the modulation index, |a| < 1. Of course, in a real radio signal, a sin(ωmt) is replaced by a linear superposition of sinusoids, typically music and speech, in the range 50 – 2000 Hz.
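The AM sideband structure can be probed the same way as above (a minimal numeric sketch, assuming illustrative values fc = 10 kHz, fm = 1 kHz, a = 0.5; the single-bin DFT is one simple way to read off spectral lines): the carrier survives at fc, the two sidebands appear at fc ± fm each with amplitude a/4 in this one-sided measure, and there is no energy at fm itself.

```python
import cmath
import math

def tone_power(samples, f, fs):
    """Single-bin DFT: normalized magnitude of `samples` at frequency f (Hz)."""
    acc = sum(s * cmath.exp(-2j * math.pi * f * k / fs)
              for k, s in enumerate(samples))
    return abs(acc) / len(samples)

fs = 100_000                     # sample rate, Hz (illustrative)
fc, fm, a = 10_000, 1_000, 0.5   # carrier, modulating frequency, modulation index
n = fs // 10                     # 0.1 s of samples
t = [k / fs for k in range(n)]

# [1 + a sin(wm t)] sin(wc t) -- the AM signal from the text:
am = [(1 + a * math.sin(2 * math.pi * fm * x)) * math.sin(2 * math.pi * fc * x)
      for x in t]

print(round(tone_power(am, fc, fs), 3))        # 0.5    carrier retained
print(round(tone_power(am, fc - fm, fs), 3))   # 0.125  lower sideband (a/4)
print(round(tone_power(am, fc + fm, fs), 3))   # 0.125  upper sideband (a/4)
print(round(tone_power(am, fm, fs), 3))        # 0.0    nothing at fm itself
```

This is the true nonlinear (multiplicative) case: the modulation genuinely creates new spectral lines, unlike the beat signal above.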

Comments welcome!


AB Engineering and Applied Physics
MSEE
Aerospace electronics career
Used to hike; classical music, esp. contemporary; Agatha Christie mysteries.

36 replies
  1. rude man
    rude man says:

    Yes, that seems to be the problem here.  I thought I made it pretty clear what kind of "beats" I was referring to, but I also acknowledge that the term "beat" can include mixing.  A clear example was already mentioned, to wit, the BFO, which of course is a mixing operation. I do disagree totally with whoever thinks mixing is done in the ear to any audible extent. The lowest audible sound would have to be at the sum frequency, i.e. at twice the tuning-fork frequency, which it clearly isn't; or it would have to be a very high harmonic of the difference frequency, which still would be at a very low frequency, near the lower end of audibility, which again is not at all what the tuner hears.  So please, folks, forget about nonlinear ear response!  :smile:

  2. Baluncore
    Baluncore says:

    Maybe we need to stop using the term “mixer”. To an audio engineer, mixing involves adding signals in a linear device, so as to prevent energy appearing at new frequencies. To a radio engineer, mixing involves multiplying signals in a non-linear device, so as to cause energy to appear at new frequencies.

  3. nsaspook
    nsaspook says:

    The heterodyne process is non-linear (mixing) but modulation can be both.
    http://www.comlab.hut.fi/opetus/333/2004_2005_slides/modulation_methods.pdf
    Non-linear ring modulator (circuit diagram not reproduced).

    or linear, as with an AM modulator, where the amplitude across the signal bandwidth tracks the modulating signal and obeys the principle of superposition.

    Yes, I agree that an audio ‘beat’ like when using a BFO on a Morse code receiver is not the same as a ‘mixed’ signal but ‘modulation’ in general is not restricted to superposition (or the lack of superposition) of signals.

  4. Averagesupernova
    Averagesupernova says:

    This subject has been beat to death with disagreement in each related thread here on PF. meBigGuy pretty much hit the nail on the head. Mixing and modulation are both multiplication. Depending on who you ask, they are linear or non-linear. A true linear amplifier can have many signals input to it and will not generate new frequencies. So this implies to me that mixing and modulation are non-linear processes.

    I think that the word beat was originally used interchangeably with mixing. A BFO used in a SSB or CW (morse) receiver is in fact mixed with the IF in order to generate an audio signal. It is NOT linear. The ear is in fact non-linear but we don’t beat a couple of MHz signals together in our ear to get an audible signal. The non-linear process has to occur in the radio, not the ear.

  5. nsaspook
    nsaspook says:


    I think that the word beat was originally used interchangeably with mixing. A BFO used in a SSB or CW (morse) receiver is in fact mixed with the IF in order to generate an audio signal. It is NOT linear. The ear is in fact non-linear but we don’t beat a couple of MHz signals together in our ear to get an audible signal. The non-linear process has to occur in the radio, not the ear.

    The AM demodulation process (envelope-detector diode in this circuit) is non-linear, but the actual BFO injection is usually a simple linear signal injection (added to the antenna signal here), as in this simple crystal radio circuit (diagram not reproduced).

  6. meBigGuy
    meBigGuy says:

    Let’s talk about beating from the frequency domain perspective. If beating is like modulation, then there must be a carrier (real or suppressed) and sidebands. (and it turns out there are such, in a crazy sort of way)

    sin x + sin y = 2 sin[(x + y)/2] cos[(x – y)/2]

    If you look at the trig function for adding two sinewaves (of equal amplitude), the right side represents a carrier of frequency (x+y)/2 being multiplied by a modulation function at (x-y)/2.

    If you look at the summed signals (x and y) in the frequency domain, there have to be those two signals at x and y, and nothing else (because I am just linearly adding two sine waves). So, where are the carrier and sidebands?

    Well, it turns out the two signals, x and y, ARE the sidebands, and the suppressed carrier at (x+y)/2 is halfway between them. It’s strange to think about it that way, but it is an accurate portrayal of what is actually happening.

    That illustrates that beating causes no new frequencies, and is therefore totally useless as (and is totally distinct from) a mixing function. It does cause modulation. Beating is analogous to what happens when you create standing waves. When the waves are 180° out of phase, they cancel, but no new frequencies are created. Think of what is happening as you walk through a room with a tone playing (and think Doppler).

    Therefore, any conclusion that beating caused by linearly adding two sinewaves is the same as, or even similar to, heterodyning is totally incorrect.
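The suppressed-carrier picture described above is easy to verify numerically. This sketch (plain Python, standard library only; the 440/444 Hz pair is an illustrative choice) shows that an (x+y)/2 carrier multiplied by an (x−y)/2 envelope is sample-for-sample identical to the plain linear sum of the x and y tones:

```python
import math

fs = 8000
x, y = 444.0, 440.0    # the two tones (illustrative values)
fc = (x + y) / 2       # 442 Hz "suppressed carrier"
fm = (x - y) / 2       # 2 Hz modulating rate

t = [k / fs for k in range(fs)]   # one second of sample times

# A true suppressed-carrier modulator: carrier multiplied by the envelope...
product = [2 * math.sin(2 * math.pi * fc * s) * math.cos(2 * math.pi * fm * s)
           for s in t]
# ...versus the plain linear sum of the two tones:
summed = [math.sin(2 * math.pi * x * s) + math.sin(2 * math.pi * y * s) for s in t]

# Sample-for-sample identical, up to floating-point noise:
print(max(abs(p - q) for p, q in zip(product, summed)) < 1e-9)   # True
```

The two waveforms are the same function written two ways; the difference is only in how they were produced, which is the entire point of the argument.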

  7. meBigGuy
    meBigGuy says:

    I disagree. I am in total support of rude man. Everything I am saying is in his paper.

    When you linearly add two sine waves, as described in the OP paper, no new frequencies are created, so there is no way to create the equivalent of an IF frequency. The beats experienced during the tuning of a piano in no way represent prior art with regard to a superhet architecture.
    The summed signals only have the *appearance* of a modulated signal. They were not created by modulation. They could be created by a true modulator that started with an (x+y)/2 carrier, but in that case the x and y frequencies would be newly created by the modulator.

    Remember, the disagreement here is with regard to this sentence in an article discussing the history of the superheterodyne receiver:
    ”in it the author remarked that the modulation (or mixing) principle was really nothing new, being already known to piano tuners who traditionally used a tuning fork to beat against the piano string’s vibrations.”

    The summed time domain waveform ONLY APPEARS as a modulated signal. Its method of creation is of no value in a superhet architecture since no new frequencies are created in the frequency domain. That is a key point, and cannot be ignored. In any truly modulated or mixed signal, new frequencies are actually created.

    The beating of linearly summed sinewaves is in no way (either practically or mathematically) similar in principle to mixing or modulating to produce true new frequencies.

    Feel free to try to define the terms in any way that would make piano-tuner beating similar to superhet mixing or true modulation.

    Just because the beating signal looks like a signal created by modulation does not mean the process that created it in any way involved modulation.

  8. Jeff Rosenbury
    Jeff Rosenbury says:

    [Jeff Rosenbury quotes meBigGuy's post above in full.]

    I agree with the idea. But I could understand someone including beating in their definition of modulation, basically using modulation as a catch-all term for any signal “mixing”.

    I don’t think that’s the way the definition should go. Modulation should not include beating, IMO. Words have meanings and meanings are particularly important in technical fields. But words are also defined by use, and I don’t hold myself up as an arbiter of use.

    I do agree beating is not an example of prior art for non-linear mixing.

  9. Averagesupernova
    Averagesupernova says:

    It IS in fact about semantics. You can define the word beat to mean whatever you want. The perception of the difference signal commonly referred to as a beat means that it actually is *there*. Now the nitpicking can start concerning where the *there* actually is. In the case of a couple of notes played on a synthesizer keyboard or piano, the new note is created in our ears due to the nature of our hearing being logarithmic. I am sure I have seen in textbooks that frequency mixing and beating are the same thing. I am not really one to pick sides on semantics so I won’t make an argument either way. My nitpicking is, as I previously stated in this thread as well as other threads here on PF, about frequency mixing and AM modulation being the same thing. If no new frequencies are created then it is neither.

    Incidentally, the quote:

    in it the author remarked that the modulation (or mixing) principle was really nothing new, being already known to piano tuners who traditionally used a tuning fork to beat against the piano string’s vibrations.

    may have a little more validity than would appear. When was it determined that it is the non-linearity of our ears that create the perception of a new signal with signals that are simply summed together and listened to? Was this knowledge responsible for the idea of superhet? Who was the first person to understand that non-linearity is required?

  10. meBigGuy
    meBigGuy says:

    When was it determined that it is the non-linearity of our ears that create the perception of a new signal with signals that are simply summed together and listened to?

    That ear thing may be a real phenomenon, but it is not the cause of the beat we hear. The beat we hear is the same as what we hear when we walk through a reflective room with a 1 kHz tone playing. It is caused by actual increases and decreases in amplitude (wavelength at 1 kHz ≈ 1 foot). THERE IS NO NEW FREQUENCY. (Well, not exactly, because Doppler from moving effectively changes the single tone to two tones.)

    There is no non-linearity of any kind involved (or needed) in the beating we hear when we sum two tones. PERIOD! It is detectable by a fully linear system.

    I repeat from my previous post: You can create the two tones (x and y) by modulating an (x+y)/2 carrier with an (x-y)/2 signal. That action will produce two new tones, x, and y. If piano tuners were doing that then I would agree.

    Saying two tones in ANY way represent a modulated signal is the same as saying one tone represents an SSB signal. Is whistling a precursor to SSB modulation? After all, the signals happen to look the same, just as in the piano tuner case.

  11. Averagesupernova
    Averagesupernova says:

    That ear thing may be a real phenomenon, but it is not the cause of the beat we hear. The beat we hear is the same as what we hear when we walk through a reflective room with a 1 kHz tone playing. It is caused by actual increases and decreases in amplitude (wavelength at 1 kHz ≈ 1 foot). THERE IS NO NEW FREQUENCY. (Well, not exactly, because Doppler from moving effectively changes the single tone to two tones.)

    You can claim to be merely changing the volume of a tone rapidly with a volume control, but this is not all that is happening. You ARE generating new frequencies at the rate you are moving the volume control. The same thing happens when you walk through the room in your example. But in the walk-through example it is happening in the ear.

    There is no non-linearity of any kind involved (or needed) in the beating we hear when we sum two tones. PERIOD! It is detectable by a fully linear system.

    Except the human ear, which is not linear.

    I repeat from my previous post: You can create the two tones (x and y) by modulating an (x+y)/2 carrier with an (x-y)/2 signal. That action will produce two new tones, x, and y. If piano tuners were doing that then I would agree.

    Are you claiming that I have said the following? Because I have not.

    Saying two tones in ANY way represent a modulated signal is the same as saying one tone represents an SSB signal. Is whistling a precursor to SSB modulation? After all, the signals happen to look the same, just as in the piano tuner case.

    They are not modulated for the same reason the carrier and the audio are not modulated until after the modulator/mixer stage. When the ear is involved, that is the modulator/mixer stage.

    The problem with this is that you are making a comparison between a system where all the signals are measurable, such as a mixer or modulator stage in radio equipment, and a system where the products (new frequencies) are not measurable because they are generated in the ear. Yes, there are similarities, but I would have hoped I have made it clear what happens where. Maybe I have failed in that.

  12. Averagesupernova
    Averagesupernova says:

    Had a look in a couple of textbooks after I got home. Malvino’s Electronic Principles, 3rd Edition, page 700, under a short paragraph about diode mixers, states: “Incidentally, heterodyning is another word for mix, and beat frequency is synonymous with difference frequency. In Fig. 23-8 we are heterodyning two input signals to get a beat frequency of Fx – Fy.” Fig. 23-8 shows a transistor mixer.

    Bernard Grob’s Basic Television Principles and Servicing, Fourth Edition, page 287, talks about detecting the 4.5 MHz sound carrier: “The heterodyning action of the 45.75 MHz picture carrier beating with the 41.25 MHz center frequency of the sound signal results in the lower center frequency of the 4.5 MHz.” Right or wrong, it is not uncommon to use the word BEAT when dealing with frequency mixing. Grob’s TV book also talks about interference on page 415: “As one example, the rf interference can beat with the local oscillator in the rf tuner to produce difference frequencies that are in the IF passband of the receiver.” There are other sections talking about various signals ‘beating’ together to form interference in the picture, and co-channel picture and sound carriers that ‘beat’ with the LO to cause interference. The word is thrown around pretty loosely.

    Those were the first two books I picked up out of the many I still have. Didn’t look at the amateur radio books I have, but I am sure it has been mentioned there as well.

  13. meBigGuy
    meBigGuy says:

    You can claim to be merely changing the volume of a tone rapidly with a volume control, but this is not all that is happening. You ARE generating new frequencies at the rate you are moving the volume control. The same thing happens when you walk through the room in your example. But in the walk-through example it is happening in the ear.

    There is nothing non linear happening in the ear that is required to hear beating. If you used a perfectly linear microphone you would see the same thing. The envelope varies between 0 and 2A. That is what you hear. You can see it in a scope.

    If you look at a spectrum, there are no new frequencies created by the summation. The two sine waves are the frequencies created by the modulation of a (x+y)/2 carrier. When you look at them in the time domain they look exactly like a modulated (x+y)/2 carrier. Those two frequencies are ALL THAT EXISTS. There is no energy at any other frequency, and the envelope effect is detectable with a linear microphone.

    Just the fact that you sum the two frequencies creates the appearance of the results of modulation of a (x+y)/2 carrier. But, there is no spectral energy at (x+y)/2 (unless you want to venture into instantaneous frequency land)

    In a room, when you move a microphone through it (forget the ear), you see peaks and valleys caused by the summing of different phases. If you move through those at some rate v, then Doppler creates the equivalent of two tones (since there are different relative velocities to the reflective sources). Now, don’t tell me Doppler is modulation because it creates the appearance of two tones in a microphone, and they also appear in the spectrum analysis of the microphone output.

    You are not going to like this next paragraph at first. Your comments about a volume control describe an interesting phenomenon. That is one frequency varying in amplitude. What does it look like on a spectrum analyzer? It appears as two sine waves (the sidebands of the modulation). The original sine wave is the equivalent of the (x+y)/2 carrier in the original example, and the volume control is the (x-y)/2 modulating signal.

    This is really simple if you abandon preconceptions. Look at the trig identity and think about what it means:
    sin x + sin y = 2 sin[(x + y)/2] cos[(x – y)/2]

    The left side is 2 sine waves summed, which are EXACTLY IDENTICAL to the right side product, which represents an (x+y)/2 carrier modulated by an (x-y)/2 signal. You can think of your volume control being varied at an (x-y)/2 rate as the modulator (which it actually is). The hard part is “what happened to the (x+y)/2 carrier when I modulated it”

    http://hyperphysics.phy-astr.gsu.edu/hbase/sound/beat.html (replace the ear with a linear microphone and the effect is the same)

  14. meBigGuy
    meBigGuy says:

    Forget the meaning of beat. I agree that it gets thrown around in a way that confuses the issues. The question is whether the wah-wah effect of two summed sine waves is in any way related to heterodyning. It is absolutely not related. Two sine waves happen to also be the output of a certain modulation function, but they are not created by, nor do they represent, modulation.

  15. Averagesupernova
    Averagesupernova says:

    There is nothing non linear happening in the ear that is required to hear beating. If you used a perfectly linear microphone you would see the same thing. The envelope varies between 0 and 2A. That is what you hear. You can see it in a scope.

    You can’t actually say that since no one I know of has ears that hear in a linear manner. I am not saying a person would not hear any effect at all if hearing was linear but it likely would not be perceived as the same thing it is now.

    If you look at a spectrum, there are no new frequencies created by the summation. The two sine waves are the frequencies created by the modulation of a (x+y)/2 carrier. When you look at them in the time domain they look exactly like a modulated (x+y)/2 carrier. Those two frequencies are ALL THAT EXISTS. There is no energy at any other frequency, and the envelope effect is detectable with a linear microphone.

    Where have I said that if you look at the spectrum we would see more than 2 signals after said 2 signals have been summed?

    Just the fact that you sum the two frequencies creates the appearance of the results of modulation of a (x+y)/2 carrier. But, there is no spectral energy at (x+y)/2 (unless you want to venture into instantaneous frequency land)

    In a room, when you move a microphone through it (forget the ear), you see peaks and valleys caused by the summing of different phases. If you move through those at some rate v, then Doppler creates the equivalent of two tones (since there are different relative velocities to the reflective sources). Now, don’t tell me Doppler is modulation because it creates the appearance of two tones in a microphone, and they also appear in the spectrum analysis of the microphone output.

    You are not going to like this next paragraph at first. Your comments about a volume control describe an interesting phenomenon. That is one frequency varying in amplitude. What does it look like on a spectrum analyzer? It appears as two sine waves (the sidebands of the modulation). The original sine wave is the equivalent of the (x+y)/2 carrier in the original example, and the volume control is the (x-y)/2 modulating signal.

    Why would I not like that paragraph? That is exactly what I would expect.

    This is really simple if you abandon preconceptions. Look at the trig identity and think about what it means:
    sin x + sin y = 2 sin[(x + y)/2] cos[(x – y)/2]

    The left side is 2 sine waves summed, which are EXACTLY IDENTICAL to the right side product, which represents an (x+y)/2 carrier modulated by an (x-y)/2 signal. You can think of your volume control being varied at an (x-y)/2 rate as the modulator (which it actually is). The hard part is “what happened to the (x+y)/2 carrier when I modulated it”

    http://hyperphysics.phy-astr.gsu.edu/hbase/sound/beat.html (replace the ear with a linear microphone and the effect is the same)

