Minimum Frequency of FM Data / Catastrophic Error Scenario?

AI Thread Summary
The discussion revolves around the potential issues with phase-locked loop (PLL) error correction in frequency modulation (FM) systems, particularly when low-frequency data signals are involved. It highlights a scenario where a PLL misinterprets the carrier frequency due to a low-frequency signal, leading to catastrophic errors in signal processing. Participants discuss the historical context of FM, including the challenges of DC components in analog TV and the use of techniques like DC restoration to mitigate carrier shifts. The conversation also touches on the functionality of superheterodyne receivers and the role of stable LC circuits in demodulation, contrasting them with PLLs. Overall, the thread emphasizes the complexities and potential pitfalls in FM data transmission and the importance of understanding modulation techniques.
Silly Questions
TL;DR Summary
Is there a minimum frequency for data carried on analog FM and can falling below it cause a catastrophic failure?
Ever since I learned about FM, something has been bugging me: the PLL error correction acts on the encoded data, which seems to leave open the possibility of the shape of the data itself interfering with the PLL's interpretation of what the carrier frequency is. It seems dangerous to mix the carrier error-correcting machinery with the data-decoding machinery.

Here's the scenario I cooked up: a very low-frequency data signal, so low the discriminator misses it, drifts the PLL's perceived carrier frequency over to the extreme of the frequency band. (Let's normalize the band to -1 and +1.)

The transmitter is slowly stretching the broadcast wave out, further and further from the carrier, all the way to +1, encoding this extremely low-frequency signal with full knowledge at all times of what "signal" and "carrier frequency" are, as the transmitter must and does carry an internal clock.

On the receiver, however, the PLL is blithely "correcting" its perceived unmodulated carrier to the wrong frequency of the modulated carrier.

At +1 deflection the transmitter sends "carrier + 1" but the receiver has recalibrated to "(wrong) carrier + 0" -- it thinks the carrier wave is +1 units away from its actual frequency!

Now a very radical signal comes in that deflects the carrier frequency all the way to -1 -- from the transmitter's point-of-view. The receiver sees -2, which is double the actual input, and so the attached device, hammered with a -2 that should've been -1, halts and catches fire. (And explodes like a Star Trek control panel, why not?)

Either nobody attaches FM radios to devices that have very low-frequency inputs, i.e. this is a known problem solved by restricting the domain of allowed inputs and is the first thing you learn on Day One of Radio School, or I so misunderstand FM that I only imagine there's a problem. Either way, count on me for your daily dose of silly questions.
 
As you mention, a PLL can be used for automatic frequency control or for demodulation. An AFC PLL tracks low frequency data and so reduces the modulation index.
In analogue days, if carrying TV, there was a DC component caused by scene brightness, so it was the practice to remove any DC component using a capacitor or a pre-emphasis network to avoid a shift in carrier frequency.
 
  • Like
Likes Silly Questions
A DC component is impossible to FM analog encode, isn't it? From what I gather there's no way to tell the difference between data and carrier when the data isn't wobbling around. Did the parameter of "scene brightness" eventually get encoded via some FM-friendly manner? (I mean before digital of course.)

How common is it (or "was" as everything's digital these days) to compensate for various data shapes in various FM applications? Do you have any interesting stories of particularly novel carrier-frequency-preserving techniques?

Lastly, re: "Reduces the modulation index", I hadn't considered the possibility as I was distracted by the possibility of awesome electrical sparks and explosions caused by the opposite problem. Is there such a thing as a catastrophically over-scaled modulation index, or do the problems mostly come in from a drifting carrier putting the squeeze on the under-scaled side?
 
A DC component is impossible to FM analog encode, isn't it? From what I gather there's no way to tell the difference between data and carrier when the data isn't wobbling around.
Not necessarily. If your local oscillator (VCO in the PLL) defaults to the 'no-modulation-present' frequency of your signal, everything works out.

A PLL detects the phase difference between an incoming signal (RF in your example) and its local oscillator, and controls the frequency of that oscillator to keep a fixed phase difference between the two.

This is done by generating a voltage proportional to the phase difference. This voltage, the error voltage or control voltage, then feeds a Voltage Controlled Oscillator (VCO) to keep the phase difference between the two essentially constant.

The end result is that the error (control) voltage changes as the frequency of the incoming signal changes; it essentially converts a frequency difference to a DC voltage. So monitor the error voltage; that is your recovered frequency modulation.

Hope this helps!
Tom

p.s. The above is somewhat simplified in that the Phase Detector actually puts out a pulse at the edge of the signal, and the pulse is stopped at the edge of the local oscillator (VCO). These pulses are then filtered to a DC error voltage to control the VCO.
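If it helps to see those pieces working together, here is a minimal numerical sketch of a PLL used as an FM demodulator. It is only an illustration: the loop gain, filter coefficient and signal frequencies are made-up values, and a real design would use a proper loop filter.

```python
import numpy as np

# Minimal PLL-as-FM-demodulator sketch (illustrative values throughout).
fs = 100_000          # sample rate, Hz
fc = 10_000           # carrier, Hz
fm, dev = 200, 1_000  # modulating tone and peak deviation, Hz
t = np.arange(0, 0.05, 1/fs)

# Build an FM signal: phase is the running integral of the instantaneous frequency.
msg = np.sin(2*np.pi*fm*t)
rx = np.cos(2*np.pi*np.cumsum(fc + dev*msg)/fs)

kp = 0.2              # VCO steering gain (hypothetical)
alpha = 0.02          # one-pole loop-filter coefficient (hypothetical)
vco_phase, err = 0.0, 0.0
demod = np.empty_like(t)
for i, x in enumerate(rx):
    pd = x * -np.sin(vco_phase)                # phase detector: ~phase error when locked
    err += alpha * (pd - err)                  # loop filter -> error (control) voltage
    vco_phase += 2*np.pi*(fc + kp*fs*err)/fs   # VCO frequency steered by the error voltage
    demod[i] = err                             # the error voltage IS the recovered FM

# After the loop settles (a few ms), 'demod' follows 'msg' scaled by dev/(kp*fs),
# with some residual ripple at 2*fc that a real design would filter further.
```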
 
Last edited:
  • Informative
Likes Klystron
Silly Questions said:
A DC component is impossible to FM analog encode, isn't it? From what I gather there's no way to tell the difference between data and carrier when the data isn't wobbling around. Did the parameter of "scene brightness" eventually get encoded via some FM-friendly manner? (I mean before digital of course.)

How common is it (or "was" as everything's digital these days) to compensate for various data shapes in various FM applications? Do you have any interesting stories of particularly novel carrier-frequency-preserving techniques?

Lastly, re: "Reduces the modulation index", I hadn't considered the possibility as I was distracted by the possibility of awesome electrical sparks and explosions caused by the opposite problem. Is there such a thing as a catastrophically over-scaled modulation index, or do the problems mostly come in from a drifting carrier putting the squeeze on the under-scaled side?
Frequency modulation can handle DC, but it is undesirable in some cases as it creates a carrier shift. With analogue TV, if we block the DC component it can be re-inserted later, either by DC restoration with a diode/capacitor or by what is called black-level clamping. The analogue TV signal is chopped by synchronizing pulses at the end of every line, and after each one there is a short period of black level for reference purposes.
If we demodulate frequency modulation using a phase detector then there is an issue if the modulating signal is DC, because there is no phase change for the detector to measure.
In the case of analogue microwave systems using FM for TV or multi-channel telephony, it was unusual to employ a PLL detector; more usually we relied on stable LC circuits operating at a relatively low intermediate frequency, usually 70 MHz. Automatic Frequency Control is easily done using LC resonant circuits or discriminator circuits.
Regarding excessive modulation index, it is usual to put a clipper in the signal path to avoid any voltage excursion which might exceed the maximum deviation, but of course we need to adjust the system so we don't hit this clipping level.
For an FM receiver we must use a bandwidth before the detector which is approx 2x(deviation plus max modulating frequency). If this is not done there is severe distortion.
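That bandwidth rule of thumb is Carson's rule. As a worked example (using the usual broadcast-FM textbook figures of 75 kHz peak deviation and 15 kHz maximum audio, purely for illustration):

$$B \approx 2\,(\Delta f + f_{m,\max}) = 2\,(75\ \text{kHz} + 15\ \text{kHz}) = 180\ \text{kHz}$$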
 
  • Informative
  • Like
Likes Klystron and Silly Questions
Fascinating.

On the topic of stable LC circuits and intermediate frequencies, is this the same thing as a "superheterodyne circuit"? I remember superhet also reducing carrier frequency to intermediate frequency for more convenient demodulating.
 
@Silly Questions you need to do a little research on radio if you have to ask about superheterodyne receivers. You will understand it much better when you have your head wrapped around superhet.
-
As far as receiver PLL goes, the main PLL in a receiver typically ignores the incoming signal. It is there to precisely tune the 1st or only local oscillator. That is not to say that AFT does not exist but it is typically going to have a longer averaging time and a fairly narrow window of tuning. A detector that utilizes a PLL will output the proper signal regardless of how slow the FSK signal is.
 
Silly Questions said:
Fascinating.

On the topic of stable LC circuits and intermediate frequencies, is this the same thing as a "superheterodyne circuit"? I remember superhet also reducing carrier frequency to intermediate frequency for more convenient demodulating.
Superhet receivers utilize intermediate frequencies (IF) referenced to a local oscillator for many signal processing chores including demodulation and amplification. So, yes, expect an IF section in a superhet receiver. Other types of RF receivers, such as those for radar applications, also utilize IF and LO circuits without reference to audio signals; so not superheterodyned in the original sense of supersonic.

Historically, radio and radar techs shared lab space and certain test equipment but had different goals. FM receivers and 'radio' in general reproduced information in the form of simple audible codes, then human speech and the audio frequencies detected by human ears. Radar wavelength choices depend on criteria unrelated to human hearing, such as the size and range of intended targets, avoiding interference from water vapor in the atmosphere and gas/dust in space, and, as always, technology limits.
 
  • Like
Likes Silly Questions
@Averagesupernova: My problem's exactly backwards: I don't quite know what a "stable LC-resonant circuit" is. I saw "intermediate frequency" and took the bait, but reading more carefully it seems like the LC circuit is taking the frequency up while superhet takes it down. I slipped on a banana peel.

Taking a crack at "stable LC-resonant circuit intermediate frequency" I get:

1. "Stable". I've heard this term used for a piezoelectric crystal in a PLL producing a "stable" timing pulse, but of course in olden analog times nobody had quartz timers, so I think "stable LC circuit" is the next-best-thing?

2. "Resonant". A circuit that multiplies the IF sent by the LC. Because it "resonates" the multiplicated pulses are very evenly-timed and precise.

3. "Intermediate Frequency": Easier to make stable in olden times pre-piezo-effect than an actual in-one-go stable high-frequency oscillator.

All of this is guesswork ... am I close?

4. The Thing I Don't Understand: What makes the way the "stable LC resonant circuit" interacts with the FM carrier and data preferable to a PLL? Does the LC hold the true carrier better than the PLL as the data wobbles the carrier around?

@Klystron:
"... not superheterodyned in the original sense of supersonic ..."
Ahh, right, there's a more general term for overlapping two high frequencies into an intermediate beat frequency. And I do not remember what that general term is. Maybe just "heterodyne"?

WWII-era radars had all kinds of problems which through a lot of sweat by MIT's RadLab got solved in time for the Essex rollout. Frustrated stories abound in the early days, like of choppy seas turning the PPI into a rippling ocean of useless returns. Complaints about the IFF not working were of two kinds: one, it hardly ever worked, and two, when it did work it wasn't cross-compatible with other types of radar, so a friendly close to an enemy on the PPI might well mean your FC radar is ranging a solution on the friendly. The frustration coming out of those 1942 reports compared to how smoothly things worked in 1944 shows how important 1943 was. Search radar, IFF, fire control radar, and CIC plot all had to overlap in fault-tolerant layers to finally give the Navy a better-than-even chance against low-light optics in the chaos of naval battles fought among islands at night.

Superhet receivers also caused problems: submariners couldn't risk listening to broadcast radio on patrol until they got new radios that worked on other principles. The IF was easily detectable, and the Navy was worried the IJN had figured that out. That makes Tokyo Rose extra creepy, huh?
 
  • #10
Question about AM! What is the amplitude of silence?
 
  • #11
Just the carrier, at whatever level the signal strength is at your location.
 
  • #12
As the carrier amplitude changes how long does it take the receiver to "recalibrate" (if that's the right term) to the new carrier strength? How long does it take the receiver to sort out "new carrier amplitude" from "these changes in amplitude are the signal"?
 
  • #13
Silly Questions said:
As the carrier amplitude changes how long does it take the receiver to "recalibrate" (if that's the right term) to the new carrier strength? How long does it take the receiver to sort out "new carrier amplitude" from "these changes in amplitude are the signal"?
The carrier never changes in amplitude.
 
  • #14
I was thinking in terms of environmental factors but never specified.

Imagine the "changes in carrier amplitude" are only relative to the receiver, as if windmill blades (the fat kind, not skinny turbine blades) were constantly attenuating then clearing the AM signal and changing the carrier amplitude relative to the receiver. How long does it take the receiver to sort "new carrier amplitude" from "amplitude-modulated signal"? How fast does the windmill need to go before the radio stops working?
 
  • #15
Averagesupernova said:
The carrier never changes in amplitude.
Ever heard of/experienced signal fading? In severe cases it is called a 'dead spot', as when your receiver (phone, for instance) cannot detect the transmitter.
Silly Questions said:
How long does it take the receiver to sort "new carrier amplitude" from "amplitude-modulated signal"?
That is a design option, therefore it does not have a fixed answer.

In general it must be slower than the lowest frequency of the information/data being transferred. If faster than the data rate, it reduces the amplitude of the data signal. Generally not a useful situation!

Cheers,
Tom
 
  • Like
Likes Silly Questions
  • #16
Silly Questions said:
How long does it take the receiver to sort "new carrier amplitude" from "amplitude-modulated signal"?
That's just determined by the modulation bandwidth you allow (via filtering). In normal radios, DC is seldom part of the modulation signal. After all how could you possibly separate modulation from other attenuation, without lots of other complexity? Although you could see it in telemetry where the path was well controlled, or measured, I guess. I've used it (DC AM) in isolation circuits with transformers several times, but there's no "radio" involved, the path was perfect.
 
  • Like
Likes Silly Questions
  • #17
Averagesupernova said:
The carrier never changes in amplitude.
This is a real overstatement of 'the facts'. It all depends on the bandwidth that you are measuring it with. With the appropriate deviation and a sinusoidal modulation, the amplitude of the carrier component of the spectrum goes to zero. If you look at the table in this wiki link, you will see the amplitudes of various sidebands, and at a deviation of 2.41 there is no carrier component. Ideally, the total signal power will not change, of course, but that includes all the FM sidebands. That 2.41 figure can be used (the Bessel Zero Method) to calibrate the deviation, which can be a very slippery fish to estimate in practice.
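For anyone who wants to see that number fall out, here is a quick numerical check of the carrier null (a sketch only; it assumes scipy is available):

```python
import numpy as np
from scipy.special import jv   # Bessel functions of the first kind

beta = 2.405   # modulation index near the first zero of J0 (the carrier null)

# For single-tone FM, the spectral line at fc + n*fm has amplitude Jn(beta).
n = np.arange(-8, 9)
amps = jv(n, beta)

print(f"carrier line J0({beta}) = {jv(0, beta):+.4f}")          # ~0: the carrier component vanishes
print(f"sum of Jn^2 over shown lines = {np.sum(amps**2):.4f}")  # ~1: total power is unchanged
```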
 
  • #18
sophiecentaur said:
This is a real overstatement of 'the facts'.
My reply was in reference to the OP's question about AM. However, I didn't realize the question was directed more towards signal fading and things of this nature and not simply low frequency amplitude modulation.
 
  • Like
Likes sophiecentaur
  • #19
You guys are sticklers for precise wording and exact descriptions. I understand. If I get an answer to the wrong question it's my responsibility to narrow the question down and try again.

DaveE said:
That's just determined by the modulation bandwidth you allow (via filtering)

Ah-hah! "Filtering".

Am I correct in thinking an AM radio receiver can't work without a filter? Or would an "unfiltered" receiver (your great-uncle's dental work he swears tunes in when he's too close to a radio tower) actually produce some usable signal?

Can you speculate on how different filter designs would react to the windmill blade problem?
 
  • #20
Silly Questions said:
Am I correct in thinking an AM radio receiver can't work without a filter? Or would an "unfiltered" receiver (your great-uncle's dental work he swears tunes in when he's too close to a radio tower) actually produce some usable signal?
The sound producing device, such as an earphone, possesses low pass filtering (or smoothing) action due to its mechanical inertia, so the output of a diode detector can be applied directly to an earphone and will be heard as sound.
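A toy numerical version of that crystal-set chain (diode, then inertia acting as a low-pass), with made-up frequencies, just to show why it works:

```python
import numpy as np

# Toy diode (envelope) detector: rectify, then low-pass.
# The one-pole low-pass stands in for the earphone's mechanical inertia.
fs, fc, fm, m = 1_000_000, 50_000, 1_000, 0.5   # sample rate, carrier, audio tone, depth
t = np.arange(0, 0.01, 1/fs)
am = (1 + m*np.sin(2*np.pi*fm*t)) * np.cos(2*np.pi*fc*t)

rectified = np.maximum(am, 0.0)                 # the diode: keeps positive half-cycles only

audio = np.empty_like(rectified)
acc = 0.0
k = 1 - np.exp(-2*np.pi*2_000/fs)               # ~2 kHz smoothing cutoff
for i, x in enumerate(rectified):
    acc += k * (x - acc)
    audio[i] = acc
# 'audio' now follows a scaled, DC-offset copy of the 1 kHz modulating tone.
```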
 
  • Like
Likes Silly Questions
  • #21
Silly Questions said:
Am I correct in thinking an AM radio receiver can't work without a filter?
Just a cotton-pickin' second. There's no radio receiver that can do without 'filtering' at various parts of the circuit. You have to be able to select a narrow band of frequencies from what's buzzing around us all the time. (This isn't strictly true, but you still need selection of some sort.) You have to 'detect' the modulation that's on the carrier and low-pass filter that before giving it to an audio output stage, to protect it from the carrier components that can overload audio amplifiers. (Not relevant for a basic crystal set with mechanical earphone - as pointed out above.)

@Silly Questions This topic seems to have grabbed your attention and I suggest that you will get much more from your efforts if you read around, rather than using the Q and A system. Q and A is a diverging path and can become a nightmare. Start with Wiki and follow other links from Google. Be prepared for some nonsense but that's what PF (and similar) is for!
Silly Questions said:
Can you speculate on how different filter designs would react to the windmill blade problem?

Windmill blades will modulate the carrier that you receive, but the modulation will be at an infrasound frequency. There's very little low-frequency response in an AM receiver (unless it's been specially designed) so that's not likely to be an issue. Caveat here; if the blades produce really terrible minima, once a cycle, then you could conceivably cause 'cross modulation' of the blade rotation frequency onto every component of the programme spectrum. Very unpleasant, and that can't be filtered at all easily - depends on how smart your receiver is.

You have to realize that Ancient Modulation is very ancient and is not suited to most purposes, except very local MF radio and some comms applications. But narrow band FM has replaced AM for the remaining ship to ship systems etc. at around 120MHz, although AM is still legal on the same bands I believe.
 
  • Like
Likes Silly Questions
  • #22
Silly Questions said:
Am I correct in thinking an AM radio receiver can't work without a filter?
It won't work well without filters of some sort. On a normal broadcast band they are what keep you from hearing multiple stations at the same time. This gets into the philosophical question of what does it mean for a radio to work. Perhaps it will suffice to say all radios have filters, usually several.

Radio receivers are a complex subject that requires some study to understand. Some reading on your part about communications systems would be the next step. Your questions are good but basic, asking them one at a time isn't as efficient as reading something that's prepared to teach the subject.
 
  • Like
Likes Klystron, Silly Questions, sophiecentaur and 1 other person
  • #23
sophiecentaur said:
You have to realize that Ancient Modulation is very ancient and is not suited to most purposes, except very local MF radio and some comms applications. But narrow band FM has replaced AM for the remaining ship to ship systems etc. at around 120MHz, although AM is still legal on the same bands I believe.
AM is used for aircraft communications.
 
  • Like
Likes Silly Questions and sophiecentaur
  • #24
DaveE said:
Radio receivers are a complex subject that requires some study to understand.
Absolutely. The way that an audio signal can be put on a 'carrier wave' and transmitted wirelessly is total magic to many people. TV even more so.
 
  • #25
Searching for the needle of "carrier-divergence-induced modulation index distortion" in the haystack of FM radio theory could take years. In fact I bet if you search-engined "modulation index distortion" this thread would probably end up near the top. But how can you search a term before you've heard it?

I read the theory, sure, but at some point my admittedly overly-sensitive BS detector goes off, and I think, "If it really worked that way what about X?" Compounding the problem is that I don't know what "X" is called -- in the case of this thread "modulation index distortion" -- until someone actually drops that phrase (shoutout to Tech99) in the course of answering questions about it.

So you see the chicken-and-egg problem? How do I know what to search for if I don't know what it's called? How many FM theory books would I have to sift through until at long last I struck paydirt and happened across the gold nugget of "modulation index distortion"?

I'm just relieved I asked the question precisely enough someone understood what I was asking and could answer it. You have no idea how long it's been bugging me. I've got a lot of unanswered questions that bother me.
 
  • Like
Likes Tom.G
  • #26
Silly Questions said:
Am I correct in thinking an AM radio receiver can't work without a filter?
What exactly do you expect folks on this forum to think when you ask such a question?
 
  • #27
"Hmm, this guy doesn't know that the earpiece in his uncle's homebuilt crystal set is acting as a low-pass filter. No wonder he's confused. I bet he'll understand how the theory translates to the engineering a lot better once I tell him that the earpiece is acting as both transducer AND filter."

Also:

"The 'windmill problem' -- yeah that's not really a science problem within its theoretical boundaries but an engineering problem. Whether or how much the blades affect the signal depends on how the filters are configured versus the frequency of the blades."

Speculation:

The mention of "amplification" in connection with the windmill blades leads me to think it also crossed everyone's mind that receiving through windmill blades only makes sense for a receiver with an amplifier so that the output drive isn't dependent on signal strength.

New Question!

Having an amplifier versus slow-turning windmill blades doesn't automatically mean you don't have to constantly adjust the volume, does it? Now that I know how to filter out fast (but not too fast) whirling blades what if they're going very slowly? Do I need something (which I think is called "automatic gain control") to keep from having to constantly adjust the volume, or can the second filter be designed to intrinsically normalize the signal however attenuated the carrier?
 
  • #28
Silly Questions said:
Do I need something (which I think is called "automatic gain control") to keep from having to constantly adjust the volume, or can the second filter be designed to intrinsically normalize the signal however attenuated the carrier?
You'll need AGC; it's how everyone solves that problem (in various implementations). Particularly since your desired output amplitude is easy to define in advance. AGC is simple compared to alternatives.

A normal filter can't add information, especially if you don't know what that information is. You usually have no way of knowing what the original transmission was. Almost all filters also don't vary their response with time. However, if you can predict in advance what will be wrong with the signal (noise), usually by analyzing recent outputs compared to what you expected, then you can build a "smart" filter (a Kalman filter, for example) to "fix" it. This requires the assumption that whatever was recently wrong with the signal will continue to occur. There are also restrictions on the type of noise that can be fixed. However, this is a very advanced topic in control theory.

I think the distinction between exotic filters and an AGC circuit is unclear. If you described a Kalman filter as performing an AGC function, I wouldn't argue too much. The difference depends on what "rules" you give it.
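A bare-bones sketch of what such an AGC loop does, continuing the windmill example (the time constants and gains below are invented for illustration):

```python
import numpy as np

# Toy feedback AGC: estimate the output level slowly and steer the gain toward a setpoint.
# The loop is far slower than the 400 Hz audio, so it rides out a slow fade
# (the windmill) without flattening the modulation itself.
fs = 50_000
t = np.arange(0, 5.0, 1/fs)
audio = np.sin(2*np.pi*400*t)                  # recovered audio tone
fade = 1.0 + 0.6*np.sin(2*np.pi*0.3*t)         # slow 0.3 Hz swing in received strength
x = fade * audio

target = 2/np.pi                               # mean of |sin| for unit output amplitude
a_lvl = 1 - np.exp(-2*np.pi*5/fs)              # ~5 Hz level-estimator pole
k = 20.0                                       # gain-loop speed, 1/s (hypothetical)
gain, level = 1.0, target
out = np.empty_like(x)
for i, s in enumerate(x):
    y = gain * s
    level += a_lvl * (abs(y) - level)          # slow estimate of output loudness
    gain += k * (target - level) / fs          # nudge the gain toward the setpoint
    out[i] = y
# 'out' stays close to unit amplitude while 'x' swings by +/-60%.
```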
 
  • Like
Likes Silly Questions and sophiecentaur
  • #29
AGC (named Automatic Volume Control at the time) was patented by Armand Denis, a Belgian inventor who spent the royalties on taking his glamorous wife Michaela around Africa making wildlife films. He is more famous for his many TV series, which many of us old’uns will remember from the 50’s.
Eat your heart out, David Attenborough.😉
 
  • Like
Likes Silly Questions
  • #30
I finally have this figured out: AM doesn't work the way I imagined it should.

To work the way my imagination wanted the AM receiver would have to do this:
1. Detect the baseline amplitude of the carrier
2. Compute a modulation index coefficient from the baseline amplitude
3. Subtract the carrier from the signal
4. Multiply the signal by the index coefficient
5. Pass the normalized output to the amp for high-fidelity AM

Only FM does that (or that-equivalent for frequency). AM is way less sophisticated. The "modulation index" is set by the volume knob and must be hand-cranked to a new one whenever the baseline amplitude changes, barring kludgy "AGC / AVC" circuitry which from how you've described it still doesn't do steps 1-4 to reconstitute the signal.

I hope re-reading my questions with this in mind proves of comedic value.
"How do you make AM work like FM?"
"You don't. Thanks for playing."

I think I get it now.
 
  • Like
Likes DaveE
  • #31
Silly Questions said:
AM doesn't work the way I imagined it should
AM is just like moving an amplitude control knob in time with the music signal - at a rate of kHz. Modulation is a Non-Linear process and so is demodulation. FM is the equivalent of moving the frequency knob on an oscillator in time with the music signal. It's also Non-Linear.
Demodulating (aka detecting) the signal that's being carried also needs a non linear process and there are many alternative methods for both forms of mod.

Your descriptions don't make sense to me so perhaps just put it to one side and use the standard descriptions. Private models are ok as long as you don't share them around - we all have our own pictures in our heads but they can be very risky.

Silly Questions said:
"How do you make AM work like FM?"
You can sort of do the reverse. Just to add confusion, I could add that an FM signal can be detected on an AM receiver by centring the receiver tuning on one side of the receive filter peak. As the frequency deviates about the rest position (over the sloping portion of the filter), the filter output goes up and down with the carrier frequency. That produces an AM (crude) signal. Slope detection has been used by radio Hams for decades to receive low deviation FM on an ordinary AM receiver.
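Here is a rough numerical sketch of slope detection (invented frequencies; a real receiver's filter would be sharper): the FM signal is parked on the skirt of a resonator deliberately tuned above the carrier, so frequency swings become amplitude swings, and then a crude envelope detector treats it like AM.

```python
import numpy as np
from scipy.signal import lfilter

# Slope detection sketch: FM on the skirt of a resonator, then envelope-detect like AM.
fs, fc = 1_000_000, 100_000
fm, dev = 1_000, 5_000                      # 1 kHz tone, 5 kHz peak deviation
t = np.arange(0, 0.005, 1/fs)
phase = 2*np.pi*np.cumsum(fc + dev*np.sin(2*np.pi*fm*t))/fs
fm_sig = np.cos(phase)

# Second-order resonator deliberately tuned ABOVE the carrier (standard peaking-BPF biquad)
f0, Q = 110_000, 10
w0 = 2*np.pi*f0/fs
alpha = np.sin(w0)/(2*Q)
b = np.array([alpha, 0.0, -alpha]) / (1 + alpha)
a = np.array([1 + alpha, -2*np.cos(w0), 1 - alpha]) / (1 + alpha)
on_slope = lfilter(b, a, fm_sig)            # amplitude now tracks instantaneous frequency

# Crude envelope detector: rectify and low-pass (~3 kHz one-pole)
env = np.empty_like(on_slope)
acc, k = 0.0, 1 - np.exp(-2*np.pi*3_000/fs)
for i, v in enumerate(np.abs(on_slope)):
    acc += k * (v - acc)
    env[i] = acc
# 'env' contains a (somewhat distorted) copy of the 1 kHz modulating tone.
```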
 
  • Like
Likes Silly Questions
  • #32
The "private model" in question doesn't actually exist IRL -- it wasn't so much a "private model" as a gross misconception based on the workings of FM.

What I was asking, foolishly as it turns out, is how an AM signal holds its fidelity across changes in carrier attenuation. The answer is that it doesn't. If a visiting delivery truck attenuates an FM carrier by 50% the signal itself doesn't degrade in the slightest, nor is the signal suddenly boosted when the truck drives away. My question was more-or-less, "How do AM receivers do that?" The answer is, "They don't." For AM to actually work that way would require extremely sophisticated signal processing to detect baseline amplitude and normalize the signal against it. But nobody does that; instead we all accept that with AM carrier and signal quality are tightly coupled.

The "windmill problem" was actually two questions, as if the blades are spinning fast enough -- but not too fast -- they hit a "sweet spot" where they could be filtered out. But if they're not he best you can do is hear a *POP* for every change in carrier attenuation and suffer partial loss of signal during every occlusion event which you'd have to compensate for by cranking the volume. That's the real answer to the "windmill problem" as I finally understand it. Even in the case of blades being filtered out the signal quality would rapidly rise and fall, so you'd hear something, maybe *woosh* *woosh* *woosh* instead of *whop* *whop* *whop*.

What you wouldn't be is blissfully unaware that the carrier is repeatedly changing baseline amplitude. The "windmill problem" at any RPM > 0 would play havoc with an AM signal -- compared to FM, and there's the rub. No one knew I was comparing the signal quality to FM because that would be nuts.

sophiecentaur said:
You can sort of do the reverse. ... As the frequency deviates about the rest position ... the filter output goes up and down ... That produces an AM (crude) signal ... to receive low deviation FM
That is about the coolest thing I've ever heard of. Are there any YouTube videos demonstrating this? I'm curious what it sounds like.
 
  • #33
I don't think you can guarantee a reduction in fidelity just because the signal strength is reduced. Until you get into the noise floor, simply attenuating the signal shouldn't be a big deal.
 
Last edited:
  • #34
If you mapped a 16-bit digital signal onto its lower 8 bits and then scaled it back up to 16 bits, it would now effectively have 8 bits of resolution.

In the analog world this is probably one of those, "It's more an engineering problem," kind of things. The sensitivity of the receiver is I imagine what determines how much resolution is lost. Unless "loss of resolution" isn't the same thing as "loss of fidelity" I'm skeptical, as I'm skeptical in general of the possibility of lossless signal attenuation.

"Skeptical" is alas as far as I can get lacking any expertise in the subject, but it is at least a question worthy of asking.
 
  • Skeptical
Likes DaveE
  • #35
Silly Questions said:
If you mapped a 16-bit digital signal onto its lower 8 bits and then scaled it back up to 16 bits, it would now effectively have 8 bits of resolution.

In the analog world this is probably one of those, "It's more an engineering problem," kind of things. The sensitivity of the receiver is I imagine what determines how much resolution is lost. Unless "loss of resolution" isn't the same thing as "loss of fidelity" I'm skeptical, as I'm skeptical in general of the possibility of lossless signal attenuation.

"Skeptical" is alas as far as I can get lacking any expertise in the subject, but it is at least a question worthy of asking.
Um... What?

This is exactly why even lowly engineers use math. Resolution, sensitivity, loss, attenuation, all have definitions that can generate equations. "Fidelity", OTOH, means almost nothing. That's OK, sometimes we need imprecise words, so we can tell people we don't want to do the work to carefully describe or analyze things.

Clearly you have a deeper interest in communications systems than the English language can really support. Reading, at this point, will be the most productive approach. There are millions of resources on the web, but I would also consider a good introductory text.
 
  • Like
Likes Klystron, sophiecentaur and Averagesupernova
  • #36
@Silly Questions I've had a hard time figuring out how to reply to a lot of your posts. It's obvious you are putting thought into this but it seems that you have a little bit of knowledge and start making assumptions after that.
-
You speak of fidelity. That is a very loosely defined word. I can have a recording studio that captures sound well above 40 kHz. Then 'dumb it down' and cut off all the frequencies above the range of human hearing. Have I reduced the fidelity of the signal? Clearly I have taken away signals, but considering I never heard them in the first place the fidelity remains acceptable. Same way with attenuation. The signal can be attenuated, then amplified to the intended level. As long as the signal did not get reduced to the point that the noise floor becomes audible we have an acceptable signal fidelity-wise. Again, fidelity being a loosely defined word.
-
Now consider this windmill blade thing chopping up the signal. The actual truth is that you cannot do anything non-linear to a sine wave without creating new frequencies. You are not just varying the amplitude of the carrier with a power control knob on a transmitter when you rapidly move it back and forth. You create sidebands. Yep, new frequencies. If you diddle the power control knob at a rate of 10 cycles per second, you have sidebands 10 Hertz either side of the carrier, and the amplitude of the carrier itself remains unchanged on a spectrum analyzer. Difficult to believe, but it's true. Now when you use the windmill blades to reduce the signal with each passing blade, think of it as modulating both the carrier and the sidebands. Suppose a given rpm of the blade puts us at 100 Hertz. Every signal will be affected. The windmill doesn't care which is the carrier and which is the sideband(s). Each sideband as well as the carrier will now have sidebands 100 Hertz either side. Confused even more?
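If that is hard to believe, it is quick to check numerically. The sketch below (using a low 1 kHz stand-in carrier and a 1 s record so every component lands exactly on an FFT bin) wobbles the amplitude at 10 Hz and prints the spectrum lines:

```python
import numpy as np

# FFT check of the "new frequencies" claim: wobble the amplitude of a 1 kHz carrier at
# 10 Hz and the spectrum shows the carrier plus two sidebands at +/-10 Hz. The carrier
# bin itself does not change height as the wobble depth changes.
fs, fc, fwobble = 8_000, 1_000, 10
t = np.arange(0, 1.0, 1/fs)                      # 1 s record -> 1 Hz bin spacing

for m in (0.0, 0.5, 1.0):                        # wobble depth
    sig = (1 + m*np.cos(2*np.pi*fwobble*t)) * np.cos(2*np.pi*fc*t)
    spec = np.abs(np.fft.rfft(sig)) / (len(sig)/2)   # amplitude spectrum
    print(f"m={m:3.1f}  carrier={spec[fc]:.3f}  "
          f"sidebands={spec[fc-fwobble]:.3f}/{spec[fc+fwobble]:.3f}")
# the carrier line stays 1.000 for every m; the sidebands grow as m/2.
```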
 
  • #37
If the original question is more-or-less, "How does an AM receiver distinguish between modulation versus carrier attenuation," and the answer is, "it can't," that comports with the idea that all changes to carrier amplitude will be interpreted as "modulation". So I'm actually less confused!

The next question, "Wouldn't there be a minimum frequency below which you'd have to call a change to the carrier 'attenuation' rather than 'modulation'?" I think I can answer myself. 3 seconds will sound like this:

THAT'S ONE SMALL STEP FOR *POP* man, one giant *POP* LEAP FOR MANKIND

But if the *POP*'s themselves are subtly altered into Morse code to tap out, "Help! I'm trapped in this windmill!" then not only is there a 3-second side-band it's actually carrying information. Any "minimum frequency" would seem to exist only relative to whether the analyzer can observe at least two events.

(Aliens winking out prime numbers atop their instructions for building spaceships send their regards.)
 
  • #38
Averagesupernova said:
I don't think you can guarantee a reduction in fidelity just because the signal strength is reduced. Until you get into the noise floor, simply attenuating the signal shouldn't be a big deal.
The characteristics of any added noise have an effect, but the info carried by a digital signal is not harmed until a high enough noise spike occurs. The original digitization has already introduced an ‘acceptable’ distortion / noise. So that may or may not be considered a downside. The raw digital data can be compressed without losing information as long as you can use a long enough delay in the processing. The noise is replaced by error bursts.

Problem is that the fancier the system, the more definite the threshold of operation.
It’s like wide-deviation FM, which has a pro rata noise advantage but which dies catastrophically once the input filter lets enough noise power in. There’s nothing as robust as AM under really noisy conditions. Your brain pulls out some sort of message from dreadful (unpleasant to listen to) shash.
 
  • Like
Likes DaveE
  • #39
Silly Questions said:
But if the *POP*'s themselves are subtly altered into Morse code to tap out, "Help! I'm trapped in this windmill!" then not only is there a 3-second side-band it's actually carrying information
Yes. A low enough data rate can be hidden under an audio channel.

Did you know that the LF transmissions by the BBC were (are?) used to carry a very low data rate signal by frequency modulation of the 198 kHz sound programme? The coverage of Radio Data is the whole of the UK, more or less, and the only place you can hear it is the mush area between the main transmitter and the two fill-ins in Scotland.

The system used is better than Morse Code but the idea's the same.
 
  • Like
Likes Silly Questions
  • #40
Silly Questions said:
If the original question is more-or-less, "How does an AM receiver distinguish between modulation versus carrier attenuation," and the answer is, "it can't," that comports with the idea that all changes to carrier amplitude will be interpreted as "modulation". So I'm actually less confused!
I think you are more confused. Take an AM station that is perfectly quiet. It's only transmitting a carrier. Now add modulation. The average power transmitted as well as received will increase. If you have a baseline to go by, you will find you can tell the difference. Suppose atmospheric conditions cause the received power of a steady carrier to drop. Until you wait for it to go up again (even higher than previously to satisfy the requirement that with modulation average power increases) you cannot tell if it was intentional as part of modulation or not. My whole point in staying in this discussion is that I'm not sure you really understand what AM really is in its entirety. Btw, not saying I'm an expert, but I get the feeling you are missing a few things.
 
  • #41
Averagesupernova said:
The average power transmitted as well as received will increase.
Hmm... I don't think I've noticed the S-meter going up when modulation comes on.

It's been a long time though, have I perhaps not been observant enough?
 
  • Like
Likes DaveE
  • #42
Averagesupernova said:
Take an AM station that is perfectly quiet. It's only transmitting a carrier. Now add modulation. The average power transmitted as well as received will increase.
So suppose we are transmitting at 500 kHz and modulating with a 10 kHz sine wave. What you are saying implies there is more power in ##\sin(2\pi\cdot 10^4\,t)\cdot\sin(2\pi\cdot 5\times 10^5\,t)## than in just ##\sin(2\pi\cdot 5\times 10^5\,t)##. Are you sure about that?
[Attached image: sin mod.jpg]
 
  • #43
Tom.G said:
Hmm... I don't think I've noticed the S-meter going up when modulation comes on.

It's been a long time though, have I perhaps not been observant enough?
The power for the modulation comes from the Audio source for a ‘proper’ amplitude modulator. The envelope goes above and below the mean carrier level.
The S meter will be low pass filtered to eliminate the envelope and shouldn’t twitch unless the mean power changes.
@DaveE the lower graph shows Suppressed Carrier AM and not AM. Add the two graphs together and you get proper AM (with a constant level of carrier).
I can’t easily find a link at the moment but just look at wiki for the right maths and graphs.

Note: AM is not just multiplication.
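For reference, the single-tone maths being pointed at: proper AM is

$$\big[1+m\cos(\omega_m t)\big]\cos(\omega_c t)=\cos(\omega_c t)+\frac{m}{2}\cos\big((\omega_c+\omega_m)t\big)+\frac{m}{2}\cos\big((\omega_c-\omega_m)t\big),$$

whereas the bare product plotted above is only the sideband pair (double-sideband suppressed carrier):

$$\sin(\omega_m t)\sin(\omega_c t)=\tfrac12\cos\big((\omega_c-\omega_m)t\big)-\tfrac12\cos\big((\omega_c+\omega_m)t\big).$$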
 
Last edited:
  • Like
Likes Silly Questions, DaveE and Averagesupernova
  • #44
My understanding is that for an AM transmission with 100% modulation, the power in the sidebands is half that in the carrier. I would therefore expect the total power to increase by 50% when modulation is applied.
 
  • #45
tech99 said:
My understanding is that for an AM transmission with 100% modulation, the power in the sidebands is half that in the carrier. I would therefore expect the total power to increase by 50% when modulation is applied.
Correct. Thinking in terms of phasors and a single-frequency modulating waveform: there is a carrier vector and two half-length phasors, rotating clockwise and anticlockwise. That's two quarter powers, giving half power in the sidebands, which corresponds to a second, big sweaty power valve for the modulation.
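Spelling out the arithmetic behind that: with carrier power ##P_c## and modulation depth ##m##, each sideband phasor has ##m/2## of the carrier amplitude, hence power ##(m/2)^2 P_c##, so

$$P_{\text{total}} = P_c\Big(1+\frac{m^2}{2}\Big)\;\longrightarrow\;1.5\,P_c \quad\text{at } m=1.$$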
 
  • Like
Likes Averagesupernova
  • #46
Silly Questions said:
Any "minimum frequency" would seem to exist only relative to whether the analyzer can observe at least two events.
Averagesupernova said:
I think you are more confused ... Until you wait for it to go up again ...
The "second event' in question would be the power going back up, so in this case perhaps not more confused. The power needing to exceed a certain threshold to count as "modulation" is entirely new information, however, so I did learn something.
Averagesupernova said:
(even higher than previously to satisfy the requirement that with modulation average power increases) you cannot tell if it was intentional as part of modulation or not.
Is this a case of analysis versus simple reaction? What a human thinks when looking at the analyzer has very little to do with what an AM receiver scores as "modulation".
As I understand an AM receiver thinks every change is "modulation" whether the peak meets the threshold of "true modulation" or not.
Yet I can see that whether a rising and falling signal represents "true modulation" is of critical importance to, for instance, analyzing the nature of interstellar emissions caught on a radiotelescope.
Just to make sure I'm straight on this, an AM receiver will pass to the output every click, buzz, or whine whether or not the sideband peaks exceed the threshold for "true modulation"?

Averagesupernova said:
I get the feeling you are missing a few things.
That almost certainly remains true.

sophiecentaur said:
Did you know that the LF transmissions by the BBC were (are?) used to carry a very low data rate signal
So in Britain even the radios have a silly walk! Is there a code for, "And now for something completely different?"
 
  • #47
@Silly Questions try thinking in the frequency domain. As I said before, anything done to a sinewave will create new frequencies (sidebands). Try not to think of the amplitude of the carrier changing. If there is a signal (single frequency) that drops into the channel of an AM station it will show up as a tone in the audio output of the receiver.
Example: Suppose there is a station broadcasting a program on 1 MHz in the AM band. Now suppose your neighbor is an electronics experimenter whose latest project generates a single frequency of 1.000800 MHz and this frequency escapes the laboratory workbench by more than a small amount. Your receiver, which is tuned to 1 MHz, suddenly has an 800 Hertz tone in the middle of the program you were listening to. You speak of 'change' and 'threshold of true modulation', and I'm not really sure what you mean. Threshold of modulation to me isn't even a thing.
-
Bottom line is this: The AM receiver gets the audio signal back by intermodulating the carrier with ANY other signal in the passband.
-
It's like the chicken and the egg. We can attempt to change the carrier level at a certain rate and what actually happens in the spectrum is the carrier remains unchanged and we end up with new frequencies. Or, we simply add several signals together in correct proportions and when we look at the composite on a scope it certainly looks like the AM envelope we all recognize. Quite the rabbit hole isn't it?
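A quick way to see where that 800 Hz tone comes from, treating the interferer as a small added phasor (##a \ll 1##):

$$\cos(\omega_c t)+a\cos\big((\omega_c+\omega_b)t\big)\approx\big[1+a\cos(\omega_b t)\big]\cos\big(\omega_c t+\varphi(t)\big),$$

so the envelope wobbles at the difference frequency ##\omega_b## (800 Hz here) and the detector treats it exactly like programme modulation.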
 
  • #48
It was thought by some scientists, when the concept of side frequencies was proposed by Carson in the '20s, that they were merely a mathematical construct and did not actually exist. However, the imminent introduction of single-sideband working for the first UK-USA telephone service (using Long Wave) proved them wrong.
 
  • Like
Likes Averagesupernova
  • #49
tech99 said:
merely a mathematical construct and did not actually exist.
Without the experimental equipment, no one could be 100% sure about the equivalence between time and frequency domains, but after the triode valve was invented in 1906, I would have thought that spectrum analysis of a DSB AM signal (albeit with a very low frequency carrier) would have been very possible long before 1920.

You'd have only needed to demonstrate the time / frequency thing once for the whole of the theory which Fourier proposed in 1807 to be justified (and it survives pretty much unchanged today). Those 1920s 'naysayers' would have been a bit late to the party, I think.
 
  • Like
Likes Averagesupernova
  • #50
Silly Questions said:
As I understand an AM receiver thinks every change is "modulation" whether the peak meets the threshold of "true modulation" or not.
Electronic equipment doesn't 'think' anything. Whether or not there is anything there that makes sense (the signal - not my writing) would have to depend on how the output signal from a demodulator (of any kind) is analysed after the receiver.

Deciding whether a signal is carrying any information other than random noise can be a hard job. But the information about the way the comms channel is varying can be at a fraction of a Hz and be detectable even when way down in the shash. That would still be "true modulation".
 