Description of a telemetry transmitter

  • Thread starter: senmeis
  • Tags: Transmitter
AI Thread Summary
The discussion focuses on the concept of "optimum peak deviation" in telemetry transmitters, specifically regarding the IRIG Standard 106-11. Optimum peak deviation is defined as the difference between the carrier frequency and the modulated frequency, with a recommended value of 0.35 times the bit rate for efficient bandwidth use. The conversation highlights the trade-off between deviation width and noise power, emphasizing that wider deviations can enhance signal levels but may also increase noise interference. It also touches on the efficiency of frequency shift keying (FSK) and Gaussian Minimum Shift Keying (GMSK) in digital signaling, noting their applications in systems like GSM mobile phones. Understanding these concepts is crucial for optimizing telemetry signal transmission.
senmeis
Hi,

The following statement is taken from IRIG Standard 106-11 (a telemetry standard):

The RF signal is typically generated by filtering the baseband non-return-to-zero-level (NRZ-L) signal and then frequency modulating a voltage-controlled oscillator (VCO). The optimum peak deviation is 0.35 times the bit rate and a good choice for a premodulation filter is a multi-pole linear phase filter with bandwidth equal to 0.7 times the bit rate.

Could anyone explain what “optimum peak deviation” means? Is it a parameter of the filter? Are such constraints difficult to achieve?

Senmeis
 
It's a property of the VCO/modulator - it is the maximum difference between the carrier frequency and the instantaneous modulated frequency. 'Optimum' in this context (I think) just means that using more bandwidth (than 0.7x the NRZ bit rate) is unnecessary/wasteful.
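If it helps, both numbers in the standard are just fixed fractions of the bit rate. A minimal sketch in Python (the function name and dictionary keys are my own, not from the standard):

```python
# Minimal sketch of the IRIG 106-11 rule-of-thumb numbers.
# The peak deviation is how far the VCO is pulled away from its rest
# (carrier) frequency at the extremes of the filtered NRZ-L signal.

def pcm_fm_parameters(bit_rate_hz: float) -> dict:
    """Suggested peak deviation and premodulation filter bandwidth."""
    return {
        "peak_deviation_hz":   0.35 * bit_rate_hz,  # "optimum" per the standard
        "premod_filter_bw_hz": 0.70 * bit_rate_hz,  # multi-pole linear-phase filter
    }

print(pcm_fm_parameters(1e6))
# {'peak_deviation_hz': 350000.0, 'premod_filter_bw_hz': 700000.0}
```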
 
Is the following calculation correct?

Given: fosc = 10 MHz, bit rate = 1 Mbps
optimum peak deviation = 0.35 × 1 Mbps = 350 kHz

The modulated signal takes the frequencies 9.65 MHz and 10.35 MHz. This is in fact FSK.
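A quick numeric check of the above (sketch only; the values are just the example numbers):

```python
fc  = 10e6         # VCO rest (carrier) frequency, Hz
rb  = 1e6          # bit rate, bits per second
dev = 0.35 * rb    # optimum peak deviation, Hz

f_high = fc + dev  # one bit value sits at fc + dev = 10.35 MHz
f_low  = fc - dev  # the other sits at    fc - dev =  9.65 MHz
h = 2 * dev / rb   # FSK modulation index, here 0.7
print(f_high, f_low, h)   # 10350000.0 9650000.0 0.7
```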

Senmeis
 
The Hz of deviation per volt of input signal can be anything we choose. The wider the deviation used, the more volts you will get out of your demodulator. Good for signal level, but the wider bandwidth required will increase the power of the noise that gets in. The optimum deviation/bandwidth trade-off will depend on the application.
For binary digital signalling it's common to use FSK between two basic frequencies, but you have to remember that those frequencies may well differ by an amount of the order of the digital signal's bit rate. It's hard to recognise this when you look at the modulated signal on a scope - it doesn't look like a burst of one frequency followed by a burst of the other. But that method makes very efficient use of spectrum space.
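A rough numeric sketch of that kind of continuous-phase FSK (my own illustration; the sample rate, bit pattern and variable names are arbitrary, and the premodulation filter is left out for simplicity):

```python
import numpy as np

fs   = 50e6                     # sample rate, Hz (arbitrary, well above fc)
rb   = 1e6                      # bit rate, bits/s
fc   = 10e6                     # carrier frequency, Hz
dev  = 0.35 * rb                # peak deviation, Hz
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])

sps  = int(fs / rb)                     # samples per bit
nrz  = np.repeat(2 * bits - 1, sps)     # NRZ-L: 1 -> +1, 0 -> -1
inst_freq = fc + dev * nrz              # instantaneous frequency, +/- dev about fc
phase  = 2 * np.pi * np.cumsum(inst_freq) / fs   # phase = integral of frequency
signal = np.cos(phase)                  # continuous-phase FSK: no jump at bit edges
```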
 
This sounds like the very narrow form of frequency shift keying called Gaussian Minimum Shift Keying (Wiki). The phase is continuous as the carrier swings between the bits; it does not switch between two frequencies. GMSK minimises interference to an adjacent channel, but suffers from inter-symbol interference, i.e. blurring of the received pulses. It is used for the GSM mobile phone system.
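For what it's worth, a rough sketch of the GMSK idea (my own illustration, assuming the usual GSM parameters of BT = 0.3 and modulation index 0.5; none of this is quoted from the thread): the NRZ bits are smoothed by a Gaussian low-pass filter before frequency modulation, which keeps the spectrum narrow but smears each bit into its neighbours.

```python
import numpy as np

def gaussian_pulse(bt: float, sps: int, span_bits: int = 4) -> np.ndarray:
    """Impulse response of the Gaussian premod filter, normalised to unit area."""
    t = np.arange(-span_bits * sps, span_bits * sps + 1) / sps  # time in bit periods
    h = np.exp(-2 * (np.pi * bt * t) ** 2 / np.log(2))
    return h / h.sum()

sps  = 16
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
nrz  = np.repeat(2 * bits - 1, sps).astype(float)

# Gaussian smoothing (BT = 0.3): each bit now spills into its neighbours,
# which is the inter-symbol interference mentioned above.
smoothed = np.convolve(nrz, gaussian_pulse(0.3, sps), mode="same")

h_index  = 0.5                                          # minimum-shift modulation index
phase    = np.pi * h_index * np.cumsum(smoothed) / sps  # integrate frequency -> phase
baseband = np.exp(1j * phase)                           # complex-envelope GMSK signal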
 
tech99 said:
it does not switch between two frequencies.
Whether it's frequency shift keying or phase shift keying is all a matter of how the modulating signal is filtered. Frequency is the rate of change of phase. Hard to get your head round until you're familiar with it. The pre-emphasis that's applied to the audio signal in FM sound radio actually turns it into phase modulation, and the de-emphasis that's done in the receiver reduces the demodulated noise.
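A tiny numeric illustration of that point (my own sketch, not from the post): phase-modulating a carrier with a message m(t) gives essentially the same waveform as frequency-modulating it with the differentiated message dm/dt, which is why a differentiating pre-emphasis network in front of an FM modulator behaves like a phase modulator.

```python
import numpy as np

fs = 1e5                                  # sample rate, Hz
t  = np.arange(0, 0.01, 1 / fs)
m  = np.sin(2 * np.pi * 300 * t)          # message signal
fc = 10e3                                 # carrier frequency, Hz
kp = 0.5                                  # phase-modulation index, radians

pm = np.cos(2 * np.pi * fc * t + kp * m)  # phase modulation by m(t)

dm = np.gradient(m, 1 / fs)               # dm/dt, i.e. a "pre-emphasised" message
fm = np.cos(2 * np.pi * fc * t + kp * np.cumsum(dm) / fs)  # FM by dm/dt

print(np.max(np.abs(pm - fm)))            # small: the two waveforms nearly coincide
```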
 