Minimum Tone Spacing: Coherent Frequency Shift Keying

  • Thread starter Master1022
  • #1
Summary:
What is the reasoning behind the minimum tone spacing for coherent FSK?
Hi,

I was reading through some online notes and was wondering: when dealing with coherent FSK, what is the minimum tone spacing and why?

I know that for non-coherent FSK, we can show that the minimum is: ## f_1 - f_0 = \frac{1}{T} ## where ## T ## is the symbol period. However, if we are now dealing with coherent FSK, how can I go about finding the minimum difference required (which will help me find the bandwidth)?

After a quick Google search, the only reference I could find was in some online lecture notes (shown in the picture below), which simply stated ## f_1 - f_0 = \frac{1}{2T} ## as the minimum, without any explanation. I think I must be overlooking something quite obvious. What would be a good starting point for deriving/understanding this property?
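Here is the numeric experiment I tried (all parameter values are arbitrary), which does seem to confirm the ## \frac{1}{2T} ## spacing, though I don't see why it works:

```python
import numpy as np

# Numeric check of the 1/(2T) claim for coherent (zero-phase) tones.
# All parameter values here are arbitrary illustrations.
def tone_inner_product(f0, f1, T=1.0, fs=100_000):
    """Riemann-sum approximation of the integral of
    cos(2*pi*f0*t) * cos(2*pi*f1*t) over one symbol period T."""
    t = np.arange(0, T, 1 / fs)
    return np.sum(np.cos(2 * np.pi * f0 * t) * np.cos(2 * np.pi * f1 * t)) / fs

T, f0 = 1.0, 10.0
print(tone_inner_product(f0, f0 + 1 / (2 * T)))  # ~0: orthogonal at df = 1/(2T)
print(tone_inner_product(f0, f0 + 1 / (4 * T)))  # clearly nonzero: tones too close
```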

Thanks in advance for any help.

[Attachment: screenshot of the lecture notes stating the minimum tone spacing]

Answers and Replies

  • #3
I was hoping someone would be of more help than I can. Anyhow, here is what I found.

Rather heavy on the math, not a beginner's article:
https://www.dsprelated.com/showarticle/1016.php

That and others found with:
https://www.google.com/search?&q=minimum+shift+keying+tutorial

Good Luck!
Tom
Thanks @Tom.G !

That link with the maths basically answers the question perfectly! However, I think one question I have is: what do coherent and non-coherent mean?

When we do that integration orthogonality proof for non-coherent signals, there is a phase offset ## \phi ##, as well as a difference in frequency, between the two signals - that is, we have ## \cos(2 \pi f_0 t) ## and ## \cos(2 \pi f_1 t + \phi) ##. However, for coherent signals this is not the case - why?

Does that just follow from the definition of the terms? I read that we can think of coherent signals as coming from the same source, whereas non-coherent signals come from different sources, so we assign an arbitrary phase ## \phi ## to account for the phase difference between them. Is this heading in the right direction?
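For what it's worth, adding the phase offset to a numeric version of that integral makes the distinction concrete (parameters are arbitrary): with ## \phi = 0 ## a spacing of ## \frac{1}{2T} ## is already orthogonal, but with an unknown ## \phi ## you need ## \frac{1}{T} ##.

```python
import numpy as np

# Numeric version of the orthogonality integral, with an arbitrary
# phase offset phi between the two tones (all numbers hypothetical).
def tone_inner_product(f0, f1, phi=0.0, T=1.0, fs=100_000):
    """Riemann-sum approximation of the integral of
    cos(2*pi*f0*t) * cos(2*pi*f1*t + phi) over one symbol period T."""
    t = np.arange(0, T, 1 / fs)
    return np.sum(np.cos(2 * np.pi * f0 * t) * np.cos(2 * np.pi * f1 * t + phi)) / fs

T, f0 = 1.0, 10.0
# Coherent (phi = 0): spacing 1/(2T) already gives orthogonality.
print(tone_inner_product(f0, f0 + 1 / (2 * T), phi=0.0))
# Non-coherent (unknown phi, e.g. pi/2): 1/(2T) fails...
print(tone_inner_product(f0, f0 + 1 / (2 * T), phi=np.pi / 2))
# ...but 1/T stays orthogonal for every phi.
print(tone_inner_product(f0, f0 + 1 / T, phi=np.pi / 2))
```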
 
  • #4
Tom.G
Science Advisor
That is my (non-expert) understanding.
 
  • #5
tech99
Gold Member
If we look at a bit stream, the maximum frequency Fm it contains is half the bit rate. Applied to an FM transmitter, this gives two side frequencies spaced Fm Hz either side of the carrier, so the transmission occupies 2 Fm Hz. However, if we use quadrature modulation with a coherent detector, we can superimpose a second transmission on the same carrier, and the two bit streams are independent. In the same bandwidth of 2 Fm we can now carry twice as many bits, so the transmission effectively occupies a bandwidth of Fm, where Fm is the maximum frequency contained in the combined bit stream. MSK is a method of achieving quadrature modulation in a simple way.
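A minimal numeric sketch of this quadrature idea (all parameters hypothetical): two independent ±1 bit streams modulate cos and sin of the same carrier, and a coherent detector separates them cleanly because the two carriers are orthogonal over a symbol.

```python
import numpy as np

# Two independent bit streams on quadrature phases of one carrier,
# recovered by coherent (synchronous) detection. Parameters are made up.
fc, T, fs = 10.0, 1.0, 10_000           # carrier freq, symbol period, sample rate
rng = np.random.default_rng(0)
bits_i = rng.integers(0, 2, 8) * 2 - 1  # in-phase stream, mapped to +/-1
bits_q = rng.integers(0, 2, 8) * 2 - 1  # quadrature stream

t = np.arange(0, T, 1 / fs)
rx_i, rx_q = [], []
for bi, bq in zip(bits_i, bits_q):
    s = bi * np.cos(2 * np.pi * fc * t) + bq * np.sin(2 * np.pi * fc * t)
    # Coherent detection: multiply by each local carrier and integrate.
    rx_i.append(np.sign(np.sum(s * np.cos(2 * np.pi * fc * t))))
    rx_q.append(np.sign(np.sum(s * np.sin(2 * np.pi * fc * t))))

print(np.array_equal(rx_i, bits_i), np.array_equal(rx_q, bits_q))
```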
 
  • #6
sophiecentaur
Science Advisor
Gold Member
2020 Award
Summary: What is the reasoning behind the minimum tone spacing for coherent FSK?

how can I go about finding the minimum difference required (which will help me find the bandwidth)
If you can accept that FSK is basically the same as Phase Shift Keying (Frequency is just the time differential of phase) then it may be easier to see that there is no minimum frequency deviation required.
The required carrier to noise ratio goes up as you reduce the deviation but there is no basic "minimum" deviation. (Advanced systems use very small phase shifts and multiple signalling levels to achieve high data rate in small bandwidth.)
 
  • #7
hutchphd
Science Advisor
Homework Helper
I must be misunderstanding you. There are information theoretic constraints here due to Shannon and Nyquist that fundamentally constrain any attempt to code information. For an arbitrary code block time T there are lower limits on the frequency that go something like 1/T.
 
  • #8
sophiecentaur
Science Advisor
Gold Member
2020 Award
I must be misunderstanding you.
No, you are just applying those two concepts out of context. (I'm being a bit smartarse here but my comments are valid.)
Firstly, the Nyquist criterion tells you about the limits of reconstructing a perfect 'original' from samples. It implies a vast amount of information is being transferred (highly accurate values of an analogue waveform's amplitude over time), not a simple string of digital values. Also, sub-Nyquist sampling is very successful in many analogue cases where the resulting distortion is not 'seen'. An example is digitising PAL TV signals: the spectrum of a stationary picture has a comb structure, and the sub-Nyquist artefacts are arranged to fall in the gaps of the comb - you can't see them.
Next, the Shannon limit refers to the total information that a channel can carry. A simple FSK system is easy to engineer (that's very relevant, of course) but you don't need to know the precise values of the varying signal - just the comparatively slow rate of the actual data. Limiting the bandwidth produces inter symbol interference, of course, but as long as the signal to noise ratio is high enough, that ISI can be eliminated by examining the received (analogue) signal over a range of data bit intervals (can be done with a temporal filter, for instance).
The limit to this process is what the Shannon theorem states.
 
  • #9
hutchphd
Science Advisor
Homework Helper
I guess I am unsure what the discussion is about. The formal information content of any signal does not depend upon how it is encoded (the useful information is another thing entirely, and encoding schemes are a high form of art).

I am talking about the formal content and that is rigorously constrained by information theory. It can be represented by a bit stream for instance. Within that framework I believe the frequency limitation is fundamental meaning there is a hard lower limit.

Surely the "biggest" (literally) example of this is the VLF array in Cutler Maine which is constrained to use frequencies below 24 kHz to allow sufficient penetration depth into seawater to talk to submarines. Even with optimization it is a very limiting constraint.
 
  • #10
sophiecentaur
Science Advisor
Gold Member
2020 Award
Surely the "biggest" (literally) example of this is the VLF array in Cutler Maine which is constrained to use frequencies below 24 kHz to allow sufficient penetration depth into seawater to talk to submarines. Even with optimization it is a very limiting constraint.
And the carrier to noise ratio will be a huge limitation. Also it's likely to be a very simple modulation / demodulation scheme, as with a lot of military equipment. One problem with marginal systems is that they need a long delay in the decoding, and that in itself limits their suitability in a combat situation.

FSK is very convenient but, in most circumstances, you can do a lot better. Multiple level Amplitude / Phase shift keying will yield much higher data rates, albeit with a penalty in error rates. But that's just another way of saying that, as I noted to start with, there is not a simple value of the "minimum" frequency shift that's allowed. It all depends on how hard you are prepared to work at it and what received signal level you can expect.
Someone may quote achievable information rates that approach the Shannon limit but such systems aren't common.
 
  • #11
hutchphd
Science Advisor
Homework Helper
I think we are in agreement that it is difficult to approach the Shannon limit. My ongoing point is that it is in fact a limit just as fundamental as, say, Carnot efficiency for heat engines. Or Heisenberg uncertainty.
Since I believe the ultimate purpose of the Cutler station is really to transmit the final "go" code to launch thermonuclear weapons while subs stayed submerged, I hope their coding scheme is not esoteric but is robust!
 
  • #12
anorlunda
Staff Emeritus
Insights Author
I agree that you can't exceed the Shannon limit for lossless transmission. But I don't recall the Shannon paper addressing the topic of lossy transmission.

If you tolerate some error rate, you should be able to transmit some information at lower frequencies.

The GO code for launch could be repeated 1000 times, and the captain instructed to launch if he receives GO more than N times within a time window T. Even with good quality transmission, I hope that the GO code must be repeated several times.
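A toy simulation of that repetition scheme (numbers made up): a single copy of a one-bit message is flipped with probability p, but a majority vote over many copies makes a wrong call vanishingly rare.

```python
import random

# Repetition coding with majority vote over a bit-flipping channel.
# The flip probability and repeat counts are hypothetical.
def majority_vote_error(p, n, trials=20_000, seed=1):
    """Fraction of trials in which a majority of the n copies arrive wrong."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(n))  # copies received wrong
        if flips > n // 2:                               # majority corrupted
            errors += 1
    return errors / trials

print(majority_vote_error(0.2, 1))    # single copy: roughly a 20% chance of a wrong call
print(majority_vote_error(0.2, 101))  # 101 copies: a wrong call becomes vanishingly rare
```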

As @sophiecentaur mentioned, some applications like video can be very error tolerant. I recall analog TV in the old days, watching programs when more than 50% of the pixels were "snow", and audio was barely understandable above the noise, yet it was good enough to get the message through.
 
  • #13
hutchphd
Science Advisor
Homework Helper
First I very much recommend the (~general audience) book "The Information" by James Gleick. I really found it interesting and on point...learned much.

Without getting deep into the arcane definitions of information, a signal that is only partially correct is necessarily degraded as to actual information content. If I get the mojo I will take a look at how this works out. Clearly for any "message" there is a minimum information content required, even if the context is already known. I think this can generically be reduced to binary, and that is what runs into the Shannon limit.

Gleick's discussion of "jungle drums" is fascinating in this arena.
 
  • #14
sophiecentaur
Science Advisor
Gold Member
2020 Award
But I don't recall the Shannon paper addressing the topic of lossy transmission.
By a lossy transmission channel, I presume you include the noise that's present. The upper limit for information capacity is:
## C = B \log_2 \left( 1 + \frac{S}{N} \right) ##
FSK falls a long way short of that. That upper limit includes any possible error correction used.
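For concrete numbers (both channels here are hypothetical examples), the formula is trivial to evaluate:

```python
import math

# Shannon channel capacity C = B * log2(1 + S/N), evaluated for a couple
# of illustrative (made-up) channels. SNR is linear, not in dB.
def shannon_capacity(bandwidth_hz, snr_linear):
    """Upper limit on information rate, in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

print(shannon_capacity(3000, 1000))  # ~30 kbit/s: phone-line bandwidth at 30 dB SNR
print(shannon_capacity(3000, 1))     # 3 kbit/s: same bandwidth at 0 dB SNR
```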
some applications like video can be very error tolerant.
given the right programme content. :wink:
 
  • #15
anorlunda
Staff Emeritus
Insights Author
By a lossy transmission channel, I presume you include the noise that's present.
No, I was not focused on noise at all. I meant that some of the message is received incorrectly or not at all. But thanks to redundancy or thanks to human ability to guess the missing parts, it succeeds anyhow.

Correct me if I'm wrong, but Shannon's equations all refer to the message being received correctly in entirety.
 
  • #16
sophiecentaur
Science Advisor
Gold Member
2020 Award
No, I was not focused on noise at all.
But you have to be. The formula above gives an infinite channel capacity if the noise is zero. Noise is always with us and affects our lives all the time.
 
  • #17
anorlunda
Staff Emeritus
Insights Author
But you have to be. The formula above gives an infinite channel capacity if the noise is zero. Noise is always with us and affects our lives all the time.
I must be thinking upside down. I'm thinking of receiving a partial message even when signal/noise ratio approaches zero.
 
  • #18
sophiecentaur
Science Advisor
Gold Member
2020 Award
I must be thinking upside down. I'm thinking of receiving a partial message even when signal/noise ratio approaches zero.
What do you mean by a "partial message"?
Going back to the FSK theme, if you reduce the frequency deviation of the carrier (the coherent system is only there for convenience but it's actually more akin to PM because the deviation is low), the spectrum of the transmitted signal will still have sidebands of the modulating bitstream, whatever the deviation.
There is a harder conceptual problem when thinking about FM, compared with PM at low deviation, although they are essentially the same animal. An easy form of modulation is to use 0° phase for the symbol 1 and 180° for a 0; once your receiver has synced up, you can decode the binary signals by simple synchronous detection. Alternatively you can place the two symbols on quadrature phases (which leaves room for two more states to be transmitted), but the spacing between the states is smaller, so your SNR goes down.

There is no fundamental limit to how far you can reduce the phase deviation (and you can use some AM at the same time). That can be described as QAM (Quadrature Amplitude Modulation), which can produce a whole lattice of amplitude and phase states - all using the same basic signalling bandwidth but, of course, sacrificing carrier to noise ratio. But perfect error correction is possible and you are in profit until the noise rises above a threshold, when the whole thing dies - that's Engineering.
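A rough sketch of such a lattice and its nearest-state decision (a hypothetical 16-state square constellation, made-up noise level): errors only occur once the noise pushes a received point past the halfway line to a neighbouring state.

```python
import numpy as np

# 16-state amplitude/phase lattice (square 16-QAM) with nearest-neighbour
# decisions under additive Gaussian noise. All parameters are illustrative.
levels = np.array([-3, -1, 1, 3])
constellation = np.array([complex(i, q) for i in levels for q in levels])  # 16 states

def decide(received):
    """Index of the constellation point closest to the received sample."""
    return int(np.argmin(np.abs(constellation - received)))

rng = np.random.default_rng(0)
sent = rng.integers(0, 16, 1000)                       # random symbol indices
noise = rng.normal(0, 0.3, 1000) + 1j * rng.normal(0, 0.3, 1000)
decided = np.array([decide(constellation[s] + n) for s, n in zip(sent, noise)])
print("symbol error rate:", np.mean(decided != sent))  # small at this noise level
```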
 
