Is my understanding of the Nyquist theorem correct?

  • Thread starter fog37
  • #1
fog37
TL;DR Summary
Correct understanding of Nyquist theorem
Hello,
I would really appreciate it if you could check my understanding of the Nyquist theorem:

  • We start with a continuous-time signal ##f(t)## and convert it to a discrete (digital) signal ##f[n]## by sampling.
  • The discrete signal ##f[n]## will be a "good" approximation of the continuous signal ##f(t)## only if we sample ##f(t)## at a sufficiently high rate. The higher the rate the better, though too many samples may be unnecessary. The Nyquist theorem tells us that if we sample ##f(t)## at a rate higher than ##2 f_{max}##, we can "perfectly" reconstruct the continuous signal ##f(t)## from the samples using interpolation. Not only that: the spectrum of the discrete signal ##f[n]## will be a good approximation of the actual spectrum ##F(f)## of ##f(t)##. Given a sequence of ##N## time samples of ##f(t)##, we can take the discrete Fourier transform (DFT) of those ##N## samples to get a spectrum which also has ##N## samples.
  • Interestingly, the DFT spectrum only has values at the frequencies ##k \frac {f_s} {N}## Hz, where ##k## is an integer, ##f_s## is the sampling rate, and ##N## is the number of samples.
Example:
  • ##f = 5 Hz##, the frequency of our continuous sine function ##f(t) = \sin(2 \pi f t)##
  • Observation interval ##\Delta t## = 1 s
  • Period ##T## of the continuous sine ##f(t)##: ##\frac {1}{f}## = 1/5 = 0.2 s
  • Number of cycles of the continuous sine ##f(t)## during ##\Delta t##: 1/0.2 = 5
  • ##f_s## = sampling frequency = 20 Hz (well above ##2f = 10 Hz##)
  • ##N## = number of samples = ##f_s \, \Delta t## = (4 samples/cycle)(5 cycles) = 20
  • The frequency bins in the DFT are ##k \frac {f_s}{N}##: ##f_1## = 20/20 = 1 Hz, ##f_2## = 2 Hz, ##f_3## = 3 Hz, ##f_4## = 4 Hz, ...etc.
  • The discrete version of ##f(t)## is ##f[n] = \sin(2 \pi f \frac{n}{f_s}) = \sin(2 \pi \cdot 5 \cdot n/20)##, where ##n## is an integer.
  • In this example, the DFT spectrum has a single spike at ##f = 5 Hz##, as we desire, because 5 Hz is exactly an integer multiple of ##\frac {f_s}{N}## (bin ##k = 5##). If ##f## were not an exact multiple of ##\frac {f_s}{N}##, we would also see small nonzero values at the other DFT bin frequencies (spectral leakage). But this is NOT aliasing. (A NumPy sketch of this example appears after this list.)
  • Aliasing occurs, for example, if ##f_s = 8 Hz < 2f = 10 Hz##. The DFT spectrum would then be VERY different from the actual spectrum of the continuous signal ##f(t)##, with large and significant nonzero values at DFT bin frequencies other than the actual frequency (here the 5 Hz tone shows up at 3 Hz). Mathematically, this can be seen as "replicas" of the actual spectrum overlapping with each other in the band 0 to ##f_{max}## and distorting it: multiplication in the time domain (the sampling) is convolution in the frequency domain.
  • The time-domain signal is discrete and finite with ##N## samples. Its full spectrum, the DTFT, is continuous (and periodic in frequency). What we actually compute is the DFT, which is discrete and finite. Is it correct that the DFT is just the DTFT sampled at the ##N## bin frequencies ##k \frac {f_s}{N}##?
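A minimal NumPy sketch of the example above (the variable names and the 5.3 Hz leakage case are mine, not from the thread): it reproduces the single spike at 5 Hz, the leakage when ##f## is not a multiple of ##f_s/N##, and the alias at 3 Hz when ##f_s = 8 Hz##.

```python
import numpy as np

f = 5.0           # signal frequency, Hz
fs = 20.0         # sampling rate, Hz (well above 2*f = 10 Hz)
dt = 1.0          # observation interval, s
N = int(fs * dt)  # number of samples = 20

n = np.arange(N)
x = np.sin(2 * np.pi * f * n / fs)     # discrete signal f[n]

X = np.abs(np.fft.rfft(x))             # one-sided DFT magnitude
freqs = np.fft.rfftfreq(N, d=1 / fs)   # bin frequencies k*fs/N = 0, 1, 2, ... Hz
print(freqs[np.argmax(X)])             # 5.0 -> a single spike at bin k = 5

# f not an integer multiple of fs/N (e.g. 5.3 Hz): energy "leaks" into
# neighbouring bins -- spectral leakage, which is not aliasing.
x_leak = np.sin(2 * np.pi * 5.3 * n / fs)
print(np.abs(np.fft.rfft(x_leak)).round(2))

# Aliasing: the same 5 Hz sine sampled at fs = 8 Hz (< 2*f) peaks at 3 Hz.
fs2 = 8.0
n2 = np.arange(int(fs2 * dt))
x_alias = np.sin(2 * np.pi * f * n2 / fs2)
freqs2 = np.fft.rfftfreq(n2.size, d=1 / fs2)
print(freqs2[np.argmax(np.abs(np.fft.rfft(x_alias)))])   # 3.0
```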
In real-world applications, the input signal to a system can be continuous (e.g. an audio signal)... Is that signal first low-pass filtered, before it is converted to the appropriate discrete version via sampling, to set the maximum frequency ##f_{max}## of the signal itself? I think so... But isn't that risky, in the sense that we may low-pass filter the continuous signal too much and remove high-frequency components that are important to the signal's makeup?
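A hedged sketch of that pre-sampling ("anti-aliasing") low-pass filter, emulating the continuous input on a dense grid; the rates, the filter order, and the unwanted 60 Hz tone are illustrative choices, not from the thread.

```python
import numpy as np
from scipy import signal

fs_fine = 1000.0                     # dense grid standing in for continuous time
t = np.arange(0, 1.0, 1 / fs_fine)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)  # wanted 5 Hz + unwanted 60 Hz

fs_new = 100.0                       # target sampling rate, so Nyquist = 50 Hz
b, a = signal.butter(6, (fs_new / 2) / (fs_fine / 2))  # low-pass at 50 Hz
x_filt = signal.filtfilt(b, a, x)    # the 60 Hz component is strongly attenuated

x_sampled = x_filt[:: int(fs_fine / fs_new)]  # now safe to sample at 100 Hz
# Without the filter, the 60 Hz tone would alias to |100 - 60| = 40 Hz.
```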

Is my understanding correct?

Thank you
 
  • #2
Mostly correct. We might write the sampled sequence as $$f_k=f(kT_s)$$ to make the discreteness explicit, where T_s is the sampling interval. The question of reconstructing the spectrum is complicated. The DFT has pitfalls, including limited resolution (equal to the inverse of the duration of the sampled sequence), spurious “picket fence” responses, and inaccurate amplitude or power estimates due to “straddle loss.” Ad-hoc bandaids such as windows help some of these but worsen others. Prior knowledge about the system allows use of model-based spectral estimators that can do better. The topic is suitable for a grad level course.
I didn’t read your examples.
Finally, yes you low pass filter first. Most signals have a cutoff frequency above which there is no relevant information. Human ears can’t hear sounds above 20 kHz, e.g., so there’s no information lost by filtering higher frequencies away.
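A small NumPy sketch of the "straddle loss" and windowing points above (the numbers and the helper function are illustrative, not from the post): a tone landing exactly on a bin reads its full amplitude, a tone halfway between bins reads several dB low with a rectangular window, and a Hann window reduces that loss at the cost of a wider main lobe.

```python
import numpy as np

fs, N = 100.0, 100
n = np.arange(N)

def peak_db(f, window):
    """Peak DFT magnitude (dB) of a unit-amplitude cosine at f Hz, normalised
    so that an on-bin tone with this window reads 0 dB."""
    x = np.cos(2 * np.pi * f * n / fs) * window
    X = 2 * np.abs(np.fft.rfft(x)) / window.sum()
    return 20 * np.log10(X.max())

rect = np.ones(N)
hann = np.hanning(N)

print(peak_db(10.0, rect))   # on a bin: ~0 dB
print(peak_db(10.5, rect))   # half a bin off: ~ -3.9 dB ("straddle loss")
print(peak_db(10.5, hann))   # Hann window: the loss shrinks to ~ -1.4 dB
```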
 
  • #3
Thank you.

If the DFT is not that great, I guess other transforms, like the wavelet transform, may be better. For example, the wavelet transform is better at time-frequency localization, I believe...

For sound, as you mention, 20 kHz is the limit, but for other signals (e.g. seismic waves) we don't know what the largest frequency may be, so the cut-off must be application dependent...
 
  • #4
The DFT is the most general, easiest to compute and universally used spectral estimator, so you should use it also. Just be aware of its idiosyncrasies and limitations.
 
  • #5
It's a while since I was learning about A-to-D conversion, but I learned that two more factors can be relevant if we want to discuss the relative merits of methods for analysing the performance of the system: firstly, the practical details of just how tightly we want to push the Nyquist limit, and secondly, the time profile of the samples. If you really want a 'near perfect' reconstructed signal at the other end, there must be a combination of filtering before the ADC and after the DAC. Then, depending on the signal-to-noise ratio of the analogue input, it may be better to use a shorter sampling time to reduce the sinc distortion that simple box-car samples will introduce.
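A hedged numerical sketch of that box-car (finite-aperture) effect: averaging each sample over an aperture ##\tau## multiplies the spectrum by ##\mathrm{sinc}(f\tau)##, so a shorter aperture gives a flatter response. The sampling rate and test frequencies below are illustrative.

```python
import numpy as np

fs = 48000.0                                # illustrative audio-like sampling rate
f = np.array([1000.0, 10000.0, 20000.0])    # test frequencies, Hz

for tau in (1 / fs, 0.25 / fs):             # full-period aperture vs a quarter of it
    droop_db = 20 * np.log10(np.abs(np.sinc(f * tau)))  # np.sinc(x) = sin(pi x)/(pi x)
    print(tau, droop_db.round(2))
# Full-period aperture: about -2.6 dB droop at 20 kHz; quarter aperture: ~ -0.15 dB.
```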
Where are we, in this thread, in terms of knowledge of the finer levels of digital signalling? There are any number of texts that could flesh out my concerns above. I no longer have convenient access to the most suitable texts, but it may be worthwhile going back to basics here if the OP wants the best advice.
 

1. What is the Nyquist Theorem?

The Nyquist Theorem, also known as the Nyquist-Shannon sampling theorem, is a fundamental principle in the field of digital signal processing. It states that in order to accurately reconstruct a signal from its samples, the sampling rate must be at least twice the highest frequency present in the signal. This minimum rate is known as the Nyquist rate.

2. Why must the sampling rate be at least twice the highest frequency?

The requirement that the sampling rate be at least twice the highest frequency in the signal (the Nyquist rate) is necessary to prevent a problem known as aliasing. Aliasing occurs when higher frequency components of the signal are indistinguishable from lower frequency components due to insufficient sampling, resulting in distortion and inaccuracies in the reconstructed signal.

3. What happens if I sample below the Nyquist rate?

If you sample a signal at a rate lower than the Nyquist rate, aliasing will occur. This means that frequencies higher than half of the sampling rate will be incorrectly mapped to lower frequencies, leading to distortion and a loss of information in the reconstructed signal. This can significantly affect the quality and integrity of the signal.
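A tiny sketch of that folding rule (the values are illustrative): a real tone at f Hz sampled at fs Hz appears at the frequency obtained by folding f into the band from 0 to fs/2.

```python
def alias_frequency(f, fs):
    """Apparent frequency of a real tone at f Hz when sampled at fs Hz."""
    f_mod = f % fs                  # shift into [0, fs)
    return min(f_mod, fs - f_mod)   # fold into [0, fs/2]

print(alias_frequency(7.0, 10.0))   # 3.0 -> a 7 Hz tone sampled at 10 Hz looks like 3 Hz
print(alias_frequency(5.0, 8.0))    # 3.0 -> the fs = 8 Hz case from the thread above
print(alias_frequency(4.0, 10.0))   # 4.0 -> below fs/2, unchanged
```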

4. Can I sample a signal at a rate higher than the Nyquist rate?

Yes, sampling a signal at a rate higher than the Nyquist rate is often practiced and is known as oversampling. Oversampling can help in reducing noise and improving the resolution of digital to analog conversion. It also provides more flexibility in digital filtering and processing. However, it requires more memory and processing power.
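A hedged sketch of the noise-reduction benefit of oversampling, using scipy.signal.decimate to low-pass filter and downsample; the rates, the 50 Hz tone, and the noise level are illustrative.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs_over, fs_target = 8000.0, 1000.0          # 8x oversampling
t = np.arange(0, 1.0, 1 / fs_over)
clean = np.sin(2 * np.pi * 50 * t)           # 50 Hz tone, far below both Nyquist limits
noisy = clean + 0.5 * rng.standard_normal(t.size)

q = int(fs_over / fs_target)
decimated = signal.decimate(noisy, q)        # anti-alias filter + downsample
residual = decimated - clean[::q]

print(np.std(noisy - clean))   # noise level before (~0.5)
print(np.std(residual))        # noticeably smaller after filtering and decimation
```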

5. How does the Nyquist Theorem apply to digital audio and imaging?

In digital audio, the Nyquist Theorem dictates that to accurately capture all frequencies humans can hear (up to about 20 kHz), audio must be sampled at a rate of at least 40 kHz. In practice, the standard CD audio sampling rate of 44.1 kHz is used. In digital imaging, the theorem ensures that spatial frequencies (details) within an image are sampled adequately to reproduce the image without aliasing, thereby preserving visual details and textures.
