Is my understanding of the Nyquist theorem correct?

  • Thread starter: fog37
AI Thread Summary
The discussion centers on the Nyquist theorem, which states that a continuous signal can be accurately reconstructed from its samples if it is sampled at a rate greater than twice its maximum frequency. The participants highlight the importance of low-pass filtering before sampling to avoid aliasing, while acknowledging the risks of losing important high-frequency components. They also discuss the limitations of the Discrete Fourier Transform (DFT), including resolution issues and potential inaccuracies, suggesting that alternative methods like wavelet transforms may offer better time-frequency localization. The conversation emphasizes the need for careful consideration of sampling strategies and filtering techniques to achieve optimal signal reconstruction. Overall, understanding the nuances of digital signal processing is crucial for effective application in real-world scenarios.
fog37
TL;DR Summary
Correct understanding of Nyquist theorem
Hello,
I would really appreciate it if you could check my understanding of the Nyquist theorem:

  • We start with a continuous-time signal ##f(t)## and convert it to a discrete (digital) signal ##f[n]##.
  • The discrete signal ##f[n]## is a "good" approximation of the continuous signal ##f(t)## only if we sample ##f(t)## at a sufficiently high rate. The higher the rate the better, but too many samples may also be unnecessary. The Nyquist theorem tells us that if we sample ##f(t)## at a rate higher than ##2 f_{max}##, we can "perfectly" reconstruct the continuous signal ##f(t)## from the samples using interpolation (a sinc-interpolation sketch follows this list). Not only that: the spectrum of the discrete signal ##f[n]## will be a good approximation of the actual spectrum ##F(f)## of ##f(t)##. Given a sequence of ##N## time samples of ##f(t)##, we can take the discrete Fourier transform (DFT) of those ##N## samples to get a spectrum that also has ##N## samples.
  • Interestingly, the DFT spectrum only has values at the frequencies ##k \frac{f_s}{N}## Hz, where ##k = 0, 1, \dots, N-1##, ##f_s## is the sampling rate, and ##N## is the number of samples.
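The "reconstruct by interpolation" step can be made concrete with the Whittaker-Shannon (sinc) interpolation formula. Below is a minimal sketch in Python/NumPy (my own illustration, using the same parameters as the example that follows, not code from the thread):

```python
import numpy as np

# Sketch of Whittaker-Shannon (sinc) interpolation, using the same parameters
# as the example below: a 5 Hz sine sampled at 20 Hz over a 1 s record.
f, fs = 5.0, 20.0
Ts = 1.0 / fs                      # sampling interval
n = np.arange(int(fs * 1.0))       # sample indices over the 1 s record
samples = np.sin(2 * np.pi * f * n * Ts)

# Reconstruct on a fine grid: f(t) ~ sum_n f[n] * sinc((t - n*Ts) / Ts)
t = np.linspace(0, 1, 1000, endpoint=False)
recon = np.array([np.sum(samples * np.sinc((ti - n * Ts) / Ts)) for ti in t])

# Truncating the (ideally infinite) sum leaves some error near the record edges;
# the error shrinks as the record gets longer.
err = np.max(np.abs(recon - np.sin(2 * np.pi * f * t)))
print(f"max reconstruction error over the record: {err:.3f}")
```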
Example:
  • ##f = 5Hz##, the frequency of our continuous sine function ##f(t) = sin(2 \pi f t)##
  • Time interval ##\Delta t##=1 s
  • period ##T## of continuous sine ##f(t)## is ##\frac {1}{f}## = 1/5 = 0.2s
  • number of cycles of the continuous sine ##f(t)## during ##\Delta t##: 1/0.2 = 5
  • ##f_s## = sampling frequency = 20 Hz (well above ##2f = 10## Hz)
  • ##N## = number of samples = ##f_s \cdot \Delta t## = (20 samples/s)(1 s) = 20 (i.e. 4 samples per cycle × 5 cycles)
  • The frequency bins in the DFT are at ##k \frac{f_s}{N}##: f1 = 20/20 = 1 Hz, f2 = 2 Hz, f3 = 3 Hz, f4 = 4 Hz, ... etc.
  • The discrete version of ##f(t)## is given by ##f[n] = \sin(2 \pi f \, n / f_s) = \sin(2 \pi \cdot 5 \cdot n / 20)##, where ##n## is an integer sample index.
  • In this example, the DFT spectrum has a single spike at ##f = 5## Hz, as we desire, because 5 Hz falls exactly on a DFT bin (##k = 5##). If ##f## were not an exact multiple of ##\frac{f_s}{N}##, we would also see small nonzero values at the other DFT bin frequencies (spectral leakage). But this is NOT aliasing.
  • Aliasing occurs, for example, if ##f_s = 8## Hz, which is below ##2f = 10## Hz. The DFT spectrum would then be very different from the actual spectrum of the continuous signal ##f(t)##, with large nonzero values at DFT bin frequencies other than the actual frequency (here the 5 Hz tone shows up at 8 − 5 = 3 Hz). Mathematically, this can be seen as "replicas" of the actual spectrum overlapping each other in the band 0 to ##f_{max}## and distorting it: multiplication in the time domain (the sampling) is convolution in the frequency domain. (A numerical check of both cases appears after this list.)
  • The time-domain signal is discrete and finite with ##N## samples. Its DTFT is a continuous (and periodic) function of frequency. What we actually compute, however, is the DFT, which is discrete and finite. This means the DFT is just ##N## equally spaced samples of one period of the DTFT, correct?
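Here is a rough numerical check of both cases in the example above (my own sketch in Python/NumPy, using the parameters assumed in the bullets: a 1 s record of a 5 Hz sine). Sampling at 20 Hz puts the peak exactly in the 5 Hz bin; sampling at 8 Hz makes the tone alias to 3 Hz.

```python
import numpy as np

f = 5.0   # tone frequency in Hz

def dft_magnitude(fs, duration=1.0):
    """Sample sin(2*pi*f*t) at rate fs over `duration` seconds; return the DFT
    bin frequencies k*fs/N and the normalized DFT magnitudes."""
    N = int(fs * duration)              # number of samples in the record
    n = np.arange(N)
    x = np.sin(2 * np.pi * f * n / fs)
    X = np.fft.fft(x)
    freqs = n * fs / N                  # bin frequencies k * fs / N
    return freqs, np.abs(X) / (N / 2)   # a full-scale on-bin sine then peaks at 1

# fs = 20 Hz > 2f = 10 Hz: a single clean peak in the 5 Hz bin (k = 5 when N = 20)
freqs, mag = dft_magnitude(fs=20.0)
print("fs = 20 Hz: peak at", freqs[np.argmax(mag[: len(mag) // 2])], "Hz")

# fs = 8 Hz < 2f = 10 Hz: aliasing -- the 5 Hz tone appears at 8 - 5 = 3 Hz
freqs, mag = dft_magnitude(fs=8.0)
print("fs =  8 Hz: peak at", freqs[np.argmax(mag[: len(mag) // 2])], "Hz")
```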
In real-world applications, the input signal to a system can be continuous (e.g. an audio signal). Is that signal, before being converted to its discrete version via sampling, first low-pass filtered to set the maximum frequency ##f_{max}## of the signal itself? I think so... But isn't that risky, in the sense that we may low-pass filter the continuous signal too aggressively and remove high-frequency components that are important to the signal's makeup?
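For what it's worth, the usual chain is an anti-aliasing filter: low-pass just below the new Nyquist frequency, then sample (or downsample). Below is a hedged sketch using SciPy; the rates, cutoff, and filter order are illustrative assumptions of mine, not values from the thread.

```python
import numpy as np
from scipy import signal

# Hypothetical anti-aliasing chain: low-pass below the new Nyquist frequency,
# then downsample. Rates, cutoff, and filter order are illustrative assumptions.
fs_in = 48_000            # original sampling rate, Hz
fs_out = 8_000            # target sampling rate, Hz (new Nyquist = 4 kHz)
t = np.arange(0, 1.0, 1.0 / fs_in)
# test signal: a 1 kHz tone (kept) plus a 6 kHz tone (would alias at 8 kHz sampling)
x = np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.sin(2 * np.pi * 6_000 * t)

# low-pass just below the new Nyquist frequency fs_out / 2
sos = signal.butter(8, 0.45 * fs_out, btype="low", fs=fs_in, output="sos")
x_filtered = signal.sosfiltfilt(sos, x)

# now it is safe to keep every (fs_in // fs_out)-th sample
x_down = x_filtered[:: fs_in // fs_out]
print(len(x), "samples in,", len(x_down), "samples out")
```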

Is my understanding correct?

Thank you
 
Mostly correct. We might write the sampled sequence as $$f_k=f(kT_s)$$ to make the discreteness explicit, where T_s is the sampling interval. The question of reconstructing the spectrum is complicated. The DFT has pitfalls, including limited resolution (equal to the inverse of the duration of the sampled sequence), spurious “picket fence” responses, and inaccurate amplitude or power estimates due to “straddle loss.” Ad-hoc bandaids such as windows help some of these but worsen others. Prior knowledge about the system allows use of model-based spectral estimators that can do better. The topic is suitable for a grad level course.
I didn’t read your examples.
Finally, yes you low pass filter first. Most signals have a cutoff frequency above which there is no relevant information. Human ears can’t hear sounds above 20 kHz, e.g., so there’s no information lost by filtering higher frequencies away.
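To make the straddle-loss and windowing point concrete, here is a small numerical sketch (my own, in Python/NumPy, with arbitrarily chosen parameters): a tone that falls exactly on a DFT bin keeps its full amplitude, a tone halfway between bins loses about 3.9 dB with a rectangular window, and a Hann window cuts that loss to roughly 1.4 dB at the cost of a wider main lobe (poorer resolution).

```python
import numpy as np

# Sketch of DFT "straddle" (scalloping) loss and the effect of a window.
# N, fs and the tone frequencies are arbitrary choices for illustration.
N = 256
fs = 256.0                  # bin spacing fs/N is exactly 1 Hz
n = np.arange(N)

def peak_db(f_tone, window):
    """Peak of the windowed DFT magnitude, in dB, normalized by the window's
    coherent gain so that an on-bin tone reads 0 dB."""
    x = np.cos(2 * np.pi * f_tone * n / fs) * window
    X = np.abs(np.fft.rfft(x))
    return 20 * np.log10(2 * X.max() / window.sum())

rect = np.ones(N)
hann = np.hanning(N)

print("on-bin tone (50.0 Hz), rectangular:", round(peak_db(50.0, rect), 2), "dB")
print("off-bin tone (50.5 Hz), rectangular:", round(peak_db(50.5, rect), 2), "dB")  # ~ -3.9 dB straddle loss
print("off-bin tone (50.5 Hz), Hann window:", round(peak_db(50.5, hann), 2), "dB")  # ~ -1.4 dB, but wider main lobe
```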
 
marcusl said:
Mostly correct. We might write the sampled sequence as $$f_k=f(kT_s)$$ to make the discreteness explicit, where T_s is the sampling interval. The question of reconstructing the spectrum is complicated. The DFT has pitfalls, including limited resolution (equal to the inverse of the duration of the sampled sequence), spurious “picket fence” responses, and inaccurate amplitude or power estimates due to “straddle loss.” Ad-hoc bandaids such as windows help some of these but worsen others. Prior knowledge about the system allows use of model-based spectral estimators that can do better. The topic is suitable for a grad level course.
I didn’t read your examples.
Finally, yes you low pass filter first. Most signals have a cutoff frequency above which there is no relevant information. Human ears can’t hear sounds above 20 kHz, e.g., so there’s no information lost by filtering higher frequencies away.
Thank you.

If the DFT is not that great, I guess other transforms, like the wavelet transform, may be better. For example, the wavelet transform is better at time-frequency localization, I believe...

For sound, as you mention, 20 kHz is the limit, but for other signals (e.g. seismic waves) we may not know what the largest relevant frequency is, so the cut-off is application dependent...
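On the time-frequency localization point: here is a rough sketch (my own example, using a short-time Fourier transform/spectrogram from SciPy rather than an actual wavelet transform, which a package such as PyWavelets would provide). A single DFT of a tone that jumps in frequency shows both peaks but not when each occurred; a short-time transform recovers the timing, and a wavelet transform refines this further with scale-dependent windows.

```python
import numpy as np
from scipy import signal

# A tone that jumps from 5 Hz to 40 Hz halfway through a 2 s record: one DFT of
# the whole record shows both peaks but not *when* each occurred. A short-time
# transform recovers the timing. Parameters are my own illustrative choices.
fs = 200.0
t = np.arange(0, 2.0, 1.0 / fs)
x = np.where(t < 1.0, np.sin(2 * np.pi * 5 * t), np.sin(2 * np.pi * 40 * t))

freqs, times, Sxx = signal.spectrogram(x, fs=fs, nperseg=64)
dominant = freqs[np.argmax(Sxx, axis=0)]        # strongest frequency in each time slice
print("early slices:", dominant[times < 0.8])   # near 5 Hz (within the ~3 Hz bin spacing)
print("late slices: ", dominant[times > 1.2])   # near 40 Hz
```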
 
The DFT is the most general, easiest to compute and universally used spectral estimator, so you should use it also. Just be aware of its idiosyncrasies and limitations.
 
fog37 said:
If the DFT is not that great, I guess other transforms, like the wavelet transform, may be better. For example, the wavelet transform is better at time-frequency localization, I believe...
It's a while since I was learning about A-to-D conversion, but I learned that two more factors can be relevant when discussing the relative merits of methods for analysing a system's performance: firstly, the practical details of just how close we want to push to the Nyquist limit, and secondly the time profile of the samples. If you really want a 'near perfect' reconstructed signal at the other end, then there must be a combination of filtering before the ADC and after the DAC. And depending on the signal-to-noise ratio of the analogue input, it may be better to use shorter sampling times to reduce the sinc distortion that simple box-car samples introduce.
Where are we, in this thread, in terms of knowledge of the finer levels of digital signalling? There are any number of texts that could flesh out my above concerns. I don't have convenient access to the most suitable texts any more but it may be worth while going back to basics here if the OP wants the best advice.
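On the box-car (aperture) point: averaging each sample over an aperture of width ##\tau## multiplies the spectrum by ##\mathrm{sinc}(f\tau)##, so a shorter aperture means less high-frequency droop. A quick numerical illustration (my own, with an assumed 48 kHz rate and a 20 kHz test frequency):

```python
import numpy as np

# The box-car aperture effect: averaging each sample over an aperture of width
# tau multiplies the spectrum by sinc(f * tau), so a shorter aperture gives less
# high-frequency droop. The 48 kHz rate and 20 kHz test frequency are assumptions.
fs = 48_000.0
f = 20_000.0

for frac in (1.0, 0.5, 0.1):          # aperture as a fraction of the sampling interval
    tau = frac / fs
    droop_db = 20 * np.log10(np.abs(np.sinc(f * tau)))   # np.sinc(x) = sin(pi x)/(pi x)
    print(f"aperture = {frac:.1f} * Ts -> droop at 20 kHz: {droop_db:6.2f} dB")
```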
 