Is the Sampling Theorem Always Accurate?

In summary: a 5500 Hz tone generated in Audacity at an 11025 Hz sampling rate looks and sounds like a tone at the Nyquist frequency (11025/2 Hz) whose amplitude oscillates, and the oscillations are audible. At increasingly higher sampling rates this becomes less of a problem, and eventually the tone sounds consistent. This appears to conflict with the sampling theorem, which says that a sampling rate of 2f can exactly represent a signal containing frequencies between 0 and f; intuition suggested there would be problems for signals near f, and experimenting with signal generators seemed to confirm it.
  • #1
Negatron
I used Audacity to generate a 5500Hz tone at a sampling rate of 11025.

This is the result:
[Attached image: the generated 5500 Hz waveform as displayed in Audacity]


To me this appears to be a tone at a frequency of (11025/2) with oscillating amplitude rather than a frequency of 5500.

The tone also sounds like it looks. It has audible oscillations in amplitude.

At increasingly higher sampling rates this is less and less of a problem and eventually the tone sounds consistent.

This conflicts with what I have learned: that a sampling rate of 2f can exactly represent a signal of frequencies between 0 and f. My intuition told me that there would be problems with signals near f, and sure enough, playing around with signal generators seems to confirm it.

So now I'm in conflict. I understand why the amplitude oscillations occur but I don't understand how the claim of the sampling theorem can be upheld when such problems exist within the range of sample-able frequencies.
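For anyone who wants to reproduce the effect without Audacity, here is a minimal sketch in Python/NumPy (my own illustration, not part of the original post) that generates the same 5500 Hz tone at 11025 samples/s and prints how the peak sample value swells and collapses:

[code]
import numpy as np

fs = 11025                         # sample rate from the post (Hz)
f0 = 5500                          # generated tone (Hz)
n = np.arange(int(0.05 * fs))      # 50 ms of samples
x = np.sin(2 * np.pi * f0 * n / fs)

# Largest sample magnitude in consecutive 2 ms blocks: the apparent amplitude
# swells from ~0 up to 1 and back roughly every 40 ms (a 25 Hz beat), even
# though only a single 5500 Hz tone was generated.
block = round(0.002 * fs)          # 22 samples per block
for start in range(0, len(x) - block + 1, block):
    peak = np.abs(x[start:start + block]).max()
    print(f"{start / fs * 1e3:4.1f} ms  peak |x| = {peak:.2f}")
[/code]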
 
  • #2
This is quite a common misunderstanding. Firstly, let me point out that although the waveform looks like it has a low-frequency "component" at 12.5 Hz ([itex]= f_N - f_m[/itex]), it actually doesn't. For example, if you applied that signal to a linear system with a 12.5 Hz resonant frequency, it wouldn't excite the resonance.

Mathematically it's just the sum of the original signal (at 5500 Hz) and the image-frequency signal at (11025 - 5500) = 5525 Hz. These two signals are at frequencies of ([itex]f_N[/itex] - 12.5) Hz and ([itex]f_N[/itex] + 12.5) Hz respectively, so their sum can be written as:

[tex]\cos\left((\omega_N - \omega_d)t\right) + \cos\left((\omega_N + \omega_d)t\right) = 2 \cos(\omega_N t) \cos(\omega_d t)[/tex]

The above form shows the obvious amplitude modulation at (in this case) [itex]\omega_d = 25\pi[/itex] rad/s (where [itex]\omega = 2\pi f[/itex]). Note however that the actual frequency components are still at (5512.5 - 12.5) Hz and (5512.5 + 12.5) Hz despite the apparent 12.5 Hz envelope.
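As a quick numerical sanity check of that identity (a minimal Python/NumPy sketch of my own, using the frequencies quoted above):

[code]
import numpy as np

fs = 11025.0                       # sample rate (Hz)
f_N, f_d = fs / 2, 12.5            # Nyquist frequency and offset (Hz)
t = np.arange(2000) / fs

# Sum of the 5500 Hz tone and its 5525 Hz image...
pair = np.cos(2 * np.pi * (f_N - f_d) * t) + np.cos(2 * np.pi * (f_N + f_d) * t)
# ...equals a 5512.5 Hz "carrier" amplitude-modulated at 12.5 Hz.
product = 2 * np.cos(2 * np.pi * f_N * t) * np.cos(2 * np.pi * f_d * t)

print(np.allclose(pair, product))  # True
[/code]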

Now to the issue of why the results you hear seem to be inconsistent with Nyquist's sampling theorem. Very simply, it's because you're not band-limiting as you should. You have to remove all frequency content above the Nyquist frequency (5512.5 Hz), that is, the image frequencies, before you can recover the original signal.
 
  • #3
Much thanks. A bit to think about but I think I see the problem.

uart said:
You have to remove all frequency content above the Nyquist frequency (5512.5 Hz), that is, the image frequencies, before you can recover the original signal.

Is this something that must be done in the analog signal, or is there a miraculous way to "remove" these frequencies in the sample data such that it plays back correctly without low-pass analog filters? I'm pretty sure I know the answer; I just want to confirm I'm not overlooking something fundamental :smile:
 
  • #4
From a practical point of view one can't just cut off everything above the Nyquist frequency (half the sample rate), because real filters have roll-offs that are not brick-wall. Further, the steeper the cutoff slope, the worse the level and phase response near the cutoff. So there are tradeoffs to be made.

Ideally one can say that a 2x sample rate allows one to reconstruct a signal, but the phase information is lost near the cutoff. The way I like to think about it is: close to the cutoff there are only about two samples per cycle of the waveform, and those samples can fall at any point on the cycle. If the wave is not at exactly half the sample rate, the samples fall at steadily shifting positions, which appears as a "beat" or alias-like signal added to the original.

Of course, the way I like to think about things usually confuses the bejesus out of everyone else...
 
  • #5
Negatron said:
Much thanks. A bit to think about but I think I see the problem.

Is this something that must be done in the analog signal, or is there a miraculous way to "remove" these frequencies in the sample data such that it plays back correctly without low-pass analog filters? I'm pretty sure I know the answer; I just want to confirm I'm not overlooking something fundamental :smile:

If you want to reconstruct an analog output then yes, eventually some part of it needs to be done in the analog domain, but it's interesting that all the hard work can indeed be done while it's still digital, using a process called up-sampling (or oversampling).

As you've noticed, when your signal has content too close to the Nyquist frequency you have a very difficult filtering problem in recovering the original analog signal properly. It is however possible to reconstruct the signal digitally through an algorithm that can recreate intersample points (in other words, effectively generate new intermediate samples) remarkably accurately.

Here I've made some waveforms of this reconstruction (resampling) in action (see attachments). The first waveform is very similar to what you had: a 5450 Hz sine wave sampled at 11025 Hz ([itex]f_N[/itex] = 5512.5 Hz). You can clearly see similar "beat"-type artifacts, as in your waveform.

I saved the above waveform as a wav file at an 11025 Hz sample rate and used a freeware implementation of a resampling algorithm called "SSRC" to resample it to 96000 Hz. It's important to note here that the SSRC resampler had absolutely no access to anything other than the 11025 Hz sampled file, and in no way "knew" that the waveform it was reconstructing was supposed to be a sine wave. Yet as you can see in the second waveform, it very accurately generated new samples to match the original sine wave.
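SSRC itself is a standalone tool, but the same experiment can be sketched with an off-the-shelf resampler. Here is a rough equivalent using SciPy's FFT-based resample (my choice, not the tool used above), with one second of signal so the tone falls exactly on an FFT bin:

[code]
import numpy as np
from scipy.signal import resample

fs_in, fs_out, f0 = 11025, 96000, 5450    # rates and tone from the attached waveforms
t_in = np.arange(fs_in) / fs_in           # one second of input samples
x = np.sin(2 * np.pi * f0 * t_in)         # shows the beat-like envelope when plotted

# Ideal (FFT-based) band-limited resampling to 96 kHz, standing in for SSRC.
y = resample(x, fs_out)

# The new samples line up with a clean 5450 Hz sine to within numerical precision.
t_out = np.arange(fs_out) / fs_out
print(np.abs(y - np.sin(2 * np.pi * f0 * t_out)).max())
[/code]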
 

Attachments

  • wav1.jpg - the 5450 Hz sine sampled at 11025 Hz
  • wav2.jpg - the same data resampled to 96000 Hz
  • #6
Your original picture is not of a sampled low-frequency signal - it is a picture of an amplitude-modulated carrier. If it were what you think it is, the values of the sampled waveform between the sample values would be zero (or some other constant DC value).
The spectrum of a sampled signal (assume it is low-pass filtered and sampled above the Nyquist rate, so you can see what's going on) will be that of the original waveform plus a family of harmonics of the sampling frequency, each with DSB AM sidebands on either side of it. These harmonics will actually die off at higher frequencies because the sampling pulses are finite in width.
The reason for putting an anti-aliasing filter before you sample is that there needs to be a 'gap' between the highest baseband frequency component and the lowest sideband components. Without this gap you will get aliasing - an overlap of the spectra - and high-frequency components of the sampled signal will turn up as low-frequency components in the reconstructed signal. There are signals (like still TV pictures) which have a comb structure to their spectrum, and it is possible to sub-sample them so that the alias components lie between the original comb components, which means they can be comb-filtered out (you need to choose the right sampling frequency).
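That comb-plus-sidebands picture can be checked numerically. The Python/NumPy sketch below (my own illustration, not from the thread) builds an idealised impulse-train version of the 5500 Hz / 11025 Hz example on a 44.1 kHz grid and lists where the spectral peaks land:

[code]
import numpy as np

fs, f0, up = 11025, 5500, 4
n = np.arange(fs)                    # one second, so each FFT bin is exactly 1 Hz
x = np.sin(2 * np.pi * f0 * n / fs)

# Idealised impulse-train sampling viewed on a finer 4*fs = 44100 Hz grid:
# keep the sample values and put zeros in between.
imp = np.zeros(len(x) * up)
imp[::up] = x

spectrum = np.abs(np.fft.rfft(imp))
peaks_hz = np.flatnonzero(spectrum > 0.25 * spectrum.max())
print(peaks_hz)   # [ 5500  5525 16525 16550]: the tone plus images at 11025-/+5500 Hz and 22050-5500 Hz
[/code]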
 
  • #7
sophiecentaur said:
Your original picture is not of a sampled low-frequency signal - it is a picture of an amplitude-modulated carrier. If it were what you think it is, the values of the sampled waveform between the sample values would be zero (or some other constant DC value).

Yes the "sampled signal" is just the points shown in picture, mathematically an impulse train. The lines connecting them are just an artefact of the software making the picture, I thought that was probably "understood".
 
  • #8
uart said:
Yes the "sampled signal" is just the points shown in picture, mathematically an impulse train. The lines connecting them are just an artefact of the software making the picture, I thought that was probably "understood".

I now see what the picture is showing (I missed the point about the sampling being only just fast enough! RTFM, again :redface:).
What the picture is showing is the input waveform plus the result of aliasing. If you were to filter the waveform with a cut-off between these two frequencies, you would see the wanted waveform in the resulting set of samples. I think you have chosen an extreme case for your demonstration of what sampling does. If you chose a much lower signal frequency, what is happening would be more obvious; our eyes aren't good at Fourier analysis of graphs. The only slight clues may be in the small amount of asymmetry near the 'node' in the waveform and in the perception of the actual time scale (the spacing between peaks is approximately two samples, and the amplitude of each zigzag is the vector sum of the two signals, which drifts from constructive to destructive interference - beats). If you look in the frequency domain, your simulation should show you two frequencies - the wanted one and the alias.
But you need to remember that Nyquist was being theoretical when introducing his 'limit', and the final limit to how far you can reduce the sampling rate is based on practical limitations - filters etc.
 
  • #9
uart said:
It is however possible to reconstruct the signal digitally through an algorithm that can recreate intersample points

Ah, very interesting. Presumably this "beating" effect is not a problem at <22kHz @ 44.1k samples (although I can't hear that high to be sure) as the sound card would likely implement a sharp low pass filter at that point.
 
  • #10
Once the original signal has been sampled (without anti-alias filtering), there is no way of getting rid of the alias components - unless there is something about the actual spectrum of the original signal which leaves gaps in which the alias spectrum will fit (as I said in an earlier post). Then you can comb-filter the unwanted stuff out. But, failing that, you are stuck with the artifacts, in the same way that you can't get rid of noise or other distortions except by using the (redundant) characteristics of the wanted signal. For instance, when the wanted signal is repeated, you can average out noise (random additions to it).
It is the sampling process that introduces the artifacts. After that, it may well be too late.
 
  • #11
I would have to say there is no "beating" in this case, and there is no aliasing in this case (because the signal is band-limited), and I second uart in saying that the continuous signal can be reconstructed perfectly from the discrete signal with the use of a number of sinc functions. The Nyquist-Shannon sampling theorem requires a band-limited signal, and if there were actually a beating signal the samples would look different.
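To make the sinc-function reconstruction concrete, here is a rough Python/NumPy sketch of my own (with a finite number of terms, so not literally perfect) that rebuilds the 5500 Hz tone halfway between sample instants, right at a 'node' where the raw samples look like they have almost no amplitude:

[code]
import numpy as np

fs, f0 = 11025.0, 5500.0
T = 1.0 / fs
n = np.arange(40000)                 # a few seconds of samples
x = np.sin(2 * np.pi * f0 * n * T)

def reconstruct(t):
    # Truncated Shannon reconstruction: x(t) ~ sum_n x[n] * sinc((t - n*T) / T)
    return float(np.sum(x * np.sinc(t / T - n)))

# Around sample 19845 the samples themselves are tiny, yet halfway between them
# the reconstructed waveform swings to nearly +/-1, just as the continuous
# 5500 Hz sine does.  The small residual is truncation error from using
# finitely many sinc terms, and it shrinks as the record grows.
for m in (19845, 19846, 19847):
    t_mid = (m + 0.5) * T
    print(f"x[{m}] = {x[m]:+.3f}   reconstructed = {reconstruct(t_mid):+.3f}   "
          f"true sine = {np.sin(2 * np.pi * f0 * t_mid):+.3f}")
[/code]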

I have quite commonly seen this behavior in scopes and digitizers when the frequency of the signal gets close to the Nyquist frequency. It takes some caution when you are trying to understand what you're looking at.

When good engineers make deliberate use of undersampling, the rules change a bit.
 
  • #12
Sampling (with delta functions) a 5500 Hz sine wave at 11025 Hz will produce a signal whose spectrum consists of the original 5500 Hz, plus a comb of components at 11025 Hz and all its harmonics, PLUS sidebands at ±5500 Hz around 11025 Hz and its harmonics.

A scope picture would be expected to show a set of samples which contain 5500 Hz and (11025 - 5500) = 5525 Hz. (The effect of all the higher sampling harmonics and their sidebands is to produce the 'fine structure' of the impulses at 1/11025 s intervals, but it rather clouds the issue, so concentrate on the two low-frequency components.) You would expect a 'beat' between these with a frequency difference of 25 Hz. Looking at the picture supplied, I can count about 440 samples between the two maxima. This corresponds to 25 Hz, and I find it no surprise at all. To eliminate the 5525 Hz component you would need to post-filter, and you would get back to your original 5500 Hz. This is what you always do after any DAC, to some extent, because you don't want to hit the following amplification stages with all the high-level shash, which can produce further distortions and overload. The scope doesn't do any post-filtering, so you have to dig out the information by eye, and that doesn't come naturally.
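As a quick worked check of that count, using only the numbers already quoted in this post:

[tex]f_{beat} = 5525\ \text{Hz} - 5500\ \text{Hz} = 25\ \text{Hz}, \qquad \frac{11025\ \text{samples/s}}{25\ \text{Hz}} = 441\ \text{samples between successive maxima}[/tex]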
There is no aliasing here, of course, because the sampling frequency is high enough, but HF artifacts, which can be eliminated, still exist nonetheless.
 
  • #13
Negatron said:
Ah, very interesting. Presumably this "beating" effect is not a problem at <22kHz @ 44.1k samples (although I can't hear that high to be sure) as the sound card would likely implement a sharp low pass filter at that point.

Yes, the problem in this case is more a technical issue of reconstruction than a fundamental problem of aliasing. (Aliasing being where you have frequency components above the Nyquist frequency before you sample.)

The answer to your original query could pretty much be summarized by saying that the problem of reconstruction is not as simple as just "joining the dots", even though that may give an acceptable result when the Nyquist frequency is much larger than the signal frequency.

BTW, if you take a look at the two waveforms I attached in reply #5 above, you can see just how effective a good reconstruction algorithm can be. Note that the "reconstructor" used was a finite impulse response (FIR) approximation to an ideal Shannon reconstructor, so essentially just a good implementation of a low-pass filter.
 
  • #14
Nyquist, Shannon and others were all very smart but they never had to actually BUILD something which would reach their theoretical limits.
 

1. What is the Sampling Theorem?

The Sampling Theorem, also known as the Nyquist-Shannon sampling theorem, is a fundamental result in digital signal processing. It states that a continuous-time signal can be perfectly reconstructed from its discrete samples if the sampling rate is at least twice the maximum frequency component of the signal.

2. Why is the Sampling Theorem important?

The Sampling Theorem is important because it allows us to accurately represent and manipulate continuous signals in a digital format. This is essential for various applications such as audio and video processing, communication systems, and medical imaging.

3. What happens if the Sampling Theorem is not satisfied?

If the Sampling Theorem is not satisfied, aliasing will occur. This means that high-frequency components of the signal will be misrepresented as lower frequencies, leading to distortion and loss of information in the reconstructed signal.
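For instance (a small Python/NumPy sketch of my own, reusing the 11025 Hz rate discussed above): a 7000 Hz tone sampled at 11025 Hz produces exactly the same samples as a sign-inverted 4025 Hz tone, so after sampling the two are indistinguishable:

[code]
import numpy as np

fs = 11025
t = np.arange(2048) / fs
high = np.sin(2 * np.pi * 7000 * t)             # above the Nyquist frequency (5512.5 Hz)
alias = -np.sin(2 * np.pi * (fs - 7000) * t)    # 4025 Hz, sign-inverted

print(np.allclose(high, alias))                 # True: the sample sets are identical
[/code]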

4. How is the Sampling Theorem used in practice?

In practice, the sampling rate is chosen to be comfortably higher than twice the maximum frequency of the signal, so that the Sampling Theorem is satisfied with some margin left for realistic anti-aliasing and reconstruction filters. Sampling well above this minimum rate is called oversampling.

5. Are there any limitations to the Sampling Theorem?

Yes, there are some limitations to the Sampling Theorem. It assumes that the original signal is band-limited, meaning that it does not contain any frequencies above a certain limit. In addition, it assumes that the signal is sampled at regular intervals, which may not always be possible in practical applications.
