- #1
sodemus
So, let's say you have a signal bandpass-filtered between 10 Hz and 1000 Hz. The signal is then sampled at 2000 Hz, and an FFT is taken over a window of 20 samples. You then get a frequency resolution of 2000/20 = 100 Hz (the FFT contains 20 points), and consequently you don't know anything about the frequencies in between those bins.
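The bin spacing above can be checked numerically. This is a minimal sketch using NumPy; the sampling rate and window length are taken from the numbers in the question:

```python
import numpy as np

fs = 2000  # sampling rate in Hz
N = 20     # window length in samples

# The N-point DFT evaluates the spectrum only at the bin
# frequencies k * fs / N, for k = 0 .. N-1.
bins = np.fft.fftfreq(N, d=1/fs)

print(bins[:4])   # first few non-negative bins: 0, 100, 200, 300 Hz
print(fs / N)     # bin spacing: 100.0 Hz
```

Everything between adjacent bins is not directly evaluated by the DFT, which is exactly the 100 Hz resolution described above.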
However, the Shannon-Nyquist sampling theorem (SNST) states that all information about a signal with maximum frequency f_m is preserved when sampling at 2*f_m. If the entire signal can be reconstructed at that sampling rate, why shouldn't you then be able to compute an FFT with better frequency resolution than 100 Hz, even if the window is only 20 points?
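The reconstruction the theorem promises is Whittaker-Shannon (sinc) interpolation. A small sketch, with an arbitrary 440 Hz test tone (my choice, not from the question) sampled at 2000 Hz; with a finite number of samples the sinc series is truncated, so the reconstruction is only approximate near the window edges:

```python
import numpy as np

fs = 2000                                # sampling rate in Hz
n = np.arange(int(0.05 * fs))            # 100 sample indices (50 ms)
x = np.sin(2 * np.pi * 440 * n / fs)     # 440 Hz tone, well below fs/2

def reconstruct(t, samples, fs):
    """Truncated Whittaker-Shannon interpolation at time t (seconds)."""
    # np.sinc is the normalized sinc: sin(pi*u)/(pi*u)
    return np.sum(samples * np.sinc(t * fs - np.arange(len(samples))))

t = 0.0123                               # an instant between sample points
approx = reconstruct(t, x, fs)
exact = np.sin(2 * np.pi * 440 * t)
print(approx, exact)                     # close, up to truncation error
```

This is what "all information is preserved" means operationally: sample values plus band-limitedness determine the signal everywhere, not just at the sample instants.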
Let's start with the ideal case, with no limitations due to technology, and then move on to what can be done in the real world.
I hope I have expressed myself clearly enough. I realize this should be a very simple question with a simple answer (stationarity assumptions about the signal, perhaps... what do I know?), but I don't see the catch.