
Digital Signal Processing - signals, reconstruction, filters

  1. Mar 9, 2010 #1


    1. The problem statement, all variables and given/known data
    No problem statement - just the following questions.

    2. Relevant equations

    1.Display on a CRT is most similar to what reconstruction filter?
    2.How many samples are required to represent a given signal without loss of information?
    3.What signals can be reconstructed without loss for a given sampling rate?

    3. The attempt at a solution
    1. Gaussian filter? I'm not sure if this is correct, but I thought of it since it's the most widely used.
    2. I know that a signal can be reconstructed from its samples if the original signal has no frequencies above 1/2 the sampling frequency. So is it that 2*B samples are required to represent a given signal without loss of information, where B=Bandwidth? I think I read this somewhere but I'm not sure why this is the case.
    3. Not sure if this question is asking for specific signals or a family of signals.
  3. Mar 9, 2010 #2
    I think your answer for 2 is better applied to 3. Would you still answer 2 the same way if the wording were "without ANY loss of information"?
  4. Mar 9, 2010 #3


    I guess to not have ANY loss of information, you need an infinite number of samples.
  5. Mar 9, 2010 #4
    And when answering these types of questions, we often have to be pedantic. So since there is no meaningful difference between "without loss" and "without ANY loss", I think that is the answer they're probably looking for.

    And so, I think you can now apply the reasoning you originally used for (2) to think about (3).

    Sorry I don't know the answer to (1).
  6. Mar 10, 2010 #5



    Your first sentence is correct, so the sampling frequency must obey

    [tex]F_s\geq 2B_{max}[/tex]

    to allow for theoretically perfect signal reconstruction. The reason has to do with frequency aliasing--if the sampling frequency isn't high enough, then many different frequencies look the same when they are sampled. See the diagram about halfway down the Wikipedia article http://en.wikipedia.org/wiki/Aliasing for an example of two frequencies whose sampled sequences look the same when the precaution above is not followed.
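    To see the aliasing concretely, here is a minimal NumPy sketch (the sampling rate and frequencies are arbitrary choices for the demo, not values from the thread): two sinusoids separated by exactly Fs produce identical sample sequences.

```python
import numpy as np

# Illustrative sketch: two sinusoids whose frequencies differ by exactly Fs
# alias to the same sample sequence. Fs and f1 are arbitrary demo values.
Fs = 5.0                         # sampling rate, Hz
n = np.arange(10)                # sample indices
t = n / Fs                       # sample instants

f1 = 1.0                         # below Fs/2: representable
f2 = f1 + Fs                     # above Fs/2: aliases onto f1
x1 = np.sin(2 * np.pi * f1 * t)
x2 = np.sin(2 * np.pi * f2 * t)

print(np.allclose(x1, x2))       # True: the sampled sequences are indistinguishable
```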

    Your next sentence is correct if modified to read "2B samples per second". The total number of samples obviously depends on the length of the signal.
    3. If the sampling frequency is given, then the class of signals that can be exactly reconstructed is those whose highest frequency component satisfies

    [tex]B_{max}\leq \frac{F_s}{2}[/tex].
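    As a sketch of the reconstruction side, here is a truncated Whittaker-Shannon interpolation in NumPy (the record length, rate, and frequency are all arbitrary illustrative choices). A sinusoid below Fs/2 is rebuilt from its samples to good accuracy away from the record edges:

```python
import numpy as np

def sinc_reconstruct(samples, Fs, t):
    """Truncated Whittaker-Shannon interpolation: rebuild x(t) from x[n] = x(n/Fs).
    np.sinc(x) is sin(pi*x)/(pi*x), exactly the kernel the sampling theorem uses."""
    n = np.arange(len(samples))
    return np.sum(samples[None, :] * np.sinc(Fs * t[:, None] - n[None, :]), axis=1)

# Arbitrary demo values: a 3 Hz sine sampled at 50 Hz (well above 2*B).
Fs, f = 50.0, 3.0
n = np.arange(2000)                       # a long record stands in for the infinite sum
x_n = np.sin(2 * np.pi * f * n / Fs)

t = np.linspace(10.0, 12.0, 101)          # evaluate well away from the record edges
x_hat = sinc_reconstruct(x_n, Fs, t)
x_true = np.sin(2 * np.pi * f * t)
print(np.abs(x_hat - x_true).max())       # small: only truncation error remains
```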
  7. Mar 10, 2010 #6
    I would point out, though, that it wouldn't be quite right to call it theoretically perfect reconstruction... as frequencies approach the Nyquist frequency (Fs/2), there is no provision to take the phase into account.

    Example: if you had a signal consisting of only the Nyquist frequency, and the phase was such that each time a sample was taken the signal happened to be passing through its zero value, your data would be a series of zero values, no matter what the amplitude.

    Ahh, but a sample does have a finite window of time, you say? But what would happen if the zero crossings occurred at the centers of your window? Each sample would still average out to zero!
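    The zero-crossing scenario is easy to reproduce numerically (a quick NumPy check; the sampling rate is an arbitrary choice):

```python
import numpy as np

# A sine at exactly Fs/2, phased so its zero crossings land on the sample
# instants, samples as all zeros -- the amplitude is invisible to the sampler.
Fs = 100.0
n = np.arange(8)
t = n / Fs
x = np.sin(2 * np.pi * (Fs / 2) * t)   # sin(pi * n): zero at every sample
print(np.allclose(x, 0.0))             # True
```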

    Now, what happens as you start decreasing the frequency of your signal? Well, you start taking fractionally more samples per signal period, and the reconstruction gets incrementally more accurate.

    This is one of the reasons why digital audio trends lead towards increasing the sampling frequency well beyond the limits of human hearing.
  8. Mar 10, 2010 #7



    You are bringing up the old and highly technical argument that says, in essence,

    [tex] F_s > 2B_{max} [/tex]

    should be used instead of

    [tex]F_s\geq 2B_{max} [/tex].

    At the level of myx's introductory class, either statement should be acceptable. myx, go with what's in your book or lecture notes.
  9. Mar 10, 2010 #8
    I think my argument is more that we can't claim perfect reconstruction, at any frequency north of DC. There will always be a time uncertainty of up to [tex] 1 / F_s [/tex] at every frequency. This translates to a negligible phase uncertainty at lower frequencies, but I think it can create up to 90 degrees of phase uncertainty at the Nyquist freq.

    I've never worked out what the potential amplitude uncertainty can be, but I think it could possibly be infinite.

    Anyhow, I think it's important at an introductory level to make it clear that the reconstruction is not perfect.

    But if you are only looking at it spectrally, then sure, anything below Nyquist will be reproduced.

    Did I mention something about pedantry earlier? I seem to be getting worse as of late. Oy vey, I should relax. Myx, you're on the right track.
  10. Mar 10, 2010 #9



    In that case I disagree completely. Information theory predicts exactly perfect reconstruction so long as the bandwidth limit is satisfied, and in an ideal mathematical sense (infinitely precise sampling by a comb of delta functions). This is the subject of the sampling theorem.

    I don't know what you mean about time uncertainty of 1/F_s at every frequency. If you mean there is timing uncertainty of 1/F_s at each sample, it's simply not true and doesn't happen in real analog to digital converter chips. There are, of course, practical limitations to physical hardware, but that is not what's under discussion here. Please don't confuse myx, who had a simple enough intro question.
  11. Mar 10, 2010 #10
    I thought the perfect reconstruction predicted is only valid for an infinitely long sample time, and no one has that long.

    You are right my language was unclear and I jumbled things I shouldn't have. I think I did mean that there is a timing uncertainty at 1/F_s, though. What do you mean by that not having any basis in reality?
  12. Mar 11, 2010 #11
    Ah, and I think here's where I really went wrong: I keep failing to take into account that mathematically, when we are talking about frequencies in the signal, we are speaking about infinitely long sine waves - whereas in the examples I imagine, I picture very finite, rather short sections of a sine wave. But to produce a section of a sine wave that is only ten periods long, with everything else flat, you have to add together a whole mess (an infinite number, in fact) of other sine waves - making the bandwidth of the entire signal far, far greater than the frequency of my short signal.

    So in my thought experiments, I would draw out a small section of periodic behaviour and put in my sample points. Then I would try to draw a slightly different wave that still intersects those sample points. I see now that the shape of the other waves, if repeated infinitely, would no doubt include frequencies greater than the given bandwidth.

    So now I have to do some more thinking on how I think about the bandwidth of real world finite signals.
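    That spectral spreading is easy to see with an FFT (a rough NumPy sketch; all the numbers are illustrative choices): a finite burst of a 5 Hz sine has energy well above 5 Hz.

```python
import numpy as np

# Ten periods of a 5 Hz sine, then silence, in a 4 s record (arbitrary values).
Fs, f = 100.0, 5.0
t = np.arange(int(4 * Fs)) / Fs
burst = np.where(t < 10 / f, np.sin(2 * np.pi * f * t), 0.0)

spectrum = np.abs(np.fft.rfft(burst))
freqs = np.fft.rfftfreq(len(burst), 1 / Fs)

# Energy peaks near 5 Hz but is nonzero far beyond it: the finite burst is
# not band limited, even though the underlying sine is.
peak = spectrum.max()
leakage = spectrum[freqs > 3 * f].max()
print(leakage / peak)                    # small, but clearly not zero
```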
  13. Mar 11, 2010 #12



    Yes, Schlunk, you have now put your finger on the delicacy in this matter. No finite-length signal can be strictly band limited--the very act of cutting off the spectrum hard to zero beyond frequency B_max requires an infinite impulse response (IIR) filter. Accordingly, the sampling theorem deals mathematically with signals of infinite length.

    The application of the sampling theorem to real (finite) signals has been thoroughly discussed in the literature. Fortunately, it can mostly be ignored because sampling systems work well enough, extremely well in fact, for practical use with a little accommodation. For instance, a bit of oversampling such as setting

    [tex]F_s=1.2 \cdot 2B_{max}[/tex]

    makes most of the problems we are considering vanish. By comparison, errors introduced by quantization (an analog signal is quantized to 2^n discrete voltage levels by an n-bit ADC), and practical issues of noise and dynamic range are of far greater concern in real systems.
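    For a feel of the quantization error an n-bit ADC introduces, here is a simplified uniform mid-tread quantizer in NumPy (a generic model, not any particular converter):

```python
import numpy as np

def quantize(x, n_bits, full_scale=1.0):
    """Uniform mid-tread quantizer: round to the nearest of 2**n_bits levels
    spanning [-full_scale, +full_scale). A simplified model, not a real ADC."""
    step = 2.0 * full_scale / 2 ** n_bits
    return np.clip(np.round(x / step) * step, -full_scale, full_scale - step)

# RMS quantization error shrinks by roughly 6 dB for every extra bit.
x = np.sin(2 * np.pi * np.linspace(0.0, 1.0, 1000, endpoint=False))
for n_bits in (4, 8, 12):
    print(n_bits, (x - quantize(x, n_bits)).std())
```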
  14. Mar 11, 2010 #13



    Real ADCs typically take a snapshot of the input signal--they implement an analog sample-and-hold function, for instance--which they then quantize. I agree that a sloppy sampler with a long snapshot of T_s seconds that also jittered on the scale of T_s would deliver awful performance. You'd never recover an accurate (let alone perfect) representation of the continuous signal from it.

    ADC vendors approach this sampling operation very seriously, however, making it as close to ideal as practical. A medium-speed sampler (100 MSPS, say) might take a snapshot of picoseconds duration, with similar timing accuracy, compared to the sample interval T_s of 10 ns. Someone who designs ADCs might chime in here with more concrete numbers and details.