Digital Signal Processing - signals, reconstruction, filters

In summary, the Gaussian filter is the reconstruction filter most similar to a CRT display. Representing a given signal without loss of information requires 2B samples per second, where B is the signal bandwidth. For a given sampling rate, the class of signals that can be exactly reconstructed is those whose highest frequency component satisfies [tex]B_{max}\leq \frac{F_s}{2}[/tex].
  • #1
myx

Homework Statement


No problem statement - just the following questions.



Homework Equations



1. Display on a CRT is most similar to what reconstruction filter?
2. How many samples are required to represent a given signal without loss of information?
3. What signals can be reconstructed without loss for a given sampling rate?

The Attempt at a Solution


1. Gaussian filter? I'm not sure if this is correct but I thought of this since this is the most widely used.
2. I know that a signal can be reconstructed from its samples if the original signal has no frequencies above 1/2 the sampling frequency. So is it that 2*B samples are required to represent a given signal without loss of information, where B=Bandwidth? I think I read this somewhere but I'm not sure why this is the case.
3. Not sure if this question is asking for specific signals or a family of signals.
 
  • #2
I think your answer for 2 is better applied to 3. Would you still answer 2 the same way if the wording were "without ANY loss of information"?
 
  • #3
I guess to not have ANY loss of information, you need an infinite number of samples
 
  • #4
And when answering these types of questions, we often have to be pedantic. So since there is no meaningful difference between "without loss" and "without ANY loss", I think that is the answer they're probably looking for.

And so, I think you can now apply the reasoning you originally used for (2) to think about (3).

Sorry I don't know the answer to (1).
 
  • #5
myx said:
2. I know that a signal can be reconstructed from its samples if the original signal has no frequencies above 1/2 the sampling frequency. So is it that 2*B samples are required to represent a given signal without loss of information, where B=Bandwidth? I think I read this somewhere but I'm not sure why this is the case.
Your first sentence is correct, so the sampling frequency must obey

[tex]F_s\geq 2B_{max}[/tex]

to allow for theoretically perfect signal reconstruction. The reason has to do with frequency aliasing--if the sampling frequency isn't high enough, then many different frequencies look the same when they are sampled. See the diagram about halfway down the Wikipedia article on aliasing (http://en.wikipedia.org/wiki/Aliasing) for an example of two frequencies whose sampled sequences look the same when the precaution above is not followed.
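That aliasing effect can be sketched numerically. A minimal Python illustration (not from the thread; the sampling rate and frequencies are assumed values): a cosine at f and one at Fs - f produce identical sample sequences.

```python
import numpy as np

fs = 100.0          # assumed sampling frequency, Hz (Nyquist = 50 Hz)
f1 = 30.0           # a frequency below Nyquist
f2 = fs - f1        # 70 Hz -- above Nyquist, aliases onto 30 Hz

n = np.arange(32)                       # sample indices
x1 = np.cos(2 * np.pi * f1 * n / fs)    # samples of the 30 Hz cosine
x2 = np.cos(2 * np.pi * f2 * n / fs)    # samples of the 70 Hz cosine

# cos(2*pi*(fs - f1)*n/fs) = cos(2*pi*n - 2*pi*f1*n/fs) = cos(2*pi*f1*n/fs),
# so the two sampled sequences are indistinguishable.
print(np.allclose(x1, x2))              # True
```

Once sampled, no processing can tell the two signals apart, which is why the bandwidth condition must be enforced before sampling.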

Your next sentence is correct if modified to read "2B samples per second". The total number of samples obviously depends on the length of the signal.
3. If the sampling frequency is given, then the class of signals that can be exactly reconstructed is those whose highest frequency component satisfies

[tex]B_{max}\leq \frac{F_s}{2}[/tex].
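The exact reconstruction claimed here is the Whittaker-Shannon interpolation formula. A minimal Python sketch (not from the thread; the rate, frequency, and window length are assumed values) rebuilds a band-limited sinusoid from its samples via sinc interpolation; a long but finite sum stands in for the theorem's infinite one.

```python
import numpy as np

fs = 8.0                       # assumed sampling rate, Hz
f = 1.0                        # signal frequency, well below fs/2
n = np.arange(-200, 200)       # sample indices; a long window
                               # approximates the theorem's infinite sum
samples = np.sin(2 * np.pi * f * n / fs)

t = np.linspace(-2.0, 2.0, 101)   # reconstruction times, seconds
# x(t) = sum_n x[n] * sinc(fs*t - n)   (np.sinc is sin(pi x)/(pi x))
x_rec = samples @ np.sinc(fs * t[None, :] - n[:, None])
x_true = np.sin(2 * np.pi * f * t)

print(np.max(np.abs(x_rec - x_true)))   # small truncation error
```

The residual error comes only from truncating the sum; with infinitely many samples it would vanish, which is the content of the sampling theorem.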
 
  • #6
marcusl said:
Your first sentence is correct, so the sampling frequency must obey

[tex]F_s\geq 2B_{max}[/tex]

to allow for theoretically perfect signal reconstruction.
[tex]B_{max}\leq \frac{F_s}{2}[/tex].

Although, I would point out that it wouldn't be right to call it theoretically perfect reconstruction... as frequencies approach the Nyquist frequency (Fs/2), there is no provision to take the phase into account.

Example: if you had a signal consisting of only the Nyquist frequency, and the phase was such that each time a sample was taken the signal happened to be passing through its zero value, your data would be a series of zero values, no matter what the amplitude.

Ahh, but a sample does have a finite window of time, you say? But what would happen if the zero crossings occurred at the centers of your window? Each sample would still average out to zero!

Now, what happens as you start decreasing the frequency of your signal? Well, you start taking fractionally more samples per signal period, and the reconstruction gets incrementally more accurate.

This is one of the reasons why digital audio trends lead towards increasing the sampling frequency well beyond the limits of human hearing.
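The zero-crossing scenario above is easy to reproduce. A minimal Python sketch (not from the thread; rate and amplitude are assumed values): a sine at exactly the Nyquist frequency, sampled at its zero crossings, yields all zeros no matter what the amplitude is.

```python
import numpy as np

fs = 1000.0                 # assumed sampling rate, Hz
f = fs / 2                  # exactly the Nyquist frequency
amplitude = 7.3             # arbitrary amplitude -- it never shows up

n = np.arange(20)
x = amplitude * np.sin(2 * np.pi * f * n / fs)   # sin(pi*n) = 0 for all n

print(np.allclose(x, 0.0))   # True: every sample is (numerically) zero
```

This is why the strict-inequality form Fs > 2*B_max is sometimes preferred: at exactly Fs = 2*B_max the samples can fail to capture the signal's phase and amplitude.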
 
  • #7
You are bringing up the old and highly technical argument that says, in essence,

[tex] F_s > 2B_{max} [/tex]

should be used instead of

[tex]F_s\geq 2B_{max} [/tex].

At the level of myx's introductory class, either statement should be acceptable. myx, go with what's in your book or lecture notes.
 
  • #8
I think my argument is more that we can't claim perfect reconstruction, at any frequency north of DC. There will always be a time uncertainty of up to [tex] 1 / F_s [/tex] at every frequency. This translates to a negligible phase uncertainty at lower frequencies, but I think it can create up to 90 degrees of phase uncertainty at the Nyquist freq.

I've never worked out what the potential amplitude uncertainty can be, but I think it could possibly be infinite.

Anyhow, I think it's important at an introductory level to make it clear that the reconstruction is not perfect.

But if you are only looking at it spectrally, then sure, anything below Nyquist will be reproduced.

Did I mention something about pedantry earlier? I seem to be getting worse as of late. Oy vey, I should relax. Myx, you're on the right track.
 
  • #9
In that case I disagree completely. Information theory predicts exactly perfect reconstruction so long as the bandwidth limit is satisfied, and in an ideal mathematical sense (infinitely precise sampling by a comb of delta functions). This is the subject of the sampling theorem.

I don't know what you mean about time uncertainty of 1/F_s at every frequency. If you mean there is timing uncertainty of 1/F_s at each sample, it's simply not true and doesn't happen in real analog to digital converter chips. There are, of course, practical limitations to physical hardware, but that is not what's under discussion here. Please don't confuse myx, who had a simple enough intro question.
 
  • #10
I thought the perfect reconstruction predicted is only valid for an infinitely long sample time, and no one has that.

You are right my language was unclear and I jumbled things I shouldn't have. I think I did mean that there is a timing uncertainty at 1/F_s, though. What do you mean by that not having any basis in reality?
 
  • #11
Ah, and I think here's where I really went wrong: I keep failing to take into account that mathematically, when we are talking about frequencies in the signal, we are speaking about infinitely long sine waves - whereas in the examples I imagine, I picture extremely finite, rather short sections of a sine wave. But a section of a sine wave that is only ten periods long, with everything else flat, is built by adding together a whole mess (an infinite number?) of other sine waves - making the bandwidth of the entire signal far, far greater than the frequency of my short signal.

So in my thought experiments, I would draw out a small section of periodic behaviour, and put in my sample points. Then I would try to draw out a slightly different wave that would still intersect at those sample points. I see now that the shape of the other waves, if repeated infinitely, would no doubt include frequencies greater than the set Bandwidth.

So now I have to do some more thinking on how I think about the bandwidth of real world finite signals.
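The point about a truncated sine's bandwidth can be checked with an FFT. A minimal Python sketch (not from the thread; all numbers are assumed values): a 10 Hz sine restricted to a short burst spreads most of its energy into bins other than 10 Hz.

```python
import numpy as np

fs = 100.0                          # assumed sampling rate, Hz
f = 10.0                            # sine frequency, Hz
t = np.arange(0, 1.0, 1 / fs)       # 100 samples, 1 second

# Sine that is "on" only for 0.1 s (one period), flat elsewhere
burst = np.where((t >= 0.2) & (t < 0.3), np.sin(2 * np.pi * f * t), 0.0)

spectrum = np.abs(np.fft.rfft(burst))
freqs = np.fft.rfftfreq(len(burst), 1 / fs)

# The rectangular cut convolves the line spectrum with a sinc, so energy
# is no longer confined to the 10 Hz bin. Fraction outside that bin:
outside = 1 - spectrum[freqs == f][0] ** 2 / np.sum(spectrum ** 2)
print(outside)   # a substantial fraction, not ~0
```

Lengthening the burst concentrates the energy back toward 10 Hz, matching the intuition that only the infinitely long sine is truly a single frequency.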
 
  • #12
Yes, Schlunk, you have now put your finger on the delicacy in this matter. No finite-length signal can be strictly band limited--the very act of cutting off the spectrum hard to zero beyond frequency B_max requires an infinite impulse response (IIR) filter. Accordingly, the sampling theorem deals mathematically with signals of infinite length.

The application of the sampling theorem to real (finite) signals has been thoroughly discussed in the literature. Fortunately, it can mostly be ignored because sampling systems work well enough, extremely well in fact, for practical use with a little accommodation. For instance, a bit of oversampling such as setting

[tex]F_s=1.2 \cdot 2B_{max}[/tex]

makes most of the problems we are considering vanish. By comparison, errors introduced by quantization (an analog signal is quantized to 2^n discrete voltage levels by an n-bit ADC), and practical issues of noise and dynamic range are of far greater concern in real systems.
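The quantization error mentioned here can also be sketched. A minimal Python illustration (not from the thread; the full-scale range, bit depths, and `quantize` helper are assumptions for illustration): an n-bit uniform quantizer maps a value in [-1, 1) to one of 2^n levels, and the worst-case error shrinks by a factor of 2 per added bit.

```python
import numpy as np

def quantize(x, n_bits):
    """Uniform n-bit quantizer over the assumed full scale [-1, 1)."""
    levels = 2 ** n_bits
    step = 2.0 / levels                         # voltage per code
    codes = np.clip(np.floor((x + 1.0) / step), 0, levels - 1)
    return codes * step - 1.0                   # back to voltage

x = np.linspace(-1, 1, 1000, endpoint=False)    # test "analog" values
for n_bits in (4, 8, 12):
    err = np.max(np.abs(quantize(x, n_bits) - x))
    print(n_bits, err)   # worst-case error shrinks ~16x per 4 extra bits
```

In a real system this quantization error, together with noise and dynamic range, typically dominates the idealized sampling-theorem concerns discussed above.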
 
  • #13
schlunk said:
You are right my language was unclear and I jumbled things I shouldn't have. I think I did mean that there is a timing uncertainty at 1/F_s, though. What do you mean by that not having any basis in reality?

Real ADC's typically take a snapshot of the input signal--they implement an analog sample and hold function, for instance--that they then quantize. I agree that a sloppy sampler with a long snapshot of T_s seconds that also jittered on the scale of T_s would deliver awful performance. You'd never recover an accurate (let alone perfect) representation of the continuous signal from it.

ADC vendors approach this sampling operation very seriously, however, making it as ideal as practical. A medium speed sampler (100 MSPS, say) might take a snapshot of picoseconds duration, with similar timing accuracy, compared to the sample interval T_s of 10 ns. Someone who designs ADC's might chime in here with more concrete numbers and details.
 

1. What is Digital Signal Processing (DSP)?

Digital Signal Processing is the manipulation and analysis of digital signals using mathematical algorithms. It involves converting analog signals into digital form, processing them using various techniques, and then converting them back to analog form for use in electronic devices.

2. What types of signals can be processed using DSP?

DSP can be used to process various types of signals, including audio, video, and images. It can also be applied to signals in fields such as telecommunications, radar, and biomedical engineering.

3. How are signals reconstructed using DSP?

Signals are reconstructed using DSP by converting them from their digital form back to analog form. This is done by using a digital-to-analog converter (DAC), which converts the discrete digital signals into a continuous analog waveform.

4. What are the different types of filters used in DSP?

There are various types of filters used in DSP, including low-pass, high-pass, band-pass, and band-stop filters. These filters are used to remove unwanted frequencies from a signal and enhance certain aspects of the signal.

5. What are the applications of DSP?

DSP has a wide range of applications, including audio and video processing, speech recognition, image and video compression, and data compression. It is also widely used in telecommunications, radar, sonar, and biomedical engineering for various signal analysis and processing tasks.
