Ideal Low-Pass Filter: Output When Gain ≠ 0 for f > F

  • Thread starter bobbyk
  • #1
bobbyk
It is well-known that the step response of an ideal low-pass filter (Gain = 1 for f = 0 to F and = 0 for f = F to infinity) is non-causal, in that the output appears before t = 0.

But what if the filter's Gain is 1 for f = 0 to F and e (some small non-zero value) for f = F to infinity? Then the Paley-Wiener criterion for realizability is satisfied, and the output must be causal. What then is the output?
 
  • #2
with sufficient delay (and you weren't spec-ing the phase response of the nascent ideal LPF), you can have your causal impulse response get closer and closer to a delayed sinc() function.
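To make this concrete, here is a small numeric sketch (my own illustration, not from the thread; the sample rate, cutoff, and delay values are arbitrary choices): the energy of the two-sided sinc that falls before t = 0, i.e. the part a causal filter must discard, shrinks as the delay grows.

```python
import numpy as np

fs, F = 1000.0, 50.0                    # assumed sample rate and cutoff (Hz)
t_neg = np.arange(-5.0, 0.0, 1/fs)      # times before t = 0

def tail_energy(delay):
    """Energy of the delayed ideal-LPF impulse response that lands at t < 0."""
    h = (2*F/fs) * np.sinc(2*F*(t_neg - delay))   # np.sinc is normalized: sin(pi x)/(pi x)
    return float(np.sum(h**2))

# the non-causal residue drops as the sinc is delayed further
energies = [tail_energy(d) for d in (0.05, 0.2, 0.8)]
print(energies)
```

Truncating the delayed sinc at t = 0 thus introduces an error that can be made as small as desired, which is the sense of "as close to a causal sinc() as we want".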
 
  • #3
Thanks for responding! I think this is fun! It is also, no doubt, well-known, but I haven't seen it anywhere.

But a sinc() function is never causal, no matter what the delay, and I want a causal output function. There has to be one.

Let's say the filter phase is zero over the entire band; then what's the output for an impulse input at t = 0 ?
 
  • #4
bobbyk said:
Thanks for responding! I think this is fun! It is also, no doubt, well-known, but I haven't seen it anywhere.

But a sinc() function is never causal, no matter what the delay, and I want a causal output function. There has to be one.

i said "closer" to a delayed sinc function. if you delay it enough, the portion of the sinc() function that precedes t=0 has sufficiently low amplitude that the difference between it and zero is small. so, if we can delay the sinc() function sufficiently, we can get as close to a causal sinc() as we want.

Let's say the filter phase is zero over the entire band, then what's the output for an impulse input at t = 0 ?

you can't have it zero phase over the entire band and causal at the same time (unless the impulse response was zero for all t ≠ 0). for it to be zero phase over the entire band, the impulse response would have to be symmetrical about t=0 (even symmetry).
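The symmetry point above has a quick discrete-time check (my own sketch, not from the thread): a sequence with even (circular) symmetry about n = 0 has a purely real DFT, i.e. zero phase.

```python
import numpy as np

N = 64
rng = np.random.default_rng(1)
half = rng.standard_normal(N//2 - 1)

h = np.zeros(N)
h[0] = 1.0
h[1:N//2] = half
h[N//2+1:] = half[::-1]          # enforce h[n] == h[(N - n) % N]: even symmetry about n = 0

H = np.fft.fft(h)
print(np.max(np.abs(H.imag)))    # essentially zero: the DFT is real, so the phase is zero
```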
 
  • #5
bobbyk said:
But what if the filter's Gain is = 1 for f = 0 to F and = e (some small non-zero value) for f = F to infinity? Then the Paley-Wiener criterion for realizability is satisfied, and the output must be causal. What then is the output?

Rbj's already answered the substance of this question, but could you state the version of the Paley-Wiener criterion that you have in mind? IIRC, simply making the magnitude response nonzero is not enough to satisfy the criterion.
 
  • #6
I'm only aware of one version of the criterion, namely:

The N&S condition for a linear-time-invariant filter to have a causal response is that its Gain versus frequency, G(f), should satisfy:

The integral from -infinity to +infinity of |log G(f)|/(1+f^2) df be < infinity.

If you know of another one, please let me know.

Thanks for responding.
 
  • #7
bobbyk said:
I'm only aware of one version of the criterion, namely:

The N&S condition for a linear-time-invariant filter to have a causal response is that its Gain versus frequency, G(f), should satisfy:

The integral from -infinity to +infinity of |log G(f)|/(1+f^2) df be < infinity.

hey, bobby, you need to use LaTeX here:

[tex] \int_{-\infty}^{+\infty} \frac{|\log G(f)|}{1+f^2} \ df \ \ < \infty [/tex]

If you know of another one, please let me know.

for causality in the time domain, it is necessary and sufficient that the real part and imaginary part of [itex]G(f)[/itex] be a Hilbert Transform pair.

[tex] \frac{1}{\pi} \int_{-\infty}^{+\infty} \ \frac{\mathrm{Re}\{G(u)\}}{f-u} \ du = \mathrm{Im}\{G(f)\} [/tex]
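The sign in the continuous Hilbert pair depends on the Fourier convention, so as a convention-free sanity check (my own sketch, not from the thread) here is the discrete analogue of the same redundancy: for a causal sequence, the real part of the DFT alone determines the whole sequence, so the imaginary part carries no independent information.

```python
import numpy as np

N = 256
rng = np.random.default_rng(0)
h = np.zeros(N)
h[:N//2] = rng.standard_normal(N//2)   # causal: support confined to the first half

H = np.fft.fft(h)
he = np.fft.ifft(H.real).real          # inverse DFT of Re{H} = even part of h

# rebuild h from its even part: h[0] = he[0], h[n] = 2*he[n] for 0 < n < N/2
h_rec = 2.0 * he
h_rec[0] = he[0]
h_rec[N//2:] = 0.0                     # causality: nothing outside the known support

print(np.allclose(h_rec, h))           # True: Im{H} was redundant, given causality
```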
 
  • #8
Okay, so it does seem that a step function

[tex]
G(f) = (1-\epsilon)1_{F_{pass}}(f) + \epsilon
[/tex]

where [itex]F_{pass}[/itex] is a set of passband frequencies, will satisfy the PW condition:

[tex]
\int \frac{|\log G(f)|}{1+f^2}df = 2|\log \epsilon |(\frac{\pi}{2}-\arctan (f_c))<\infty
[/tex]

if [itex] G(f)[/itex] is what we might call an "[itex]\epsilon[/itex]-ideal low-pass filter" with cutoff [itex]f_c[/itex] (and real impulse response). This is interesting, because it has an infinitely steep transition band. However, the theorem only tells us that there's a causal filter with this magnitude response. It doesn't tell us what the delay is, or even if the delay is finite. What would the group delay of such a filter look like? Seems like the discontinuity is going to make the delay diverge around [itex]f_c[/itex], no?
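The closed form above is easy to sanity-check numerically (my own sketch; the values of [itex]\epsilon[/itex] and [itex]f_c[/itex] are arbitrary):

```python
import numpy as np

eps, fc = 0.01, 1.0
f = np.linspace(-1e3, 1e3, 2_000_001)            # step 0.001; tail beyond +-1e3 is negligible
G = np.where(np.abs(f) < fc, 1.0, eps)           # the epsilon-ideal low-pass magnitude

y = np.abs(np.log(G)) / (1.0 + f**2)
I_num = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(f)))   # trapezoidal rule
I_ana = 2.0 * abs(np.log(eps)) * (np.pi/2 - np.arctan(fc))

print(I_num, I_ana)                              # both about 7.23: the integral is finite
```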

Is there another theorem that relates how large the value of the integral is to the required delay? Something along the lines of [itex]|G(f)|[/itex] being bounded by [itex]Ae^{-b|f|}+c[/itex], IIRC? I'm lacking good sources on this stuff myself, so any references would be appreciated.
 
  • #9
Look, I'm not a mathematician, as you no doubt must realize (I don't even know how to do LaTeX!) and know nothing about the Paley-Wiener Criterion (although I did attend a lecture by Wiener once!). I saw it in a book by Chester Page and it intrigued me. I don't know what it says about the delay or whether the delay is infinite, nor do I know how to find out. If the delay IS infinite, then my filter doesn't make much sense! If I did see a derivation of this, I probably wouldn't be able to follow it! I'm just fascinated by the subject and I SURE appreciate you guys responding to me about it! Thanks again!
bobbyk
 
  • #10
The best way to learn [itex]\LaTeX[/itex] is to use the "QUOTE" button and examine the code from other peoples' posts.

I think this example is interesting, as it fits into the usual explanation given to undergrads about why an ideal low-pass is unimplementable. In the usual lazy presentation, you're shown that the Fourier inverse of the ideal low-pass response (including zero-phase) is noncausal, and that's usually the end of it. However, that doesn't address the question of whether there's another filter with the same magnitude response, but with different phase response, that is causal. Your example doesn't directly address that question, since the magnitude response is slightly different, but it does point out another issue with the ideal response: the group delay around a step-transition will diverge, and so the delay of the filter will be infinite. This is actually the Heisenberg Uncertainty relation at work: infinite resolution in one domain implies zero resolution in the other. Or, in EE terms, an infinitely narrow transition band requires infinite delay.

If you had a diligent EE101 prof, they would have pointed out that the impulse response of the ideal low-pass is both non-causal and has infinite support. Which is to say that you need both an infinite amount of previous data before the current time, and you have to then wait forever to get the current output. Finding a causal version of such a filter only eliminates the non-causal portion: you still have to wait an infinite amount of time to get the current output. So what we see is that you only need to wait an infinite amount of time once to get an infinitely steep transition; this seems to me to have to do with the fun nature of infinity.

Question: is there a causal filter with exactly the ideal low-pass magnitude response, but some non-zero phase response? It seems like there should be...
 
  • #11
quadraphonics said:
...
If you had a diligent EE101 prof, they would have pointed out that the impulse response of the ideal low-pass is both non-causal and has infinite support.

EE101?? freshman-level engineering (and math)? i don't think i heard the words "impulse response" until i was a sophomore and started doing diff eqs. on an RC circuit. "infinite support"? that's Advanced Calculus or Real Analysis, ain't it? i s'pose one can easily explain what causality is at the 101 level.
 
  • #12
Where I come from, 101 is a junior-year course. The idea being that you can't really start on stuff in earnest until you finish calculus and physics, which takes two years, I guess.

Colleges need to get together and settle on some kind of semi-regular course number conventions...
 
  • #13
quadraphonics said:
Where I come from, 101 is a junior-year course.

for my undergrad (U of North Dakota) 1xx was freshman (normally), 2xx sophomore, 3xx junior, 4xx senior, 5xx+ graduate. courses numbered lower than 100 (usually more than 90) were remedial in nature and usually not applicable for credit toward the degree, but if a student was deficient in something, one might be required before moving on to a normal college-level course. e.g. MATH090 was a high-school-level remedial algebra course. there is a general math and science grad requirement for every student, and students who flunked some math placement exam (or maybe it was a deficient ACT or SAT score, or maybe they found themselves drowning in the first required math course, i dunno) would take something like an 090 course.

traditionally MATH101, PHYS101, PHIL101, MUS101, ECON101, PSY101, CHEM101, BIO101, or ENGL101 meant the entry-level course in the discipline. we had an ENG101 (in general engineering). the first EE circuits course was EE201 or similarly numbered.

there are papers (in fact, http://www.musicdsp.org/files/Wavetable-101.pdf) that are intended to be a sort of primer or tutorial or somehow seminal paper on a subject. i believe that practice came from the common understanding of "101" being entry level, but i guess that wouldn't mean freshman level. i s'pose that's how you meant to use the term.

interesting etymology of a modern term.
The idea being that you can't really start on stuff in earnest until you finish calculus and physics, which takes two years, I guess..

i think you can start doing some circuit analysis before you get done with calc and physics. you need to know what a derivative and integral is. even though it would be nice to have the first two courses in General Physics behind (so you have the physical foundation to KVL, KCL, and the volt-amp characteristics of the R's, L's, and C's), you could start with KVL, KCL, and the volt-amp characteristics as axiomatic, the rules that you begin with. and let the physical justification come later.

Colleges need to get together and settle on some kind of semi-regular course number conventions...

it would be nice if there was some standardization so that credits would transfer easily. i guess that's what organizations like ABET are for.
 
  • #14
rbj said:
for my undergrad (U of North Dakota) 1xx was freshman (normally), 2xx sophomore, 3xx junior, 4xx senior, 5xx+ graduate.

Ah, at my alma mater, 0-99 are lower-division (fresh/soph) courses, 100-199 are upper-division (junior/senior) and 200-299 are graduate.

rbj said:
traditionally MATH101, PHYS101, PHIL101, MUS101, ECON101, PSY101, CHEM101, BIO101, or ENGL101 meant the entry-level course in the discipline. we had an ENG101 (in general engineering). the first EE circuits course was EE201 or similarly numbered.

Yeah, I suppose I was thinking of linear systems theory as "ECE101" (which it happens literally to be where I went to school), but on second thought this might simply reflect my biases as someone in the signals/systems end of the field. Most people would probably consider circuit analysis to be "101" material here.

Either way, I certainly wouldn't expect a "101" course of any type (be it EE, or a more specific signals/systems theory course) to actually get into the details of PW and spectral factorization. That's grad school stuff where I come from. I was thinking more of the lazy "ideal lowpass = noncausal impulse response" explanation that most undergrads leave school with. The more I think about it, the more it seems to me that you can get around non-causality, but not without incurring infinite delay. It's the transition width that's the real issue.

rbj said:
i think you can start doing some circuit analysis before you get done with calc and physics.

Oh, you definitely can. It's just that you have to re-do it all the next year once you know calculus. Mostly they limit it to steady-state response of linear circuits with very simple driving functions (just one or two sinusoids, say, or even just DC), in which case all that's needed is a little bit of familiarity with complex numbers.
 
  • #15
It is my understanding that if the Paley-Wiener criterion for the GAIN is satisfied, then there is a PHASE associated with that GAIN such that the impulse response is causal and has ZERO delay. How do I find that PHASE?

Thanks for your interest!

bobbyk


quadraphonics said:
The best way to learn [itex]\LaTeX[/itex] is to use the "QUOTE" button and examine the code from other peoples' posts.

I think this example is interesting, as it fits into the usual explanation given to undergrads about why an ideal low-pass is unimplementable. In the usual lazy presentation, you're shown that the Fourier inverse of the ideal low-pass response (including zero-phase) is noncausal, and that's usually the end of it. However, that doesn't address the question of whether there's another filter with the same magnitude response, but with different phase response, that is causal. Your example doesn't directly address that question, since the magnitude response is slightly different, but it does point out another issue with the ideal response: the group delay around a step-transition will diverge, and so the delay of the filter will be infinite. This is actually the Heisenberg Uncertainty relation at work: infinite resolution in one domain implies zero resolution in the other. Or, in EE terms, an infinitely narrow transition band requires infinite delay.

If you had a diligent EE101 prof, they would have pointed out that the impulse response of the ideal low-pass is both non-causal and has infinite support. Which is to say that you need both an infinite amount of previous data before the current time, and you have to then wait forever to get the current output. Finding a causal version of such a filter only eliminates the non-causal portion: you still have to wait an infinite amount of time to get the current output. So what we see is that you only need to wait an infinite amount of time once to get an infinitely steep transition; this seems to me to have to do with the fun nature of infinity.

Question: is there a causal filter with exactly the ideal low-pass magnitude response, but some non-zero phase response? It seems like there should be...
 
  • #16
bobbyk said:
It is my understanding that if the Paley-Wiener criterion for the GAIN is satisfied, then there is a PHASE associated with that GAIN such that the impulse response is causal and has ZERO delay. How do I find that PHASE?

Err, zero delay? That doesn't sound right; what's the definition of "delay" here? I don't see how a nontrivial filter can be both causal and have zero delay...
 
  • #17
quadraphonics: Thanks for your response and your interest!

I'm sorry for using an undefined term such as "delay", but what is your definition of "nontrivial"?
I'm sure you know that there are causal filters whose response to an impulse at t = 0 contains an impulse at t = 0. I would regard this as "zero delay". Maybe these are "trivial" filters?

bobbyk
 
  • #18
bobbyk said:
quadraphonics: Thanks for your response and your interest!

I'm sorry for using an undefined term such as "delay", but what is your definition of "nontrivial"?
I'm sure you know that there are causal filters whose response to an impulse at t = 0 contains an impulse at t = 0. I would regard this as "zero delay". Maybe these are "trivial" filters?

Yes, that's exactly what I had in mind when I said "trivial filters." That system has a flat frequency response, so it's certainly not the case that you can construct a zero-delay filter with some arbitrary (PW-satisfying) magnitude response. Any causal filter with a non-flat magnitude response is going to have some kind of delay, at least under the definitions of "delay" that I'm familiar with.

I would guess that the definition of "delay" under which your statement holds would be "phase response is piecewise constant," i.e., it's only a set of measure zero that contains all the delay?
 
  • #19
There must be a misunderstanding here, as a simple filter having a capacitor from input to output and a resistor from output to ground has, for an impulse input at t=0, an output containing an impulse at t=0. This is what I call zero-delay and the gain is certainly NOT flat.
 
  • #20
bobbyk said:
There must be a misunderstanding here, as a simple filter having a capacitor from input to output and a resistor from output to ground has, for an impulse input at t=0, an output containing an impulse at t=0.

That is true, but that only implies zero-delay if by that you mean "the impulse response contains an impulse at t=0." Which is not an entirely unreasonable definition, but we should note that if you input into such a system a tone burst with center frequency around the cut-off frequency of this filter (i.e., 1/RC), then the resulting output tone burst will display a non-zero delay relative to the input.

Any time the phase response is not a constant for all frequencies, it implies that there are some sets of frequencies that will exhibit delay, and so it seems problematic to call any such filter "zero delay."

When considering definitions of delay defined directly in terms of the impulse response, it's more usual to use something like the median or mean of the impulse response, rather than the mode (as you've done above). Using either of those definitions will assign the first-order highpass system a non-zero delay, as there is mass to the right of t=0, but not to the left.

Thinking about it a bit more, the only examples I can think of where delay is defined in terms of the mode (or "peak") of the impulse response is in the context of linear phase systems, where the symmetry constraint implies that the mode is equal to the (constant) group delay.
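The tone-burst delay described above can be read off the group delay of the first-order highpass (my own sketch, not from the thread, with RC = 1 s for convenience): even though h(t) contains an impulse at t = 0, the group delay is strictly positive everywhere, so narrowband signals do come out later than they went in.

```python
import numpy as np

RC = 1.0
w = np.linspace(0.01, 10.0, 100_000)     # angular frequency grid (rad/s)
H = 1j*w*RC / (1.0 + 1j*w*RC)            # C from input to output, R from output to ground
phase = np.unwrap(np.angle(H))
gd = -np.gradient(phase, w)              # group delay = -d(phase)/d(omega)

i = np.argmin(np.abs(w - 1.0/RC))        # index nearest the cutoff frequency
print(gd[i])                             # about RC/2 = 0.5 s at the cutoff
```

Analytically the group delay is RC/(1 + (wRC)^2), which is positive for all frequencies and equals RC/2 at the cutoff, matching the numeric result.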
 
  • #21
OK, quadraphonics, thanks for your friendly and continued interest. I now see that your concept of "delay" is much more useful than mine, and I will discontinue using mine!

What I really want to do, then, is to find a PHASE to accompany the GAIN of my e-ideal filter such that the impulse response will be causal and have a FINITE "delay" in your sense (or to see that no such PHASE exists!).

I respect your superior knowledge of Linear Filter Theory and hope you can help me in my quest!

bobbyk
 
  • #22
Guys, mea culpa!
I must WITHDRAW my suggested "e-low-pass filter" because the Paley-Wiener Criterion doesn't apply to it! Its magnitude response is not square-integrable. I neglected to notice that that was a requirement. I told you I was not a mathematician! I'm sorry if I caused any trouble and I apologize for taking up your time! Please forgive me for posting such a nonsensical problem!

I may modify the GAIN so that it will be Square Integrable and still satisfy the PW, but
I don't expect you guys to respond after what I've done!

bobbyk
 
  • #23
bobbyk said:
Guys, mea culpa!
I must WITHDRAW my suggested "e-low-pass filter" because the Paley-Wiener Criterion doesn't apply to it! Its magnitude response is not square-integrable.

Ah, we should have thought of that... For my part, I usually only deal with discrete-time filters, in which case you get square-integrability as long as the gain response is finite. So I didn't even think to check for it in this case...
 

1. What is an ideal low-pass filter?

An ideal low-pass filter is a type of electronic filter that allows signals with frequencies below a certain cutoff frequency (F) to pass through, while attenuating signals with frequencies above F. This type of filter is commonly used in audio and communication systems to remove unwanted high-frequency noise.

2. How does an ideal low-pass filter work?

An ideal low-pass filter is a mathematical idealization rather than a physical circuit: its gain is exactly 1 below the cutoff and exactly 0 above it. No real network of resistors, capacitors, and inductors realizes this exactly; practical RLC or active filters create a frequency-dependent impedance that only approximates it, passing low-frequency signals with little attenuation while suppressing high-frequency ones.

3. What happens to the output of an ideal low-pass filter when the gain is not equal to 0 for frequencies above the cutoff frequency (F)?

When the gain is a small non-zero value e for frequencies above F (as discussed in this thread), components above the cutoff are attenuated by the factor e rather than removed entirely, so the output retains a small amount of high-frequency content.

4. What is the significance of the cutoff frequency (F) in an ideal low-pass filter?

The cutoff frequency (F) is a critical parameter in an ideal low-pass filter, as it determines the point at which the filter begins to attenuate high-frequency signals. The lower the cutoff frequency, the more effectively the filter can remove high-frequency noise.

5. Are there any real-world limitations to an ideal low-pass filter?

Yes, there are several real-world limitations to an ideal low-pass filter. These include non-ideal components, such as resistors and capacitors with non-zero internal resistance and inductors with non-zero resistance, which can affect the filter's performance. Additionally, the filter's response may deviate from the ideal due to parasitic effects, such as stray capacitance and inductance.
