Two Questions: Transfer Functions and Capacitors

AI Thread Summary
The discussion addresses two key questions related to circuit analysis. First, it explores the stability of transfer functions in the Laplace domain, emphasizing that poles in the left half of the s-plane indicate stable transient responses, while those in the right half can lead to instability. The second question focuses on the role of capacitors near power pins in integrated circuits, clarifying that they serve to filter noise and provide voltage during current spikes, thereby stabilizing power supply performance. Participants highlight the importance of low inductance connections for effective decoupling and the frequency-dependent behavior of capacitors. Overall, understanding these concepts is crucial for effective circuit design and analysis.
Jessehk
Hi Everyone.

I actually had two distinct questions relating to circuit analysis and design. The first is theoretical and the second is a question about what I've observed on other circuits.

I'm reading some undergraduate EE books (I'm an EE student in Canada) and I've gotten to the point where transfer functions of circuits are discussed in the Laplace domain. In all the texts I've read it has been stated that if the poles (roots of the denominator) are in the left of the s-plane (i.e. the real parts are negative) then the transient response will be stable (that is, it goes to 0). I can show this by deriving the characteristic equation from the poles of a low- or high-pass filter and an LRC oscillator, but I can't immediately see that this is true for transfer functions with higher-degree denominators. In a similar vein (I think), is it possible to construct physically realizable circuits that have non-stable transient responses? What about the input function? Must the poles of an input function be on the right side of the s-plane? I'm in-between academic terms at the moment so I'm not able to conveniently ask a professor.

My second question has to do with the capacitors that are placed in large numbers next to power pins for ICs and other components such as power supplies. I've had it explained to me by some that these capacitors filter the noise in the power rails; others tell me that they provide needed voltage when the circuit is loaded because of IC current draw; still others have told me it's a combination of both. However, I'm still at a loss and it's something that's really bothering me. Can anyone explain (or point me in the direction of resources that explain) WHY capacitors are needed and maybe a brief mathematical or theoretical demonstration of the results when the capacitors are or aren't attached?

I'd be grateful for any responses or links to resources.
 
Jessehk said:
... Can anyone explain (or point me in the direction of resources that explain) WHY capacitors are needed and maybe a brief mathematical or theoretical demonstration of the results when the capacitors are or aren't attached? ...

I'm of more help on the 2nd question than the first. Those caps are called decoupling caps, and their function for digital logic is to supply the sharp current spikes that are needed to support the fast transition edges of the logic. To do this, the caps need to be low inductance, and connected to the IC power pins in a low-inductance fashion. Generally this means that you use SMT caps, placed on the same side of the PCB as the SMT IC, and butted up right next to the pin that they are decoupling. The other end of the cap vias down to the ground layer as close as possible. You choose the value of the cap to give adequate charge storage for the current spikes, while still hopefully staying below the self-resonant frequency (SRF) of the LC that is formed by the cap and the inductance of the PCB traces and via. A 0.1uF or 0.01uF ceramic cap is typically used for the decoupling cap.
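To put rough numbers on that description, here is a back-of-the-envelope Python sketch of the supply droop at the IC pin with and without a local decoupling cap. All component values below are assumptions chosen for illustration, not a particular design:

```python
# Rough effect of a local decoupling cap on a fast logic current spike.
# All values here are illustrative assumptions.
L_trace = 10e-9      # H, assumed trace/via inductance back to the supply
C_decap = 0.1e-6     # F, local ceramic decoupling cap
I_spike = 0.5        # A, assumed switching current step
t_rise  = 2e-9       # s, assumed logic edge time

# Without the cap, the spike must be delivered through the trace
# inductance, so the pin voltage droops by roughly v = L * di/dt:
droop_no_cap = L_trace * I_spike / t_rise

# With the cap, the spike is drawn locally from stored charge,
# and the cap voltage sags by only dV = I * dt / C:
droop_with_cap = I_spike * t_rise / C_decap

print(f"no cap:   ~{droop_no_cap:.2f} V droop")          # ~2.5 V
print(f"with cap: ~{droop_with_cap * 1e3:.1f} mV droop")  # ~10 mV
```

The three-orders-of-magnitude difference is the whole point of placing the cap right at the pin: it keeps the high di/dt loop short and local.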

For analog or mixed-signal circuits, the decoupling caps often serve more of a noise filtering and stability role. Power supply filtering becomes more important for circuits that do not have a great power supply rejection ratio (PSRR). And for circuits like high-speed comparator circuits with small hysteresis, power supply decoupling helps to avoid oscillations near the comparator cross-over voltage.
 
Jessehk said:
... if the poles (roots of the denominator) are in the left of the s-plane (i.e. the real parts are negative) then the transient response will be stable (that is, it goes to 0). I can show this by deriving the characteristic equation from the poles of a low- or high-pass filter and an LRC oscillator, but I can't immediately see that this is true for transfer functions with higher-degree denominators.
It's been a while, so this will not be very rigorous, but I think it will point you in the right direction. You may need a bit more background before you can fully appreciate this. You'll either need to have a solid basic understanding of the Laplace transform and the method of partial fractions, or you will really need to know the $#!t out of some Laplace transforms.

I will assume that your book is referring to LTI systems (otherwise the Laplace transform technique is not applicable). Acceptable circuit element models for an LTI system are:

- your 3 passives: resistor, capacitor, inductor
- your 4 actives: ideal independent and dependent voltage and current sources

Assuming a system that is composed of only these kinds of elements, your transfer function will be a rational function (the numerator is a finite polynomial, and the denominator is a finite polynomial). So, you can use the method of partial fractions to break up the transfer function into a sum of simple terms of the form:

\frac{1}{(s-z)^n}

where z is one of the poles. There is one such term for each pole. If I remember correctly, these kinds of expressions inverse transform back into the time domain as exponentials of the form

e^{zt}

with possibly some other non-exponential factors. The point is that, if z has a positive real part, then this exponential "explodes" in time. These exponentials are precisely what characterize the transient response (as opposed to the steady-state response). To see this, I will wave my hands again. Let's say that the transfer function is H(s), and the Laplace transform of the input is X(s). Then the Laplace transform of the output is:

Y(s) = H(s) X(s)

I already argued that H(s) is a rational function. But what is X(s)? Well, in general it is not a rational function. However, in order to examine transients, we just hit the circuit with an impulse (a.k.a. a Dirac delta function of time). And the Laplace transform of an impulse is just 1. (That is why the time domain version of H(s), usually denoted h(t), is called the "impulse response".) Setting X(s)=1 is like striking a bell and then listening to it ring (if the circuit is a bell). So, the exponentials that I described above are indeed the transients. If you want your transients to fade away and let your circuit act on the input, you had better have all of your z's in the left half plane.
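If it helps to see that partial-fraction argument numerically, here is a small Python sketch. It builds a 4th-order all-pole H(s) (the pole locations are arbitrary assumptions), computes the residues for the simple-pole partial-fraction expansion, and evaluates h(t) as the sum of the resulting exponentials, which dies out because every pole has a negative real part:

```python
import cmath

# Illustrative 4th-order transfer function H(s) = 1 / prod_k (s - p_k),
# with all poles in the left half plane (values chosen arbitrarily).
poles = [-1.0, -2.0, complex(-0.5, 3.0), complex(-0.5, -3.0)]

def residue(k):
    # For simple poles, the residue at p_k is 1 / prod_{j != k} (p_k - p_j).
    r = 1.0
    for j, p in enumerate(poles):
        if j != k:
            r /= (poles[k] - p)
    return r

residues = [residue(k) for k in range(len(poles))]

def h(t):
    # Impulse response h(t) = sum_k r_k * exp(p_k * t).  Each term decays
    # because Re(p_k) < 0; the imaginary parts cancel in conjugate pairs.
    return sum(r * cmath.exp(p * t) for r, p in zip(residues, poles)).real

for t in [0.0, 2.0, 5.0, 10.0]:
    print(f"h({t:4.1f}) = {h(t):+.6f}")
```

The same decomposition with even one pole moved to the right half plane would make the corresponding exponential blow up instead.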

Jessehk said:
In a similar vein (I think), is it possible to construct physically realizable circuits that have non-stable transient responses?
I'm pretty sure that you can, using positive feedback. Note that dependent voltage and current sources are indeed allowed in an LTI circuit. I don't have a calculation off the top of my head, though, so I am not 100% certain about that.
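As a minimal numerical illustration of what a right-half-plane pole does (the pole location a is an arbitrary assumption, not taken from any particular feedback circuit), a single pole at s = a > 0 gives an impulse response e^{at} that grows without bound:

```python
import math

# Hypothetical unstable first-order system H(s) = 1/(s - a), a > 0,
# the kind of pole positive feedback can produce.
a = 1.0  # assumed right-half-plane pole location

for t in [0.0, 1.0, 5.0, 10.0]:
    # Impulse response h(t) = e^{a t} "explodes" instead of decaying.
    print(f"h({t:4.1f}) = {math.exp(a * t):.3e}")
```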

Jessehk said:
What about the input function? Must the poles of an input function be on the right side of the s-plane?
No. The input function is arbitrary. The restriction on the poles of the input transform, X(s), comes from the convergence of the unilateral Laplace transform for the given input function, x(t). However, there is one practical consideration that is based on the transfer function, H(s). If the poles of the input transform, X(s), are to the left of the poles of the transfer function, H(s), then the transients will dominate the late-time response (which is usually undesirable, as alluded to above).
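A small worked example of that last point, with pole locations that are assumptions chosen for illustration: feed x(t) = e^{-5t} (input pole at s = -5) into H(s) = 1/(s+1) (system pole at s = -1). Partial fractions give Y(s) = 1/((s+1)(s+5)), so y(t) = (e^{-t} - e^{-5t})/4, and at late times the slower system exponential e^{-t} dominates the output:

```python
import math

# Input pole at s = -5 sits to the LEFT of the system pole at s = -1,
# so the system's own transient e^{-t} dominates at late times.
def y(t):
    # y(t) = (e^{-t} - e^{-5t}) / 4 from the partial-fraction expansion
    return (math.exp(-t) - math.exp(-5 * t)) / 4.0

for t in [0.5, 2.0, 4.0]:
    system_term = math.exp(-t) / 4.0  # contribution of the system pole alone
    print(f"t={t:3.1f}: y={y(t):.5f}, system-pole term={system_term:.5f}")
```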
 
I know it's been a while since I posted but rest-assured (:wink:) that I read your responses and learned from them.

I still don't have a concrete understanding of the physical effect of the capacitors aside from the conceptual description you provided berkeman, but I suspect that this will come with more reading and experience.

turin, that was a great explanation and it makes sense in the context of what I've studied in my differential equations/transient circuits class.

Thanks again to both of you for your help and explanations. :smile:
 
Here's a view of capacitors that might help you understand them better. A capacitor passes high frequencies easily and blocks low frequencies; placed in a series path, it acts as a high-pass element. Look at the frequency domain relationship:

V = I \frac{1}{j \omega C}

That looks a lot like V = IR, doesn't it? Except that the reactance goes to 0 as the frequency becomes infinite, and becomes infinite as the frequency approaches 0.

So, we can conclude that the most simple approximation of a capacitor is an open circuit for DC (0 frequency) and a short circuit for high frequencies. How can we use that? Well, let's say you have a DC power supply with some ripple in the output. How can you get rid of the ripple? One way is to connect a capacitor from the power source output to the ground. That will provide a short circuit path for the ripple to go straight to ground while blocking the DC.
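Here is a quick numeric sketch of that ripple-filter idea in Python. The component values are assumptions for illustration, not a recommended design; the cap and the supply's source resistance form a voltage divider for the ripple frequency:

```python
import math

# Hypothetical ripple filter: supply with an assumed 10 ohm source
# resistance, a 1000 uF shunt cap to ground, 120 Hz full-wave ripple.
R = 10.0        # ohm, assumed source resistance
C = 1000e-6     # F, shunt filter cap
f = 120.0       # Hz, rectified-mains ripple frequency

Xc = 1.0 / (2 * math.pi * f * C)    # capacitor reactance magnitude
atten = Xc / math.hypot(R, Xc)      # |Zc| / |R + Zc| voltage divider

print(f"Xc at {f:.0f} Hz  = {Xc:.3f} ohm")
print(f"ripple passed = {atten * 100:.1f} %")
# As f -> 0, Xc -> infinity, so the DC output passes essentially unattenuated.
```

Most of the 120 Hz ripple is shunted to ground while the DC is untouched, which is exactly the "open for DC, short for high frequency" approximation at work.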

You can use this property many different ways. Many amplifiers create a DC offset. A series capacitor can block the DC of the amp output while allowing sufficiently high frequencies of interest to pass through; used this way it is usually called a coupling (or DC-blocking) capacitor. When a capacitor instead shunts unwanted AC to ground, as in the supply example above, it is referred to as a bypass capacitor.

Inductors do the opposite: they pass DC and block high frequencies. An inductor is a short for DC and approaches an open circuit (infinite reactance) at high frequency. Inductors are sometimes referred to as a "choke" for this reason. One application for a choke is to sink a DC offset current from the output of a radio amp stage while blocking the radio frequency.
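To see the choke behavior numerically (the inductance value is just an assumption), the reactance |Z| = 2πfL goes from a dead short at DC to hundreds of ohms at radio frequencies:

```python
import math

# Hypothetical 10 uH choke: reactance magnitude is |Z| = 2*pi*f*L.
L = 10e-6  # H, assumed inductance

for f in [0.0, 1e3, 10e6]:  # DC, audio, radio frequency
    XL = 2 * math.pi * f * L
    print(f"f = {f:>10.0f} Hz -> XL = {XL:10.3f} ohm")
```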
 
Okefenokee,

In the past few months I've been covering more material and encountered some of what you've written above, but in textbooks.

I just wish I had read your response sooner! I would have saved some time. :)
 