Systems Non-Proper Transfer Fns: Causality?

A transfer function is non-proper when the degree of its numerator exceeds that of its denominator, which is commonly taken to imply that the system is not causal. The idea is that having more finite zeros than poles produces anticipative behavior, violating causality. The discussion highlights the need for a rigorous proof of this relationship, as well as precise definitions of causality and anticipatory systems. It also touches on the implications of idealized systems, such as differentiators, which challenge traditional notions of causality. Overall, the topic remains subtle and calls for further clarification within control theory.
  • #31
DaveE said:
edit: you can tell I'm not a mathematician by my sloppy use of "well defined", LOL.

But that's not how the definition of a derivative works. The derivative is defined with a two-sided limit, so one doesn't get to cheat and use only the left or right side. To show the derivative is undefined at ##t_0## I just need to show the two-sided limit fails there, which is easy to do because the left-side limit is 0 and the right-side limit is 1.

I was thinking this might be the answer to JasonRF's dilemma too. The derivative might not exist for an improper transfer function when the system is non-causal as the frequency approaches infinity (or zero), like how the limit of 1/x doesn't exist as x->0. But I couldn't think of an easy proof off the top of my head and I'm too lazy to work through all the definitions. So it's just a hunch. :)
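The two one-sided limits can be checked numerically. A minimal sketch (my own illustration, not from the thread, taking the unit ramp as the input whose left-side quotient is 0 and right-side quotient is 1):

```python
# Hypothetical numeric check: one-sided difference quotients of the
# unit ramp r(t) = t*H(t) at t0 = 0. The backward quotient tends to 0
# and the forward quotient to 1, so the two-sided limit, and hence the
# derivative, does not exist there.
def ramp(t):
    return t if t >= 0 else 0.0

t0 = 0.0
for h in (1e-1, 1e-3, 1e-6):
    left = (ramp(t0) - ramp(t0 - h)) / h    # backward quotient -> 0
    right = (ramp(t0 + h) - ramp(t0)) / h   # forward quotient  -> 1
    print(h, left, right)
```

The two quotients stay pinned at 0 and 1 no matter how small the step, which is exactly the "number that breaks the bound" argument.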
 
  • #32
I think you are confusing causality with differentiability or realizability. I am not sure how they are connected.
 
  • #33
If we are talking pure math using classical analysis, then the derivative of your input should be a step that does not exist at ##t=t_0##, and the second derivative would not exist at ##t=t_0## and be zero everywhere else.

Of course a strictly one-sided limit will never output "does not exist". But I have never seen an author define LTI systems as mappings from ##R## to ##R \cup \{##does not exist##\}## before. Have you? A system has a domain and a range - if we are using classical analysis then the logical domain of the differentiator would be a space of differentiable functions. If we allow generalized functions, as EEs very often do (usually without explicitly mentioning it), then there is no problem. The first derivative of your input is a well-defined step and the second derivative is ##\delta(t-t_0)##.

This discussion has made me go back and look at the more precise definitions of causality that the more mathematical treatments of linear system theory often use, since the 'output doesn't depend on future inputs' notion provided in introductory treatments leaves a lot of room for interpretation. Let ##x_0(t)## and ##x_1(t)## be two inputs with corresponding outputs ##y_0(t)## and ##y_1(t)##. The definition I usually see is that a system is causal if ##x_0(t)=x_1(t)## for ##a\leq t < b## implies ##y_0(t)=y_1(t)## for ##a\leq t < b##. Under this definition the differentiator is causal, even if we use classical analysis and feed it your input. No need to try and use a one-sided limit to appeal to an imprecise definition of causality.

Edit: this is wrong. When ##t_0=a## the differentiator would be acausal for your function under this definition, assuming we use classical analysis and it makes any sense to force systems to do operations that don't exist. The actual definitions I saw only had the ##t<b## part, and I added the ##a\leq## to make it the same form as the definition below. Oops! I know of one book, though, State Space and Input-Output Linear Systems by Delchamps, that uses a slightly different definition. He uses more complicated notation to allow for time-varying systems and such, but with the prior notation his definition is essentially that a system is causal if ##x_0(t)=x_1(t)## for ##a\leq t < b## implies ##y_0(t)=y_1(t)## for ##a\leq t \leq b##. In his book the definition of any system must also include a specification of the vector space of input functions. So a differentiator that is defined with differentiable input functions would therefore be causal, and a differentiator defined with inputs that are not differentiable everywhere (and I'm not certain this would even be allowed) would be acausal.
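These two-input causality definitions are easy to exercise in discrete time. A toy sketch (my own illustration; the unit delay and unit advance are my stand-ins for a causal and a non-causal system):

```python
# Two finite-length discrete-time systems (hypothetical examples):
def delay(x):    # causal: y[n] = x[n-1]
    return [0] + x[:-1]

def advance(x):  # non-causal: y[n] = x[n+1]
    return x[1:] + [0]

# Two inputs that agree for n < b = 4 but differ afterwards.
x0 = [1, 2, 3, 4, 9, 9]
x1 = [1, 2, 3, 4, 5, 6]
b = 4

# 'Equal inputs on [0,b) imply equal outputs on [0,b)' test:
print(delay(x0)[:b] == delay(x1)[:b])      # True  -> consistent with causality
print(advance(x0)[:b] == advance(x1)[:b])  # False -> violates causality
```

The delay passes the test for every split point b, while the advance fails as soon as the two inputs diverge just past b.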

So if you want to feed a differentiator non-differentiable functions and demand a 'does not exist' output, then the differentiator could be acausal depending on your definition of the system, the definition of causality and whether or not you are happy to use generalized functions. Your function creates problems only at one single point, but mathematicians can construct functions that are continuous everywhere and differentiable nowhere. If we also allow those inputs then the differentiator can have inputs that create outputs that simply don't exist anywhere. Does this make it an illegitimate system to begin with? If so, then a nice proper system defined with a standard Riemann integral, ##y(t)=\int_0^t x(p)\, dp## also has problems, since those same mathematicians can construct bounded functions that are not Riemann integrable...

jason
 
  • #34
One more thing. If ##x_1(t)## is a function that is differentiable everywhere with derivative ##y_1(t)##, then if we use classical analysis and feed a differentiator your function plus ##x_1(t)## the output is simply 'does not exist' at ##t=t_0##, regardless of ##y_1(t_0)##. Does this violate linearity? Since we are asking the system to do things that are undefined, perhaps it doesn't matter?
 
  • #35
jasonRF said:
One more thing. If ##x_1(t)## is a function that is differentiable everywhere with derivative ##y_1(t)##, then if we use classical analysis and feed a differentiator your function plus ##x_1(t)## the output is simply 'does not exist' at ##t=t_0##, regardless of ##y_1(t_0)##. Does this violate linearity? Since we are asking the system to do things that are undefined, perhaps it doesn't matter?
The way I learned linearity, you would have to have valid functions f(a), f(b), and f(a+b) before you could evaluate f(a+b)=f(a)+f(b). So I would say the concept of linearity is not applicable at ##t_0##. But as you say, it all comes down to the definitions you like.
 
  • #36
In control systems, causality is defined (in words) as: "the output does not depend on future inputs". For this definition there exists a well-documented test: a system is causal if its impulse response h(t) is 0 for t<0. In parallel, and with no documented justification, it is stated that an improper transfer function represents a non-causal system. This makes the differentiator non-causal. In some texts this may be stated otherwise, but there is NO text that states that an improper transfer function may correspond to a causal system.
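The h(t) = 0 for t < 0 test is easy to illustrate in discrete time. A minimal sketch (my own example, a two-tap moving average, with the impulse placed at n0 = 3 so that 'before the impulse' is visible):

```python
# Hypothetical causal system: y[n] = 0.5*x[n] + 0.5*x[n-1].
def system(x):
    return [0.5 * x[n] + 0.5 * (x[n - 1] if n > 0 else 0)
            for n in range(len(x))]

n0 = 3                                     # impulse location
x = [1 if n == n0 else 0 for n in range(8)]
h = system(x)                              # (shifted) impulse response
print(h)
print(all(v == 0 for v in h[:n0]))         # True: nothing before the impulse
```

Of course this test presumes the impulse response exists as an ordinary function, which is exactly what the improper case calls into question.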

There is an alternative route to this, equally obscure. An improper transfer function cannot be represented in state-space form, and that form is only valid for causal systems.

Further, to properly deal with the impulse response we need generalized functions. Frankly, it goes beyond my capabilities. One thing I can mention, though, is that these functions possess derivatives and integrals of every order, so they behave differently from "ordinary" functions like the step or the impulse.

One last thing, @jasonRF: where did you see the support of the unit doublet?
 
  • #37
Tasos51 said:
One last thing, @jasonRF: where did you see the support of the unit doublet?
I learned it from A Guide to Distribution Theory and Fourier Transforms, by Strichartz. It was one of the required texts for a math course I took my senior year, and is pretty accessible.

You should be able to find it in other texts as well. For example, Distributions, Complex Variables and Fourier Transforms by Bremermann has it as well.

Basically, they show that a distribution with point support is a finite linear combination of the delta distribution and its derivatives.

jason
 
  • #38
jasonRF said:
I learned it from A Guide to Distribution Theory and Fourier Transforms, by Strichartz. It was one of the required texts for a math course I took my senior year, and is pretty accessible.

You should be able to find it in other texts as well. For example, Distributions, Complex Variables and Fourier Transforms by Bremermann has it as well.

Basically, they show that a distribution with point support is a finite linear combination of the delta distribution and its derivatives.

jason
So, what is the support of the unit doublet?
 
  • #39
Tasos51 said:
So, what is the support of the unit doublet?
The support of ##\delta^\prime(t)## is ##\{0\}##. That is, it is just a single point. Higher derivatives keep the same point support.
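That point support can be seen numerically by smoothing the doublet. A sketch (my own illustration, not from the thread): approximate ##\delta^\prime(t)## by the derivative of a narrow Gaussian and pair it with a test function ##\phi##; the pairing tends to ##-\phi^\prime(0)##, so the doublet only 'sees' ##\phi## at ##t=0##.

```python
import math

# Derivative of a narrow Gaussian approximation to delta(t).
def doublet(t, eps):
    g = math.exp(-t**2 / (2 * eps**2)) / (eps * math.sqrt(2 * math.pi))
    return -t / eps**2 * g

# Midpoint-rule approximation of  integral doublet(t) * phi(t) dt.
def pair(phi, eps, lo=-0.5, hi=0.5, n=100000):
    dt = (hi - lo) / n
    return sum(doublet(lo + (k + 0.5) * dt, eps) * phi(lo + (k + 0.5) * dt)
               for k in range(n)) * dt

phi = lambda t: math.sin(t) + 2.0   # phi'(0) = 1
val = pair(phi, eps=1e-2)
print(val)  # close to -1 = -phi'(0); the constant 2 contributes nothing
```

Shrinking eps drives the result to exactly ##-\phi^\prime(0)##, the defining action of the doublet.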
 
  • #40
jasonRF said:
The support of ##\delta^\prime(t)## is ##\{0\}##. That is, it is just a single point. Higher derivatives keep the same point support.
Could that mean that the system is not at initial rest, since at 0 it has some non-zero value?
I mean, if the solution to the differential equation contains singular terms concentrated at zero, this can be viewed as non-zero initial conditions, so the system is not at initial rest.
 
  • #41
I’m not sure I understand what you mean, but it sounds interesting. Could you elaborate?

Of course, within lumped circuit theory the impulse response of an ideal wire is ##h(t)= \delta(t)## so is also ‘singular’ at zero. So it must be related to having the derivatives of deltas…

Jason
 
  • #42
jasonRF said:
I’m not sure I understand what you mean, but it sounds interesting. Could you elaborate?

Of course, within lumped circuit theory the impulse response of an ideal wire is ##h(t)= \delta(t)## so is also ‘singular’ at zero. So it must be related to having the derivatives of deltas…

Jason
I refer to the solution, not the impulse response. If it contains singularities at 0 (deltas and their derivatives), it means y(0) is not 0, which contradicts the requirement that a causal LTI system be initially at rest. It is like having an instantaneous input at t=0, when there should be none (u(0)=0).
 
  • #43
I think you are assuming a particular domain of input functions. To me it seems that a differentiator at rest can only produce a singular output at zero if the input is not differentiable.

Also, I still don't understand how the singularity in the solution corresponds to initial conditions. Could you illustrate?

Edit: should add that part of why I’m confused is that the convolution of the input and the impulse response is by definition the zero state response. Or at least that is what I learned.
 
  • #44
In case I haven’t made this clear enough, I still believe a differentiator, and hence a general improper system, is unstable, impractical and unrealizable. Further, it either doesn’t exist or has singular output (depending on perspective) when we mathematically analyze useful input functions of the form ##H(t) e^{a t}## (where I am using ##H## for the unit step function). It has plenty of problems, even though I believe it is causal.
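The instability shows up numerically too. A sketch (my own numbers, not from the thread): since ##|H(j\omega)| = |\omega|## for a differentiator, a ripple of amplitude 0.001 at ##\omega = 1000## on the input comes out with amplitude near 1.

```python
import math

# Input: a clean sin(t) plus a tiny high-frequency ripple (hypothetical numbers).
h = 1e-5
t = [k * h for k in range(1001)]
x = [math.sin(s) + 1e-3 * math.sin(1000.0 * s) for s in t]

# Finite-difference differentiator.
dx = [(x[k + 1] - x[k]) / h for k in range(1000)]

# Deviation from the ripple-free derivative cos(t):
err = max(abs(dx[k] - math.cos(t[k])) for k in range(1000))
print(err)  # order 1: the 0.001 ripple was amplified by a factor near 1000
```

Any measurement noise, however small, gets the same unbounded amplification, which is one concrete sense in which the differentiator is impractical.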

Jason
 
  • #45
I agree with all of the above, but not the last statement. I cannot "believe" in a mathematical statement: it is either proved or disproved. I do believe, though, that the answer is difficult and involves distribution theory. Another fact, as I mentioned, is that improper systems cannot be put in state-space form, which is inherently causal.
Finally, I am afraid I cannot elaborate on my previous posts, and perhaps they are nonsensical.
 
  • #46
Tasos51 said:
I cannot "believe" in a mathematical statement: it is either proved or disproved.
Agreed - I used a poor choice of words. Given a domain where the system is well defined (n-times differentiable inputs in the context of classical analysis, or simply use distribution theory throughout), then the nth order differentiator can be defined so that it is causal for all definitions of causality I have found. So improper systems can be causal.

My opinion is that it is nonsense to use a domain for which the system isn't always well-defined. I believe systems should map well-defined functions to well-defined functions, or distributions to distributions, or perhaps well-defined functions to distributions. That is where the 'belief' comes in. I don't at all think less of people who disagree with me on this!

If you don't know any distribution theory already, I do recommend the text by Gasquet and Witomski, who discuss conditions for causality in the context of continuous filters. It is more mathematical than Strichartz, though, who makes the subject very accessible but doesn't include a number of topics engineers really need.

Anyway, I think I have nothing more to say. Thanks for starting this thread - and thanks to DaveE who came up with the good example to force us to sharpen our thinking. I learned a lot. If you ever come up with a proof that satisfies you, please share it with us!

jason
 
