I On (real) entire functions and the identity theorem

psie
TL;DR Summary
In a footnote in Ordinary Differential Equations by Adkins and Davidson, I read about power series of infinite radius of convergence and that they are "determined completely by its values on ##[0,\infty)##". This claim confuses me.
In Ordinary Differential Equations by Adkins and Davidson, in a chapter on the Laplace transform (specifically, in a section where they discuss the linear space ##\mathcal{E}_{q(s)}## of input functions that have Laplace transforms that can be expressed as proper rational functions with a fixed polynomial ##q(s)## in the denominator), I read the following two sentences in a footnote:

In fact, any function which has a power series with infinite radius of convergence [...] is completely determined by its values on ##[0,\infty)##. This is so since ##f(t)=\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}t^n## and ##f^{(n)}(0)## are computed from ##f(t)## on ##[0,\infty)##.

Both of these sentences confuse me, but especially the latter one. ##f^{(n)}## evaluated at ##0## depends on the values of ##f^{(n-1)}## in an arbitrarily small (two-sided) neighborhood of ##0##, not just on ##[0,\infty)##. What do they mean by "##f(t)=\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}t^n## and ##f^{(n)}(0)## are computed from ##f(t)## on ##[0,\infty)##"?

For the first sentence, I suspect they are maybe referring to the identity theorem. Suppose ##f## and ##g## are two analytic functions with domain ##\mathbb R## and suppose they agree on some subset of ##\mathbb R## that has a limit point in ##\mathbb R##. Then they agree on all of ##\mathbb R##, so we can say that an analytic function is completely determined by its values on any subset with a limit point in ##\mathbb R##, e.g. ##[0,\infty)##.
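As a concrete sanity check (my own sketch, not from the book; ##f(t)=e^t## is chosen just for illustration): the Taylor coefficients ##a_n = f^{(n)}(0)/n!## are, in the footnote's sense, obtained from ##f## on ##[0,\infty)##, and the resulting series then recovers ##f## at negative arguments too.

```python
import math

# Taylor coefficients of f(t) = e^t at 0: a_n = 1/n!.
# Conceptually f^(n)(0) is a right-hand limit, so these coefficients
# are "computed from f on [0, infinity)".
coeffs = [1.0 / math.factorial(n) for n in range(30)]

def taylor_eval(coeffs, t):
    """Evaluate the truncated power series sum_n a_n * t**n."""
    return sum(a * t**n for n, a in enumerate(coeffs))

t = -2.0  # a point outside [0, infinity)
print(taylor_eval(coeffs, t), math.exp(t))  # the series recovers e^(-2)
```

With 30 terms the truncation error at ##t=-2## is far below double precision, so the match is essentially exact.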
 
A function ##f## having a power series with infinite radius of convergence means that there exists a sequence ##(a_n)_{n \geq 0}## of real numbers such that ##f(t) = \sum_{n=0}^\infty a_n t^n## holds for every ##t \in \mathbb{R}##. It follows by direct differentiation (which can be done term by term within the radius of convergence) that ##f^{(n)}(0) = n!\,a_n##, so that ##f^{(n)}(0)## exists.
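The relation ##f^{(n)}(0) = n!\,a_n## can be checked on a small example (my own sketch; the sine series is chosen for concreteness):

```python
import math

# First few coefficients a_n of sin(t) = sum_n a_n t^n.
coeffs = [0.0, 1.0, 0.0, -1.0 / 6.0, 0.0, 1.0 / 120.0]

# Term-by-term differentiation gives f^(n)(0) = n! * a_n.
derivs_at_0 = [math.factorial(n) * a for n, a in enumerate(coeffs)]
print(derivs_at_0)  # matches sin, cos, -sin, -cos, ... evaluated at 0
```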

Since ##f^{(n)}(0)## exists, it must be equal to the one-sided limit ##\lim_{t \to 0^{+}} \frac{f^{(n-1)}(t) - f^{(n-1)}(0)}{t}##, which depends only on the values of ##f^{(n-1)}##, and hence ultimately of ##f##, on the interval ##[0, \infty)##.
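That one-sided limit is easy to see numerically (a minimal sketch of my own, with ##f(t)=e^t## assumed for concreteness): the forward difference quotient uses only values of ##f## at ##t \geq 0##, yet it converges to ##f'(0)##.

```python
import math

def forward_quotient(f, h):
    """One-sided difference quotient (f(h) - f(0)) / h, using only t >= 0."""
    return (f(h) - f(0.0)) / h

f = math.exp  # f'(0) = 1
for h in (1e-2, 1e-4, 1e-6):
    print(h, forward_quotient(f, h))  # approaches 1 as h -> 0+
```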

(The converse does not hold: for ##f^{(n)}(0)## to exist, the limit must exist and have the same value regardless of the direction from which we approach the origin.)
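To illustrate the parenthetical remark (my own example, with ##f(t)=|t|##): the right-hand difference quotient at ##0## has a perfectly good limit, but it disagrees with the left-hand one, so ##f'(0)## does not exist even though the one-sided limit does.

```python
def quotient(f, h):
    """Difference quotient (f(h) - f(0)) / h; the sign of h picks the side."""
    return (f(h) - f(0.0)) / h

f = abs  # f(t) = |t|
right = quotient(f, 1e-8)   # limit as t -> 0+ is +1
left = quotient(f, -1e-8)   # limit as t -> 0- is -1
print(right, left)  # the two one-sided limits differ, so f'(0) does not exist
```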
 
psie said:
For the first sentence, I suspect they are maybe referring to the identity theorem. Suppose ##f## and ##g## are two analytic functions with domain ##\mathbb R## and suppose they agree on some subset of ##\mathbb R## that has a limit point in ##\mathbb R##. Then they agree on all of ##\mathbb R##, so we can say that an analytic function is completely determined by its values on any subset with a limit point in ##\mathbb R##, e.g. ##[0,\infty)##.
I believe the identity theorem, that two functions agreeing on a set containing a limit point (i.e., not just a discrete set) are equal everywhere, only applies to complex-analytic functions, not real-analytic ones. Maybe it applies if the latter can be extended to a complex-analytic function (as its real part).
 