On (real) entire functions and the identity theorem

  • #1
psie
TL;DR Summary
In a footnote in Ordinary Differential Equations by Adkins and Davidson, I read that a function whose power series has infinite radius of convergence is "completely determined by its values on ##[0,\infty)##". This claim confuses me.
In Ordinary Differential Equations by Adkins and Davidson, in a chapter on the Laplace transform (specifically, in a section discussing the linear space ##\mathcal{E}_{q(s)}## of input functions whose Laplace transforms can be expressed as proper rational functions with a fixed polynomial ##q(s)## in the denominator), I read the following two sentences in a footnote:

In fact, any function which has a power series with infinite radius of convergence [...] is completely determined by its values on ##[0,\infty)##. This is so since ##f(t)=\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}t^n## and ##f^{(n)}(0)## are computed from ##f(t)## on ##[0,\infty)##.

Both of these sentences confuse me, but especially the latter one. ##f^{(n)}## evaluated at ##0## depends on the values of ##f^{(n-1)}## in an arbitrarily small neighborhood of ##0##. What do they mean by "##f(t)=\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}t^n## and ##f^{(n)}(0)## are computed from ##f(t)## on ##[0,\infty)##"?

For the first sentence, I suspect they may be referring to the identity theorem. Suppose ##f## and ##g## are two analytic functions with domain ##\mathbb R## and suppose they agree on some subset of ##\mathbb R## that has a limit point in ##\mathbb R##. Then they agree on all of ##\mathbb R##, so an analytic function is completely determined by its values on any subset with a limit point, e.g. ##[0,\infty)##.
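
As a quick sanity check (my own illustration, not from the book): the partial sums of the Taylor series of ##\sin## at ##0## reconstruct ##\sin(-1)##, a value off ##[0,\infty)##, from the coefficients ##f^{(n)}(0)/n!## alone.

```python
# Illustrative sketch: an analytic function with infinite radius of
# convergence is pinned down off [0, inf) by its Taylor data at 0.
# Reconstruct sin(-1) from the series sum_m (-1)^m t^(2m+1) / (2m+1)!.
from math import factorial, sin

t = -1.0
partial = sum((-1) ** m * t ** (2 * m + 1) / factorial(2 * m + 1) for m in range(10))
print(partial, sin(t))  # both ~ -0.8414709848
```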
 
  • #2
Saying that [itex]f[/itex] has a power series with infinite radius of convergence means that there exists a sequence [itex](a_n)_{n \geq 0}[/itex] of real numbers such that [tex]
f(t) = \sum_{n=0}^\infty a_n t^n[/tex] is true for every [itex]t \in \mathbb{R}[/itex]. It follows by direct differentiation (which can be done term-by-term within the radius of convergence) that [tex]
f^{(n)}(0) = n!a_n[/tex] so that [itex]f^{(n)}(0)[/itex] exists.

Since [itex]f^{(n)}(0)[/itex] exists, it must be equal to the one-sided limit [tex]\lim_{t \to 0^{+}} \frac{f^{(n-1)}(t) - f^{(n-1)}(0)}{t}[/tex] which depends only on the values of [itex]f^{(n-1)}[/itex], and hence ultimately of [itex]f[/itex], on the interval [itex][0, \infty)[/itex].

(The converse does not hold: for [itex]f^{(n)}(0)[/itex] to exist we need the existence and value of the limit to be independent of the direction in which we approach the origin.)
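
To illustrate (a minimal numerical sketch, assuming [itex]f(t) = e^t[/itex]; the helper name forward_diff is my own): the [itex]n[/itex]-th forward difference uses only samples at [itex]t = 0, h, \dots, nh[/itex], all inside [itex][0,\infty)[/itex], and recovers [itex]f^{(n)}(0) = n!\,a_n[/itex] up to an [itex]O(h)[/itex] error.

```python
# Minimal sketch: estimate f^(n)(0) from one-sided samples only, via
#   f^(n)(0) ~ h^(-n) * sum_{k=0}^{n} (-1)^(n-k) * C(n, k) * f(k*h).
from math import comb, exp

def forward_diff(f, n, h):
    """n-th derivative estimate at 0 using f(0), f(h), ..., f(n*h)."""
    return sum((-1) ** (n - k) * comb(n, k) * f(k * h) for k in range(n + 1)) / h ** n

for n in range(5):
    # For f = e^t, a_n = 1/n!, so f^(n)(0) = n! * a_n = 1 for every n.
    print(n, forward_diff(exp, n, h=1e-2))
```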
 
  • #3
psie said:
For the first sentence, I suspect they may be referring to the identity theorem. Suppose ##f## and ##g## are two analytic functions with domain ##\mathbb R## and suppose they agree on some subset of ##\mathbb R## that has a limit point in ##\mathbb R##. Then they agree on all of ##\mathbb R##, so an analytic function is completely determined by its values on any subset with a limit point, e.g. ##[0,\infty)##.
I believe the identity theorem, i.e. that two functions which agree on a set containing a limit point (not just a discrete set) are equal everywhere, only applies to complex-analytic functions, not real-analytic ones. Maybe it applies if the latter can be extended to a complex-analytic function (as its real part).
 

1. What is an entire function?

An entire function is a complex function that is holomorphic on the entire complex plane; that is, it is complex-differentiable at every point of the complex plane.
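
For a concrete check (an illustrative sketch using SymPy, not part of the original answer): the Cauchy-Riemann equations for e^z = e^x(cos y + i sin y) hold at every point, consistent with e^z being entire.

```python
# Sketch: verify the Cauchy-Riemann equations u_x = v_y, u_y = -v_x
# for f(z) = e^z, whose real and imaginary parts are given below.
import sympy as sp

x, y = sp.symbols("x y", real=True)
u = sp.exp(x) * sp.cos(y)  # Re e^z
v = sp.exp(x) * sp.sin(y)  # Im e^z
print(sp.simplify(sp.diff(u, x) - sp.diff(v, y)))  # 0
print(sp.simplify(sp.diff(u, y) + sp.diff(v, x)))  # 0
```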

2. What is the identity theorem for entire functions?

The identity theorem for entire functions states that if two entire functions are equal on a set that has a limit point in the complex plane, then they are equal everywhere on the complex plane.
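
A classic application (my illustration, not from the thread): sin^2 z + cos^2 z - 1 is entire and vanishes on the real axis, a set with limit points, so by the identity theorem it vanishes on all of C. A numerical spot check:

```python
# Spot-check the entire function sin(z)^2 + cos(z)^2 - 1 at a point
# far from the real axis; the identity theorem forces it to be 0 there.
import cmath

z = 2.0 + 3.0j
print(cmath.sin(z) ** 2 + cmath.cos(z) ** 2 - 1)  # ~ 0, up to rounding
```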

3. How does the identity theorem for entire functions relate to analytic continuation?

The identity theorem underpins analytic continuation, the process of extending the domain of definition of an analytic function: it guarantees that an analytic continuation, when one exists, is unique.
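
For example (an illustrative sketch, not from the original answer): the geometric series in z converges only for |z| < 1, but 1/(1 - z) continues it analytically to the whole plane minus z = 1, and by the identity theorem no other continuation is possible.

```python
# Sketch: inside the unit disc the series and 1/(1 - z) agree; outside,
# only the closed form 1/(1 - z) (the unique continuation) makes sense.
z_inside = 0.5 + 0.2j
print(sum(z_inside ** n for n in range(200)), 1 / (1 - z_inside))  # equal
z_outside = 2.0 + 1.0j
print(1 / (1 - z_outside))  # continuation's value; the series diverges here
```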

4. Can an entire function have an essential singularity?

No. An entire function has no singularities of any kind in the finite complex plane, since it is holomorphic everywhere there. It can, however, have a singularity at infinity: any non-polynomial entire function, such as e^z, has an essential singularity at infinity.
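
A quick numerical illustration (mine, not from the original answer): |e^z| blows up along the positive real axis but vanishes along the negative real axis, the kind of direction-dependent behavior at infinity that marks an essential singularity there.

```python
# Sketch: e^z has no single limit as |z| -> infinity; compare the two rays.
from math import exp

for r in (1, 10, 100):
    print(r, exp(r), exp(-r))  # grows without bound one way, -> 0 the other
```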

5. What are some examples of entire functions?

Some examples of entire functions include polynomials, the exponential function, the sine and cosine functions, and more generally any function given by a power series with infinite radius of convergence.
