On (real) entire functions and the identity theorem

SUMMARY

The discussion centers on the concept of entire functions and the identity theorem as presented in "Ordinary Differential Equations" by Adkins and Davidson. It highlights that functions with a power series of infinite radius of convergence are completely determined by their values on the interval [0, ∞). The confusion arises from the relationship between the derivatives of these functions at zero and their values in a neighborhood around zero. The identity theorem is referenced, clarifying that it applies to analytic functions that agree on a subinterval with a limit point, establishing their equality across their entire domain.

PREREQUISITES
  • Understanding of entire functions and their properties
  • Familiarity with power series and their convergence
  • Knowledge of the identity theorem in complex analysis
  • Basic concepts of Laplace transforms and their applications
NEXT STEPS
  • Study the identity theorem in complex analysis in detail
  • Explore the properties of entire functions and their derivatives
  • Learn about power series and their convergence criteria
  • Investigate the applications of Laplace transforms in differential equations
USEFUL FOR

Mathematicians, students of differential equations, and anyone interested in the properties of analytic functions and their applications in real and complex analysis.

psie
TL;DR Summary: In a footnote in Ordinary Differential Equations by Adkins and Davidson, I read that a function given by a power series with infinite radius of convergence is "determined completely by its values on ##[0,\infty)##". This claim confuses me.
In Ordinary Differential Equations by Adkins and Davidson, in a chapter on the Laplace transform (specifically, in a section where they discuss the linear space ##\mathcal{E}_{q(s)}## of input functions whose Laplace transforms can be expressed as proper rational functions with a fixed polynomial ##q(s)## in the denominator), I read the following two sentences in a footnote:

In fact, any function which has a power series with infinite radius of convergence [...] is completely determined by its values on ##[0,\infty)##. This is so since ##f(t)=\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}t^n## and ##f^{(n)}(0)## are computed from ##f(t)## on ##[0,\infty)##.

Both of these sentences confuse me, but especially the latter one. ##f^{(n)}## evaluated at ##0## depends on the values of ##f^{(n-1)}## in an arbitrarily small neighborhood around ##0##. What do they mean by "##f(t)=\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}t^n## and ##f^{(n)}(0)## are computed from ##f(t)## on ##[0,\infty)##"?

For the first sentence, I suspect they are referring to the identity theorem. Suppose ##f## and ##g## are two analytic functions with domain ##\mathbb R## and suppose they agree on some subinterval of ##\mathbb R## with a limit point in ##\mathbb R##. Then they agree on all of ##\mathbb R##, so we can say that an analytic function is completely determined by its values on a subinterval with a limit point in ##\mathbb R##, e.g. ##[0,\infty)##.
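For concreteness, a minimal sympy sketch (my own illustration, not from the book): the Taylor coefficients of ##e^{-t}## at ##0## can in principle be obtained from one-sided limits, i.e. from the values on ##[0,\infty)## alone, and summing the resulting series at a negative point recovers the function there.

```python
# Minimal sketch: the Taylor coefficients of f(t) = exp(-t) at 0 are fixed
# by one-sided derivatives, hence by f on [0, oo); summing the truncated
# series at a negative point reproduces f there.
import sympy as sp

t = sp.symbols('t')
f = sp.exp(-t)

N = 30  # truncation order
coeffs = [f.diff(t, n).subs(t, 0) / sp.factorial(n) for n in range(N)]
t0 = sp.Rational(-1)  # a point outside [0, oo)
series_value = sum(c * t0**n for n, c in enumerate(coeffs))

print(sp.N(series_value), sp.N(f.subs(t, t0)))  # both ~ 2.71828...
```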
 
A function ##f## given by a power series with infinite radius of convergence means that there exists a sequence ##(a_n)_{n \geq 0}## of real numbers such that $$f(t) = \sum_{n=0}^\infty a_n t^n$$ holds for every ##t \in \mathbb{R}##. It follows by direct differentiation (which can be done term by term within the radius of convergence) that $$f^{(n)}(0) = n!\,a_n,$$ so ##f^{(n)}(0)## exists.
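Spelled out (my rendering of the standard power series fact, not a quote from the post): differentiating ##k## times term by term gives, for every ##t \in \mathbb{R}##,
$$f^{(k)}(t) = \sum_{n=k}^{\infty} \frac{n!}{(n-k)!}\, a_n\, t^{n-k},$$
and at ##t = 0## only the ##n = k## term survives, leaving ##f^{(k)}(0) = k!\, a_k##.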

Since ##f^{(n)}(0)## exists, it must be equal to the one-sided limit $$\lim_{t \to 0^{+}} \frac{f^{(n-1)}(t) - f^{(n-1)}(0)}{t},$$ which depends only on the values of ##f^{(n-1)}##, and hence ultimately of ##f##, on the interval ##[0, \infty)##.
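A quick numerical illustration of this point (my own sketch, assuming ##f(t) = e^t##): the one-sided difference quotient uses only values of ##f## at ##t \geq 0## and still converges to ##f'(0)##.

```python
# One-sided estimate of f'(0) for f(t) = exp(t), using only t >= 0.
import math

f = math.exp
for t in [1e-1, 1e-3, 1e-5, 1e-7]:
    print(t, (f(t) - f(0)) / t)  # approaches f'(0) = 1 as t -> 0+
```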

(The converse does not hold: for ##f^{(n)}(0)## to exist, we need the existence and value of the limit to be independent of the direction in which we approach the origin.)
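To illustrate the caveat (a hypothetical example of mine, not from the thread): ##f(t) = |t|## has right-hand difference quotient ##+1## and left-hand difference quotient ##-1## at the origin, so the one-sided limit exists while ##f'(0)## does not.

```python
# f(t) = |t|: the difference quotient at 0 depends on the direction.
for t in [1e-3, -1e-3]:
    print(t, (abs(t) - abs(0.0)) / t)  # +1.0 from the right, -1.0 from the left
```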
 
psie said:
For the first sentence, I suspect they are referring to the identity theorem. Suppose ##f## and ##g## are two analytic functions with domain ##\mathbb R## and suppose they agree on some subinterval of ##\mathbb R## with a limit point in ##\mathbb R##. Then they agree on all of ##\mathbb R##, so we can say that an analytic function is completely determined by its values on a subinterval with a limit point in ##\mathbb R##, e.g. ##[0,\infty)##.
I believe the identity theorem, that two functions agreeing on a subset containing a limit point (i.e., not just a discrete set) are equal everywhere, only applies to complex-analytic functions, not real-analytic ones. Maybe it applies if the latter can be extended to a complex-analytic function (as its real part).
 
