Is the series [tex] \sum_{n=0}^{\infty}\frac{z^n}{n!}[/tex] uniformly convergent for all z in the complex plane? It is uniformly convergent for all z in any bounded set, but the complex plane is unbounded. My instinct is that it is NOT uniformly convergent for all z in C. This is not homework.
Do you mean the Weierstrass M-test? How would one apply the M-test to an unbounded set? I can only figure out how to apply it to successively bigger bounded sets, but I don't feel comfortable exchanging limiting operations to conclude that the series is uniformly convergent for all z in C. It is my understanding that what is meant by "the sequence S_n(z) converges uniformly to S for all z in A" is "given any e>0, there exists N(e) such that |S_n(z)-S(z)|<e for all z in A and for all n>N(e)". If we apply the M-test to successively larger bounded subsets of the complex plane, this function N(e) changes. No matter how large we make our bounded set, there will still be points outside the set which require a different N(e). How do we know there exists some N(e) that works for EVERY point in the plane?
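The M-test argument on a bounded disk can be checked numerically. Below is a minimal Python sketch (Python is my choice of tool, not part of the thread): on the closed disk |z| ≤ R we can take M_n = R^n/n!, whose sum is e^R, so the tail sum over n > N bounds the remainder by a constant that does not depend on z.

```python
import cmath
import math

# Minimal sketch of the Weierstrass M-test on the disk |z| <= R:
# |z^n / n!| <= M_n = R^n / n!, and sum(M_n) = e^R < infinity,
# so the tail sum over n > N bounds the remainder for EVERY z in the disk.
R, N = 5.0, 10
tail = sum(R**n / math.factorial(n) for n in range(N + 1, 100))

def partial_sum(z, N):
    return sum(z**n / math.factorial(n) for n in range(N + 1))

# Sample the boundary circle |z| = R, where |z^n| is largest.
points = [R * cmath.exp(2j * math.pi * k / 100) for k in range(100)]
worst = max(abs(cmath.exp(z) - partial_sum(z, N)) for z in points)
print(worst <= tail + 1e-9)   # one z-independent bound covers all samples
```

This is exactly the z-independent N(e) the post asks about, but only on a bounded set; the tail bound e^R - S_N(R) grows with R, which is why the argument does not pass to all of C.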
No, the function diverges as z approaches positive real infinity. Therefore it can't be uniformly convergent on C. Note: In general, if a function diverges at either a or b, then any series representation can't be uniformly convergent on [a,b]. But it can be uniformly convergent on a subinterval of [a,b].
Interesting, thanks. How about this similar situation? Let C_R be the contour z = R*e^(it), 0 < t < pi/4, and let f(z) = e^(-z^2). I can't figure out whether |z*f(z)| tends uniformly to zero as R -> infinity. It works for 0 < t < (pi/4) - e for any e > 0, but that is inconclusive for the original interval. I vaguely remember some theorems from complex analysis about compact sets and convergence, but it's all a bit hazy. Is this a similar situation? Actually, now that I look at it, |zf(z)| = R*e^(-R^2 cos 2t), which diverges as R -> infinity if t = pi/4, so does that mean, as you mentioned above, that |zf(z)| can't tend uniformly to zero on the whole interval?
That's not a very accurate way to justify the conclusion. For example, the identity mapping [itex]z\mapsto z[/itex] also diverges as z approaches positive infinity, but its Taylor series converges uniformly on [tex]\mathbb{C}[/tex] (the series is just z itself). The key thing is to notice [tex] \underset{z\in\mathbb{C}}{\sup}\;\big| e^{z} - \sum_{n=0}^N \frac{1}{n!} z^n\big| = \infty,\quad \forall N\in\mathbb{N}. [/tex]
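The unboundedness of the sup can be seen numerically along the positive real axis alone. A quick Python check (my own illustration): for any fixed N, the remainder |e^x - S_N(x)| grows without bound as x increases, since the single tail term x^(N+1)/(N+1)! already does.

```python
import math

# For fixed N, the remainder e^x - S_N(x) blows up along the real axis,
# which is enough to make the sup over all of C infinite.
def remainder(x, N):
    return math.exp(x) - sum(x**n / math.factorial(n) for n in range(N + 1))

N = 10
vals = [remainder(x, N) for x in (5.0, 10.0, 20.0, 40.0)]
print(all(a < b for a, b in zip(vals, vals[1:])))  # strictly increasing
```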
jostpuur, does your statement imply that in my second example, |zf(z)| DOES tend uniformly to zero? We could write: [tex]\sup_{\theta \in (0,\pi/4)} |zf(z)|=R, \quad \forall R>0[/tex]
This is correct. Actually even [tex] e^{-R^2 \cos(2\theta)},\quad 0\leq \theta\leq \frac{\pi}{4} [/tex] does not converge uniformly towards zero, because [tex] \theta=\frac{\pi}{4}\quad\implies\quad e^{-R^2\cos(2\theta)} = 1,\quad\quad \forall R>0. [/tex] (edit: It seems you already knew this, though...)

No, I did not attempt to state anything like that. Your equation also shows that the limit [tex]R\to\infty[/tex] does not produce uniform convergence. Was that your point? (edit: I had difficulty understanding the equation at first, so I went through a series of edits.)

Am I correct to guess that your original problem is to figure out whether [tex] \int\limits_{\gamma} dz\; e^{-z^2} [/tex] exists when [tex]\gamma=\{Re^{i\theta}\;|\; 0\leq \theta \leq \frac{\pi}{4}\}[/tex] (counter clockwise), and whether it approaches zero when [tex]R\to \infty[/tex]?
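The failure of uniform convergence on the closed arc is easy to see numerically; here is a small Python sketch (my own, not from the thread). The supremum of R·e^(-R^2·cos 2θ) over 0 ≤ θ ≤ π/4 equals R (attained at θ = π/4), so it grows instead of shrinking as R → ∞.

```python
import math

# Supremum of R * exp(-R^2 * cos(2*theta)) over the arc 0 <= theta <= pi/4.
# At theta = pi/4 the exponent vanishes, so the sup is R itself: it grows
# with R, hence no uniform convergence to zero on the closed arc.
def sup_on_arc(R, samples=10_000):
    return max(R * math.exp(-R**2 * math.cos(2 * t))
               for t in (math.pi / 4 * k / samples for k in range(samples + 1)))

print([round(sup_on_arc(R), 6) for R in (1.0, 10.0, 100.0)])  # grows like R
```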
Thanks, you've clarified a lot of my confusion. You are correct, that is the problem. I believe it doesn't approach zero. A colleague of mine claims that it does, by considering the contour [tex]\gamma=\lbrace Re^{i\theta}|0\le \theta \le (\frac{\pi}{4}-\epsilon)\rbrace[/tex] and letting [tex]\epsilon\to 0[/tex]. What is your opinion on this?
The sequence of functions [tex]f_n:[0,1]\to\mathbb{R}[/tex], [tex]f_n = n\chi_{[0,\frac{1}{n}]}[/tex], is a standard example of functions which converge uniformly towards zero on [tex][\epsilon,1][/tex] for all [tex]\epsilon > 0[/tex], but still [tex] \lim_{n\to\infty} \int\limits_0^1 dx\; f_n(x) = \lim_{n\to\infty} 1 = 1 \neq 0. [/tex] So if your colleague says that the integral vanishes on [tex][0,\frac{\pi}{4}][/tex] because it vanishes on [tex][0,\frac{\pi}{4} - \epsilon][/tex] (while the integrand happens to approach infinity at [tex]\frac{\pi}{4}[/tex]!), he or she is surely wrong there :tongue:

However! It could be that he or she is still right about the final claim, because it can be that the integral actually does vanish. I'm not fully sure about that right now. If the integral does vanish, it is probably not very difficult to find some collection of functions [tex]g_R:[0,\frac{\pi}{4}]\to [0,\infty[[/tex] such that [tex] \int\limits_0^{\pi /4} d\theta\; g_R(\theta) [/tex] can actually be calculated, such that [tex] R e^{-R^2\cos(2\theta)} \leq g_R(\theta) [/tex] and such that [tex] \lim_{R\to\infty} \int\limits_0^{\pi /4} d\theta\; g_R(\theta) = 0. [/tex] If nobody else finds that kind of dominating function soon, I may try to find one sooner or later. No promises, though.
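The counterexample above can be spelled out in a few lines of Python (my own sketch): the integral of f_n over [0,1] is exactly 1 for every n, yet at any fixed point x > 0 the values f_n(x) are eventually 0.

```python
# The standard counterexample f_n = n * indicator([0, 1/n]):
# the integral over [0,1] stays 1 for every n, while f_n -> 0
# at every fixed x > 0 (indeed f_n(x) = 0 once 1/n < x).
def f(n, x):
    return n if x <= 1.0 / n else 0.0

def integral_0_to_1(n):
    return n * (1.0 / n)   # exact: height n times width 1/n

print([integral_0_to_1(n) for n in (1, 10, 1000)])   # always 1.0
print([f(n, 0.25) for n in (1, 10, 1000)])           # 0 once n > 4
```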
Starting with inequality [tex] 1 - \frac{4}{\pi}\theta \leq \cos(2\theta),\quad\quad 0\leq \theta\leq \frac{\pi}{4} [/tex] we should get this: [tex] \int\limits_0^{\pi/4} d\theta\; R e^{-R^2\cos(2\theta)} \leq \frac{\pi}{4R}\big(1 - e^{-R^2}\big)\;\underset{R\to\infty}{\to}\; 0 [/tex] Doesn't this settle the question?
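The dominating bound can be sanity-checked numerically. Below is a minimal Python sketch (my own check, assuming a simple midpoint rule is accurate enough here): the arc integral stays below π/(4R)·(1 - e^(-R²)) and shrinks as R grows.

```python
import math

# Numerical check of the bound obtained from 1 - (4/pi)*theta <= cos(2*theta)
# on [0, pi/4]:
#   integral_0^{pi/4} R*exp(-R^2*cos(2t)) dt  <=  (pi/(4R)) * (1 - exp(-R^2)).
def arc_integral(R, samples=100_000):
    h = (math.pi / 4) / samples
    # simple midpoint rule
    return sum(R * math.exp(-R**2 * math.cos(2 * (k + 0.5) * h)) * h
               for k in range(samples))

for R in (2.0, 5.0, 20.0):
    bound = math.pi / (4 * R) * (1 - math.exp(-R**2))
    print(arc_integral(R) <= bound, round(bound, 6))
```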
Yes, that works perfectly, thanks. Regarding the inequality for cos(2θ), I see that you have bounded the function from below by the straight line segment that connects the two end points. Is that how you came up with the inequality? Should I always try such a straight line segment when I encounter trig functions in this context?

The colleague I mentioned is my maths professor, so I'm not sure I can dismiss his argument straight away. I believe his argument goes like this:
- Show that |zf(z)| goes uniformly to zero for [tex]0\le\theta\le(\frac{\pi}{4}-\epsilon)[/tex].
- Apply a limiting contour theorem to show that the integral over [tex]0\le\theta\le(\frac{\pi}{4}-\epsilon)[/tex] goes to zero.
- Interchange the limits [tex]\lim_{R\to\infty}[/tex] and [tex]\lim_{\epsilon\to 0}[/tex].
- Conclude that the integral over [tex]0\le\theta\le\frac{\pi}{4}[/tex] vanishes.

The problem lies in justifying the interchange of limits. He tends to be evasive when I ask him to justify interchanging limits. Do you think it is justified here? Personally I prefer your method; it is far more straightforward.
Yep, "from the picture". One could prove it more rigorously too, with not much effort, by comparing the derivatives.

When it works. The suitable dominating functions naturally look different in different problems. I have used the same kind of "line" at least once before, when dealing with a complex contour integral.

hmhm.. better be careful then. I'm safely far away from any conflicts myself, of course.

Yeah, it can be a pretty big problem sometimes. Look at the example I showed earlier. These functions: [tex] f_n:[0,1]\to\mathbb{R},\quad\quad f_n(x)=\left\{\begin{array}{ll} n,\quad &0\leq x\leq \frac{1}{n}\\ 0,\quad &\frac{1}{n} < x \leq 1\\ \end{array}\right. [/tex] Now the following equations are true: [tex] \int\limits_{\epsilon}^1 dx\; f_n(x) = \left\{\begin{array}{ll} 1 - \epsilon n,\quad & \epsilon \leq \frac{1}{n}\\ 0,\quad & \frac{1}{n} < \epsilon\\ \end{array}\right. [/tex] [tex] \lim_{\epsilon\to 0} \int\limits_{\epsilon}^1 dx\; f_n(x) = 1 [/tex] [tex] \lim_{n\to\infty} \int\limits_{\epsilon}^1 dx\; f_n(x) = 0,\quad\quad \epsilon > 0 [/tex] So: [tex] \lim_{n\to\infty} \lim_{\epsilon\to 0} \int\limits_{\epsilon}^1 dx\; f_n(x) = 1 \neq 0 = \lim_{\epsilon\to 0} \lim_{n\to\infty} \int\limits_{\epsilon}^1 dx\; f_n(x) [/tex]

Sounds like a dilemma. The interchange is allowed once it has been proven somehow, but not before. As my example shows, changing the order of the limits can change the value of the integral. The commutation of the two limits doesn't look like a handy "proving trick", because you first need to prove something else in order to prove that the limits can be commuted.
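The two iterated limits in the example can be computed in closed form, since the inner integral is I(n, ε) = max(0, 1 − εn). A few lines of Python (my own sketch) make the mismatch concrete:

```python
# Iterated limits of I(n, eps) = integral_eps^1 f_n(x) dx, where
# f_n = n on [0, 1/n] and 0 elsewhere, so I(n, eps) = max(0, 1 - eps*n).
def I(n, eps):
    return max(0.0, 1.0 - eps * n)

# lim_{n->inf} lim_{eps->0} I : take eps tiny first, then grow n -> stays near 1
eps_limit_first = [I(n, 1e-12) for n in (10, 1000, 10**6)]
# lim_{eps->0} lim_{n->inf} I : take n huge first, then shrink eps -> stays 0
n_limit_first = [I(10**9, eps) for eps in (0.1, 0.001, 1e-6)]
print(eps_limit_first, n_limit_first)
```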
Thank you for your detailed responses. Perhaps my professor was just being hasty in making that argument. Obviously he knows the integral does go to zero, maybe he applied the commutation of limits because in his mind it is justified by another proof, such as the one you showed. He has made slip-ups like this before. In any case, I will meet with him tomorrow to discuss it, and I will tell you what he said. Is absolute continuity involved with this commutation of limits? I know it is involved with a situation like the following: [tex]\frac{\partial}{\partial t}\int f(x,t)dx=\int \frac{\partial}{\partial t}f(x,t)dx[/tex]. This commutation of limits is justified when f(x,t) is absolutely continuous, as is my understanding. If we knew [tex]e^{-z^2}[/tex] were absolutely continuous, would that justify the interchange of limits?
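For the differentiation-under-the-integral identity mentioned above, a quick numerical check is possible. Here is a hedged Python sketch with a test function of my own choosing, f(x,t) = exp(−t·x²) on [0,1] (smooth and well-behaved, which is what licenses the interchange; this is an illustration, not a proof of the absolute-continuity criterion):

```python
import math

# Check d/dt integral_0^1 f(x,t) dx == integral_0^1 (df/dt)(x,t) dx
# for f(x,t) = exp(-t*x^2), comparing a central difference in t against
# the numerically integrated partial derivative -x^2 * exp(-t*x^2).
def quad(g, a, b, n=20_000):
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) * h for k in range(n))  # midpoint rule

t, dt = 2.0, 1e-5
lhs = (quad(lambda x: math.exp(-(t + dt) * x**2), 0.0, 1.0)
       - quad(lambda x: math.exp(-(t - dt) * x**2), 0.0, 1.0)) / (2 * dt)
rhs = quad(lambda x: -x**2 * math.exp(-t * x**2), 0.0, 1.0)
print(abs(lhs - rhs) < 1e-6)
```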
My professor confirmed that his method of interchanging the limits is valid if the integrand is absolutely continuous in epsilon. It is difficult to prove, but in this case it is true. He much preferred your method, jostpuur.

EDIT: After taking R to infinity, he has this equation: [tex]\int_0^{\infty}e^{ir^2 e^{2i\epsilon}}dr=\int_0^{\infty}e^{-r^2}dr[/tex] where the arc integral has gone to zero. The integrand on the left hand side is absolutely continuous, so we have [tex]\lim_{\epsilon\to 0}\int_0^{\infty}e^{ir^2 e^{2i\epsilon}}dr=\int_0^{\infty}\lim_{\epsilon\to 0}e^{ir^2 e^{2i\epsilon}}dr[/tex]