more second semester honors calc notes
a bit more second semester honors calc notes. These are completed by the series for e^x, cos(x), sin(x), and a proof that a convergent power series may be differentiated term by term.
Series of functions:
Example: power series:
Consider the functions x^n for n >= 0, and the formal geometric series expansion 1/(1+x) = 1 - x + x^2 - x^3 + x^4 - + ... We know the rhs equals the lhs for any choice of x with |x|<1 by the previous example. We claim this series of functions converges to the function 1/(1+x) on the lhs in the sup norm, on any interval [-r,r] where 0<r<1, (but not on all of
(-1,1)). I.e. since the partial sum 1 - x + x^2 - x^3 + ...+(-1)^n x^n =
[1/(1+x) - (-1)^(n+1)x^(n+1)/(1+x)], we have again that
| 1/(1+x) - (1 - x + x^2 - x^3 + ...+(-1)^n x^n)| = |x^(n+1)/(1+x)| for any real number x ≠ -1. Now since on the interval [-r,r] we have |x| <= r, and 1+x >= 1-r > 0, it follows that for all x in that interval, |x^(n+1)/(1+x)| <= r^(n+1)/(1-r). Hence to show that || 1/(1+x) - (1 - x + x^2 - x^3 + ...+(-1)^n x^n)|| = ||x^(n+1)/(1+x)|| approaches zero, where ||.|| denotes the sup norm on [-r,r], it suffices to show that r^(n+1) --> 0, which we have done above. Thus 1/(1+x) = 1 - x + x^2 - x^3 + x^4 - + ..., for all x with |x|<1, and convergence holds in the sup norm on any closed bounded interval strictly contained in (-1,1). QED.
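As a quick numerical aside (a Python sketch, not part of the notes), one can watch the sup-norm error of the partial sums on [-r,r] shrink, and check it stays within the bound r^(n+1)/(1-r) derived above:

```python
# Sketch only: compare the sup-norm error of the geometric partial sums
# on [-r, r] with the bound r^(n+1)/(1-r) from the argument above.

def partial_sum(x, n):
    """Partial sum 1 - x + x^2 - ... + (-1)^n x^n."""
    return sum((-x) ** k for k in range(n + 1))

def sup_error(r, n, points=1001):
    """Grid approximation of sup over [-r, r] of |1/(1+x) - partial sum|."""
    xs = [-r + 2 * r * i / (points - 1) for i in range(points)]
    return max(abs(1 / (1 + x) - partial_sum(x, n)) for x in xs)

r = 0.5
for n in (5, 10, 20):
    print(n, sup_error(r, n), r ** (n + 1) / (1 - r))
```

The error is largest at the endpoint x = -r, where the bound is actually attained.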
Exercise: (i) If a sequence of functions {fn} converges to f in the sup norm on [a,b], then the integrals also converge, i.e. the sequence of real numbers (integral of fn from a to b) converges to the real number (integral of f from a to b).
(ii) In fact the indefinite integrals Gn = (integral of fn from a to x), which are functions on [a,b], also converge to the function G = (integral of f from a to x), in the sup norm.
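A numeric sketch of part (i), in Python (illustration only; the whole point is the inequality |(integral of fn) - (integral of f)| <= (b-a)·||fn - f||). Here fn(x) = x^n/n on [0,1], which converges to f = 0 in the sup norm:

```python
# Sketch of part (i): if ||fn - f|| --> 0 on [a,b], then
# |(integral of fn) - (integral of f)| <= (b-a) * ||fn - f|| --> 0.
# Here fn(x) = x^n / n on [0,1] converges to f = 0 in the sup norm.

def trapezoid(g, a, b, steps=1000):
    """Trapezoid-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / steps
    total = 0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, steps))
    return total * h

for n in (1, 5, 25):
    sup_norm = 1 / n   # ||fn - 0|| on [0,1] equals fn(1) = 1/n
    integral = trapezoid(lambda x: x ** n / n, 0.0, 1.0)
    print(n, sup_norm, integral)   # integral is about 1/(n*(n+1)), --> 0
```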
Approximation of transcendental functions by polynomials
Example: ln(1+x):
By the previous example, 1/(1+x) = 1 - x + x^2 - x^3 + x^4 - + ..., for all x with |x|<1, and convergence holds in the sup norm on any closed bounded interval strictly contained in (-1,1). Consequently, by an exercise above, on any interval [-r,r] with 0<r<1, the series of indefinite integrals (starting at 0) of the series 1 - x + x^2 - x^3 + x^4 - + ..., converges to the indefinite integral of 1/(1+x).
I.e. the series x - x^2/2 + x^3/3 - x^4/4 + x^5/5 - x^6/6 ±... converges in the sup norm on [-r,r], to (integral of 1/(1+t) from t=0 to t = x)= ln(1+x). Thus ln(1+x) =
x - x^2/2 + x^3/3 - x^4/4 ±..., for each x with |x| < 1, and convergence holds in the sup norm on any [-r,r] with 0<r<1. Now because the series has alternating signs, it can be shown that it also converges for x = 1, to ln(2), and yields the amazing formula ln(2) = 1 - 1/2 + 1/3 - 1/4 + 1/5 - + ...
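As a sanity check (a Python aside, not in the notes), the partial sums of the alternating harmonic series do creep toward ln(2); for an alternating series with decreasing terms the error is at most the first omitted term, here 1/(n+1):

```python
import math

def alt_harmonic(n):
    """Partial sum 1 - 1/2 + 1/3 - ... +- 1/n of the alternating harmonic series."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

for n in (10, 100, 1000):
    # error is at most the first omitted term, 1/(n+1)
    print(n, alt_harmonic(n), abs(alt_harmonic(n) - math.log(2)))
```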
Example: arctan(x): The geometric series 1 - x^2 + x^4 - x^6 + - ..., converges to 1/(1+x^2), for each x with |x| < 1, and in the sup norm on any interval [-r,r] with 0<r<1. Hence the series of indefinite integrals, starting from 0, converges to the indefinite integral of the limit.
I.e. x - x^3/3 + x^5/5 - x^7/7 ±... = (integral of 1/(1+t^2) from t=0 to t=x) = arctan(x), again in the sup norm on any closed interval strictly contained in the interval (-1,1). Again convergence holds also for x = 1, yielding the even more amazing
formula: pi/4 = 1 - 1/3 + 1/5 - 1/7 + 1/9 - + ...
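Again a quick check (a Python aside): partial sums of this series approach pi/4, though very slowly:

```python
import math

def leibniz(n):
    """Partial sum 1 - 1/3 + 1/5 - ... + (-1)^n/(2n+1)."""
    return sum((-1) ** k / (2 * k + 1) for k in range(n + 1))

for n in (10, 100, 1000):
    print(n, 4 * leibniz(n), math.pi)   # converges, but very slowly
```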
Next we want to find series expressions for e^x, sin(x), and cos(x). Since the derivatives of these functions are no simpler than the functions themselves, we cannot proceed in the same way as before. We need some criteria guaranteeing convergence of sequences and series when we do not know what the limits are precisely. They all involve exploiting the notion of boundedness.
Convergence of monotone sequences
Lemma: A convergent sequence must be bounded. I.e. if {sn} converges, then there is some positive number K such that for all n, |sn| <= K.
proof: By definition of convergence, if {sn} converges to s, then given say e = 1, there is an N such that all elements after sN are within a distance 1 of s, so that for all n >=N, we have |sn| <= |s| + 1. Hence if we let K be the maximum of the numbers |s1|, |s2|,...,|sN-1|, |s|+1, then for all n, we have |sn| <= K. QED.
Remark: The converse does not hold, since the sequence
1,-1,1,-1,1,-1,... is bounded but not convergent.
There is however a class of bounded sequences of real numbers which does always converge, namely bounded monotone sequences.
Lemma: A bounded monotone sequence of real numbers converges.
Proof: If the sequence {sn} is bounded and monotone, say monotone increasing, let K be the least upper bound of the sequence. I.e. let K be the smallest number such that for all n, we have sn <= K. We claim the sequence converges to K. Let e>0 be given. Since K is the smallest number which is >= all elements of the sequence, the number K-e must be less than some element of the sequence. Suppose sN > K-e. Then for all n>=N, we have sN <= sn, by monotonicity. Since K is an upper bound of the entire sequence we also have K-e < sN <= sn <= K < K+e, for all n >= N. I.e.
|sn-K| < e, for all n >= N. QED.
Note: This gives a way to tell a sequence is convergent without explicitly finding the limit. Just find any upper bound for a weakly increasing sequence and you know the sequence converges, even if you cannot determine the least upper bound, i.e. the limit. Similarly, if there is a lower bound for a weakly decreasing sequence, then that sequence also converges, to its greatest lower bound.
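For instance (a Python aside, not from the notes), the recursively defined sequence s1 = sqrt(2), s(n+1) = sqrt(2 + sn) is weakly increasing and bounded above by 2, so the lemma guarantees it converges before we know the limit; here the least upper bound happens to be 2:

```python
import math

def sequence(n):
    """First n terms of s1 = sqrt(2), s(k+1) = sqrt(2 + sk)."""
    s = math.sqrt(2)
    terms = [s]
    for _ in range(n - 1):
        s = math.sqrt(2 + s)
        terms.append(s)
    return terms

terms = sequence(20)
assert all(a <= b for a, b in zip(terms, terms[1:]))   # weakly increasing
assert all(t <= 2 for t in terms)                      # bounded above by 2
print(terms[-1])   # the limit turns out to be 2
```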
Here is the analog for series, of convergence of monotone sequences.
Theorem: If {an} is any sequence of non negative numbers, the series (summation of ai from i =1 to i = infinity) converges if and only if the partial sums are bounded, i.e. if and only if there is some number K such that for all n, (summation of ai from i =1 to i = n)<= K.
proof: trivial exercise.
This leads to the following so called “comparison tests”.
Theorem: If (summation of ai from i =1 to i = infinity) and (summation of bi from i =1 to i = infinity) are two series of non negative real numbers, and if ai <= bi for all i, then the convergence of (summation of bi) implies the convergence of (summation of ai), and hence the non convergence of (summation of ai) implies the non convergence of (summation of bi).
proof: This follows from an earlier result because when the partial sums of one positive series are bounded, so are those of a smaller positive series. QED.
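An example of the comparison test in action (a Python aside): since 1/n^2 <= 1/((n-1)n) for n >= 2, and the partial sums of 1/((n-1)n) telescope to 1 - 1/n < 1, the partial sums of the series (sum of 1/n^2) stay below 2, so that series converges:

```python
# Partial sums of sum 1/n^2, which the comparison above shows stay below 2.
def partial(n):
    return sum(1 / k ** 2 for k in range(1, n + 1))

for n in (10, 1000, 100000):
    print(n, partial(n))   # increasing, bounded above by 2, hence convergent
```

The actual limit is pi^2/6, but the comparison test gives convergence without knowing that.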
The idea that monotone sequences converge generalizes as follows.
Cauchy’s criterion and its applications
Definition: A sequence {sn} is called Cauchy, if and only if for every
e > 0, there is some N, such that, for all n,m >= N, we have |sn-sm| < e.
Exercise: Any convergent sequence is Cauchy.
Remark: In our three examples, the converse holds: in the real numbers, the plane, and the space of continuous functions on [a,b] with sup norm, every Cauchy sequence converges to an element of the same space.
Digression: Intuitively, to say a sequence is Cauchy, means the elements of the sequence are bunching up, but they might not converge unless there actually is a point of our space at the place where they are bunching. E.g., if our space were the real numbers, except zero had been removed, then the sequence {1/n} would still be Cauchy, but would not converge simply because we had removed the limit point. Since lots of sequences of rational numbers have irrational limits, Cauchy sequences of rationals do not always converge in the space of rationals. E.g. the sequence 3, 3.1, 3.14, 3.141, 3.1415, 3.14159,... of rationals, which converges to pi, (if the decimals are chosen appropriately), would be Cauchy in the rationals, but would not converge in the space of rationals. I.e. some spaces have “holes” in them, and a sequence could head towards a hole in the space and be Cauchy, but not have a limit in the space, just because the limit is missing from the space.
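A small Python illustration of the digression (the specific truncations are just for show): the decimal truncations of pi form a Cauchy sequence of rationals, with all terms past the N-th within 10^-N of each other, yet the limit pi lies outside the rationals:

```python
import math

# Decimal truncations of pi: a Cauchy sequence of rationals whose limit
# is not rational.
truncations = [math.floor(math.pi * 10 ** n) / 10 ** n for n in range(8)]

for N in range(5):
    tail = truncations[N:]
    print(N, max(tail) - min(tail))   # at most 10^-N: the tails bunch up
```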
[There is a way to fill the holes in any space, i.e. a space with a distance can be enlarged so all Cauchy sequences do converge, by adding in a limit for every Cauchy sequence. This is one way to construct the reals from the rationals. Starting from any space with a distance, consider the space of all Cauchy sequences in that space, and identify two Cauchy sequences {xn} and {yn} if the sequence of numbers {|xn-yn|} converges to zero. For instance the real number pi is identified with the Cauchy sequence 3, 3.1, 3.14, 3.141, 3.1415,... of rationals. Decimals give a very efficient way of picking usually one Cauchy sequence of rationals for each real number. Still the Cauchy sequences of decimals 1, 1.0, 1.00, 1.000, ... and .9, .99, .999, .9999, ... both represent the same real number.]
None of our 3 example spaces have holes, by the next theorem.
Big Theorem: In all three of our examples, every Cauchy sequence {si} converges to some limit in the given space.
proof:
Example (i) We do the case of real numbers first. Note a Cauchy sequence is bounded (by essentially the same proof as for convergent sequences), so the following bounds exist: define for each n, an = the greatest lower bound of the elements si in the sequence such that i >= n, and define bn = the least upper bound of those elements si with i >= n. Then {an} is a weakly increasing bounded sequence and {bn} is a weakly decreasing bounded sequence, so both sequences {an} and {bn} converge by the previous lemma. Now the Cauchyness of the sequence {si} implies that |an-bn| converges to zero. Thus in fact both sequences {an} and {bn} converge to the same limit K. Then since for each n, all sk with k >= n lie between an and bn, K is also the limit of the sequence {si}.
Here is another cute proof; we claim first (i) that every sequence has a monotone subsequence, and then (ii) that every Cauchy sequence which has a convergent subsequence also converges itself.
proof of (i) Call a point sN of a sequence a “peak point” if all later members of the sequence are no smaller. I.e. sN is a peak point iff for all n >= N, we have sn >= sN. Now there are two cases: either there are an infinite number of peak points or only a finite number of them, maybe zero. If there are an infinite number of peak points, then the subsequence of peak points is weakly monotone increasing and we are done. If there are only finitely many peak points, then after the last peak point, say sN, no element is a peak point. So every element sn with n > N has the property that there is a later element which is smaller. This allows us to choose a decreasing subsequence. I.e. start from sN+1. Then there is some sn with n > N+1 such that sn < sN+1. Let that sn be the second element of the subsequence. Then there is some later element sm such that sm < sn. Let that sm be the third element of the subsequence. Continue in this way.
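The peak-point argument is almost an algorithm; here is a finite-list Python sketch (an aside, not part of the proof) that collects the peak positions and checks that the corresponding terms form a weakly increasing subsequence:

```python
def peak_positions(a):
    """Indices i with a[j] >= a[i] for every j > i (the 'peak points')."""
    peaks = []
    smallest_later = float("inf")   # min of a[i+1:], maintained from the right
    for i in range(len(a) - 1, -1, -1):
        if smallest_later >= a[i]:
            peaks.append(i)
        smallest_later = min(smallest_later, a[i])
    return peaks[::-1]

a = [5, 1, 4, 2, 3, 2, 6, 4, 7]
peaks = peak_positions(a)
sub = [a[i] for i in peaks]
print(peaks, sub)
assert all(x <= y for x, y in zip(sub, sub[1:]))   # weakly increasing
```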
proof of (ii) If a Cauchy sequence {sn} has a convergent subsequence {tm}, [i.e. each tm is one of the sn, and the t’s occur in the same order in which they occur in the original sequence], then the original sequence {sn} converges to the same limit as the subsequence.
To be precise:
Definition: Recall a sequence of reals is a function s:N-->R, where N is the set of positive integers and R is the set of real numbers. A subsequence is a function t:N-->R which is the composition of a strictly increasing function n:N-->N with the function s:N-->R. We sometimes write the element tm as s(n(m)), where we think of n as a function of m. Here, since n is strictly increasing, n(m) >= m, and also n(m+1) > n(m).
Ok, assume tm = s(n(m)) --> L. If {sn} is Cauchy we claim {sn} --> L also. So we just plod through the motions. I.e. let e>0 be given. We must find N such that n >= N implies that |sn - L| < e. OK, we know we can make all the later t’s close to L, by the hypothesis that the sequence of t’s converges to L. We also know that, by the Cauchy hypothesis, we can make all the later s’s close to each other. Since some of those s’s are t’s, that should make all the later s’s close to L too. OK, choose K so large that n >= K implies |tn-L| < e/2. And then choose M so large that n,m >= M implies that |sn-sm| < e/2. Then let N be the larger of the two integers K, M. Then the element tN = s(n(N)), where n(N) >= N >= K, so |tN-L| < e/2. Now let n >= N and look at |sn - L|. Since n(N) >= N >= M, we know that |sn - s(n(N))| < e/2. Now we have
|sn - L| = |sn - s(n(N))+s(n(N)) -L| <= |sn - s(n(N))| + |s(n(N)) -L|
< e/2 + e/2 = e. That does it, I hope. QED.
Example (ii) For a Cauchy sequence of points {pn} = {(xn,yn)} in the plane, both sequences {xn} and {yn} are Cauchy sequences of real numbers, since |pn - pm| >= |xn - xm| and |pn - pm| >= |yn - ym|. Hence {xn} converges to some x, and {yn} converges to some y, and then {pn} converges to (x,y).
Example (iii) If {fn} is a Cauchy sequence of functions on [a,b], then for each x in [a,b], the definition of the sup norm forces the sequence of real numbers {fn(x)} to be Cauchy, hence convergent to some number we call f(x). This defines a function f, which we claim is continuous, and is the limit of the sequence {fn}.
To see convergence, let e>0 be given. We must find N such that for all n >= N, we have ||f-fn|| <= e. But we know the sequence {fn} is Cauchy in the sup norm, so for some N, we have ||fn-fm|| < e/3 for all n,m >= N. We claim that then for all x and all n >= N, we have |f(x)-fn(x)| <= 2e/3. Indeed, given x, since f(x) is the limit of the fm(x), there is some m >= N such that |fm(x)-f(x)| < e/3. Since also |fn(x)-fm(x)| < e/3, it follows that |f(x)-fn(x)| <= |f(x)-fm(x)|+|fm(x)-fn(x)| < 2e/3. Thus for all x and all n >= N, we have |f(x)-fn(x)| < e, i.e. ||f-fn|| <= e, so {fn} converges to f in the sup norm.
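A numerical sketch of Example (iii) (a Python aside): take fn = the n-th partial sum of 1 + x + x^2 + ... on [0, 1/2]. The sup distances ||fn - fm|| shrink (the sequence is Cauchy in the sup norm), and the pointwise limit f(x) = 1/(1-x) is also the sup-norm limit:

```python
def fn(x, n):
    """n-th partial sum of the geometric series 1 + x + x^2 + ..."""
    return sum(x ** k for k in range(n + 1))

xs = [i / 2000 for i in range(1001)]   # grid on [0, 1/2]

def sup_dist(n, m):
    """Grid approximation of ||fn - fm|| on [0, 1/2]."""
    return max(abs(fn(x, n) - fn(x, m)) for x in xs)

def sup_err(n):
    """Grid approximation of ||f - fn|| for the limit f(x) = 1/(1-x)."""
    return max(abs(1 / (1 - x) - fn(x, n)) for x in xs)

print(sup_dist(10, 20), sup_dist(20, 40))   # Cauchy: late terms stay close
print(sup_err(10), sup_err(20))             # and fn --> f in the sup norm
```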
Finally we claim the limit function f is continuous on [a,b], hence lies in the space we are working in. To prove this, let z be any point of [a,b]. To show f is continuous there, let e>0 be given; we must find d>0 such that for all x in [a,b] closer to z than d, we have |f(x)-f(z)| < e. This is a classic e/3 proof. I.e. choose N such that for all n,m >= N, we have ||fn-fm|| < e/3. Since for each x, f(x) is the limit of the fm(x), letting m --> infinity in |fm(x)-fN(x)| < e/3 gives |f(x)-fN(x)| <= e/3, for all x. Now fN is continuous by hypothesis, so there is a d>0 such that for all x closer to z than d, we have |fN(z)-fN(x)| < e/3. Then just note that
|f(z)-f(x)| = |f(z)-fN(z)+fN(z)-fN(x)+fN(x)-f(x)|
<= |f(z)-fN(z)| + |fN(z)-fN(x)| + |fN(x)-f(x)| < e/3 + e/3 + e/3 = e.
I.e. |f(z)-fN(z)| <= e/3 because fN is within e/3 of f at every point of [a,b], and |fN(x)-f(x)| <= e/3 for the same reason. Then |fN(z)-fN(x)| < e/3 because fN is continuous at z, and d was chosen to make this true for fN whenever |z-x| < d. QED.