Uniform convergence of a product of functions

epr1990

Homework Statement



Let ##\left[a,b\right]## be a closed bounded interval, ##f : [a,b] \rightarrow \textbf{R}## be bounded, and let ##g : [a,b] \rightarrow \textbf{R}## be continuous with ##g\left(a\right)=g\left(b\right)=0##. Let ##f_{n}## be a uniformly bounded sequence of functions on ##\left[a,b\right]##. Prove that if ##f_{n}\rightarrow f## uniformly on all closed intervals ##\left[c,d\right]\subset\left(a,b\right)##, then ##f_{n}g\rightarrow fg## uniformly on ##\left[a,b\right]##.

Homework Equations



f is bounded on [a,b]:

##(\exists M_{f}\in\textbf{R}) \ni : (\left|f(x)\right| \leq M_{f}) \ (\forall x\in\left[a,b\right])##

g is continuous on [a,b]:

##(\forall x\in\left[a,b\right]) (\forall\epsilon>0) (\exists\delta>0) (\forall y\in\left[a,b\right]) \ni : (\left|x-y\right|<\delta \Rightarrow \left|g\left(x\right) - g\left(y\right)\right|<\epsilon)##
with ##g\left(a\right)=g\left(b\right)=0##

##f_n## is a uniformly bounded sequence of functions on ##[a,b]##:

##(\exists M\in\textbf{R}) \ni : (\left|f_{n}(x)\right| \leq M) \ (\forall x\in\left[a,b\right]) \ (\forall n\in \textbf{N})##

##f_n## converges uniformly to ##f## on all closed intervals ##[c,d] \subset (a,b)##:

##(\forall [c,d] \subset (a,b)) (\forall \epsilon >0)(\exists N \in\textbf{N}) \ni : (n\geq N \Rightarrow |f_{n}(x) - f(x)| < \epsilon) \ (\forall x\in[c,d])##

The Attempt at a Solution



Proving that ##f_{n}g\rightarrow fg## uniformly on every closed interval ##\left[c,d\right]\subset\left(a,b\right)## is fairly easy. Since ##\left[a,b\right]## is a closed bounded interval and ##g## is continuous on ##\left[a,b\right]##, the extreme value theorem applies, so ##g## is bounded on ##\left[a,b\right]## by ##M_{g}=\sup_{x\in[a,b]}|g(x)|##, and since ##\left[c,d\right]\subset\left(a,b\right)##, we have ##|g(x)|\leq M_{g}## for all ##x \in [c,d]##. Now, if ##f_{n}\rightarrow f## uniformly on ##[c,d]##, then fixing ##\epsilon > 0## (and assuming ##M_{g}>0##; if ##M_{g}=0## then ##g\equiv 0## and there is nothing to prove), we can choose ##N\in\textbf{N}## so that ##|f_{n}(x) - f(x)| < \epsilon / M_{g}## for all ##n\geq N## and all ##x\in[c,d]##. Thus ##|f_{n}(x)g(x) - f(x)g(x)| < \epsilon## on ##[c,d]##.
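Written out, the estimate on ##[c,d]## is just the chain below (a quick check, assuming ##M_g > 0## as above): for all ##x \in [c,d]## and all ##n \ge N##,
$$|f_n(x)g(x) - f(x)g(x)| = |f_n(x)-f(x)|\,|g(x)| < \frac{\epsilon}{M_g}\cdot M_g = \epsilon.$$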

However, I don't see any way to extend this to ##[a,b]##. The previous exercise in the book asked to prove that if each ##f_n## is bounded on a set ##E## and ##f_n## converges uniformly to ##f## on ##E##, then the ##f_n## are uniformly bounded on ##E## and ##f## is bounded on ##E##. I did that, and the proof was easy, but I don't see where it applies here. I don't think the converse is true in general, so the only thing I could gain from it is that ##f## is bounded on ##[c,d]##, which I already have since it is bounded on ##[a,b]##. Clearly, I need to use the facts that ##f_n## is uniformly bounded on ##[a,b]## and ##f## is bounded on ##[a,b]##, and probably the continuity of ##g## on all of ##[a,b]## rather than just its restriction to ##[c,d]## as I did here. On top of this, I don't see any way in which ##g(a)=g(b)=0## would come in. Any suggestions?
 
Can you use the fact that ##g(a)=g(b)=0## and ##f_n## is uniformly bounded to show that both ##f_ng## and ##fg## are small near ##a## and ##b##? Keep in mind that you're just trying to show that ##f_ng-fg## is small.
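To make that hint concrete, here is a sketch of the kind of estimate it points at (taking ##M>0## to be any constant with ##|f_n(x)| \le M## and ##|f(x)| \le M## on ##[a,b]##, which exists by the uniform boundedness of the ##f_n## and the boundedness of ##f##; if no such positive ##M## is needed, everything is zero and the claim is trivial): given ##\epsilon>0##, continuity of ##g## together with ##g(a)=g(b)=0## gives a ##\delta>0## such that ##|g(x)| < \epsilon/(2M)## whenever ##x\in[a,a+\delta]\cup[b-\delta,b]##. For such ##x## and every ##n##,
$$|f_n(x)g(x)-f(x)g(x)| \le \big(|f_n(x)|+|f(x)|\big)\,|g(x)| \le 2M\cdot\frac{\epsilon}{2M} = \epsilon.$$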
 
I was actually thinking that I could show that ##(f_n - f)g## is continuous on the interval for all ##n##, since ##f## is bounded on ##[a,b]## and ##f_n## is bounded for all ##n## on ##[a,b]##. That still doesn't use the endpoints though, which I'm figuring are key.

I do understand what you are saying, though; I was trying that approach. I actually started from the extreme value theorem: writing ##g(x_m)=\min g(x)## and ##g(x_M)=\max g(x)##, the extreme value theorem says that ##x_m## and ##x_M## are both contained in ##[a,b]##. There are then four possibilities: the trivial case where ##x_m=a## and ##x_M=b##, so that ##g## is the zero function; the case where both are in the interior rather than at the endpoints; and the two cases where exactly one of them is an endpoint. I don't think it matters whether ##x_m < x_M## or the other way around, because they are basically arbitrary. I was trying to find a proof from Real Mathematical Analysis by Charles Chapman Pugh, in which he uses a very elegant trick that I think I can modify to work on this problem. I need to find my copy, though. In it he basically defines a value set ##V_x=\{y : y=f(t) \text{ for some } t\in[a,x]\}## and a set ##X=\{x : V_x \text{ is a bounded subset of } \textbf{R}\}## and lets ##t## vary. I think that I can modify this to show that ##f_{n}g \rightarrow fg## on the complement of ##[c,d]## in ##[a,b]##. I really don't know if it will work, though.

Having moved on to other problems in this chapter of the book we are using for my class, however, I am pretty positive that a lot of them are ill posed. I don't know if they formally are or not; the author just chooses weird definitions and confusing wording. He will also devote an entire section of exercises to building on a single concept, sometimes very difficult and abstract, but not refer to it at any other point in the book or show any significance whatsoever. The book is An Introduction to Analysis by William R. Wade.
 
epr1990 said:
I was actually thinking that I could show that ##(f_n - f)g## is continuous on the interval for all ##n##, since ##f## is bounded on ##[a,b]## and ##f_n## is bounded for all ##n## on ##[a,b]##.

That is not true in general. You aren't given any information about the continuity of ##f## or the ##f_n##.

That still doesn't use the endpoints though, which I'm figuring are key.

The uniform boundedness of the ##f_n## and the fact that ##g(a)=g(b)=0## are essential to the truth of the claim. Any attempt at a proof that doesn't use those facts is going to be flawed.


I do understand what you are saying, though; I was trying that approach. I actually started from the extreme value theorem: writing ##g(x_m)=\min g(x)## and ##g(x_M)=\max g(x)##, the extreme value theorem says that ##x_m## and ##x_M## are both contained in ##[a,b]##. There are then four possibilities: the trivial case where ##x_m=a## and ##x_M=b##, so that ##g## is the zero function; the case where both are in the interior rather than at the endpoints; and the two cases where exactly one of them is an endpoint. I don't think it matters whether ##x_m < x_M## or the other way around, because they are basically arbitrary. I was trying to find a proof from Real Mathematical Analysis by Charles Chapman Pugh, in which he uses a very elegant trick that I think I can modify to work on this problem. I need to find my copy, though. In it he basically defines a value set ##V_x=\{y : y=f(t) \text{ for some } t\in[a,x]\}## and a set ##X=\{x : V_x \text{ is a bounded subset of } \textbf{R}\}## and lets ##t## vary. I think that I can modify this to show that ##f_{n}g \rightarrow fg## on the complement of ##[c,d]## in ##[a,b]##. I really don't know if it will work, though.

It sounds like you are trying to use a lot of "high technology" to prove a claim that is provable much more easily if you stumble across the right idea, which I tried to give in my first post. You don't need any other theorems or lemmas or anything. You only need the uniform boundedness of the ##f_n##, the fact that ##g(a)=g(b)=0##, and the continuity of ##g## to get control near the endpoints.
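For reference, the overall estimate can be organized by splitting ##[a,b]## into the two endpoint pieces and the middle interval (again just a sketch, with ##M## a uniform bound for ##|f|## and all ##|f_n|##, ##M_g=\sup_{[a,b]}|g|>0##, and ##\delta## small enough that ##a+\delta<b-\delta##):
$$\sup_{x\in[a,b]}|f_n(x)g(x)-f(x)g(x)| \;\le\; \max\Big\{\; 2M\!\sup_{x\in[a,a+\delta]\cup[b-\delta,b]}\!|g(x)|,\;\; M_g\!\sup_{x\in[a+\delta,b-\delta]}\!|f_n(x)-f(x)| \;\Big\}.$$
The first term is handled by choosing ##\delta## from the continuity of ##g## at the endpoints together with ##g(a)=g(b)=0##, and the second by applying the uniform convergence on the closed interval ##[a+\delta,b-\delta]\subset(a,b)##.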


Having moved on to other problems in this chapter of the book we are using for my class, however, I am pretty positive that a lot of them are ill posed. I don't know if they formally are or not; the author just chooses weird definitions and confusing wording. He will also devote an entire section of exercises to building on a single concept, sometimes very difficult and abstract, but not refer to it at any other point in the book or show any significance whatsoever. The book is An Introduction to Analysis by William R. Wade.
 