# Testing for Uniform Continuity with Weierstrass M-Test

• Emspak

## Homework Statement

I am trying to make sure I am using the Weierstrass test correctly. So, on the following expression: $$f_n(x) = \frac{x}{1+nx^2}, -1 \le x \le 1$$

I looked at it this way. First I checked to see what happens at -1 and 1. The "plug in" test gets me a value of 1/2 at x=1 and n=1, and a value of -1/2 at x=-1 and n=1. All-righty then, we do a little L'Hôpital's rule: $$\lim_{n \rightarrow \infty} f_n(x) = \frac{\frac{d}{dn}(x)}{\frac{d}{dn}(1+ nx^2)}= \frac {0}{x^2}=0$$

So I know this converges to zero. But does it do so uniformly? Well, I can see that this function has a bound at x=1/2. So I will use $\frac{1}{1+n}$ as my M-function. Giving us:
$M_k = \frac{1}{1+n},$ and $\sum_{n=1}^{\infty}\frac{1}{1+n}$.

$\frac{1}{1+n}$ is always going to be smaller than $\frac{x}{1+nx^2}$ for any n with a particular x. We can check this out: since x is between -1 and 1, and n≥1, $1+nx^2$ is never going to be more than 1+n.

So we can say $$\left| \frac{x}{1+nx^2} \right| \ge \frac{1}{1+n}$$ for all n. In addition to that, $\frac{1}{1+n} \lt \infty$ for all n, but the other condition doesn't hold, so we don't have uniform convergence.

Did I get it right?

Thanks!

L'Hôpital's rule isn't applicable, since only the denominator goes to infinity as n goes to infinity (incidentally, that observation by itself gives you that the limit is zero).

When x = 0, your claimed inequality reads 0 ≥ 1/(1+n), which is false (and in general I think for all x it points the wrong way).
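To make the direction of the inequality concrete, here is a quick numerical check (Python, added for illustration only; the names `f` and `m` are ad hoc, not from the thread):

```python
# Quick check of the proposed bound M_n = 1/(1+n) against |f_n(x)| = |x/(1+n x^2)|.
# The claimed inequality |f_n(x)| >= 1/(1+n) fails at x = 0 and only holds
# with equality at the endpoints x = +-1.

def f(x, n):
    return abs(x / (1 + n * x * x))

def m(n):
    return 1.0 / (1 + n)

for n in (1, 2, 10):
    assert f(0.0, n) < m(n)                # fails at x = 0: 0 < 1/(1+n)
    assert abs(f(1.0, n) - m(n)) < 1e-12   # equality at x = 1
    assert abs(f(-1.0, n) - m(n)) < 1e-12  # equality at x = -1
```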

The Weierstrass M-test is a statement about a sum of functions, not a sequence of functions, so you shouldn't need it here. Are you trying to determine whether
$$\lim_{n\to \infty} f_n(x) = 0$$
is a uniformly convergent limit, or are you trying to decide if
$$\sum_{n=0}^{\infty} f_n(x)$$
is uniformly convergent?

I am trying to find the limit and decide if it is uniformly convergent on the interval. What steps should I have taken?

You should start by writing the definition of uniform convergence down... once you have that we can work through how to show it holds (or observe that it doesn't hold)

Sorry, should edit this: Well as I understand it if $\sum_{n=0}^{\infty}f_n \rightarrow f$ on a given interval then it is uniformly converging. Is that correct? The M-test I thought also helped to establish that.

By the way, the convergence is for the sequence (the book says), so I assume they mean the series is uniformly converging.

No, the sum might converge pointwise but not uniformly; the M-test is a way of checking whether it converges uniformly.

If the book says the sequence is uniformly converging, then they are asking whether
$$\lim_{n\to \infty} \frac{x}{1+nx^2}$$
converges uniformly to its limit, which is zero. It is NOT asking if
$$\sum_{n=1}^{\infty} \frac{x}{1+nx^2}$$
is converging uniformly, unless it at some point says "Is the series whose terms are:" or writes out the sum explicitly. In fact you can check that when x=1 that sum doesn't even converge.
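That divergence at x = 1 is easy to see numerically; a throwaway check (Python, not part of the original exchange):

```python
# At x = 1 the series terms are 1/(1+n), a tail of the harmonic series.
# Its partial sums grow like log(N), so the series diverges.

def partial_sum(N):
    return sum(1.0 / (1 + n) for n in range(1, N + 1))

assert partial_sum(1000) > 6      # roughly log(1001) + gamma - 1
assert partial_sum(100000) > 10   # keeps growing without bound
```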

So then, even though the inequality I had doesn't hold all the time, isn't that sort of the point? We don't have M greater than the original limit, not all the time, so it isn't uniformly converging. No?

What is M? I'm going to re-iterate a previous suggestion, which is write down the definition of uniform convergence, and then plug in your function and see what it is asking you to prove/disprove.

M is supposed to be a sequence of numbers, yes? Anyhow the book says: we say that $f_n$ converges uniformly on a set E (an interval in this case, -1 to 1) if given ε > 0 we can find a positive integer N s.t. for all n≥N $| f_n(x) - f(x)| < \epsilon$ $\forall x$ in the set E.

But given that definition of uniform convergence, I honestly don't find it that helpful. I am trying to follow the instructions on the exercise, you know? It would help if you could tell me where the mistakes are. When you say "what is M?" I would answer, a sequence of nonnegative numbers, and in the M-test the point is that it is always greater than the function in the sequence -- that is, $M_k \ge |u_k|$ for any given k. Did I not do that?

(Try not to be too mysterious here. I am not a math major, I am a physics guy in the context of this class and I am NOT familiar with much of the vocabulary of higher math - pretend I am the dumbest ever student you have had).

The M-test is for deciding on the uniform convergence of a series, not a sequence. You answered the question in the OP incorrectly because you were attempting to talk about the uniform convergence of the series whose terms are fn(x), not the sequence whose terms are fn(x) - these are two very different things and it is important to keep straight which one you are dealing with. If this distinction is confusing to you, then you should go back to your notes on sequences and series and study on them, because precise terminology is required to answer mathematics questions, and this terminology is definitely a prerequisite to understanding the topic you are studying.

OK, so let's plug in fn(x) and f(x) into your definition of uniform convergence:
$$\left| f_n(x) - f(x) \right| = \left| \frac{x}{1+nx^2} - 0 \right| = \left| \frac{x}{1+nx^2} \right|$$

The definition of convergence says that if you fix x, as n goes to infinity this thing goes to zero, which is something that you have proven. The definition of uniform convergence says that if you let x be arbitrary in [-1, 1], you can still find a small upper bound on this expression as n gets large. A good starting point for this problem is to take the expression
$$\left| \frac{x}{1+nx^2} \right|$$
and find out, for fixed n, what is the largest value it can take on [-1,1].
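A brute-force way to experiment with that question is a grid search (Python sketch, illustrative only; `grid_max` is a made-up helper, not from the thread):

```python
# For fixed n, approximate the max of |x/(1+n x^2)| over x in [-1, 1]
# by evaluating the expression on a fine grid.

def grid_max(n, steps=20001):
    best = 0.0
    for i in range(steps):
        x = -1.0 + 2.0 * i / (steps - 1)
        best = max(best, abs(x / (1 + n * x * x)))
    return best

# For n = 1 the maximum is 1/2 (attained at x = +-1); it shrinks as n grows.
assert abs(grid_max(1) - 0.5) < 1e-6
assert grid_max(4) < grid_max(1)
```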

OK, the largest value it can take is going to be 1/2, assuming n>=1. If n=0 it is 1. So I would posit that the upper bound is 1.

And if the upper bound is 1, and as n → ∞ the expression goes to zero no matter what, for any fixed x, does it not fit the definition of uniform convergence?

> OK, the largest value it can take is going to be 1/2, assuming n>=1. If n=0 it is 1. So I would posit that the upper bound is 1.
That's not the answer I get. The upper bound decreases as ##n## increases. How did you find the maximum value of this function?

By the way, Office_Shredder already pointed out that this has nothing to do with the Weierstrass M test. I just thought I'd mention that it also has nothing to do with uniform continuity. So the thread title is doubly misleading. You might want to change it to something like "testing sequence for uniform convergence."

I plugged in the values at each end of the interval. Then I checked what happens when n and x are both zero, which it seemed to me would maximize the function, since the denominator would be 1 and it can't be less than 1. I even graphed it to be sure. I also tried taking a derivative to see when it was zero, but that didn't seem to work well.

What happened when you took the derivative? What is
$$\frac{d}{dx}\left(\frac{x}{1+nx^2}\right)$$

> What happened when you took the derivative? What is
> $$\frac{d}{dx}\left(\frac{x}{1+nx^2}\right)$$

Taking a derivative I did the following:

$$\frac{d}{dx}\left(\frac{x}{1+nx^2}\right)= \frac{d}{dx} (x(1+nx^2)^{-1})= -x(1+nx^2)^{-2}(2nx)+(1+nx^2)^{-1}=\left(\frac{-2nx^2}{(1+nx^2)^2}+\frac{x}{1+nx^2}\right)$$ which can be "simplified" to $$\left(\frac{-2nx+x(1+nx^2)}{(1+nx^2)^2}\right)$$

and I notice that the numerator goes to zero at x=0, which would say to me that it reaches a max or minimum there, yes?

How did you get the ##x## in the numerator in the second term of:
$$\left(\frac{-2nx^2}{(1+nx^2)^2}+\frac{x}{1+nx^2}\right)$$
Your simplification doesn't look consistent with the previous line, either. Here is what I get for the derivative:
$$\frac{d}{dx}\left(\frac{x}{1+nx^2}\right) = \frac{(1+nx^2)(1) - (x)(2nx)}{(1+nx^2)^2} = \frac{1-nx^2}{(1+nx^2)^2}$$
For what value(s) of ##x## is this expression equal to zero?
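One can sanity-check that closed form against a finite difference (Python, added for illustration; the function names are ad hoc):

```python
import math

# Compare the closed-form derivative (1 - n x^2)/(1 + n x^2)^2 of
# f_n(x) = x/(1 + n x^2) with a central finite difference, and confirm
# it vanishes where n x^2 = 1, i.e. at x = 1/sqrt(n).

def f(x, n):
    return x / (1 + n * x * x)

def df(x, n):
    return (1 - n * x * x) / (1 + n * x * x) ** 2

n, h = 3, 1e-6
for x in (-0.8, 0.0, 0.3, 0.9):
    fd = (f(x + h, n) - f(x - h, n)) / (2 * h)
    assert abs(fd - df(x, n)) < 1e-6          # finite difference agrees
assert abs(df(1 / math.sqrt(n), n)) < 1e-12   # critical point at x = 1/sqrt(n)
```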

> How did you get the ##x## in the numerator in the second term of:
> $$\left(\frac{-2nx^2}{(1+nx^2)^2}+\frac{x}{1+nx^2}\right)$$
> Your simplification doesn't look consistent with the previous line, either. Here is what I get for the derivative:
> $$\frac{d}{dx}\left(\frac{x}{1+nx^2}\right) = \frac{(1+nx^2)(1) - (x)(2nx)}{(1+nx^2)^2} = \frac{1-nx^2}{(1+nx^2)^2}$$
> For what value(s) of ##x## is this expression equal to zero?

You're right; that extra x was the mistake that was making the whole thing difficult for me earlier. I see it never goes to zero for either n or x at zero (though we'd be starting with n=1, no?) but it does go to 1 when x=0 and n=0, and at n=1 for any x it goes between zero and 1/2, and at n=2 it goes to -1/3, and at n=3 it goes to -2/4 et cetera. Am I on to something here if I say that it does seem to always be <1?

> You're right; that extra x was the mistake that was making the whole thing difficult for me earlier. I see it never goes to zero for either n or x at zero (though we'd be starting with n=1, no?) but it does go to 1 when x=0 and n=0, and at n=1 for any x it goes between zero and 1/2, and at n=2 it goes to -1/3, and at n=3 it goes to -2/4 et cetera. Am I on to something here if I say that it does seem to always be <1?
What seems to be always ##< 1##? The numbers you calculated are the values of ##x## where the derivative is zero, as a function of ##n##. Can you use this information to find the max value of ##|f_n(x)|## for ##x \in [-1,1]##?

OK, given that the numerator will be zero on the relevant interval when x = 1 or x = -1, can we say that this hits a maximum (or minimum) there, at x = 1?

The numerator is only zero at ##x = \pm 1## when ##n=1##. This means that for ##n=1##, there are critical points at ##x = \pm 1##. How can you check whether this translates to a maximum?

By the way, you only need to consider ##x \geq 0##. (Why?)

I know that we only need to consider x ≥ 0 because of the $x^2$. I know that it's a maximum because, since x is between 0 and 1 (when you square it the negative becomes positive), $nx^2$ has to be less than n. That is, it has to be less than 1 either way, and that means that whatever $1+nx^2$ is, it can't be more than 2. The numerator has the same thing going on -- we are under similar constraints. That is, $1-nx^2$ has to be at minimum zero, but when n=1 then any small x makes it small and thus close to 1. So both numerator and denominator close in on 1 as x gets really small, so we can pick $\epsilon = 1$. Yes? And since epsilon is going to be 1, we can say the absolute value of the function minus zero (which we figured out already) is always going to be less than that for any x when n=1.

> and since epsilon is going to be 1 we can say the absolute value of the function minus zero (which we figured out already) is always going to be less than that for any x when n=1.

That won't do you any good. This is the condition that must be satisfied in order for the convergence to be uniform:
$$\max_{x \in [-1,1]} |f_n(x)| \rightarrow 0 \,\text{ as }\, n \rightarrow \infty$$

But as n → ∞ the degree of the denominator is more than that of the numerator. So it approaches zero, correct? For any fixed x.

> but as n → ∞ the degree of the denominator is more than that of the numerator. So it approaches zero, correct? For any fixed x.

Well, it isn't the degree of the polynomial that does that for you. It's the coefficient ##n## in front of the ##x^2##. If ##x \neq 0##, then ##x^2## is a positive number, so ##nx^2## can be made as large as you like by increasing ##n##. But convergence to zero for any fixed ##x## is just pointwise convergence. That doesn't imply uniform convergence. For that, you need each ##f_n## to be smaller than some bound (depending on ##n## but not ##x##) for all ##x\in [-1,1]##, and the bound needs to shrink to zero as ##n\rightarrow \infty##. This is why you need to focus on the maximum: for each ##n##, what is ##\max_x |f_n(x)|##? And after you answer that, you need to check: does this maximum decrease to zero as ##n \rightarrow \infty##?

OK, but for any n the derivative is zero at x = -1 or x = 1. That says to me that those are the critical points. And those points (x = 1 and x = -1) are going to be the upper bounds on the original expression no matter what n is. At n = 1 the original expression would be between 1/2 and 1, at n = 2 it is between 1/3 and 1, and so on. The upper bound is 1. And as $n \rightarrow \infty$ it goes to zero. So the epsilon I want is 1. Is that not so? No matter what n is, $\frac{x}{1+nx^2}$ can't be more than 1.

That means $f_n \rightarrow 0$ as $n \rightarrow \infty$. So that would imply uniform convergence.

> OK, but for any n the derivative is zero at x=-1 or x=1.
No, that's only true for ##n=1##. The general formula for the derivative is
$$\frac{1-nx^2}{(1+nx^2)^2}$$
Consider the case ##n=2##. Then the formula becomes
$$\frac{1-2x^2}{(1+2x^2)^2}$$
If you plug in ##x=\pm1##, the result is not zero.

Also, suppose the following were true:
> At n=1 the original expression would be between 1/2 and 1, at n=2 it is between 1/3 and 1, and so on. The upper bound is 1. [...] No matter what n is $\frac{x}{1+nx^2}$ can't be more than 1.
This would NOT imply the following:
> That means $f_n \rightarrow 0$ as $n \rightarrow \infty$. So that would imply uniform convergence.
In other words, ##|f_n(x)| \leq 1## for all ##n## and all ##x## does NOT imply uniform convergence, or even pointwise convergence, of ##f_n##. You need ##|f_n(x)| \leq B_n## for all ##n## and all ##x##, where ##B_n## is a sequence of positive numbers that converges to zero.
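For this particular sequence such a ##B_n## does exist: setting the derivative found earlier to zero gives a maximum at $x = 1/\sqrt{n}$, where $f_n$ takes the value $1/(2\sqrt{n})$, and that bound shrinks to zero. A short numerical check (Python, illustrative only; `B` is an ad-hoc name):

```python
import math

# Candidate uniform bound: B_n = 1/(2*sqrt(n)), the value of
# f_n(x) = x/(1 + n x^2) at its critical point x = 1/sqrt(n).

def f(x, n):
    return x / (1 + n * x * x)

def B(n):
    return 1 / (2 * math.sqrt(n))

for n in (1, 10, 10000):
    # the critical point attains the bound ...
    assert abs(f(1 / math.sqrt(n), n) - B(n)) < 1e-12
    # ... and no sampled x in [-1, 1] exceeds it
    assert all(abs(f(-1 + i / 5000.0, n)) <= B(n) + 1e-12 for i in range(10001))
# and B_n -> 0, which is what uniform convergence to 0 requires
assert B(10000) < 0.01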

OK, now I get confused. You just said the maximum value for fn has to approach zero as n approaches infinity. Doesn't it? Make n big and the denominator gets way bigger than the numerator for any arbitrary n! I make n 100000000 and the original expression gets really tiny! That's approaching zero, isn't it? I am really, really confused here.

You are still making the argument for pointwise convergence. As a counterexample, consider ##f_n(x) = x/n## defined on all of ##\mathbb{R}##. For any fixed x the sequence converges to zero because the denominator gets way bigger, but the sequence does not converge uniformly.
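The counterexample can be poked at numerically (Python, added for illustration; `g` is an ad-hoc name for the counterexample sequence):

```python
# g_n(x) = x/n on all of R: the pointwise limit is 0 (fix x, grow n),
# yet the sup over x is infinite for every n, so no uniform bound exists.

def g(x, n):
    return x / n

n = 1000
assert abs(g(2.5, n)) < 0.01   # pointwise: tiny once n is large
assert g(n * n, n) == n        # but x = n^2 gives g = n: the sup is unbounded
```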

Is the point here that even though fn approaches zero for each x, and is less than some number no matter what you do with n (in this case the expression is always less than 1), that still does not imply uniform convergence?

Dude, I am completely and utterly lost with this now.