Study of the convergence (pointwise & uniform) of two series of functions

In summary, the sequences f_1n and f_2n converge pointwise to ln(2) and 0 respectively. f_1n converges uniformly on any compact subset of (0, infinity) but not on all of ##\mathbb{R}^+##. f_2n converges pointwise to zero but is not uniformly convergent; the method used in the thread to prove its uniform convergence does not work.
  • #1
Felafel

Homework Statement



study the pointwise and the uniform convergence of

##f_{n1}(x)=\ln(1+x^{1/n}+n^{-1/x})## with ##x>0##, ##n \in \mathbb{N}^+## and ##f_{n2}(x)=\frac{x}{n}e^{-n(x+n)^2}## with ##x \in \mathbb{R}##, ##n \in \mathbb{N}^+##

The Attempt at a Solution



1) first series: ##f_{1n}##
Studying the limit as n goes to infinity I found it is ln 2, so it converges pointwise to this value; but since the function is increasing it doesn't have a maximum, and thus Weierstrass' criterion for uniform convergence doesn't apply.
However, on any compact in (0, infinity), say [a,b] with b>a, it has a maximum at b. Thus the function is also uniformly convergent on any compact in (0, infinity), but not on all of ##\mathbb{R}^+##.

2) second series: ##f_{2n}##
Studying the limit as n goes to infinity I found it is 0, so it converges pointwise to this value.
Am I missing some parts of the study / did I make any mistakes?

thank you in advance!
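A quick numerical sketch of the first limit (illustrative values, not from the original post): at a fixed x the values approach ln 2, while at a fixed n the function is unbounded in x, which is why no maximum exists on all of ##\mathbb{R}^+##.

```python
import math

def f1(n, x):
    # f_{n1}(x) = ln(1 + x**(1/n) + n**(-1/x))
    return math.log(1 + x ** (1.0 / n) + n ** (-1.0 / x))

# pointwise: at a fixed x (here x = 2) the values approach ln 2 ~ 0.6931
for n in (10, 100, 10000):
    print(n, f1(n, 2.0))

# but at a fixed n the function is unbounded in x, so it has no maximum
# on (0, infinity) and a sup-norm bound over all x > 0 cannot work
print(f1(10, 1e12))  # much larger than ln 2
```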
 
  • #2
sorry, I've made a bit of a mess, I'm trying to correct myself:
1) f_1n converges pointwise to ln(2) as previously said, but now I'm studying the uniform convergence differently:
according to the definition, for n big enough I get:
##|f_n(x)-f(x)| < \epsilon## ##\forall \epsilon >0##, ##\forall x \in (0,\infty)##
which yields:
##|\ln(1+x^{1/n}+n^{-1/x})-\ln 2| < \epsilon##. Using the properties of logarithms I get:
##\frac{1+x^{1/n}+n^{-1/x}}{2}<e^{\epsilon}##. 1 is the infimum of ##e^{\epsilon}##, so I can rewrite it in place of ##e^{\epsilon}## itself. Then,
##\frac{1}{2}+\frac{x^{1/n}+n^{-1/x}}{2}<1##
##\sqrt{x}+n^{-1/x}<1## which is true ##\forall x \leq \##
 
  • #3
Felafel said:
sorry, I've made a bit of a mess, I'm trying to correct myself:
1) f_1n converges pointwise to ln(2) as previously said, but now I'm studying the uniform convergence differently:
according to the definition, for n big enough I get:
##|f_n(x)-f(x)| < \epsilon## ##\forall \epsilon >0##, ##\forall x \in (0,\infty)##
which yields:
##|\ln(1+x^{1/n}+n^{-1/x})-\ln 2| < \epsilon##. Using the properties of logarithms I get:
##\frac{1+x^{1/n}+n^{-1/x}}{2}<e^{\epsilon}##. 1 is the infimum of ##e^{\epsilon}##, so I can rewrite it in place of ##e^{\epsilon}## itself. Then,
##\frac{1}{2}+\frac{x^{1/n}+n^{-1/x}}{2}<1##
##\sqrt{x}+n^{-1/x}<1## which is true ##\forall x \leq \##
I guess you meant ##\sqrt[n]{x}+n^{-1/x}<1##, but I'm not sure what you intended here:
which is true ##\forall x \leq \##
To prove uniform convergence this way, you want it true for all x > 0. It is not true for x ≥ 1.
Maybe it is not uniformly convergent. How would you prove that?
 
  • #4
Isn't it actually false even for x<1?
If I take x=1/2, for instance,
##(1/2)^{1/n}+n^{-2}## goes to 1+0 if n is big.
So all the values x can take involve a contradiction with the definition of uniform convergence, as ##\sup|f_n(x)-f(x)|## isn't less than epsilon.
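This check can be sketched numerically (illustrative n values, chosen for this example):

```python
# for x = 1/2 the quantity (1/2)**(1/n) + n**(-2)
# approaches 1 from below as n grows
values = [0.5 ** (1.0 / n) + n ** (-2.0) for n in (2, 10, 1000)]
print(values)
```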
 
  • #5
Also, if there are no values of x that fit the definition of uniform convergence, can I say that the function converges uniformly on any compact subset of ##\mathbb{R}## for the aforementioned reasons (beginning of the thread)?
 
  • #6
And, as far as the second series is concerned, I've just computed what follows:
##|\frac{x}{n} e^{-n(n+x)^2}| = |\frac{x}{n} \frac{1}{e^{n(n+x)^2}}| \leq |\frac{x}{n} \frac{1}{1+n(n+x)^2}|## given ##e^x \geq x+1##
##\leq |\frac{x}{n} \frac{1}{n(n+x)^2}|##; dividing numerator and denominator by x I see it goes to 0, which means
##\sum||f_{n2}||_{\infty}=0## and so Weierstrass' criterion for uniform convergence holds.
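A numerical probe of this sup norm (a sketch; note the thread's opening summary concludes this sequence is not uniformly convergent): for a fixed x the values vanish extremely fast, but evaluating at the point x = -n, where the exponent is zero, gives |f_{n2}(-n)| = 1 for every n.

```python
import math

def f2(n, x):
    # f_{n2}(x) = (x/n) * exp(-n * (x + n)**2)
    return (x / n) * math.exp(-n * (x + n) ** 2)

# for a fixed x the exponential crushes everything as n grows
print(f2(100, 1.0))  # underflows to 0.0

# but at x = -n the exponential factor is exp(0) = 1, so |f| stays at 1
for n in (1, 10, 100):
    print(n, abs(f2(n, -n)))
```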
 
  • #7
Felafel said:
Isn't it actually false even for x<1?
If I take x=1/2, for instance,
##(1/2)^{1/n}+n^{-2}## goes to 1+0 if n is big.
It tends to 1, but that doesn't prove it persistently exceeds 1.
But even if so, all you have shown here is that this method of proving uniform convergence does not work. It doesn't prove convergence is not uniform. How would you prove that?
 
  • #8
You are probably right in saying convergence is not uniform, but I really keep failing to see why my method doesn't work. I'll try to write things a bit differently. By the definition of uniform convergence I have to prove there exists an n big enough such that:
##|f_n(x)-f(x)|< \epsilon##
##|\ln(1+x^{1/n}+n^{-1/x})-\ln(2)|\leq\epsilon## ##\forall x>0##
##|\ln\frac{1+x^{1/n}+n^{-1/x}}{2}| \leq |\ln\frac{1+1+\frac{1}{n}}{2}|=|\ln(1+\frac{1}{2n})|##. Using the exp:
##|1+\frac{1}{2n}| \leq e^{\epsilon}##. I choose ##\epsilon=\frac{1}{n^2}## and see that for n big enough this inequality holds.

if it's wrong again, how could I start proving it doesn't converge uniformly?
Also, was the method I used for the second series in my last post okay?

thank you very much for being so patient
 
  • #9
I just noticed the title says 'series', not sequences. And you refer to Weierstrass' M-test, yes? That is indeed for series. But the functions f_1n converge to ln(2), as you say, so their sum as a series cannot converge even pointwise.
So please clarify - are these sequences or series?
Felafel said:
how could I start proving it doesn't converge uniformly?
You can take the definition of uniform convergence and invert it logically.
The definition:
for every ε > 0, there exists a natural number N such that for all x ∈ S and all n ≥ N we have |fn(x) − f(x)| < ε.
Its logical inversion:
there exists an ε > 0 such that for every natural number N there exist x ∈ S and n ≥ N with |fn(x) − f(x)| > ε.
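The negated definition can be sketched as a falsifier loop for ##f_{n1}## (a sketch with hypothetical witnesses: ε = 0.1, n = N and x = 1.5^N, chosen so that ##x^{1/n} = 1.5##):

```python
import math

eps, limit = 0.1, math.log(2)

# for each N, exhibit n >= N and x with |f_n(x) - ln 2| > eps
for N in (5, 50, 500):
    n, x = N, 1.5 ** N          # chosen so x**(1/n) == 1.5
    fn = math.log(1 + x ** (1.0 / n) + n ** (-1.0 / x))
    print(N, abs(fn - limit) > eps)  # True for every N
```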
 
  • #10
Sorry again, they are sequences. Yes, I was referring to Weierstrass' M-test. Can't I use it for sequences too?
 
  • #11
I've tried to do as you suggested:
there exists an ε > 0 such that for every natural number N there exist x ∈ S and n ≥ N with |fn(x) − f(x)| > ε

##|\ln(1+x^{1/n}+n^{-1/x})-\ln(2)| > \epsilon##
##|\ln\frac{1+x^{1/n}+n^{-1/x}}{2}| > \epsilon##
if x=1
##|\ln\frac{1+1+n^{-1}}{2}| > \epsilon \Rightarrow |\ln(2+\frac{1}{n})|\geq \epsilon##
taking ##\epsilon=\ln 2## I should have proved the contradiction. So it can't converge uniformly on all ##\mathbb{R}##.

here's my attempt at proving for which subsets it converges uniformly:
I want to know for which values of x this inequality holds:
##|\ln(1+x^{1/n}+n^{-1/x})-\ln(2)| < \epsilon##
##\frac{1+x^{1/n}+n^{-1/x}}{2} < e^{\epsilon}##
##\frac{1+x^{1/n}+n^{-1/x}}{2} < \epsilon+1##
##x^{1/n}+n^{-1/x}<2\epsilon+1##
##x^{1/n}<2\epsilon+1##
##x < (2\epsilon+1)^n##; for ##\epsilon \to 0## I get ##x<1##
so it converges uniformly on any compact subset [a,b] where b<1.
 
Last edited:
  • #12
Felafel said:
##|\ln\frac{1+1+n^{-1}}{2}| > \epsilon \Rightarrow |\ln(2+\frac{1}{n})|\geq \epsilon##
This logic is not going to work. The implications are backwards from what you need. You haven't proved |fn(x) − f(x)| > ε, only that something rather larger is > ε.
##\frac{1+x^{1/n}+n^{-1/x}}{2} < \epsilon+1##
##x^{1/n}+n^{-1/x}<2\epsilon+1##
Algebraic error.
To make it easier, let's first look at the individual functions ##x^{\frac 1n} -1## and ##n^{-\frac 1x}##. Each converges pointwise to 0. You need to get some idea of how the rate of convergence depends on x. So, for each, consider how large n has to be to make the function < ε.
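Solving each inequality for n gives a feel for those rates (a sketch for the x > 1 case, with an illustrative ε): the first term needs n > ln(x)/ln(1+ε), the second needs n > ε^(-x), and both thresholds depend on x and grow without bound.

```python
import math

eps = 0.1
for x in (2.0, 5.0, 20.0):
    n1 = math.log(x) / math.log(1 + eps)   # makes x**(1/n) - 1 < eps
    n2 = eps ** (-x)                        # makes n**(-1/x) < eps (0 < eps < 1)
    print(x, math.ceil(n1), math.ceil(n2))
```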
 
  • #13
Any further hint? I really can't manage it :(. I had the following results, but they seem absurd to me.
I tried to put:
##\sup|f_n(x)-f(x)|=|\sup(f_n(x)) - \inf f(x)|= |\sup (\ln(1+x^{1/n}+n^{-1/x})) - \ln 2| = \sup | \ln(\frac{1+x^{1/n}+n^{-1/x}}{2})|##
now if x=1 I get ##\lim |\ln(1/2+1/2+1/n)|=\ln 1=0## so I have uniform convergence for this value. If ##n<+\infty## I wouldn't have u. convergence, though.
if x=a s.t. a>1 I get ##\lim |\ln(1+a^{1/n}+n^{-1/a})|=\ln 1=0## same as above
if x=1/a s.t. a>1 I get ##\lim |\ln(1+(\frac{1}{a})^{1/n}+n^{-a})|=\ln 1=0## same as above
so apparently I'd have uniform convergence for every x>0
 
  • #14
Felafel said:
now if x=1 I get ##\lim |\ln(1/2+1/2+1/n)|=\ln 1=0## so I have uniform convergence for this value.
You can't have 'uniform convergence' for a value of x. That doesn't mean anything.
Try rewriting ##|x^{\frac 1n}-1| < \epsilon## as a constraint on the value of n.
 
  • #15
haruspex said:
You can't have 'uniform convergence' for a value of x. That doesn't mean anything.
Try rewriting ##|x^{\frac 1n}-1| < \epsilon## as a constraint on the value of n.

Do you mean that I have to find n s.t. ##1/n<\log_x(1+\epsilon)##? And that for ##n>\frac{1}{\log_x(1+\epsilon)}## it converges uniformly? From which step did I get that expression?
 
  • #16
Felafel said:
Do you mean that I have to find n s.t. ##1/n<\log_x(1+\epsilon)##?
Sort of. Because of the modulus operation we should treat x > 1 and x < 1 separately.
For x > 1, what you wrote is correct, but let's write it as n > ln(x)/ln(1+ε).
So far the logic chain has been reversible (have to be careful with that when dealing with inequalities). That is, the sequence converges uniformly if and only if for every ε > 0 we can find an N s.t. n > N implies n > ln(x)/ln(1+ε).
Well, is that true? Or is it the case that no matter what N we pick there is an x > 1 which will make ln(x)/ln(1+ε) > N?
 
  • #17
haruspex said:
Sort of. Because of the modulus operation we should treat x > 1 and x < 1 separately.
For x > 1, what you wrote is correct, but let's write it as n > ln(x)/ln(1+ε).
So far the logic chain has been reversible (have to be careful with that when dealing with inequalities). That is, the sequence converges uniformly if and only if for every ε > 0 we can find an N s.t. n > N implies n > ln(x)/ln(1+ε).
Well, is that true? Or is it the case that no matter what N we pick there is an x > 1 which will make ln(x)/ln(1+ε) > N?

Doesn't it hold for any x? Because the logarithmic function goes to infinity more slowly than n. Should I assume from that that it is uniformly convergent on all of ##\mathbb{R}^+##?
 
  • #18
Felafel said:
Doesn't it hold for any x? Because the logarithmic function goes to infinity more slowly than n. Should I assume from that that it is uniformly convergent on all of ##\mathbb{R}^+##?
You don't seem to have grasped what uniform convergence means.
For pointwise convergence, given an ε and an x, you find an N = N(ε, x). I.e. the choice of N is allowed to depend on x.
For uniform convergence, given an ε you have to find an N = N(ε) that will work for all x.
For the functions ##x^{\frac 1n}##, uniform convergence will require us to pick an N >= ln(x)/ln(1+ε) for all x > 1. But that is clearly not possible, since ln(x) is not bounded above.
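Numerically (a sketch; `n_required` is a hypothetical helper and ε = 0.1 is an illustrative choice): the smallest admissible N for ##|x^{1/n}-1|<\epsilon## grows like ln(x), so no single N covers all x > 1.

```python
import math

eps = 0.1

def n_required(x):
    # smallest integer n with x**(1/n) - 1 < eps, from n > ln(x)/ln(1+eps)
    return math.floor(math.log(x) / math.log(1 + eps)) + 1

for x in (2.0, 1e3, 1e30, 1e300):
    print(x, n_required(x))  # keeps growing: no single N works for all x
```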
 
  • #19
haruspex said:
You don't seem to have grasped what uniform convergence means.
For pointwise convergence, given an ε and an x, you find an N = N(ε, x). I.e. the choice of N is allowed to depend on x.
For uniform convergence, given an ε you have to find an N = N(ε) that will work for all x.
For the functions ##x^{\frac 1n}##, uniform convergence will require us to pick an N >= ln(x)/ln(1+ε) for all x > 1. But that is clearly not possible, since ln(x) is not bounded above.

OK! Now it's clearer. So convergence is not uniform on ##\mathbb{R}##, but how do I find out if there are subsets of ##\mathbb{R}## where the convergence is uniform?
 
  • #20
Felafel said:
OK! Now it's clearer. So convergence is not uniform on ##\mathbb{R}##, but how do I find out if there are subsets of ##\mathbb{R}## where the convergence is uniform?
Suppose we limit x to being in [1, a]. Now can you find N=N(ε) such that N > ln(x)/ln(1+ε) for all x in [1, a]? Next, what about x < 1? After that, try ##n^{-\frac 1x}##.
One thing we have to be careful about here. There may be some cancellation between ##n^{-\frac 1x}## and ##x^{\frac 1n}## such that even though neither by itself converges uniformly there is some magic by which the sum does. (I'm pretty sure that can't happen, but we need to prove it doesn't.)
 
  • #21
OK, I'll also do a short recap to see if everything's in order.
- I calculated the pointwise convergence and got that the function goes to ln 2.
- to find the uniform convergence I have to prove that

for every x and for every ε>0 there exists an N such that for all n>N the following inequality holds:
##\sup|\ln(1+x^{1/n}+n^{-1/x})-\ln 2|<\epsilon##, i.e. ##\sup|\ln(\frac{1+x^{1/n}+n^{-1/x}}{2})|<\epsilon##
which is true if ##\frac{1+x^{1/n}+n^{-1/x}}{2} \leq 1## ##\Rightarrow## ##x^{1/n}+n^{-1/x}-1 \leq 0##
I split it in two:
##|x^{1/n}-1|<\epsilon## which holds if ##n>\frac{\ln(x)}{\ln(1+\epsilon)}## which isn't generally true, because ln x is unbounded, so I can't have uniform convergence on all of ##\mathbb{R}##.
However, there might exist some subsets of ##\mathbb{R}## where the convergence is uniform.
Let's consider
##x \in [1,a]##; ln being an increasing function, it has max at a. So the function converges uniformly for every n: ##n>\frac{\ln(a)}{\ln(1+\epsilon)}##
if b<x<1 the function has max at 1, so ##n>\frac{\ln(1)}{\ln(1+\epsilon)}## which holds for any n>0.

Now let's consider ##|n^{-1/x}|<\epsilon## ##\Rightarrow## ##|-\frac{1}{x}\ln(n)|<\ln(\epsilon)## which holds if
##n<\epsilon e^{x}##; the exp is also an increasing function, so if ##1\leq x \leq a## it has min at 1:
##n<\epsilon e##
and if b<x<1 the minimum is at b, so
##n<\epsilon e^{b}##

I can join the various results and say the sequence converges uniformly for 1<x<a iff
##\epsilon e > n >\frac{\ln(a)}{\ln(1+\epsilon)}##

and for b<x<1 iff

##\epsilon e^{b}>n>\frac{\ln(1)}{\ln(1+\epsilon)}##
 
  • #22
Felafel said:
- to find the uniform convergence I have to prove that for every x and for every ε>0 there exists an N such that for all n>N the following inequality holds:
##\sup|\ln(1+x^{1/n}+n^{-1/x})-\ln 2|<\epsilon##
No, your wording there allows x and ε to be specified first, and then you find a suitable N. That's pointwise again. For uniform you have to find N given only ε. The N you find must work for all x at once.
And you don't need the sup. The "for all n > N" implies that.
which is true if ##\frac{1+x^{1/n}+n^{-1/x}}{2} \leq 1## ##\Rightarrow## ##x^{1/n}+n^{-1/x}-1 \leq 0##
No, you're forgetting the modulus operation. You need ##-\epsilon < \frac{1+x^{1/n}+n^{-1/x}}{2} - 1 < \epsilon## etc.
i split it in two:
##|x^{1/n}-1|<\epsilon## which holds if ##n>\frac{\ln(x)}{\ln(1+\epsilon)}## which isn't generally true, because ln x is unbounded, so I can't have uniform convergence on all of ##\mathbb{R}##.
As I mentioned, you have to be very careful with arguments involving a chain of inequalities. You wrote
##|x^{1/n}-1|<\epsilon## which holds if ##n>\frac{\ln(x)}{\ln(1+\epsilon)}## which isn't generally true
Let me run that reasoning with a different example:
##x^2 > 0## which holds if ##x^2 > -1##, which isn't generally true.
Do you see the logical error?
There are two ways you can protect against this.
1. As far as possible, use steps that are "if and only if". That ensures the chain can be reversed.
2. Once you think you have a valid argument, set it up as a deductive sequence. That can be either from the known facts to the desired conclusion, or (reductio ad absurdum) from the negation of the desired conclusion to the negation of the known facts. In logic notation, using ! for not, those forms are respectively p & (p => q) and p & (!q => !p).
What you wrote above is of the form (p <= q) & !q, which doesn't prove anything.
however, there might exist some subsets of |R where the convergence is uniform.
let's consider
##x \in [1,a]##; ln being an increasing function, it has max at a. So the function converges uniformly for every n: ##n>\frac{\ln(a)}{\ln(1+\epsilon)}##
You're close. But you don't mean 'for every n'. Remember, you're looking to find an N which works for every x.
if b<x<1 the function has max in 1, so ##n>\frac{ln(1)}{ln(1+\epsilon)}## which holds for any n>0.
You've forgotten about the absolute value (modulus operator) again. It's the max of |ln(x)| that matters here.
now let's consider ##|n^{-1/x}|<\epsilon## ##\Rightarrow## ##|-\frac{1}{x}\ln(n)|<\ln(\epsilon)##
No, you can't swap the order of ln() and || like that. In this case, ##n^{-1/x}## cannot be negative, so it's just ##n^{-1/x}<\epsilon##. Since ln() is an increasing function (you see why that matters?) this condition holds if and only if ##-\frac 1x \ln(n) < \ln(\epsilon)##.
which holds if ##n<\epsilon e^{x}##
Going forwards from ##-\frac 1x \ln(n) < \ln(\epsilon)##, you'll find that you have reversed the inequality now.
 
  • #23
I'm sorry it's taking so long, but I really don't seem to get this topic, which shouldn't even be that difficult, after all.
I'll give it another try, more carefully.

I want to see for which n's this inequality holds:
##|\ln(1+x^{1/n}+n^{-1/x})-\ln 2|<\epsilon##. It is the same as:
##-\epsilon<\frac{1+x^{1/n}+n^{-1/x}}{2}-1<\epsilon##
##-2\epsilon=-\epsilon-\epsilon<-1+x^{1/n}+n^{-1/x}<\epsilon+\epsilon=2\epsilon##
in particular, I want

##-\epsilon<-1+x^{1/n}<\epsilon## and ##-\epsilon<n^{-1/x}<\epsilon##

let's consider the first one:

##-\epsilon+1<x^{1/n}<\epsilon+1## ##\Rightarrow## ##\frac{\ln(-\epsilon+1)}{\ln x}<\frac{1}{n}<\frac{\ln(\epsilon+1)}{\ln x}##

1/n is positive, ln(ϵ+1) is positive, ln(-ϵ+1) is negative.
Therefore this inequality holds iff x>1; if x=1 or x<1 there wouldn't exist any suitable n.
Let's assume, then, ##x \in (1,a]##. I can delete the left-hand side of the inequality, which now holds for every n.
The only condition left is:
##n>\frac{\ln x}{\ln(\epsilon+1)}##. ln being an increasing function, it has maximum at a, so the inequality is true for ##n: n>\frac{\ln(a)}{\ln(\epsilon+1)}##

Let's now look at
##-\epsilon<n^{-1/x}<\epsilon##; the left part is always true. x must be >1 as previously said, so

##(n^{-1/x})^{-x}>\epsilon^{-x}## which holds for ##n>\frac{1}{e^x}##, which is a decreasing function and thus has maximum ##\frac{1}{e+\epsilon}##.

Joining my two inequalities I must take the biggest of the n's I've found:

##n=\max(\frac{1}{e+\epsilon}, \frac{\ln(a)}{\ln(\epsilon+1)})## and actually, being a>1, I have
##\frac{\ln(a)}{\ln(\epsilon+1)} \geq 1 > \frac{1}{e+\epsilon}## so the function converges uniformly iff x belongs to a subset of (1,a] and ##n>\frac{\ln(a)}{\ln(\epsilon+1)}##

I hope it's okay now, because I'm beginning to feel really stupid..
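The x in (1, a] part of this recap can be checked numerically (a sketch with illustrative choices a = 5 and ε = 0.01): with the threshold N = ln(a)/ln(1+ε), any n > N keeps x^(1/n) - 1 below ε simultaneously across the interval.

```python
import math

a, eps = 5.0, 0.01                       # illustrative choices
N = math.log(a) / math.log(1 + eps)      # threshold from the recap
n = math.ceil(N) + 1                     # any n > N

# the bound holds simultaneously for every sampled x in (1, a]
worst = max(x ** (1.0 / n) - 1 for x in (1.001, 2.0, 3.5, a))
print(worst < eps)  # True
```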
 
  • #24
Felafel said:
i really don't seem to get this topic
It's not easy! Do you now understand in what way uniform convergence is a stronger condition? Do you understand the logical fallacy your earlier arguments committed? (I think not, because you are still doing it!)
in particular, i want

##-\epsilon<-1+x^{1/n}<\epsilon## and ##-\epsilon<n^{-1/x}<\epsilon##
... bearing in mind that it could happen that there's some cancellation, so the sum of the functions might stay within bounds even when the individual functions don't.
let's consider the first one:

##-\epsilon+1<x^{1/n}<\epsilon+1## ##\Rightarrow## ##\frac{\ln(-\epsilon+1)}{\ln x}<\frac{1}{n}<\frac{\ln(\epsilon+1)}{\ln x}##
In the last step there you divided by ln(x). Could that be negative? If it is, what has gone wrong?
1/n is positive, ln(ϵ+1) is positive, ln(-ϵ+1) is negative.
Therefore this inequality holds iff x>1; if x=1 or x<1 there wouldn't exist any suitable n.
Let's assume, then, ##x \in (1,a]##. I can delete the left-hand side of the inequality, which now holds for every n.
The only condition left is:
##n>\frac{\ln x}{\ln(\epsilon+1)}##. ln being an increasing function, it has maximum at a, so the inequality is true for ##n: n>\frac{\ln(a)}{\ln(\epsilon+1)}##
That is sort of right for the x>=1 case, but you must make the validity of the argument clear by running it in the orthodox direction: Given ε > 0, choose ##N = \frac{\ln(a)}{\ln(\epsilon+1)}##, and show that for n > N we have ##0 < -1+x^{1/n}<\epsilon##. As your argument stands, you are still committing the logical fallacy I pointed out before. You are going "we want to show a > b; if a > b then c > d; c > d is true; so a > b". Can you see that that does not work?
In fact, you are quite close to solving it because your manipulation from a > b to c > d is reversible, so you could have argued "we want to show a > b; a > b if and only if c > d; c > d is true; so a > b", which is valid.
You still have a problem with x < 1, as I noted above.
let's now look at
##-ϵ<n^{-1/x}<ϵ## the left part is always true. x must be >1 as previously said, so

##(n^{-1/x})^{-x}>\epsilon^{-x}##
You have the same problem with the deductions going the wrong way. Also, you have reversed the inequality sign here. Is that valid for all x in the range?
 
  • #25
when you say
haruspex said:
... bearing in mind that it could happen that there's some cancellation, so the sum of the functions might stay within bounds even when the individual functions don't.

do you mean I should study the convergence of the associated series afterwards, and see if this happens?


haruspex said:
In the last step there you divided by ln(x). Could that be negative? If it is, what has gone wrong?

yes, I didn't realize it could be negative (case x<1), so I should have changed the inequality's signs and examined the two different cases, x<1 and x>1, separately


haruspex said:
In fact, you are quite close to solving it because your manipulation from a > b to c > d is reversible, so you could have argued "we want to show a > b; a > b if and only if c > d; c > d is true; so a > b", which is valid.
do you mean the result I have found (for the x>1 case) is correct, but according to my wording it seems that I did it the other way round?

also, for x=1, ln x=0, so I can't divide by ln x. Should I assume from this that if x=1 there doesn't exist a suitable n to make the sequence converge?

haruspex said:
Also, you have reversed the inequality sign here. Is that valid for all x in the range?
no, actually, it's valid only for x>1;
if x<1 I should have ##(n^{-1/x})^{-x}<\epsilon^{-x}##

if you think what I've written in this post is correct I'll try to recap and sort things out in the next one, finally :)

and thanks again
 
  • #26
Felafel said:
do you mean I should study the convergence of the associated series afterwards, and see if this happens?
That's the sort of thing. More precisely, if there is some value of x (maybe ∞) that has to be excluded to get uniform convergence, the same value for both functions, you would need to show that it also needs to be excluded to get uniform convergence of the sum of the functions.
do you mean the result I have found (for the x>1 case) is correct but according to my wording it seems that i did it the other way round?
I mean that your wording did not constitute a logical chain of argument leading to your conclusion. It was analogous to the false syllogism "Socrates was entitled to vote, all adult male citizens of Athens were entitled to vote, therefore Socrates was an adult male citizen of Athens". The conclusion is right but the argument fallacious. But if we know that only adult male citizens of Athens were entitled to vote we can develop a valid argument.
I strongly encourage you to study the argument you gave until you can see that it had this flaw.
also, for x=1, ln x=0, so I can't divide by ln x. Should I assume from this that if x=1 there doesn't exist a suitable n to make the sequence converge?
You can avoid the division by ln(x) by rearranging the algebra a little.

if you think what I've written in this post is correct i'll try to recap and sort things out in the next one, finally :)
Progress, definite progress.
 

