Are Delta and Epsilon Formulas Universally Applicable for Polynomial Limits?

  • Context: Graduate
  • Thread starter: Orion1
  • Tags: Limits

Discussion Overview

The discussion revolves around the applicability of delta and epsilon formulas for determining limits of polynomial functions. Participants explore the definitions and relationships between delta and epsilon in the context of limits, particularly focusing on polynomial functions and their behavior as they approach specific values.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant presents a formula for delta and epsilon in the context of polynomials, suggesting that these formulas have consistently worked for their assigned problems.
  • Another participant challenges the correctness of the proposed formulas, arguing that rearranging the terms shows the nth root must be applied to the coefficient as well as to epsilon.
  • A participant shares a specific limit problem and the corresponding delta and epsilon relationship, illustrating their approach to proving limits using a polynomial theorem.
  • Another participant critiques the generalization of the theorem, emphasizing the need for careful consideration of specific polynomial forms and the limitations of the proposed approach.
  • Some participants express skepticism about the validity of the proposed polynomial theorem, questioning its clarity and applicability to a broader range of polynomial functions.
  • There are discussions about the assumptions required for the theorem to hold, including the relationship between the coefficients and the behavior of the polynomial near the limit point.
  • One participant suggests that the theorem appears to work for specific cases but may not be universally applicable, highlighting the need for more rigorous definitions and proofs.

Areas of Agreement / Disagreement

Participants do not reach a consensus on the validity of the proposed delta and epsilon formulas or the polynomial theorem. Multiple competing views are presented, with some participants supporting the formulas while others challenge their correctness and applicability.

Contextual Notes

Limitations include the unclear definitions of terms used in the proposed theorem, the dependence on specific polynomial forms, and the unresolved mathematical steps in the arguments presented. The discussion reflects a range of assumptions and conditions that affect the applicability of the formulas.

Orion1

I have made some observations regarding the Precise Limit Definition.

For any given polynomial:
[tex]ax^n[/tex]

The solution for delta is:
[tex]\boxed{\delta = \frac{\epsilon^{\frac{1}{n}}}{|a|}}[/tex]

The solution for epsilon is:
[tex]\boxed{\epsilon = (|a| \delta)^n}[/tex]

My Calculus textbook determines the values for delta and epsilon experimentally, based upon the primary numerator function; however, these equations have worked for every problem that was assigned to me.

Are these solutions correct?

Is it possible that there is a theorem that determines the solutions for ALL deltas and epsilons? :rolleyes:
 
Filling in the gaps: you mean that if |x| < d then |ax^n| < e (d and e being delta and epsilon)? Well, as given, what you wrote is incorrect; just rearrange the second formula: you need to take the nth root of |a| as well, not just of e.
 

Here is a typical problem given in class and already corrected by my Calculus professor:

[tex]\lim_{x \rightarrow 3} \frac{x}{5} = \frac{3}{5}[/tex]
[tex]\left| \frac{x}{5} - \frac{3}{5} \right| < \epsilon \; \text{when} \; 0 < |x - 3| < \delta[/tex]
[tex]\frac{1}{5} |x - 3| < \epsilon \Rightarrow |x - 3| < 5 \epsilon[/tex]
[tex]\delta = 5 \epsilon[/tex]
[tex]\text{Let} \; \epsilon > 0 \; \text{when} \; \delta = 5 \epsilon[/tex]
[tex]\text{If} \; 0 < |x - 3| < \delta \; \text{when} \; \left| \frac{x}{5} - \frac{3}{5} \right| = \frac{1}{5} |x - 3| < \frac{\delta}{5}[/tex]
[tex]\boxed{ \frac{\delta}{5} = \frac{1}{5} (5 \epsilon) = \epsilon}[/tex]
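This choice of delta can be spot-checked numerically; a minimal sketch (the function name and sampling scheme are illustrative, not from the textbook):

```python
# Numerical spot-check of the choice delta = 5*epsilon for
# lim_{x -> 3} x/5 = 3/5: every x with 0 < |x - 3| < delta
# should satisfy |x/5 - 3/5| < epsilon.

def check_delta(epsilon, samples=1000):
    delta = 5 * epsilon
    for i in range(1, samples):
        # sweep the open interval (3 - delta, 3 + delta)
        x = 3 - delta + 2 * delta * i / samples
        if 0 < abs(x - 3) < delta:
            assert abs(x / 5 - 3 / 5) < epsilon
    return True

print(all(check_delta(eps) for eps in (1.0, 0.1, 1e-3, 1e-6)))  # True
```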

Here is my polynomial theorem:
[tex]\lim_{x \rightarrow a} f(x) = L[/tex]
[tex]|f(x) - L| < \epsilon \; \text{when} \; 0 < |x - a| < \delta[/tex]
[tex]|f(x) - L| = |a_1(x - a)^n|[/tex]
[tex]|a_1 x^n - L| < \epsilon \; \text{when} \; 0 < |x - a| < \delta[/tex]
[tex]\delta = \left( \frac{\epsilon}{|a_1|} \right)^{\frac{1}{n}}[/tex]
[tex]\text{Let} \; \epsilon > 0 \; \text{when} \; \delta = \left( \frac{\epsilon}{|a_1|} \right)^{\frac{1}{n}}[/tex]
[tex]\text{If} \; 0 < |x - a| < \delta \; \text{when} \; |a_1 x^n - L| < |a_1| \delta^n[/tex]
[tex]\boxed{|a_1| \delta^n = |a_1| \left[ \left( \frac{\epsilon}{|a_1|} \right)^{\frac{1}{n}} \right]^n = \epsilon}[/tex]

 
consider f(x)=x^2/2 and the limit as x tends to 0.

Given epsilon, *you* want to let d=2sqrt(e) to conclude that

when |x|<d then |x^2/2|<e

but that isn't true. What is true is that |x^2/2|<2e
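A quick numerical sketch of this point (the values of e and the sample points are arbitrary):

```python
import math

# f(x) = x**2/2 and the limit 0 as x -> 0.  The proposed formula gives
# d = 2*sqrt(e); the working choice is d = sqrt(2*e).

e = 0.01
d_wrong = 2 * math.sqrt(e)      # 0.2
d_right = math.sqrt(2 * e)      # ~0.1414

x_bad = 0.99 * d_wrong          # a point with |x| < d_wrong
print(x_bad**2 / 2 > e)         # True: |x**2/2| can exceed e

x_ok = 0.99 * d_right           # a point with |x| < d_right
print(x_ok**2 / 2 < e)          # True: here the bound e holds
```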

Here are some more details:

Given e>0 you can by some algorithmic method work out d in terms of e for *some* polys as I will explain below.

Now, what your professor is doing (hopefully with more words on the board) is something for a specific type of polynomial (one for which f(x)-L factors nicely), and you've picked the wrong generalization.

Firstly, apply it to the case x^2/2, i.e. where a_1 fails to be 1.

If |x|<d then |x^2/2| <d^2/2.

if d^2/2<e we've got the result, ie d^2<2e (or d<sqrt(2e) if you like).

This shows that in the particularly nice case when f(x)-L = k(x-m)^n, if |x-m|<d then |f(x)-L|<kd^n, i.e. we need to let d=(e/k)^{1/n}

Notice how the constant is subsumed inside the n'th root not outside as you had it?

Now, in general what happens? Well, we can't be so specific.

Consider f(x)-L. To prove this converges to zero as x tends to m, we are assuming that x-m is a root of f(x)-L, i.e. that f(x)-L=(x-m)g(x) for some polynomial g. It so happens that the cases you know are nice.

So how would we prove this actually converges to 0? The key is that g(x) is bounded near x=m. This bound *is* realizable in terms of the coefficients of g(x), which are realizable in terms of the coefficients of f and the number m. If you want a rigorous bound for this then you can use various tools such as:

|b_0 + b_1x +...+ b_rx^r|< |b_0| + |b_1||x| +...+ |b_r||x|^r

which in turn is less than (r+1)max{|b_i|}max{1,|x|^r}

the first max is in terms of the coefficients, and the second can be worked out since we may assume we are looking for a d<1, so that |x-m|<1, i.e. m-1<x<m+1. Let us call this bound M; then take d less than min(1, e/M) and we're done.
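This recipe can be sketched in a few lines; the helper name and the example polynomial below are mine, chosen only for illustration:

```python
# Sketch of the recipe above: write f(x) - L = (x - m) * g(x), bound
# |g| on |x - m| < 1 by M = (r + 1) * max|b_i| * max(1, (|m| + 1)**r),
# then take delta = min(1, epsilon / M).

def bound_near(coeffs, m):
    """Crude bound for |g(x)| on |x - m| < 1; coeffs = [b_0, ..., b_r]."""
    r = len(coeffs) - 1
    return (r + 1) * max(abs(b) for b in coeffs) * max(1.0, (abs(m) + 1) ** r)

# Example: f(x) = x**2, m = 3, L = 9, so g(x) = x + 3, coeffs [3, 1].
m, L = 3.0, 9.0
M = bound_near([3.0, 1.0], m)   # 2 * 3 * 4 = 24

epsilon = 1e-3
delta = min(1.0, epsilon / M)
for i in range(1, 1000):
    x = m - delta + 2 * delta * i / 1000    # sweep (m - delta, m + delta)
    assert abs(x**2 - L) < epsilon          # |x - m| * |g(x)| < delta * M
print("M =", M, "delta =", delta)
```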

But it isn't as nice in general as what you want.
 


Interesting; my Calculus textbook does not list a single example of a polynomial with a fractional coefficient raised to an integer power in this specific section, which explains a lot.

I will try again with a better random example using the new theorem.

[tex]\lim_{x \rightarrow 2} \frac{x^3}{4} = 2[/tex]
[tex]a = 2 \; \; \; a_1 = \frac{1}{4} \; \; \; n = 3 \; \; \; L = 2[/tex]
[tex]\left| \frac{x^3}{4} - 2 \right| < \epsilon \; \text{when} \; 0 < |x - 2| < \delta[/tex]
[tex]\delta = \left( |4| \epsilon \right)^{\frac{1}{3}}[/tex]
[tex]\text{Given} \; \epsilon > 0 \; \text{let} \; \delta = \left( |4| \epsilon \right)^{\frac{1}{3}}[/tex]
[tex]\text{If} \; 0 < |x - 2| < \delta \; \text{when} \; \left| \frac{x^3}{4} - 2 \right| < \frac{\delta^3}{|4|}[/tex]
[tex]\boxed{\frac{\delta^3}{|4|} = \left| \frac{1}{4} \right| \left[ \left( |4| \epsilon \right)^{\frac{1}{3}} \right]^3 = \epsilon}[/tex]

Is this solution correct?
 
x^3-8 is not (x-2)^3, so what you've written is not correct. If you were actually looking at the polynomial [tex]\frac{(x-2)^3}{4}[/tex] then what is there is 'the right idea', but I dislike the presentation. That is just personal: I prefer words to unmotivated symbols. And it should read

given e>0 let d=...

and last time I checked, 4 was a positive number.

Of course, the idea of carefully choosing delta so that something is less than *exactly* epsilon is flawed and should be discouraged, in my opinion: it distracts from what analysis is really saying. If you state: suppose X is less than d(>0), and then show Y is less than 2d^2, then that is more than adequate, since obviously d can be chosen so that 2d^2<e whatever e(>0) is.
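The failure shows up immediately in numbers; a tiny sketch (the tolerance e is arbitrary):

```python
# f(x) = x**3/4, limit 2 as x -> 2, with the proposed delta = (4*e)**(1/3).
# The proposal implicitly assumes |x**3/4 - 2| = |x - 2|**3 / 4, but
# x**3 - 8 != (x - 2)**3.

e = 1e-3
delta = (4 * e) ** (1 / 3)     # ~0.159
x = 2 + 0.99 * delta           # a point with |x - 2| < delta
print(abs(x**3 / 4 - 2) > e)   # True: the error is far larger than e
```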
 

It is a very good theorem, in fact it predicts the solutions to every problem assigned in this section of my Calculus textbook, therefore I cannot dismiss it that easily.

Polynomial theorem:
[tex]\lim_{x \rightarrow a} f(x) = L[/tex]
[tex]|f(x) - L| < \epsilon \; \text{when} \; 0 < |x - a| < \delta[/tex]
[tex]|a_1 x^n - L| < \epsilon \; \text{when} \; 0 < |x - a| < \delta[/tex]
[tex]\delta = \left( \frac{\epsilon}{|a_1|} \right)^{\frac{1}{n}}[/tex]
[tex]\text{Given} \; \epsilon > 0 \; \text{let} \; \delta = \left( \frac{\epsilon}{|a_1|} \right)^{\frac{1}{n}}[/tex]
[tex]\text{If} \; 0 < |x - a| < \delta \; \text{when} \; |a_1 x^n - L| < |a_1| \delta^n[/tex]
[tex]\boxed{|a_1| \delta^n = |a_1| \left[ \left( \frac{\epsilon}{|a_1|} \right)^{\frac{1}{n}} \right]^n = \epsilon}[/tex]

Any Calculus I students interested in disproving this polynomial theorem?
 
It isn't a polynomial theorem, is it? It appears that you're only saying 'it' 'works' if f(x)=ax^n, and you still seem to believe that x^n-k^n=(x-k)^n, which is a major flaw in the argument.

You could at least try rewriting it so that it makes more sense. What is f(x)? What is a_1? Is it that f(x)=a_1x^n? Usually a_1 would be the coefficient of x in f(x), but even that is a guess, since we don't know what f(x) is. What is the relationship between L, a, and a_1? Who knows? It is very unclear what you're even trying to prove, and what you're assuming.

As written, your statement applies to f(x)=x, a=0 and L=1, so you really can't be saying that as x tends to zero, f(x), which is just x, tends to 1. So there must be some more restrictions, such as a being a root of f(x)-L, mustn't there? Or are you assuming that f(x) tends to L as x tends to a? Then, if so, what are you proving, and what are you proving it about? It's just completely impossible to decide what you're doing. Indeed, I've no idea what the statement of the 'polynomial theorem' is.

Try writing:

Theorem: STATEMENT OF THEOREM

Proof: STATEMENT OF PROOF OF SAID THEOREM

The best we can do is say: if f(x)= k(x-a)^n then we can prove from first principles that it tends to zero as x tends to a, since, given e>0, let d=(e/|k|)^{1/n}; then |f(x)|<e when |x-a|<d. But that is truly not a very hard theorem, is it?
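That first-principles statement can be checked numerically; a sketch with arbitrarily chosen parameters:

```python
# Check: f(x) = k * (x - a)**n tends to 0 as x -> a, using
# delta = (e / |k|)**(1/n).  abs() guards a negative coefficient k.

def check(k, a, n, e, samples=1000):
    delta = (e / abs(k)) ** (1 / n)
    for i in range(1, samples):
        x = a - delta + 2 * delta * i / samples  # sweep (a - delta, a + delta)
        if 0 < abs(x - a) < delta:
            assert abs(k * (x - a) ** n) < e
    return True

print(check(k=0.5, a=2.0, n=3, e=1e-4))  # True
```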
 
This is inviting, Matt :biggrin:

matt grime said:
Now, in general what happens? Well, we can't be so specific.

Consider f(x)-L. To prove this converges to zero as x tends to m, we are assuming that x-m is a root of f(x)-L, i.e. that f(x)-L=(x-m)g(x) for some polynomial g. It so happens that the cases you know are nice.

So how would we prove this actually converges to 0? The key is that g(x) is bounded near x=m. This bound *is* realizable in terms of the coefficients of g(x), which are realizable in terms of the coefficients of f and the number m. If you want a rigorous bound for this then you can use various tools such as:

|b_0 + b_1x +...+ b_rx^r|< |b_0| + |b_1||x| +...+ |b_r||x|^r

which in turn is less than (r+1)max{|b_i|}max{1,|x|^r}

the first max is in terms of the coefficients, and the second can be worked out since we may assume we are looking for a d<1, so that |x-m|<1, i.e. m-1<x<m+1. Let us call this bound M; then take d less than min(1, e/M) and we're done.

But it isn't as nice in general as what you want.

Theorem All univariate polynomials with real coefficients are continuous.

pf: Let [tex]f(x)=\sum_{q=0}^{n}a_qx^q[/tex]

Then [tex]\lim_{x\rightarrow k}f(x)=f(k)[/tex] since

[tex]\lim_{x\rightarrow k}f(x)=f(k)\Leftrightarrow\forall \epsilon >0,\exists \delta >0 \mbox{ such that }|x-k|<\delta \Rightarrow \left| f(x) - f(k)\right| <\epsilon[/tex]

[tex]\left| f(x) - f(k)\right| = \left| \sum_{q=0}^{n}a_qx^q - \sum_{q=0}^{n}a_qk^q\right| = \left| \sum_{q=1}^{n}a_q\left( x^q - k^q\right) \right| = \left| x - k\right| \left| \sum_{q=1}^{n}a_q\sum_{r=0}^{q-1} x^{q-r-1}k^{r} \right| \leq \left| x - k\right| \sum_{q=1}^{n} |a_q| \sum_{r=0}^{q-1} \left| x^{q-r-1} k^{r} \right|[/tex]

to be continued...
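The factorisation of x^q - k^q used in the last step can be verified exactly with rational arithmetic; a small sketch with arbitrary x and k:

```python
from fractions import Fraction

# Exact check of the identity used above:
# x**q - k**q == (x - k) * sum_{r=0}^{q-1} x**(q-r-1) * k**r

x, k = Fraction(7, 3), Fraction(2, 5)
for q in range(1, 8):
    rhs = (x - k) * sum(x ** (q - r - 1) * k ** r for r in range(q))
    assert x ** q - k ** q == rhs
print("identity holds for q = 1..7")
```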
 
Why would you want to prove it like that? It suffices to show that x^n converges, and then we are done; that is easy, if messy, to do rigorously. I've no idea what univariate means and, to be honest, why would you specify the reals? The same proof works over the complex numbers.
 
Univariate, as opposed to multivariate: of one variable as opposed to many.
Over the reals, to appeal to our present audience.
 
