How Do You Prove This Limit Equals the Derivative?

  • Thread starter: jgens (Gold Member)
  • Tags: Derivatives

Homework Statement

If f is differentiable at a, prove the following:
\lim_{h,k \to 0^+} \frac{f(a+h)-f(a-k)}{h+k} = f'(a)

Homework Equations

N/A

The Attempt at a Solution

At the moment, I don't have a complete proof worked out, but I was wondering if someone could comment on the validity of this reasoning . . .

Clearly, for every \varepsilon > 0, there exists a \delta_1 > 0 such that if 0 < k < \delta_1 then . . .

\left|\frac{f(a+h)-f(a-k)}{h+k} - \frac{f(a+h) - f(a)}{h} \right| < \frac{\varepsilon}{2}*

Moreover, for this same \varepsilon, there must be some other number \delta_2 > 0 such that whenever 0 < h < \delta_2, it follows that . . .

\left|\frac{f(a+h)-f(a)}{h} - f'(a) \right| < \frac{\varepsilon}{2}

From this, so long as 0 < k < \delta_1 and 0 < h < \delta_2, we have

\left|\frac{f(a+h)-f(a-k)}{h+k} - f'(a) \right| < \varepsilon

as desired.

*I realize that this is the point that really needs some work, but I think that it should be a trivial (albeit potentially long-winded) exercise to find the proper \delta.
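In the meantime, a quick numerical sanity check (not a proof; f = sin and a = 1 are arbitrary choices) suggests the claimed limit behaves as stated even when h and k shrink at different rates:

```python
import math

# Sanity check (illustration only): for f differentiable at a,
# (f(a+h) - f(a-k)) / (h + k) should approach f'(a) as h, k -> 0+,
# even if h and k shrink at different rates.
f = math.sin
fprime = math.cos  # exact derivative of sin
a = 1.0

for n in range(1, 6):
    h = 10.0 ** (-n)          # h and k deliberately shrink at
    k = 0.5 * 10.0 ** (-n - 1)  # different rates
    quotient = (f(a + h) - f(a - k)) / (h + k)
    print(n, abs(quotient - fprime(a)))
```

The printed error shrinks as n grows, which is consistent with (though of course no substitute for) the ε-δ argument above.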
 
Looks fine, just remember that the first inequality with a given \delta_1 must hold for all 0 < h < \delta_2.
 


Well, the first inequality should hold for arbitrary h > 0, so I should have my bases covered. Thanks! I'll work on finding a proper \delta now.
 


Since we don't have any global assumptions on f, I don't think it's possible to find \delta for arbitrary h. If, for example, for some h>0 function
\phi_h(k)=\frac{f(a+h)-f(a-k)}{h+k}
is discontinuous at 0, it's hopeless.
 


Well, I'm working on my phone right now, so I'm not sure whether this is correct, but here are my thoughts. For every \varepsilon > 0, it's clearly possible to find a \delta_1 > 0 such that if 0 < h < \delta_1, then the second inequality holds. Based on this choice of h and the continuity of f at a, it's then possible to choose a \delta_2 > 0 such that if 0 < k < \delta_2, then the first inequality holds. This would show that for every \varepsilon > 0, it's possible to find the two numbers \delta > 0 needed for the final inequality to hold. However, looking through this, the dependence of \delta_2 on h seems like it might be problematic. Ugh :(

Edit: I'm fairly certain that it's possible to find the necessary \deltas that I allude to in this post; the question is just whether the second one's reliance on h is problematic or a non-issue.
 


Thanks! I'll try to post a fully worked out proof later tonight when I have access to a computer. If you don't mind, could you explain why the delta with dependence on h is problematic? I understand why it's certainly problematic if the delta is dependent on the same variable it's placing a bound on, but I can't figure it out exactly when it's dependent on a different variable. Thanks again!
 


I would probably be able to replace delta 2's dependence on h with a dependence on delta 1. I'm not sure if this makes the situation any better, but I could really use some help here. Thanks.
 


Sorry for deleting that post. Dependence is indeed problematic because of the definition of the double limit, but I realized I made a critical mistake in my solution. I'll think about it and post if I find anything useful. Good luck!
 


Try this. Can you show min(L1,L2)<=(h*L1+k*L2)/(h+k)<=max(L1,L2) for h and k positive? Can you think how you might use that?
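For what it's worth, the hinted inequality is easy to spot-check numerically before proving it: (h*L1 + k*L2)/(h + k) is a weighted average of L1 and L2 with nonnegative weights h/(h+k) and k/(h+k) summing to 1. (The sampling ranges below are arbitrary choices for illustration.)

```python
import random

# Spot-check (not a proof): for h, k > 0 the weighted average
# (h*L1 + k*L2) / (h + k) lies between min(L1, L2) and max(L1, L2),
# since the weights h/(h+k) and k/(h+k) are nonnegative and sum to 1.
random.seed(0)  # deterministic trials
for _ in range(10_000):
    h, k = random.uniform(1e-6, 10.0), random.uniform(1e-6, 10.0)
    L1, L2 = random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)
    avg = (h * L1 + k * L2) / (h + k)
    tol = 1e-9  # small slack for floating-point rounding
    assert min(L1, L2) - tol <= avg <= max(L1, L2) + tol
print("10000 trials passed")
```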
 
  • #10


Thanks for the help Dick! First, we need only note that ...

h\min{(L_1,L_2)} + k\min{(L_1,L_2)} \leq h \dot L_1 + k \dot L_2 \leq h\max{(L_1,L_2)} + k\max{(L_1,L_2)}

From which the inequality that you posted follows immediately. Next, we define L_1 and L_2 such that ...

L_1 = \frac{f(a+h)-f(a)}{h}
L_2 = \frac{f(a)-f(a-k)}{k}

Using these definitions, we have that ...

\min{(L_1,L_2)} \leq \frac{f(a+h)-f(a)+f(a)-f(a-k)}{h+k} = \frac{f(a+h)-f(a-k)}{h+k} \leq \max{(L_1,L_2)}

And then using the squeeze theorem or explicitly writing out the \varepsilon-\delta proof, we prove the desired result. Is this what you were getting at Dick?
 
  • #11


Sure, it's a squeeze theorem. And you know how to apply it. So why is min(L1,L2)<=(h*L1+k*L2)/(h+k)<=max(L1,L2) for h,k>0? Do you get that part?
 
  • #12


Dick said:
So why is min(L1,L2)<=(h*L1+k*L2)/(h+k)<=max(L1,L2) for h,k>0? Do you get that part?

The first part of my post was meant to address that; I'm sorry if it wasn't clear. To outline it again without the pain of the maxes/mins, I'll assume (without loss of generality) that L_1 \leq L_2. Clearly ...

L_1(h+k) \leq h \dot L_1 + k \dot L_2 \leq L_2(h+k)

Dividing through by h+k we get ...

L_1 \leq \frac{h \dot L_1 + k \dot L_2}{h+k} \leq L_2

as desired.
 
  • #13


losiu99 said:
Sorry for deleting that post. Dependence is indeed problematic because of the definition of double limit, but I realized I made a critical mistake in my solution. I'll think about and post if I find anything useful. Good luck!

Thanks for your help anyway. I'm not at all familiar with the definition of a limit involving two variables (this is a single-variable calc course), which explains why I was having such a difficult time understanding exactly why the dependence was so problematic; after looking up the correct definition, it makes complete sense.
 
  • #14


jgens said:
The first part of my post was meant to address that, I'm sorry if it wasn't clear. To outline it again without that pain of the maxs/mins, I'll assume (without loss of generality) that L_1 \leq L_2. Clearly ...

L_1(h+k) \leq h \dot L_1 + k \dot L_2 \leq L_2(h+k)

Dividing through by h+k we get ...

L_1 \leq \frac{h \dot L_1 + k \dot L_2}{h+k} \leq L_2

as desired.

That's a bit of circular reasoning, and why are you putting dots on the L's? I know L1<=(h*L1+k*L2)/(h+k)<=L2 is true for L1<=L2 and h,k>0. But do you know why? I'll give you a hint: it's the same reason why L1<=t*L1+(1-t)*L2<=L2 for t in [0,1].
 
  • #15


Sorry about the dots, they were supposed to be multiplication signs. And what's particularly circular about my reasoning? If I choose arbitrary h,k > 0 and assume that L_1 \leq L_2, then hL_1 + kL_1 \leq hL_1 + kL_2 since kL_1 \leq kL_2. The other inequality follows from similar reasoning.

I'm off to bed now, so I'll follow up on your last idea tomorrow morning. Thanks again Dick!
 
  • #16


Oh, nothing circular about your thinking. More of a problem with my following what you are doing. I was focusing on the dots. Don't worry about it.
 
  • #17


We can solve it similarly without any min/max reasoning. Observe that
\left|\frac{f(a+h)-f(a-k)}{h+k}-f'(a)\right| = \left|\frac{f(a+h)-f(a) + f(a)-f(a-k)}{h+k}-f'(a)\right|
= \left|\frac{f(a+h)-f(a)-hf'(a) + f(a)-f(a-k)-kf'(a)}{h+k}\right|
\leq \frac{h}{h+k}\left|\frac{f(a+h)-f(a)}{h}-f'(a)\right| + \frac{k}{h+k}\left|\frac{f(a-k)-f(a)}{-k}-f'(a)\right|
\leq \left|\frac{f(a+h)-f(a)}{h}-f'(a)\right| + \left|\frac{f(a-k)-f(a)}{-k}-f'(a)\right|
Sorry for not posting this earlier; I wanted to complete your original idea.
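This final bound is also easy to check numerically. A small sketch (illustration only; f = exp and a = 0 are arbitrary choices, so f'(a) = 1) confirming that the two-sided error never exceeds the sum of the two one-sided errors:

```python
import math

# Check (not a proof) of the bound above: the error of the two-sided
# quotient is at most the sum of the two one-sided errors.
# f = exp and a = 0 are arbitrary choices; f'(0) = 1.
f, a, deriv = math.exp, 0.0, 1.0

for h, k in [(1e-2, 1e-3), (1e-3, 1e-2), (1e-4, 1e-4)]:
    two_sided = abs((f(a + h) - f(a - k)) / (h + k) - deriv)
    right = abs((f(a + h) - f(a)) / h - deriv)
    left = abs((f(a - k) - f(a)) / (-k) - deriv)
    assert two_sided <= right + left + 1e-12  # slack for rounding
    print(h, k, two_sided, right + left)
```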
 
  • #18


Thanks again Dick! And nice proof losiu99! I really appreciate all the help that you've given me.
 