How can the limit of a quotient of functions be proven when the denominator's limit is non-zero?

daudaudaudau

Homework Statement


If ##\lim_{z\rightarrow z_0}f(z)=A## and ##\lim_{z\rightarrow z_0}g(z)=B##, prove that ##\lim_{z\rightarrow z_0}\frac{f(z)}{g(z)}=\frac{A}{B}## for ##B\neq 0##.

The Attempt at a Solution



I write ##f(z)=A+\epsilon_1(z)## and ##g(z)=B+\epsilon_2(z)##, where the epsilon functions tend to zero as ##z\rightarrow z_0##. I now write

$$\left|\frac{f(z)}{g(z)}-\frac{A}{B}\right|=\left|\frac{A+\epsilon_1(z)}{B+\epsilon_2(z)}-\frac{A}{B}\right|=\left|\frac{AB+B\epsilon_1(z)-AB-A\epsilon_2(z)}{B^2+B\epsilon_2(z)}\right|\le\frac{|B\epsilon_1(z)|+|A\epsilon_2(z)|}{|B^2+B\epsilon_2(z)|}$$
Since the above can be made arbitrarily small by letting ##z## tend to ##z_0##, I am done. Or what do you think?
 
I think you are not done. Given a positive number ##\epsilon##, you have to demonstrate the existence of a positive number ##\delta## such that ##0 < |z - z_0| < \delta## implies ##\left|\frac{f(z)}{g(z)} - \frac{A}{B}\right| < \epsilon##.
 
Looks good to me. Check that ##\lim_{z\rightarrow z_0} f(z) = A## is equivalent to ##f(z) = A+\epsilon_1(z)##, where the epsilon function goes to zero as ##z\rightarrow z_0##; then you are in fact done.
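For completeness, the ##\epsilon##–##\delta## bookkeeping that Mark44 asks for can be sketched as follows (one standard choice of constants; the specific bounds ##|B|/2##, ##|B|\epsilon/4##, etc. are my own, not from the thread):

```latex
% Since \epsilon_2(z)\to 0, pick \delta_1>0 so that 0<|z-z_0|<\delta_1 gives
% |\epsilon_2(z)|<|B|/2, and hence
%   |B^2+B\epsilon_2(z)| \ge |B|^2-|B|\,|\epsilon_2(z)| > |B|^2/2.
% Given \epsilon>0, pick \delta_2>0 so that 0<|z-z_0|<\delta_2 gives
%   |\epsilon_1(z)| < |B|\epsilon/4   and   |\epsilon_2(z)| < \frac{|B|^2\epsilon}{4(|A|+1)}.
% Then with \delta=\min(\delta_1,\delta_2):
\left|\frac{f(z)}{g(z)}-\frac{A}{B}\right|
  \le \frac{|B|\,|\epsilon_1(z)|+|A|\,|\epsilon_2(z)|}{|B|^2/2}
  <   \frac{|B|^2\epsilon/4+|B|^2\epsilon/4}{|B|^2/2}
  =   \epsilon.
```

The key point is that ##B\neq 0## lets the denominator be bounded away from zero near ##z_0##, after which each numerator term is controlled separately.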
 
Mark44:

I have ##\epsilon_1(z)=f'(\xi)(z-z_0)## and ##\epsilon_2(z)=g'(\zeta)(z-z_0)##, so

$$\frac{|B\epsilon_1(z)|+|A\epsilon_2(z)|}{|B^2+B\epsilon_2(z)|}=\frac{\bigl(|Bf'(\xi)|+|Ag'(\zeta)|\bigr)|z-z_0|}{|B^2+Bg'(\zeta)(z-z_0)|}.$$

Now if ##\delta=\frac{1}{k|g'(\zeta)|}##, ##|z-z_0|<\delta##, and ##k>1##, I get

$$\frac{\bigl(|Bf'(\xi)|+|Ag'(\zeta)|\bigr)|z-z_0|}{|B^2+Bg'(\zeta)(z-z_0)|}\le \frac{1}{k}\,\frac{|Bf'(\xi)|+|Ag'(\zeta)|}{|g'(\zeta)|\left(|B|^2-\frac{|B|}{k}\right)}\le\frac{1}{k}\,C,$$

where

$$C=\frac{|Bf'(\xi)|+|Ag'(\zeta)|}{|g'(\zeta)|\left(|B|^2-|B|\right)}.$$
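As a quick numerical sanity check of the quotient limit law itself (my own illustration with hypothetical ##f##, ##g##; not part of the proof), one can watch ##|f(z)/g(z)-A/B|## shrink as ##z\to z_0##:

```python
# Illustration: f(z) = z**2 -> A = 1 and g(z) = z + 1 -> B = 2 as z -> z0 = 1,
# so f(z)/g(z) should approach A/B = 0.5.
def f(z):
    return z ** 2

def g(z):
    return z + 1

z0, A, B = 1.0, 1.0, 2.0
for k in range(1, 6):
    z = z0 + 10 ** (-k)                # approach z0 from the right
    err = abs(f(z) / g(z) - A / B)     # should shrink with |z - z0|
    print(f"|z - z0| = 1e-{k}: |f/g - A/B| = {err:.2e}")
```

The error decreases roughly in proportion to ##|z-z_0|##, consistent with the linear bound derived above.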
 