# Cauchy-Riemann conditions proof

1. Oct 11, 2007

### futurebird

I'm trying to understand the proof for this theorem:

The function f(z) = u(x,y) + iv(x,y) is differentiable at a point z = x + iy of a region in the complex plane if and only if the partial derivatives $$u_{x}$$, $$u_{y}$$, $$v_{x}$$, $$v_{y}$$ are continuous and satisfy the Cauchy-Riemann conditions.

Everything was going GREAT until I got to this part:

This shows that C-R is necessary, but now we must show that it is also sufficient: that is, we must show that if the partials meet the C-R conditions then f(z) is differentiable. Once we show this we will have proved the theorem.

If $$U_{x}$$,$$U_{y}$$,$$V_{x}$$,$$V_{y}$$ are continuous at the point (x, y) then:

$$\Delta u = u_{x} \Delta x + u_{y} \Delta y + \epsilon_{1}| \Delta z|$$
$$\Delta v = v_{x} \Delta x + v_{y} \Delta y + \epsilon_{2}| \Delta z|$$

Where $$| \Delta z|=\sqrt{\Delta x^{2}+\Delta y^{2}}$$

$$\mathop{\lim}\limits_{\Delta z \to 0}\epsilon_{1} =\mathop{\lim}\limits_{\Delta z \to 0}\epsilon_{2}=0$$

and

$$\Delta u = u(x+ \Delta x, y+ \Delta y)-u(x,y)$$
$$\Delta v = v(x+ \Delta x, y+ \Delta y)-v(x,y)$$

Calling $$\Delta f = \Delta u + i \Delta v$$, we have

$$\frac{\Delta f}{\Delta z}=\frac{\Delta u}{\Delta z}+i\frac{\Delta v}{\Delta z}$$
$$=u_{x}\frac{\Delta x}{\Delta z}+u_{y}\frac{\Delta y}{\Delta z} + iv_{x}\frac{\Delta x}{\Delta z}+iv_{y}\frac{\Delta y}{\Delta z}+ (\epsilon_{1} +i\epsilon_{2})\frac{|\Delta z|}{\Delta z}$$

... I was able to make sense of the proof from this point.

I don't see the connection between "If $$u_{x}$$, $$u_{y}$$, $$v_{x}$$, $$v_{y}$$ are continuous at the point (x, y)" and

$$\Delta u = u_{x} \Delta x + u_{y} \Delta y + \epsilon_{1}| \Delta z|$$
$$\Delta v = v_{x} \Delta x + v_{y} \Delta y + \epsilon_{2}| \Delta z|$$

So everything after that is just moving deltas around... and I wish I knew why.

Where is this coming from? My book says it's a "famous result from analysis" but that just made me feel dumber for not knowing what it was. (I have not had real analysis, I'm taking it next term.) I looked in a real analysis book but I don't know what to look for... so that didn't work.

Can someone help me understand this step in the proof in a simple way that gets at the big idea behind the step? I have 16 more proofs to study for this midterm, so I don't want to get too bogged down... at the same time I don't want to resort to rote memorization that won't serve me well later when I take analysis and learn what the heck this is all about.

2. Oct 11, 2007

### mathwonk

i just explained this somewhere else near here.

ah yes, in complex analysis: holomorphic functions.

3. Oct 11, 2007

### futurebird

I read that response, but that's not what has me confused. What I need to know is what theorem or idea they used in this one step of this proof...

4. Oct 12, 2007

### SiddharthM

I'm not sure one would encounter this in a real analysis class either. It's a result from multivariate analysis. Real analysis classes usually focus on the line and general metric spaces.

But yes, this was one thing I remember not liking in our complex analysis course - we drew results from multivariate analysis. I would not worry about understanding this proof; rather, understand its implications and know what you can use it for, i.e., do more problems. Most complex analysis courses at the undergrad level focus less on theory and more on practice.

5. Oct 12, 2007

### SiddharthM

and i think that's Churchill & Brown's proof - i HATE that textbook! lol

problems are boring, theory is weak.

6. Oct 12, 2007

### futurebird

Our midterm is going to be all proofs and just a few problems. The prof said as much. This is more of a "baby grad" course than an undergrad course.

This is from Ablowitz and Fokas. It's an "OK" book, it tends to rush a bit just to get to the fancy problems. The exercise sets are really really really hard. I love it and HATE it.

7. Oct 12, 2007

### futurebird

If, for the functions u and v, the rate of change of the function with respect to each of the variables, x and y, is continuous, then we can estimate the increment of the function using the increments in x and y and the respective partial derivatives. Since the magnitude of the increment of z is built from the increments of x and y, this estimate will only be off by some small quantity times $$|\Delta z|$$.

And, in the end, the error won't matter since it goes to zero as the increment of z goes to zero.

Is this the general right idea?
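That idea can be checked numerically on a toy example (my own, not from any textbook): take $$u(x,y) = x^{2}-y^{2}$$, the real part of $$z^{2}$$, and measure how big $$\epsilon_{1} = (\Delta u - u_{x}\Delta x - u_{y}\Delta y)/|\Delta z|$$ actually is as the increment shrinks. The function name `eps` is just for illustration.

```python
import math

def u(x, y):
    # u(x, y) = x^2 - y^2, the real part of z^2
    return x * x - y * y

def eps(x, y, dx, dy):
    """The error term epsilon_1 = (du - u_x*dx - u_y*dy) / |dz|."""
    du = u(x + dx, y + dy) - u(x, y)
    linear = 2 * x * dx + (-2 * y) * dy     # u_x = 2x, u_y = -2y
    return (du - linear) / math.hypot(dx, dy)

x, y = 1.0, 2.0
for h in (1e-1, 1e-2, 1e-3, 1e-4):
    # epsilon_1 shrinks roughly in proportion to h, i.e. it -> 0 with dz
    print(h, eps(x, y, h, 2 * h))
```

The printout shows the error term dropping by a factor of ten each time the increment does, which is exactly the "$$\epsilon_{1} \to 0$$ as $$\Delta z \to 0$$" behavior the proof needs.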

8. Oct 14, 2007

### futurebird

Bump.

9. Oct 14, 2007

### mathwonk

your question is uninteresting - you have rejected my attempt to explain the concept and insist on plowing through an unenlightening proof.

10. Oct 14, 2007

### Hurkyl

Staff Emeritus
It's a differential approximation. (a.k.a. a degree-1 Taylor series with remainder)
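Concretely, the "famous result" is the increment theorem for a function of two variables. A sketch of where the $$\epsilon_{1}$$ term comes from, assuming $$u_{x}$$, $$u_{y}$$ are continuous near (x, y): split the increment, apply the one-variable mean value theorem to each piece, then use continuity of the partials.

```latex
\begin{aligned}
\Delta u &= \bigl[u(x+\Delta x,\,y+\Delta y)-u(x,\,y+\Delta y)\bigr]
          + \bigl[u(x,\,y+\Delta y)-u(x,\,y)\bigr] \\
         &= u_x(\xi,\,y+\Delta y)\,\Delta x + u_y(x,\,\eta)\,\Delta y
            \qquad \text{(mean value theorem in each variable)} \\
         &= u_x(x,y)\,\Delta x + u_y(x,y)\,\Delta y \\
         &\quad + \bigl[u_x(\xi,\,y+\Delta y)-u_x(x,y)\bigr]\Delta x
               + \bigl[u_y(x,\eta)-u_y(x,y)\bigr]\Delta y
\end{aligned}
```

Since $$|\Delta x| \le |\Delta z|$$ and $$|\Delta y| \le |\Delta z|$$, the last two terms can be bundled into a single $$\epsilon_{1}|\Delta z|$$, and continuity of $$u_{x}$$, $$u_{y}$$ forces $$\epsilon_{1} \to 0$$ as $$\Delta z \to 0$$ (the intermediate points $$\xi$$, $$\eta$$ are squeezed toward x and y). The same argument applied to v gives the $$\epsilon_{2}$$ line.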
