# Better Understanding of Complex Differentiability

1. Nov 28, 2011

### Poopsilon

So the way I understand complex differentiability and its requirement that the partial derivatives satisfy the Cauchy-Riemann Equations is that we would really like ℂ to have the same nice property as ℝ, that is to say we would really like the derivative to be a linear operator which is itself an element of the field. Thus we put this restriction on our complex derivative and voilà, its Jacobian matrix turns into an element of that special subring of M₂(ℝ), the ring of 2×2 real matrices, which is a field isomorphic to ℂ.
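That identification can be checked numerically: for a holomorphic f, the Jacobian of the corresponding map ℝ² → ℝ² should have the form [[a, −b], [b, a]], i.e. it acts like multiplication by the complex number a + bi. Here is a minimal Python sketch (the function and helper names are mine, and the finite-difference approach is just one way to see it, not something from the thread):

```python
import numpy as np

def f(z):
    # A holomorphic test function; any polynomial in z would do.
    return z**2

def jacobian(f, z, h=1e-6):
    """Numerical Jacobian of f viewed as a map R^2 -> R^2 at z = x + iy."""
    du_dx = (f(z + h).real - f(z - h).real) / (2 * h)
    dv_dx = (f(z + h).imag - f(z - h).imag) / (2 * h)
    du_dy = (f(z + 1j * h).real - f(z - 1j * h).real) / (2 * h)
    dv_dy = (f(z + 1j * h).imag - f(z - 1j * h).imag) / (2 * h)
    return np.array([[du_dx, du_dy], [dv_dx, dv_dy]])

J = jacobian(f, 1.0 + 2.0j)
# For f(z) = z^2 the derivative is 2z, which at z = 1 + 2i is 2 + 4i,
# so J should be [[a, -b], [b, a]] with a = 2, b = 4.
print(J)
```

The off-diagonal antisymmetry of J is exactly the Cauchy-Riemann condition in matrix form.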

Now this restriction allows us to derive all sorts of nice properties about functions which are complex differentiable, the big one being analyticity. On the other hand differentiable functions from ℝ^2 to ℝ^2 seem to get on reasonably well without restrictions on the components of their Jacobian matrices, and I understand that adding the Cauchy-Riemann restriction wouldn't get us anywhere interesting in this case. The question is what would happen if we dropped the Cauchy-Riemann Equations for ℂ? Would we just end up with ℝ^2? Clearly the same functions would be differentiable, but with ℂ we still have a field and not just a vector space, and thus I feel like a theory of this rougher differentiability would still be distinct from that for ℝ^2.

This makes me feel that something deeper or more subtle is going on, something to do with linear algebra I imagine, something maybe to do with the fact that all fields are one dimensional vector-spaces over themselves, I'm not really sure, I can't quite find the correct perspective. Please feel free to ramble, elaborate, digress and/or school me about complex analysis in general or about any misconceptions I may have.

Edit: Now that I think a bit more about it, I imagine we would have some problems taking the limit of the difference quotient, since dropping the Cauchy-Riemann Equations would allow the limit to differ depending on the direction from which we approach the point, which is allowed in the ℝ^2 case. And if I'm remembering correctly, in ℂ it's enough to show that the limits approaching horizontally and vertically (with h·i in place of h) agree to know that approaching from any direction gives the same value, and this is thus equivalent to the C-R Equations. I can't remember though if this is true in the ℝ^2 case, I think not, but somehow this doesn't matter as long as the partial derivatives are continuous, so is it the multiplicative structure of ℂ that causes this?
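The direction-dependence is easy to see numerically with f(z) = z̄ (complex conjugation), which is perfectly differentiable as a map ℝ² → ℝ² but satisfies the Cauchy-Riemann equations nowhere. A small illustrative sketch (my own, not from the thread):

```python
def f(z):
    # z -> conjugate(z): real-differentiable as a map R^2 -> R^2,
    # but not complex differentiable anywhere.
    return z.conjugate()

z0 = 1.0 + 1.0j
h = 1e-8
# Approach along the real axis, then along the imaginary axis.
horizontal = (f(z0 + h) - f(z0)) / h
vertical = (f(z0 + 1j * h) - f(z0)) / (1j * h)
# The two difference quotients disagree (1 versus -1),
# so no single complex derivative exists at z0.
print(horizontal, vertical)
```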

Last edited: Nov 29, 2011
2. Nov 29, 2011

### HallsofIvy

It is just that the 'second dimension' we have with complex numbers introduces a great deal more complexity, and more things have to work together right.

For example, the definition of "differentiable" is exactly the same in complex numbers as in real numbers:
$$\lim_{h\to 0}\frac{f(z_0+h)- f(z_0)}{h}$$
must exist.

That, in turn, means that we must get the same result approaching $z_0$ from any direction. For real numbers, that is simply "from below" and "from above". But with the complex numbers that means along any line through $z_0$, along a parabola, a spiral, etc.

In particular, we could approach $z_0$ along a line parallel to the real axis: if $z= x+ iy$, $z_0= x_0+iy_0$, and $f(z)= f(x+ iy)= u(x,y)+ iv(x,y)$, then $z_0+ h= (x_0+ h)+ iy_0$.
$$\lim_{h\to 0}\frac{[u(x_0+h,y_0)+ iv(x_0+h, y_0)]- [u(x_0,y_0)+ iv(x_0,y_0)]}{h}$$$$= \lim_{h\to 0}\frac{u(x_0+h, y_0)- u(x_0, y_0)}{h}+ \lim_{h\to 0}i\frac{v(x_0+h,y_0)- v(x_0, y_0)}{h}$$$$= \frac{\partial u}{\partial x}+ i\frac{\partial v}{\partial x}$$

Now take that same limit approaching parallel to the imaginary axis:
$$\lim_{h\to 0}\frac{[u(x_0, y_0+ h)+ iv(x_0,y_0+h)] - [u(x_0, y_0)+ iv(x_0, y_0)]}{ih}$$
(Before, the increment was the real number h. Now the increment is the imaginary number ih, with h itself still real.)
$$= \lim_{h\to 0}\frac{u(x_0,y_0+h)- u(x_0, y_0)}{ih}+ \lim_{h\to 0}i\frac{v(x_0, y_0+h)- v(x_0, y_0)}{ih}$$
$$= -i\frac{\partial u}{\partial y}+ \frac{\partial v}{\partial y}$$

Now, those limits (as well as all other possible ways of approaching $z_0$) must be the same. In particular, the real parts must be the same and the imaginary parts must be the same. The real parts are
$$\frac{\partial u}{\partial x}= \frac{\partial v}{\partial y}$$
and the imaginary parts are
$$\frac{\partial v}{\partial x}= -\frac{\partial u}{\partial y}$$
the "Cauchy-Riemann" equations.
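As a sanity check, the two equations can be verified numerically for a concrete holomorphic function, say $f(z)=e^z$, for which $u=e^x\cos y$ and $v=e^x\sin y$. This sketch (names and finite-difference helper are mine) evaluates both equations at an arbitrary point:

```python
import numpy as np

# f(z) = exp(z) = e^x cos y + i e^x sin y,
# so u(x, y) = e^x cos y and v(x, y) = e^x sin y.
u = lambda x, y: np.exp(x) * np.cos(y)
v = lambda x, y: np.exp(x) * np.sin(y)

def partial(g, x, y, wrt, h=1e-6):
    # Central finite difference in x or y.
    if wrt == 'x':
        return (g(x + h, y) - g(x - h, y)) / (2 * h)
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

x0, y0 = 0.3, -1.2
# First Cauchy-Riemann equation: u_x = v_y.
print(partial(u, x0, y0, 'x'), partial(v, x0, y0, 'y'))
# Second Cauchy-Riemann equation: v_x = -u_y.
print(partial(v, x0, y0, 'x'), -partial(u, x0, y0, 'y'))
```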

3. Nov 29, 2011

### Guffel

A quick question, which I think is related enough not to be off topic: On the Wikipedia page on the identity theorem, it is stated that "Thus a holomorphic function is completely determined by its values on a (possibly quite small) neighborhood in D. This is not true for real-differentiable functions."

What does "real-differentiable" mean in this context?

4. Nov 29, 2011

### micromass

I guess it is a differentiable function $f:\mathbb{R}\rightarrow \mathbb{R}$.

5. Nov 29, 2011

### Guffel

But doesn't the analogue of the identity theorem hold for real differentiable functions?

6. Nov 29, 2011

### micromass

No, it only holds for analytic functions (= real functions that are locally expressible as power series). An easy example is

$$f:\mathbb{R}\rightarrow \mathbb{R}:x\rightarrow \left\{\begin{array}{ll} x^2 & x\geq 0\\ -x^2 & x<0\end{array}\right.$$

This function and the function $g(x)=x^2$ are equal on a neighborhood (namely $\mathbb{R}^+$), but they are not equal on all of $\mathbb{R}$.
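A quick numerical illustration of this example (an illustrative sketch, the names are mine):

```python
def f(x):
    # The example above: x^2 for x >= 0, -x^2 for x < 0.
    # Differentiable everywhere, but not analytic at 0.
    return x**2 if x >= 0 else -x**2

def g(x):
    return x**2

# f and g agree at every point of the open set (0, infinity)...
print(all(f(x) == g(x) for x in [0.1, 1.0, 5.0]))
# ...but differ for negative x, so agreement on an open set
# does not force global agreement for real differentiable functions.
print(f(-2.0), g(-2.0))
```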

It even fails for infinitely differentiable functions. Indeed, let

$$f:\mathbb{R}\rightarrow \mathbb{R}:x\rightarrow \left\{\begin{array}{ll} e^{-1/x^2} & x> 0\\ 0 & x\leq 0\end{array}\right.$$

and $g(x)=0$.
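The same kind of quick check works for this smooth-but-not-analytic example (again a sketch of my own):

```python
import math

def f(x):
    # Infinitely differentiable everywhere: all derivatives vanish at 0,
    # yet f is not identically zero.
    return math.exp(-1.0 / x**2) if x > 0 else 0.0

# f agrees with g(x) = 0 on all of (-infinity, 0], an open ray,
# but f(1) = e^{-1} != 0, so f and g are distinct functions.
print(f(-1.0), f(0.0), f(1.0))
```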

7. Nov 29, 2011

### Guffel

Ah, thanks micromass!