Reciprocals of Derivatives: How Can They Simplify Calculus?

  • Thread starter: rsq_a
  • Tags: Derivatives
rsq_a
What is required for one to simply write,

\frac{\partial x}{\partial y} = \frac{1}{\left(\dfrac{\partial y}{\partial x}\right)}

There are probably necessary conditions on the smoothness of the inverse map, but I'd like an easy way to know when I can just compute dx/dy by this method.
 
The inverse function theorem addresses exactly this issue!

If f is continuously differentiable in a neighborhood of the point of interest and its derivative there is nonzero, then f is locally invertible, and the derivative of the inverse is the reciprocal of the derivative.

Additionally, the inverse is itself continuously differentiable on a neighborhood of the image point, which settles the smoothness question you raised.

http://en.wikipedia.org/wiki/Inverse_function_theorem
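
For a concrete one-dimensional check, here is a minimal sympy sketch (f(x) = x^3 + x is just a hypothetical example function; any C^1 function with nonzero derivative at the point would do):

```python
# Minimal sketch of the 1D inverse function theorem with sympy.
# Claim: if f is C^1 near a and f'(a) != 0, then (f^{-1})'(f(a)) = 1/f'(a).
import sympy as sp

x = sp.symbols('x', real=True)
f = x**3 + x                      # hypothetical example; f'(x) = 3x^2 + 1 > 0

a = sp.Integer(2)
fa = f.subs(x, a)                 # f(2) = 10

# Derivative of the inverse, via the theorem:
direct = 1 / sp.diff(f, x).subs(x, a)        # 1/f'(2) = 1/13

# The same derivative, via a finite-difference approximation of f^{-1}:
h = sp.Rational(1, 10**6)
x_near = sp.nsolve(f - (fa + h), x, 2)       # solve f(x) = f(a) + h near a
numeric = (x_near - a) / h

print(sp.N(direct), sp.N(numeric))           # both ~0.0769230769
```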
 
The fact that the original poster used partial derivatives makes me wonder if he's referring to a multivariable case, which is quite messy. :frown:
 
Hurkyl said:
The fact that the original poster used partial derivatives makes me wonder if he's referring to a multivariable case, which is quite messy. :frown:

Not really!

If D(f)(x) denotes the best linear approximation to f at x (e.g., the Jacobian in finite dimensions, or the Fréchet derivative on a more general Banach space), then essentially the same result holds.

If f is a mapping between Banach spaces that is C^1 in a neighborhood of a point x, and if D(f)(x) is an isomorphism (e.g., in the real-to-real case, the derivative exists and is nonzero), then f is a diffeomorphism between neighborhoods of x and y = f(x), and D(f^{-1})(y) = (D(f)(x))^{-1}.
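
For a concrete finite-dimensional check of this, here is a short sympy sketch, using the polar-coordinate map as an assumed example:

```python
# Sketch: for an invertible C^1 map, the Jacobian of the inverse map equals
# the matrix inverse of the Jacobian. Example map (an assumption for
# illustration): polar coordinates f(r, theta) = (r cos(theta), r sin(theta)).
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
x_expr = r * sp.cos(theta)
y_expr = r * sp.sin(theta)

# Jacobian of f : (r, theta) -> (x, y)
J = sp.Matrix([x_expr, y_expr]).jacobian([r, theta])

# The inverse map (x, y) -> (r, theta), written explicitly
x, y = sp.symbols('x y', positive=True)
r_expr = sp.sqrt(x**2 + y**2)
theta_expr = sp.atan2(y, x)
J_inv_map = sp.Matrix([r_expr, theta_expr]).jacobian([x, y])

# Express the inverse map's Jacobian in (r, theta) and compare with J**(-1)
lhs = J_inv_map.subs({x: x_expr, y: y_expr})
print(sp.simplify(lhs - J.inv()))   # zero matrix: D(f^{-1})(y) = (Df(x))^{-1}
```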
 
maze said:
Not really!

If D(f(x)) denotes the best linear approximation to f at x...
You can make anything simple if you change the problem. You're not talking about partial derivatives here...
 
Let me say this better...


Things like the derivative of a function (as you described) and the exterior derivative are "intrinsic" properties of a function -- they depend on the function and nothing else.

Partial derivatives in Leibniz notation are more complicated, because they depend not only on the function and the variable you want to differentiate with respect to... but they also depend on what coordinate chart you've decided to use on the parameter space.

The net effect is that you have to jump through hoops just to figure out what the equation \partial y / \partial x = 1 / (\partial x / \partial y) even means, let alone whether or not it's a valid equation. Arildno described the type of thing you have to do.

And just to demonstrate (for everyone, particularly the opening poster) some of the bad things partial derivatives can do, consider the following:

You have three variables x, y, z related by x + y + z = 0. If we write z as a function of x and y, then \partial z / \partial x = -1. If we write y as a function of x and z, then \partial y / \partial z = -1. If we write x as a function of y and z, then \partial x / \partial y = -1. Combining these three expressions:

\frac{\partial z}{\partial x} \frac{\partial y}{\partial z} \frac{\partial x}{\partial y} = -1
:bugeye:
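
For anyone who wants to verify this, here is a small sympy sketch via implicit differentiation (the constraint x + y + z is the example above, but any smooth constraint with nonzero partials gives the same -1):

```python
# Sketch of the cyclic relation via implicit differentiation of a constraint
# F(x, y, z) = 0: each partial is -F_u/F_v, and the three minus signs survive.
import sympy as sp

x, y, z = sp.symbols('x y z')
F = x + y + z                      # the example above; any smooth F works

Fx, Fy, Fz = sp.diff(F, x), sp.diff(F, y), sp.diff(F, z)

dz_dx = -Fx / Fz                   # z = z(x, y), holding y fixed
dy_dz = -Fz / Fy                   # y = y(x, z), holding x fixed
dx_dy = -Fy / Fx                   # x = x(y, z), holding z fixed

print(sp.simplify(dz_dx * dy_dz * dx_dy))   # -1, not +1
```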
 
Unfortunately, my post was riddled with errors; I'll make a better one later.
 
Hurkyl said:
Let me say this better... Things like the derivative of a function (as you described) and the exterior derivative are "intrinsic" properties of a function -- they depend on the function and nothing else...

I seem to have forgotten a lot of my Calculus.

The problem which provoked the question: I was trying to figure out the relationship between the divergence, gradient, Laplacian, etc. in polar coordinates and their Cartesian counterparts.

So for example, if x = r\cos\theta and y = r\sin\theta, I immediately wrote down,

\frac{\partial \theta}{\partial x} = \frac{1}{\dfrac{\partial x}{\partial \theta}} = -\frac{1}{r\sin\theta}

However, if you write \theta = \text{atan}(y/x) and differentiate, you get \frac{\partial \theta}{\partial x} = -\frac{\sin\theta}{r}, which seems correct.
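
As a quick sanity check of that second computation, a sympy sketch (polar conventions as above):

```python
# Quick check of the direct computation: differentiate theta = atan2(y, x)
# with respect to x, then rewrite the answer in polar form.
import sympy as sp

x, y, r, theta = sp.symbols('x y r theta', positive=True)
dtheta_dx = sp.diff(sp.atan2(y, x), x)       # -y/(x**2 + y**2)

polar = dtheta_dx.subs({x: r*sp.cos(theta), y: r*sp.sin(theta)})
print(sp.simplify(polar))                    # -sin(theta)/r
```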
 
rsq_a said:
So for example, if x = r\cos\theta and y = r\sin\theta, I immediately wrote down,

\frac{\partial \theta}{\partial x} = \frac{1}{\dfrac{\partial x}{\partial \theta}} = -\frac{1}{r\sin\theta}

However, if you write \theta = \text{atan}(y/x) and differentiate, you get \frac{\partial \theta}{\partial x} = -\frac{\sin\theta}{r}, which seems correct.

The hitch, if I recall correctly, is that

\frac{\partial \theta}{\partial x} = \frac{1}{\dfrac{\partial x}{\partial \theta}}

is true, except that you left off a very important piece of information. Namely, the variables being held constant. The true statement, if I recall correctly, is that

\left(\frac{\partial \theta}{\partial x}\right)_{y} = \frac{1}{\left(\dfrac{\partial x}{\partial \theta}\right)_y}

Note that on both sides of the equality, the variable y is being held constant - hence, you cannot just differentiate x with respect to theta while holding r fixed, because r depends on x! Here's the derivation:

x = r\cos\theta = \sqrt{x^2 + y^2}\cos\theta \Rightarrow \left(\frac{\partial x}{\partial \theta}\right)_y = \frac{x\left(\frac{\partial x}{\partial \theta}\right)_y}{\sqrt{x^2+y^2}}\cos\theta - r\sin\theta

Solve for the derivative:

\left(\frac{\partial x}{\partial \theta}\right)_y(1 - \cos^2\theta) = -r\sin\theta \Rightarrow \left(\dfrac{\partial x}{\partial \theta}\right)_y = -\frac{r}{\sin\theta}

Hence,

\left(\frac{\partial \theta}{\partial x}\right)_{y} = \frac{1}{\left(\dfrac{\partial x}{\partial \theta}\right)_y} = -\frac{\sin\theta}{r}

In general, the rule is that a partial derivative equals the reciprocal of the derivative with the "numerator" and "denominator" interchanged, but you MUST hold the SAME variables constant on both sides. I'm not sure of a good, general way to write it down. Maybe

\left(\frac{\partial y_i(x_1,x_2,\ldots)}{\partial x_j}\right)_{x_k,\, k \neq j} = \frac{1}{\left(\dfrac{\partial x_j(x_1,x_2,\ldots,x_j,\ldots,y_i)}{\partial y_i}\right)_{x_k,\, k \neq j}}

Note that on the RHS you have x_j written as a function of all the other x_k's, and y_i, and potentially as an implicit function of itself.
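
For completeness, here is a sympy sketch of that derivation, checking that the reciprocal of (\partial x / \partial \theta)_y reproduces the direct computation:

```python
# Sketch of the derivation above with sympy: treat x as a function of theta
# with y held fixed, implicitly differentiate x = sqrt(x^2 + y^2) cos(theta),
# and check that 1/(dx/dtheta)_y reproduces the direct atan result.
import sympy as sp

y, r, theta = sp.symbols('y r theta', positive=True)
x = sp.Function('x')(theta)                  # x = x(theta), y held constant

# Differentiate the implicit relation in theta and solve for x'(theta)
eq = sp.Eq(x.diff(theta),
           sp.diff(sp.sqrt(x**2 + y**2) * sp.cos(theta), theta))
dx_dtheta = sp.solve(eq, x.diff(theta))[0]

# Rewrite in polar form: x = r cos(theta), y = r sin(theta)
polar = dx_dtheta.subs({x: r*sp.cos(theta), y: r*sp.sin(theta)})
print(sp.simplify(polar))                    # -r/sin(theta)
print(sp.simplify(1 / polar))                # -sin(theta)/r, as claimed
```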
 