In writing a computer ray-tracing simulation using polar coordinates, I've come across a simple problem: I need to calculate the value of dr/dΘ at an arbitrary point (in polar coordinates), along a ray pointing in an arbitrary direction (given as an angle). So if point 1 (p1) is given by the coordinates (r1, Θ1), and I create a ray from that point at an angle Θ2 with respect to p1 (not with respect to the origin), I need to know the value of dr/dΘ along that ray at the starting point.

Of course, this is much simpler in Cartesian coordinates: dy/dx as a function of Θ2 does not depend on the values of x and y at the point you're calculating from. In polar coordinates, however, for a given value of Θ2, dr/dΘ changes depending on the value of r at the point you're calculating from. Beyond that, though, I do not know the explicit formula I need, so my current algorithm relies on converting to Cartesian coordinates and doing the calculation there. It is as follows:

-----------------------------------
dx = cos(Θ2)
dy = sin(Θ2)
x1 = r1 * cos(Θ1)
y1 = r1 * sin(Θ1)
(This assumes dx << x1 and dy << y1; if not, dx and dy are divided by a large constant.)
rT = sqrt( (x1+dx)^2 + (y1+dy)^2 )
ΘT = arctan( (y1+dy) / (x1+dx) )
Add 2π or π to ΘT as necessary (depending on which quadrant p1 is in)
dr = rT - r1
dΘ = ΘT - Θ1
dr/dΘ = (rT - r1) / (ΘT - Θ1)
-----------------------------------

This algorithm is undesirable both because it is inelegant (in its roundabout detour through Cartesian coordinates) and because it is slow on the machine: unnecessary trig functions and square roots slow things down a great deal, and I'm running into floating-point precision problems that a more direct algorithm would likely avoid. I'm sure there's a much simpler solution involving only polar coordinates, and I'm frustrated that such a seemingly simple problem is stumping me.
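For reference, here is a minimal Python sketch of the conversion-based approach described above. It assumes Θ2 is measured from the positive x-axis (as implied by dx = cos(Θ2), dy = sin(Θ2)), and the function name and `step` parameter are mine; `step` plays the role of the "large constant" divisor, and `atan2` handles the quadrant bookkeeping that plain arctan does not.

```python
import math

def dr_dtheta_numeric(r1, theta1, theta2, step=1e-6):
    """Estimate dr/dTheta at the polar point (r1, theta1), along a ray
    whose direction makes angle theta2 with the x-axis, by taking a
    small Cartesian step and converting back to polar."""
    # Small step along the ray, scaled so dx << x1 and dy << y1.
    dx = step * math.cos(theta2)
    dy = step * math.sin(theta2)
    # Convert the start point to Cartesian.
    x1 = r1 * math.cos(theta1)
    y1 = r1 * math.sin(theta1)
    # Convert the stepped point back to polar; atan2 resolves the
    # quadrant automatically instead of patching arctan with pi/2pi.
    rT = math.hypot(x1 + dx, y1 + dy)
    thetaT = math.atan2(y1 + dy, x1 + dx)
    # Finite-difference estimate of dr/dTheta.
    return (rT - r1) / (thetaT - theta1)
```

(Note this still inherits the weaknesses complained about above: it breaks down when ΘT − Θ1 underflows or when the arc crosses the ±π branch cut of atan2, which is exactly the precision trouble a closed-form polar expression would avoid.)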
Can anyone help out?