# Approximating Kepler's Equation

1. Feb 16, 2013

### eurekameh

Kepler's equation: M = E - esinE, where E is to be approximated.

I'm trying to find f(e,M), a function expressed in e and M. I've tried using the trig identity sin(u + v) = sinu*cosv + cosu*sinv, but that just introduces a cosine, which I believe to be useless. Anyone with ideas?

2. Feb 16, 2013

### Staff: Mentor

Maybe apply that expansion, then ....

if you are saying ∊ « 0, try approximating sin x by x, i.e., sin(∊·sinM) by ∊·sinM,
and make a corresponding applicable approximation for cos(∊·sinM).
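Those small-angle approximations are easy to sanity-check numerically. A quick sketch (Python used here purely for illustration):

```python
import math

# Small-angle approximations:
#   sin(x) ~ x            (error is O(x^3))
#   cos(x) ~ 1 - x**2/2   (error is O(x^4))
for x in (0.1, 0.01, 0.001):
    sin_err = abs(math.sin(x) - x)
    cos_err = abs(math.cos(x) - (1 - x**2 / 2))
    print(f"x={x}: |sin x - x| = {sin_err:.2e}, |cos x - (1 - x^2/2)| = {cos_err:.2e}")
```

The errors shrink rapidly as x gets smaller, which is what makes the approximations useful when ∊ is small.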

BTW, that's a great graphic!

3. Feb 17, 2013

### eurekameh

Thanks, haha. It was done on paint.
I'm still having a little trouble applying the approximation that e is small.
Applying the trig identity, I have
E = M + esin(M + esinM) = M + e[ sinM*cos(esinM) + cosM*sin(esinM)].
Assuming that e is small, sin(esinM) goes to 0 and cos(esinM) = 1.
So, E = M + esinM, which is still the first-order approximation. Am I missing something?
Also, isn't the symbol in "∊ « 0" saying that e is much less than zero?

4. Feb 17, 2013

### Staff: Mentor

No. For small x, sin(x) ⋍ x. (Try it on your calculator to see that I'm right.)
Correct.
:cough: I meant that to be ϵ « 1

5. Feb 17, 2013

### SteamKing

Staff Emeritus
If e << 1, doesn't that imply that E = M?

If you're trying to simplify an equation, that suggests you want to rid yourself of extra terms, rather than keep stringing them along.

6. Feb 17, 2013

### eurekameh

Ah, got it, thanks.
The function is f(e,M) = e²·cosM·sinM. I have two questions:
1. Why not just keep all the terms, even though it's a bit more complex? Wouldn't this give better accuracy?
2. I've seen sin x approximated as x when x is small. But x itself is also small, so why don't people just approximate sin x as zero?

To my understanding, E = M is the approximation that we are trying to improve. Sticking the zero-order approximation E = M into E = M + esinE = M + esinM improves the approximation by adding the extra term esinM. So E = M + esinM is the first-order approximation, and what I'm trying to do is find the second-order approximation where a function f(e,M) in E = M + esinM + f(e,M) is to be found.
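As a sanity check on that second-order term, the approximations can be compared against a numerically solved E. A sketch in Python (the Newton-Raphson reference solver is just one convenient way to get an "exact" value):

```python
import math

def kepler_E(M, e, tol=1e-12):
    """Solve M = E - e*sin(E) for E by Newton-Raphson (reference solution only)."""
    E = M  # zero-order guess
    for _ in range(50):
        dE = (M - E + e * math.sin(E)) / (1 - e * math.cos(E))
        E += dE
        if abs(dE) < tol:
            break
    return E

M = 2.0
for e in (0.1, 0.01):
    E_exact = kepler_E(M, e)
    first = M + e * math.sin(M)                        # first-order approximation
    second = first + e**2 * math.sin(M) * math.cos(M)  # add the second-order term
    print(f"e={e}: first-order error {abs(E_exact - first):.2e}, "
          f"second-order error {abs(E_exact - second):.2e}")
```

For each e, the second-order error is markedly smaller than the first-order one, and each error drops by roughly the expected extra power of e as e shrinks.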

7. Feb 17, 2013

### SteamKing

Staff Emeritus
I confess I don't understand your goal.

It seems to me if you are trying to find second-order, third-order, etc. approximations, for such an uncomplicated formula, you are making more work than evaluating the original formula.

8. Feb 17, 2013

### Staff: Mentor

The original is a transcendental equation with no closed-form solution for E, so an approximation or a numerical method is required.

This particular function is a very important one in celestial mechanics, and gave our predecessors much grief as they labored to calculate the motions of the planets. If you're curious, investigate "The Kepler Problem", or "The Prediction Problem" in celestial mechanics (astrodynamics).

9. Feb 17, 2013

### eurekameh

The original Kepler equation M = E − esinE doesn't have a closed-form solution for E, like gneill just said, so approximating E is necessary.

I wrote a Matlab script for numerically approximating E via Newton-Raphson:

```matlab
M = 2; e = 0.5;
E(1) = M;   % initial guess
for i = 1:10
    E(i+1) = E(i) - (M - E(i) + e*sin(E(i))) / (-1 + e*cos(E(i)));
end
```

But I don't know how to implement the stopping criterion. My error function could be something like err = abs(E(i+1) - E(i)), and once err < tolerance the iterations should stop, rather than always running the full 10 iterations like the script above does. Can you guys help me with this? I've tried using a while loop (while error < tolerance), but to no avail.

10. Feb 17, 2013

### SteamKing

Staff Emeritus
The goal is clear now. I deal with trying to find solutions to transcendental equations often.

If you use 'while error < tolerance' as the loop condition, then no calculations will be done at all: error > tolerance is presumably the case when the calculation starts, so the loop body never runs even once.
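In other words, the condition needs to be inverted: keep iterating while the error is still *larger* than the tolerance. A sketch of the same Newton-Raphson iteration with that loop structure (shown in Python; variable names mirror the Matlab script, and `max_iter` is an added safety cap in case of non-convergence):

```python
import math

M, e = 2.0, 0.5
tol = 1e-10
max_iter = 50            # safety cap so the loop cannot run forever

E = M                    # initial guess
error = float("inf")     # forces at least one pass through the loop
i = 0
while error > tol and i < max_iter:
    E_new = E - (M - E + e * math.sin(E)) / (-1 + e * math.cos(E))
    error = abs(E_new - E)
    E = E_new
    i += 1
print(f"E = {E} after {i} iterations")
```

Initializing `error` to something larger than the tolerance guarantees the first iteration runs; after that, the loop exits as soon as successive iterates agree to within `tol`.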

11. Feb 17, 2013

### Staff: Mentor

That's what I make it, also.
Accuracy is only part of the goal. Ease/speed of computation is equally desirable. Also, often it's possible to massage the simpler approximating function into something more easily comprehended than the original, leading to a clearer understanding of what the error term will be. When you can be confident the error will be contained within certain acceptable bounds, then there is really no need to actually calculate it, thus freeing you to use the fast computation of the approximating function.
Try it. First approximate sin(0.01) by 0, then by 0.01.

Let sin(0.01) ≈ 0:

percentage error = (sin(0.01) - 0)/sin(0.01) * 100% = 100%

Let sin(0.01) ≈ 0.01:

percentage error = (sin(0.01) - 0.01)/sin(0.01) * 100% ≈ -0.0017%

Which percentage error looks most acceptable to you?
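The arithmetic is easy to verify with a few lines of Python:

```python
import math

x = 0.01
# Percentage error when approximating sin(x) by 0, and by x itself:
err_zero = (math.sin(x) - 0.0) / math.sin(x) * 100
err_x    = (math.sin(x) - x) / math.sin(x) * 100
print(err_zero, err_x)   # 100.0 and roughly -0.0017
```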