Approximating Kepler's Equation

  • Thread starter: eurekameh
AI Thread Summary
The discussion focuses on approximating Kepler's equation, M = E - e*sin(E), with an emphasis on finding a function f(e, M) for small eccentricities (e). Participants explore using trigonometric identities and small angle approximations, leading to the first-order approximation E = M + e*sin(M). There is debate on the necessity of higher-order approximations versus evaluating the original equation, given its transcendental nature. A user shares a MATLAB script for numerically approximating E using the Newton-Raphson method and seeks advice on implementing a stopping criterion based on error tolerance. The conversation highlights the balance between accuracy and computational efficiency in approximating solutions.
eurekameh
Kepler's equation: M = E - esinE, where E is to be approximated.
[attached graphic illustrating the problem]

I'm trying to find f(e,M), a function expressed in e and M. I've tried using the trig identity sin(u + v) = sinu*cosv + cosu*sinv, but that just introduces a cosine, which I believe to be useless. Anyone with ideas?
 
eurekameh said:
using the trig identity sin(u + v) = sinu*cosv + cosu*sinv
Maybe, apply that expansion, then ...

if you are saying ∊ « 0, try approximating sinx by x, i.e., sin(∊·sinM) by ∊·sinM
and make a corresponding applicable approximation for cos(∊·sinM).
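
For reference, both approximations come from the leading terms of the Taylor series:

$$\sin x = x - \frac{x^3}{6} + O(x^5), \qquad \cos x = 1 - \frac{x^2}{2} + O(x^4)$$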

BTW, that's a great graphic!
 
Thanks, haha. It was done on paint.
I'm still having a little trouble applying the approximation that e is small.
Applying the trig identity, I have
E = M + esin(M + esinM) = M + e[ sinM*cos(esinM) + cosM*sin(esinM)].
Assuming that e is small, sin(esinM) goes to 0 and cos(esinM) = 1.
So, E = M + esinM, which is still the first-order approximation. Am I missing something?
Also, isn't the symbol in "∊ « 0" saying that e is much less than zero?
 
eurekameh said:
I'm still having a little trouble applying the approximation that e is small.
Applying the trig identity, I have
E = M + esin(M + esinM) = M + e[ sinM*cos(esinM) + cosM*sin(esinM)].
Assuming that e is small, sin(esinM) goes to 0
No. For small x, sin(x) ⋍ x (Try it on your calculator, to show I'm right.)
and cos(esinM) = 1.
Correct.
Also, isn't the symbol in "∊ « 0" saying that e is much less than zero?
:cough: :blushing: I meant that to be ϵ « 1
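
Writing the corrected expansion out explicitly (just the algebra being described, using sin x ≈ x and cos x ≈ 1 for small x):

$$E \approx M + e\sin(M + e\sin M) = M + e\left[\sin M\cos(e\sin M) + \cos M\sin(e\sin M)\right]$$
$$\approx M + e\left[\sin M + (e\sin M)\cos M\right] = M + e\sin M + e^2\sin M\cos M$$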
 
If e << 1, doesn't that imply that E = M?

If you're trying to simplify an equation, that suggests you want to rid yourself of extra terms, rather than keep stringing them along.
 
Ah, got it, thanks.
The function is e^2*cosM*sinM. I have two questions:
1. Why not just keep all the terms even though it's a bit more complex? Wouldn't this give better accuracy?
2. I've seen sinx approximated as x when x is small. But x itself is also small, so why don't people just approximate it as zero?

SteamKing said:
If e << 1, doesn't that imply that E = M?

If you're trying to simplify an equation, that suggests you want to rid yourself of extra terms, rather than keep stringing them along.

To my understanding, E = M is the approximation that we are trying to improve. Substituting the zeroth-order approximation E = M into E = M + esinE gives E = M + esinM, which improves the approximation by adding the extra term esinM. So E = M + esinM is the first-order approximation, and what I'm trying to do is find the second-order approximation, i.e., the function f(e,M) in E = M + esinM + f(e,M).
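
A minimal MATLAB sketch of this successive-substitution idea (the values of M and e are just illustrative):

% Successive substitution: E_(n+1) = M + e*sin(E_n), starting from E_0 = M
M = 2; e = 0.05;       % illustrative values; e assumed small
E = M;                 % zeroth-order approximation
for n = 1:3
    E = M + e*sin(E);  % each pass refines the estimate by one order in e
end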
 
I confess I don't understand your goal.

It seems to me if you are trying to find second-order, third-order, etc. approximations, for such an uncomplicated formula, you are making more work than evaluating the original formula.
 
SteamKing said:
I confess I don't understand your goal.

It seems to me if you are trying to find second-order, third-order, etc. approximations, for such an uncomplicated formula, you are making more work than evaluating the original formula.

The original is a transcendental equation with no closed-form solution, so an approximation or a numerical method is required.

This particular function is a very important one in celestial mechanics, and gave our predecessors much grief as they labored to calculate the motions of the planets. If you're curious, investigate "The Kepler Problem", or "The Prediction Problem" in celestial mechanics (astrodynamics).
 
SteamKing said:
I confess I don't understand your goal.

It seems to me if you are trying to find second-order, third-order, etc. approximations, for such an uncomplicated formula, you are making more work than evaluating the original formula.

The original Kepler equation M = E - esinE doesn't have a closed-form solution for E, like gneill just said, so approximating E is necessary.

I wrote a MATLAB script for numerically approximating E via Newton-Raphson:

M = 2; e = 0.5;
E(1) = M; % initial guess: the zeroth-order approximation E = M
for i = 1:10
    E(i+1) = E(i) - (M - E(i) + e*sin(E(i)))/(-1 + e*cos(E(i))); % Newton step on f(E) = E - e*sin(E) - M
end

But I don't know how to implement the stopping criterion, where my error function could be something like err = abs(E(i+1) - E(i)); when error < tolerance, the iterations can stop, rather than always running 10 iterations like the script above does. Can you guys help me with this? I've tried a while loop (while error < tolerance), but to no avail.
 
The goal is clear now. I often deal with finding solutions to transcendental equations.

If you use 'while error < tolerance' as the criterion, then no iterations will be performed while error > tolerance, which is presumably the case when starting the calculations. The test needs to be reversed: keep looping while the error still exceeds the tolerance.
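
For example, a minimal sketch with the test reversed (the tolerance value is illustrative):

M = 2; e = 0.5;
tol = 1e-10;               % illustrative tolerance
E = M;                     % initial guess
err = Inf;                 % ensures the loop body runs at least once
while err > tol
    Enew = E - (E - e*sin(E) - M)/(1 - e*cos(E)); % Newton step on f(E) = E - e*sin(E) - M
    err = abs(Enew - E);
    E = Enew;
end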
 
eurekameh said:
Ah, got it, thanks.
The function is e^2*cosM*sinM.
That's what I make it, also.
I have two questions:
1. Why not just keep all the terms even though it's a bit more complex? Wouldn't this give better accuracy?
Accuracy is only part of the goal. Ease/speed of computation is equally desirable. Also, often it's possible to massage the simpler approximating function into something more easily comprehended than the original, leading to a clearer understanding of what the error term will be. When you can be confident the error will be contained within certain acceptable bounds, then there is really no need to actually calculate it, thus freeing you to use the fast computation of the approximating function.
2. I've seen sinx approximated as x when x is small. But x itself is also small, so why don't people just approximate it as zero?
Try it. First approximate sin(0.01) to 0, then try 0.01.

Let sin(0.01) ≈ 0:

percentage error = [(sin(0.01) - 0)/sin(0.01)] × 100% = 100%

Let sin(0.01) ≈ 0.01:

percentage error = [(sin(0.01) - 0.01)/sin(0.01)] × 100% ≈ -0.0017%
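
A quick way to check both numbers (MATLAB, angles in radians):

x = 0.01;
(sin(x) - 0)/sin(x)*100   % percentage error of the zero approximation: 100
(sin(x) - x)/sin(x)*100   % percentage error of the x approximation: about -1.7e-3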


Which percentage error looks most acceptable to you? :wink:
 
