Unlocking the Mystery of Approximating Cosine with C++

In summary: a user shares an algorithm they stumbled on while programming that approximates the cosine of a number to high precision, posting the code with only a brief explanation. A second user then works out why it converges, deriving the Taylor series for cosine from the loop's recursion, and wryly notes the effort involved; a third user chimes in with admiration for the result.
  • #1
pokeka
Hi, this is my first post. Hope I'm not breaking any rules. Anyway, I was messing around programming the other day, and I accidentally came up with an algorithm that will approximate the cosine of a number with a great deal of precision. Here's the code:

double cosine(double n) {
    double a = 1;             // running estimate, starts at cos(0) = 1
    double da = 0;            // running increment, starts at 0
    double p = 1000;          // bigger numbers will give you a better value for cos(n)
    for (double i = 0; i <= n; i += 1.0 / p) {  // step from 0 to n in steps of 1/p
        da += -a / (p * p);   // adjust the increment by -a/p^2
        a += da;              // apply the increment
    }
    return a;
}

It's kinda hard to put this into words, and I imagine a lot of people here know a little bit of C++, so, for now, I won't try to explain what it does. I can try though, if you want. So, yeah, if anyone could tell me why this works, that would rock.
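
A minimal way to check it against the standard library (just a sketch; it assumes the cosine() function above is pasted into the same file):

#include <cmath>
#include <cstdio>

int main() {
    // Compare the loop-based approximation against std::cos at a few points.
    for (double x = 0.0; x <= 3.0; x += 0.5) {
        std::printf("x=%.1f  approx=%+.6f  std::cos=%+.6f  err=%+.2e\n",
                    x, cosine(x), std::cos(x), cosine(x) - std::cos(x));
    }
    return 0;
}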
 
  • #2
OK, I think I was able to derive this correctly. I really hope this isn't homework, as I put a lot of time into this for you to just be cheating off me. I'll see if I can explain this concisely, but it'll be tough. First, we'll write out the recursion relation explicitly:

[tex]da_n=da_{n-1}-\frac{1}{p^2} a_{n-1}[/tex]

[tex]a_n = a_{n-1} + da_{n}[/tex]

Where [itex]a_0=1[/itex] and [itex]da_0=0[/itex]. Next we define:

[tex]\alpha \equiv -\frac{1}{p^2}[/tex]

And we find:

[tex]da_n=\alpha(a_0+a_1+ ... +a_{n-1}) [/tex]
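
As a quick check, unrolling the first two steps of the recursion gives:

[tex]\begin{align*} da_1 &= \alpha a_0, & a_1 &= 1+\alpha, \\
da_2 &= \alpha(a_0+a_1), & a_2 &= 1+3\alpha+\alpha^2. \end{align*}[/tex]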

And similarly:

[tex]\begin{align*} a_n &= a_0 + da_1+ ... + da_{n}\\
&=1+ \alpha \left( (a_0) + (a_0+a_1) + ... + (a_0+...+a_{n-1})\right) \\
&=1+ \alpha \sum_{k=0}^{n-1} (n-k) a_k \end{align*}[/tex]

Which eliminates the need for the [itex]da_n[/itex]. Now, you can see by computing a few terms that the [itex]a_n[/itex] will have the form:

[tex]a_n=\sum_{j=0}^n c_{nj} \alpha^j[/tex]

For some constants [itex]c_{nj}[/itex]. Plugging this into the above equation:

[tex]\begin{align*} \sum_{j=0}^n c_{nj} \alpha^j &=1+ \alpha \sum_{k=0}^{n-1} (n-k) \sum_{j=0}^k c_{kj} \alpha^j \\
&=1+\sum_{j=0}^{n-1} \alpha^{j+1}\sum_{k=j}^{n-1} (n-k)c_{kj} \end{align*}[/tex]

Equating like powers of [itex]\alpha[/itex] gives a recursion relation for the [itex]c_{nj}[/itex]:

[tex]c_{nj}=\sum_{k=j-1}^{n-1} (n-k)c_{k,\,j-1}[/tex]

With initial conditions [itex]c_{n0}=1[/itex] (the relation applies for j ≥ 1). Now, it turns out these coefficients are (2j)th-order polynomials in n. We will be taking the limit as [itex]\alpha[/itex] goes to zero later on, and n will go to [itex]p=1/\sqrt{-\alpha}[/itex], so we're only interested in the highest power of n. Thus, we will use the approximation:

[tex]c_{nj} \approx b_j n^{2j}[/tex]

Also, the following approximation holds to leading order (as you'll see, we actually apply it to a sum that doesn't start at k = 1, but the limit justifies this):

[tex]\sum_{k=1}^{n-1} k^q \approx \frac{1}{q+1} n^{q+1}[/tex]
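
For instance, with q = 1 this reads [itex]\sum_{k=1}^{n-1} k = \frac{n(n-1)}{2} \approx \frac{n^2}{2}[/itex], exact up to lower-order terms.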

So that:

[tex]\begin{align*}b_j n^{2j} &= \sum_{k=j-1}^{n-1} (n-k) b_{j-1} k^{2j-2} \\
&=n\sum_{k=j-1}^{n-1}b_{j-1} k^{2j-2}-\sum_{k=j-1}^{n-1} b_{j-1} k^{2j-1} \\
&\approx \frac{1}{2j-1} n^{2j} b_{j-1}- \frac{1}{2j} n^{2j}b_{j-1} \\
&=\frac{1}{(2j)(2j-1)} n^{2j} b_{j-1} \end{align*}[/tex]

And so:

[tex]b_{j} = \frac{1}{(2j)(2j-1)} b_{j-1}[/tex]

And together with [itex]b_0=1[/itex], we have our first non-recursive results, even though they're only approximate:

[tex]b_j= \frac{1}{(2j)!}[/tex]

[tex]c_{nj} \approx \frac{n^{2j}}{(2j)!}[/tex]

[tex]a_n \approx \sum_{j=0}^n \frac{n^{2j}}{(2j)!} \alpha^j[/tex]
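
A numerical sanity check of that last approximation (a sketch of my own, not part of the derivation): build the exact [itex]c_{nj}[/itex] from the recursion above and compare them with [itex]n^{2j}/(2j)![/itex]:

#include <cstdio>
#include <vector>

int main() {
    const int N = 200;  // table size; the match improves as n grows
    // c[n][j] from the recursion c_{n,j} = sum_{k=j-1}^{n-1} (n-k) c_{k,j-1},
    // with c_{n,0} = 1.
    std::vector<std::vector<double>> c(N + 1);
    for (int n = 0; n <= N; ++n) {
        c[n].assign(n + 1, 0.0);
        c[n][0] = 1.0;
        for (int j = 1; j <= n; ++j)
            for (int k = j - 1; k <= n - 1; ++k)
                c[n][j] += (n - k) * c[k][j - 1];
    }
    // Compare c_{N,j} with N^{2j}/(2j)! for a few small j.
    double approx = 1.0;  // holds N^{2j}/(2j)!, built up one factor at a time
    for (int j = 1; j <= 4; ++j) {
        approx *= double(N) * N / ((2.0 * j) * (2.0 * j - 1));
        std::printf("j=%d  exact=%.6e  approx=%.6e  ratio=%.4f\n",
                    j, c[N][j], approx, c[N][j] / approx);
    }
    return 0;
}

The printed ratios approach 1 as N grows, as the approximation predicts.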

We are interested in [itex]a_{px}[/itex] (I'm using x here where the OP used n). This is given by (putting back in p):

[tex]\begin{align*}a_{px} &\approx \sum_{j=0}^{px} \frac{{(px)}^{2j}}{(2j)!} \frac{(-1)^j}{p^{2j}}\\
&=\sum_{j=0}^{px}(-1)^j \frac{x^{2j}}{(2j)!}\end{align*} [/tex]

Which, in the limit [itex]p \gg 1/x[/itex] (for fixed x > 0), gives:

[tex]a_{px} \approx \sum_{j=0}^{\infty} (-1)^j \frac{x^{2j}}{(2j)!}=\cos(x)[/tex]

The Taylor series for [itex]\cos(x)[/itex]. Anyone know an easier way?
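
One possible shortcut, added here as an aside rather than as part of the derivation above: write the step size as [itex]h = 1/p[/itex] and define [itex]v_n \equiv da_n / h[/itex]. The loop then reads

[tex]v_n = v_{n-1} - h\, a_{n-1}, \qquad a_n = a_{n-1} + h\, v_n[/tex]

which is the semi-implicit (symplectic) Euler method applied to the oscillator equation [itex]a'' = -a[/itex] with [itex]a(0)=1[/itex], [itex]a'(0)=0[/itex]. The exact solution of that initial value problem is [itex]\cos(t)[/itex], and the discretization error vanishes as [itex]h \to 0[/itex], i.e. as [itex]p \to \infty[/itex].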
 
  • #3
Oh, you're welcome, pokeka. Not a problem. Your thank you made it all worthwhile...
 
  • #4
StatusX said:
Oh, you're welcome, pokeka. Not a problem. Your thank you made it all worthwhile...

If it's any consolation, I am certainly impressed. :)
 
  • #5
Sartak said:
If it's any consolation, I am certainly impressed. :)

Thanks. Sorry I'm bitter, but I did put a bit of work into that.
 

1. What is the purpose of approximating cosine with C++?

The main purpose here is insight rather than performance: the loop computes a close estimate of cos(n) using nothing but additions, multiplications, and divisions, with no call to a library routine. Seeing this work makes it concrete how a transcendental function can be reduced to a long sequence of elementary arithmetic steps.

2. How does the approximation process work?

The loop advances from 0 to n in small steps of size 1/p, maintaining two running values: the estimate a and its increment da. At each step, da is decreased by a/p² and then added to a. As the derivation above shows, after roughly p·n steps the result reproduces the Taylor series of cos(n), with an error that shrinks as p grows.

3. What are the advantages of using C++ for approximating cosine?

C++ compiles to native code, so the tight loop of simple arithmetic runs quickly even for large p, and the double type provides roughly 15-16 significant digits for the running values. The language also gives direct control over the step size and loop structure, making it easy to trade speed for accuracy.

4. How accurate is the approximated cosine value compared to the actual value?

The accuracy is governed by p, the number of steps per unit interval. Larger p means a smaller step size of 1/p and an error that shrinks roughly in proportion to 1/p; with p large enough the result agrees with the true value to many digits, although rounding in double arithmetic means it will never be exact. The sketch after this answer shows the error shrinking as p grows.
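
A quick sketch of this (the parameterized helper cosine_p is my own illustration, mirroring pokeka's function with p passed in):

#include <cmath>
#include <cstdio>

// Same update rule as the cosine() function above, with the step
// count p made a parameter so we can watch the error shrink.
double cosine_p(double n, double p) {
    double a = 1, da = 0;
    for (double i = 0; i <= n; i += 1.0 / p) {
        da += -a / (p * p);
        a += da;
    }
    return a;
}

int main() {
    const double x = 1.0;  // evaluate cos(1) at increasing resolutions
    for (double p = 10; p <= 1e5; p *= 10)
        std::printf("p=%8.0f  err=%+.3e\n", p, cosine_p(x, p) - std::cos(x));
    return 0;
}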

5. Can this method be applied to other trigonometric functions?

Yes. Sine satisfies the same recurrence; only the starting values change (start from a = 0 with an initial increment of 1/p, matching sin(0) = 0 and slope 1; see the sketch below). Tangent doesn't follow the same update rule directly, but it can be obtained as the ratio of the sine and cosine approximations.
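
A sketch of the sine variant just described (the function name sine and its structure are my own illustration, mirroring pokeka's cosine()):

#include <cmath>
#include <cstdio>

// Same update rule as the cosine loop, but started from sin(0) = 0
// with initial slope 1, i.e. an initial increment of 1/p.
double sine(double n) {
    double a = 0;             // sin(0) = 0
    double p = 1000;          // bigger numbers give a better value for sin(n)
    double da = 1.0 / p;      // slope 1 times step size 1/p
    for (double i = 0; i <= n; i += 1.0 / p) {
        da += -a / (p * p);
        a += da;
    }
    return a;
}

int main() {
    std::printf("sine(1.0) = %.6f   std::sin(1.0) = %.6f\n",
                sine(1.0), std::sin(1.0));
    return 0;
}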
