The Mystery of Approximating Cosine with C++

  • Thread starter: pokeka
  • Tags: Work
AI Thread Summary
The discussion centers on a C++ algorithm developed to approximate the cosine of a number with high precision. The algorithm utilizes a recursive relation and involves manipulating series to derive coefficients that approximate the Taylor series for cosine. The poster expresses a desire for feedback on the algorithm's validity and seeks simpler methods for achieving the same result. There is an acknowledgment of the effort put into the work, alongside a concern about potential misuse for homework purposes. Overall, the thread highlights both the technical aspects of the algorithm and the community's appreciation for the effort involved.
pokeka:
Hi, this is my first post. Hope I'm not breaking any rules. Anyway, I was messing around programming the other day, and I accidentally came up with an algorithm that will approximate the cosine of a number with a great deal of precision. Here's the code:

#include <cmath>  // for std::pow

double cosine(double n) {
    double a = 1;
    double da = 0;
    double p = 1000;  // Bigger numbers will give you a better value for cos(n)
    for (double i = 0; i <= n; i += std::pow(p, -1)) {
        da += -a / std::pow(p, 2);
        a += da;
    }
    return a;
}

It's kinda hard to put this into words, and I imagine a lot of people here know a little bit of C++, so, for now, I won't try to explain what it does. I can try though, if you want. So, yeah, if anyone could tell me why this works, that would rock.
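
In case anyone wants to try it, here's a quick test harness (my own sketch, not part of pokeka's post) that compares the function above with std::cos. It assumes the cosine() definition above is pasted into the same file:

#include <cmath>
#include <cstdio>

double cosine(double n);  // defined in the post above

int main() {
    // Compare the loop-based approximation with the standard library cosine.
    for (double x : {0.5, 1.0, 2.0, 3.0}) {
        std::printf("x = %.2f   cosine(x) = %.8f   std::cos(x) = %.8f\n",
                    x, cosine(x), std::cos(x));
    }
    return 0;
}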
 
OK, I think I was able to derive this correctly. I really hope this isn't homework, as I put a lot of time into this for you to just be cheating off me. I'll see if I can explain this concisely, but it'll be tough. First, we'll write out the recursion relation explicitly:

da_n=da_{n-1}-\frac{1}{p^2} a_{n-1}

a_n = a_{n-1} + da_{n}

Where a_0 = 1 and da_0 = 0. Next we define:

\alpha \equiv -\frac{1}{p^2}

And we find:

da_n=\alpha(a_0+a_1+ ... +a_{n-1})

And similarly:

\begin{align*} a_n &= a_0 + da_1 + ... + da_n\\
&= 1 + \alpha \left( (a_0) + (a_0+a_1) + ... + (a_0+...+a_{n-1})\right) \\
&= 1 + \alpha \sum_{k=0}^{n-1} (n-k) a_k \end{align*}
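
Before moving on, here's a small numerical sanity check of that identity (my own sketch, not part of the derivation itself): it builds a_n from the da-recursion and compares it with 1 + \alpha \sum_{k=0}^{n-1} (n-k) a_k.

#include <cstdio>
#include <vector>

int main() {
    const double p = 10.0;                  // any p works; the identity is exact
    const double alpha = -1.0 / (p * p);
    const int N = 20;

    // Build a_n and da_n from the two-term recursion above.
    std::vector<double> a(N + 1), da(N + 1);
    a[0] = 1.0;
    da[0] = 0.0;
    for (int n = 1; n <= N; ++n) {
        da[n] = da[n - 1] + alpha * a[n - 1];
        a[n] = a[n - 1] + da[n];
    }

    // Compare a_n with the closed sum 1 + alpha * sum_{k<n} (n-k) a_k.
    for (int n = 1; n <= N; ++n) {
        double sum = 0.0;
        for (int k = 0; k < n; ++k) sum += (n - k) * a[k];
        std::printf("n = %2d   a_n = %.12f   1 + alpha*sum = %.12f\n",
                    n, a[n], 1.0 + alpha * sum);
    }
    return 0;
}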

That identity eliminates the need for the da_n. Now, you can see by computing a few terms that the a_n will have the form:

a_n=\sum_{j=0}^n c_{nj} \alpha^j

For some constants c_{nj}. Plugging this into the above equation:

\begin{align*} \sum_{j=0}^n c_{nj} \alpha^j &= 1 + \alpha \sum_{k=0}^{n-1} (n-k) \sum_{j=0}^k c_{kj} \alpha^j \\
&= 1 + \sum_{j=0}^{n-1} \alpha^{j+1}\sum_{k=j}^{n-1} (n-k)c_{kj} \end{align*}

Equating like powers of \alpha gives a recursion relation for c_{nj}:

c_{nj}=\sum_{k=j-1}^{n-1} (n-k)c_{k,j-1}

With initial conditions c_{n0}=1. Now, it turns out these coefficients are (2j)th order polynomials in n. We will be taking the limit as \alpha goes to zero later on, and n will go to p=1/\sqrt{|\alpha|}, so we're only interested in the highest power of n. Thus, we will use the approximation:

c_{nj} \approx b_j n^{2j}

Also, the following approximation holds to leading order in n (as you'll see, we actually apply this formula to a series that doesn't start at 1, but the limit justifies this):

\sum_{k=1}^{n-1} k^q \approx \frac{1}{q+1} n^{q+1}

So that:

\begin{align*}b_j n^{2j} &= \sum_{k=j-1}^{n-1} (n-k) b_{j-1} k^{2j-2} \\
&= n\sum_{k=j-1}^{n-1}b_{j-1} k^{2j-2}-\sum_{k=j-1}^{n-1} b_{j-1} k^{2j-1} \\
&\approx \frac{1}{2j-1} n^{2j} b_{j-1}- \frac{1}{2j} n^{2j}b_{j-1} \\
&=\frac{1}{(2j)(2j-1)} n^{2j} b_{j-1} \end{align*}

And so:

b_{j} = \frac{1}{(2j)(2j-1)} b_{j-1}

And together with b_0=1, we have our first non-recursive results, even though they're only approximate:

b_j= \frac{1}{(2j)!}

c_{nj} \approx \frac{n^{2j}}{(2j)!}

a_n \approx \sum_{j=0}^n \frac{n^{2j}}{(2j)!} \alpha^j
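
As a sanity check of the c_{nj} approximation (again my own sketch, not part of the derivation), one can build c_{nj} from the exact recursion c_{nj} = \sum_{k=j-1}^{n-1} (n-k) c_{k,j-1} with c_{n0}=1 and watch the ratio c_{nj} / (n^{2j}/(2j)!) approach 1 as n grows:

#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const int N = 200;    // largest n to tabulate
    const int J = 4;      // check a few values of j

    // c[n][j] from the exact recursion, with c[n][0] = 1.
    std::vector<std::vector<double>> c(N + 1, std::vector<double>(J + 1, 0.0));
    for (int n = 0; n <= N; ++n) c[n][0] = 1.0;
    for (int j = 1; j <= J; ++j)
        for (int n = 1; n <= N; ++n)
            for (int k = j - 1; k <= n - 1; ++k)
                c[n][j] += (n - k) * c[k][j - 1];

    // Ratio against the leading-order approximation n^{2j}/(2j)!.
    for (int j = 1; j <= J; ++j) {
        std::printf("j = %d:", j);
        for (int n : {25, 50, 100, 200}) {
            double approx = std::pow(n, 2 * j) / std::tgamma(2.0 * j + 1);
            std::printf("   ratio(n = %3d) = %.4f", n, c[n][j] / approx);
        }
        std::printf("\n");
    }
    return 0;
}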

We are interested in a_{px} (I'm using x here where the OP used n). This is given by (putting p back in):

\begin{align*}a_{px} &\approx \sum_{j=0}^{px} \frac{{(px)}^{2j}}{(2j)!} \frac{(-1)^j}{p^{2j}}\\
&=\sum_{j=0}^{px}(-1)^j \frac{x^{2j}}{(2j)!}\end{align*}

Which, in the limit p \gg 1/x (for x > 0), gives:

a_{px} \approx \sum_{j=0}^{\infty} (-1)^j \frac{x^{2j}}{(2j)!}=\cos(x)

The Taylor series for cos(x). Anyone know an easier way?
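
For what it's worth, here's a quick convergence check of that last limiting step (my own sketch, not from the thread): with the step size tied to p, the error of the original loop against std::cos should shrink as p grows. The helper name cosine_p is mine; it's just the original function with p passed in.

#include <cmath>
#include <cstdio>

// Same loop as pokeka's, with p passed in so it can be varied.
double cosine_p(double n, double p) {
    double a = 1.0, da = 0.0;
    for (double i = 0.0; i <= n; i += 1.0 / p) {
        da += -a / (p * p);
        a += da;
    }
    return a;
}

int main() {
    const double x = 1.0;
    for (double p : {10.0, 100.0, 1000.0, 10000.0}) {
        double err = std::fabs(cosine_p(x, p) - std::cos(x));
        std::printf("p = %7.0f   |cosine_p(x, p) - cos(x)| = %.2e\n", p, err);
    }
    return 0;
}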
 
Oh, you're welcome pokeka. Not a problem. Your thank you made it all worth while...
 
StatusX said:
Oh, you're welcome pokeka. Not a problem. Your thank you made it all worth while...

If it's any consolation, I am certainly impressed. :)
 
Sartak said:
If it's any consolation, I am certainly impressed. :)

Thanks. Sorry I'm bitter, but I did put a bit of work into that.
 