Physical derivation of the sine series

AI Thread Summary
The discussion explores the derivation of the sine series through physical analogies, inspired by Feynman's lectures on high-frequency capacitors. The original poster attempts to relate the sine function to a harmonic oscillator's motion, starting from the cosine expansion at time t=0. They analyze the potential energy of the oscillator and its derivatives, leading to a series that resembles the cosine function. The poster questions whether their approach is valid and seeks confirmation on the significance of the terms derived. Overall, the conversation centers on connecting physical concepts to mathematical series expansions.
Storm Butler
I was reading Feynman's Lectures on Physics and I came across Section 23-2 in Volume II, where he talks about a capacitor at high frequencies. He uses the equations of E&M to come up with an approximation of the electric field between the two plates as the field oscillates at a high frequency. To make the approximation better, he accounts for the fact that the changing electric field generates a magnetic field, whose change in turn corrects the electric field; then he corrects the magnetic field according to the corrected electric field, and so on and so forth.

In the end he has an infinite series, J_{0}, which is the Bessel function.
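As a quick illustration (my own sketch, not from the lecture itself): each successive field correction contributes one more term of the standard series for J_{0}, namely (-1)^n (x/2)^{2n} / (n!)^2, and the partial sums converge fast:

```python
import math

def j0_partial(x, n_terms):
    # Partial sums of J0(x) = sum_{n>=0} (-1)^n (x/2)^(2n) / (n!)^2,
    # the series Feynman assembles term by term from the field corrections.
    return sum((-1) ** n * (x / 2) ** (2 * n) / math.factorial(n) ** 2
               for n in range(n_terms))

for k in range(1, 6):
    print(k, j0_partial(2.0, k))
# 1.0, 0.0, 0.25, 0.2222..., 0.2240... -> converging to J0(2) ≈ 0.22389
```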

I was wondering if there would be any way to come up with the sine expansion in a similar way. I was first looking at a circular arc of angle \vartheta and trying to show that the length of the chord was r\sin(\vartheta), but I am not sure how to go about it. Maybe someone has a suggestion for how to do it in a more physical situation, similar to how Feynman did it.
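One geometric detail worth pinning down first (a quick numerical check of my own, not part of the question): the chord subtending an angle \vartheta has length 2r\sin(\vartheta/2), while r\sin(\vartheta) is the half-chord, i.e. the perpendicular dropped from the arc's endpoint onto the diameter.

```python
import math

# Numerical check: chord between the points at angles 0 and theta
# on a circle of radius r, versus the closed form 2*r*sin(theta/2).
r, theta = 1.0, 0.7
p0 = (r, 0.0)
p1 = (r * math.cos(theta), r * math.sin(theta))
print(math.dist(p0, p1))            # 0.68579...
print(2 * r * math.sin(theta / 2))  # 0.68579... (same)
```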
 
I've been thinking that one possible way would be to look at a harmonic oscillator. If we have a spring constant k = 1 and initial conditions x(0) = 1 and x'(0) = 0, then we have the solution x(t) = \cos(t). So at time t = 0 we have x = 1, which is the beginning of the cosine expansion. I don't know how to go forward from there just yet.
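A minimal numerical sketch of that setup (my own, using a crude Euler step with a deliberately small step size), showing the trajectory really does track \cos(t):

```python
import math

# Integrate x'' = -x (k = m = 1) from x(0) = 1, x'(0) = 0 with a
# small explicit Euler step, then compare against the exact cos(t).
x, v, dt, steps = 1.0, 0.0, 1e-4, 10_000
for _ in range(steps):
    a = -x                         # Hooke's law with k = 1
    x, v = x + v * dt, v + a * dt
t = steps * dt                     # t = 1.0
print(x, math.cos(t))              # both ≈ 0.5403, up to integration error
```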
 
I figure we will have to look at the higher terms of the potential, because that way we can better approximate the motion, similar to how adding higher-power terms to a Taylor expansion bends the line to curve more closely to the function in question. The potential is U = \frac{1}{2}kx^2, or, in our case with k = 1, U = \frac{1}{2}x^2 = \frac{x^2}{2!}. This is exactly the next term we need for the next piece of the cosine expansion. Is this mere coincidence, or is it significant?
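The "bending closer" picture is easy to see numerically; here is a throwaway sketch of my own comparing partial sums of 1 - \frac{t^2}{2!} + \frac{t^4}{4!} - \dots against \cos(t):

```python
import math

def cos_partial(t, n_terms):
    # Partial sum of the cosine series: 1 - t^2/2! + t^4/4! - ...
    return sum((-1) ** n * t ** (2 * n) / math.factorial(2 * n)
               for n in range(n_terms))

t = 1.2
for k in range(1, 5):
    print(k, cos_partial(t, k))    # 1.0, 0.28, 0.3664, 0.36225...
print(math.cos(t))                 # 0.36236... - each term lands closer
```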
 
My last post, I realize now, was completely wrong; I just got excited about seeing the \frac{1}{2!}x^2 term. However, the same term also shows up if you calculate the distance traveled from the acceleration at time t = 0. Perhaps this is on the right track? I'm not sure how to go on from here yet, though.
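Filling in that step explicitly (with m = k = 1, so the acceleration at t = 0 is a(0) = -x(0) = -1), the constant-acceleration estimate of the displacement is

x(t) \approx x(0) + x'(0)\,t + \tfrac{1}{2}a(0)\,t^2 = 1 - \frac{t^2}{2!},

which is exactly the second term of the cosine expansion, sign included. Note that the expansion variable here is the time t, not the displacement x.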
 
So here's how I worked it out. After everything above, I remembered that since the acceleration is at a maximum (in magnitude) at t = 0, its derivative (the jerk) is 0. Moving on to the next derivative of motion (the snap?), its contribution to the distance traveled, after integrating back up, is \frac{t^4}{4!}, which is just what we are looking for! I then continued the argument, assuming all the odd-powered terms are zero because each is the derivative of some quantity at an extremum, and found the contribution of each new term to the distance. I come up with the series 1 - \frac{t^2}{2!} + \frac{t^4}{4!} - \frac{t^6}{6!} + \dots, which is the cosine! Does this work, or did I go wrong somewhere?
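That argument can be checked mechanically (a sketch of my own): from x'' = -x every higher derivative at t = 0 follows by x^{(n+2)}(0) = -x^{(n)}(0), so the derivatives cycle 1, 0, -1, 0, ... and the Taylor series assembles exactly as described:

```python
import math

def cos_from_ode(t, n_derivs=12):
    # x'' = -x gives x^(n+2)(0) = -x^(n)(0); starting from x(0) = 1,
    # x'(0) = 0, the derivatives at t = 0 cycle 1, 0, -1, 0, ...
    d_even, d_odd = 1.0, 0.0
    derivs = []
    for _ in range(n_derivs // 2):
        derivs += [d_even, d_odd]
        d_even, d_odd = -d_even, -d_odd
    # Assemble the Taylor series sum_n x^(n)(0) * t^n / n!
    return sum(d * t ** n / math.factorial(n) for n, d in enumerate(derivs))

print(cos_from_ode(1.0), math.cos(1.0))  # both ≈ 0.5403023
```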
 