Trig substitution into integrals

tadf2
I was testing for convergence of a series:
$$\sum_{n=3}^{\infty} \frac{1}{n^2 - 1}$$

I used the integral test, making the substitution ##n = 2\sin(u)##.

So here's the question:
When using the trig substitution, I realized the upper bound, infinity, would fit inside the sine.

Is it still possible to make the substitution? Or is there a restriction when this happens?
 
tadf2 said:
I was testing for convergence of a series:
$$\sum_{n=3}^{\infty} \frac{1}{n^2 - 1}$$

I used the integral test, making the substitution ##n = 2\sin(u)##.

So here's the question:
When using the trig substitution, I realized the upper bound, infinity, would fit inside the sine.
What does "fit inside the sine" mean?
tadf2 said:
Is it still possible to make the substitution? Or is there a restriction when this happens?
Sure, you can make the substitution. The integral will be from 3 to, say, b, and then you take the limit as b → ∞.

Not that you asked, but it's probably simpler and quicker to break up ##1/(n^2 - 1)## using partial fractions.
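For reference, here's a sketch of how that route works out (writing ##x## for the continuous variable in the integral test). With partial fractions,
$$\frac{1}{x^2 - 1} = \frac{1}{2}\left(\frac{1}{x - 1} - \frac{1}{x + 1}\right),$$
so
$$\int_3^\infty \frac{dx}{x^2 - 1} = \lim_{b \to \infty} \frac{1}{2}\ln\left|\frac{x - 1}{x + 1}\right|\Bigg|_3^b = \lim_{b \to \infty} \frac{1}{2}\left(\ln\frac{b - 1}{b + 1} - \ln\frac{1}{2}\right) = \frac{1}{2}\ln 2,$$
which is finite, so the series converges by the integral test.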
 
By "inside the sine" I mean that the argument of the arcsine can only range from -1 to 1.
So I'm guessing you can't make the substitution, because arcsin(infinity) is undefined?
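(Spelled out: ##2\sin u## only takes values in ##[-2, 2]##, so there is no real ##u## with ##2\sin u = n## once ##n \ge 3##; equivalently, ##u = \arcsin(n/2)## is undefined for ##n > 2##.)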
 
If you're looking for an appropriate trig substitution for the definite integral (and not just one that gets you a correct antiderivative), then ##\sec u## is the way to go. But like Mark44 said, partial fractions is really the "right" technique of integration for this particular integral.
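As a sketch (writing ##x## for the variable of integration): with ##x = \sec u## one has ##dx = \sec u \tan u \, du## and ##x^2 - 1 = \tan^2 u##, so
$$\int \frac{dx}{x^2 - 1} = \int \frac{\sec u \tan u}{\tan^2 u}\, du = \int \csc u \, du = -\ln\left|\csc u + \cot u\right| + C = \frac{1}{2}\ln\left|\frac{x - 1}{x + 1}\right| + C,$$
which matches the partial fraction result. And unlike ##\arcsin##, the ##u##-limits stay well defined as ##x \to \infty##, since ##u = \operatorname{arcsec} x \to \pi/2##.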
 