Numerical integration - Techniques to remove singularities

AI Thread Summary
The discussion focuses on the numerical integration of ##\int_{-1}^{2} \frac{dx}{\sin(|x|^{1/2})}##, highlighting the singularity at x = 0. Because the integrand behaves like ##|x|^{-1/2}## near zero, the singularity is integrable and numerical methods can still yield accurate results. Participants emphasize that even basic techniques, like Simpson's rule, can handle such integrals if the singularity is appropriately addressed. A suggested approach for removing singularities is to decompose the integrand into a singular part that can be integrated analytically and a well-behaved part that can be integrated numerically. Overall, the conversation underscores the importance of understanding singularity behavior in numerical integration.
franciobr
Hello everyone!

I am trying to understand why the following integral causes no problems when computed numerically:

$$\int_{-1}^{2} \frac{dx}{\sin\left(|x|^{1/2}\right)}.$$

Clearly there is a singularity at x = 0, but why does taking the absolute value of x and then its square root solve the problem?

I searched for quite a while on the internet and in numerical integration books and was surprised that I couldn't find an answer. Apparently there is a lack of documentation on singularity removal techniques on the web, or I am not searching with the right keywords. Any introductory documentation on the subject is welcome!

For the record, I am using MATLAB built-in functions such as quadtx() or integral() to compute it, but that's not really the point, since it turns out even the simplest Simpson's rule algorithm can deal with this integral.
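A minimal sketch of the direct call, assuming the integrand above (the variable names are just illustrative):

Code:
f = @(x) 1 ./ sin(sqrt(abs(x)));   % integrand, singular at x = 0
I = integral(f, -1, 2)             % adaptive quadrature still returns a finite answer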
 
Just because you get a certain number out of a numerical routine does not mean it is accurate. Convergence for a particular definite integral containing a singularity within the limits of integration must be shown using other analysis methods before judging whether the resulting evaluation has any meaning.
 
Yes. The result is accurate: if you avoid the singularity by integrating from -1 to 0 and from 0 to 2 separately and summing the two results, the answer is exactly the same. I do not know how to solve this integral analytically, and, well, that's the whole point of numerical integration.

I didn't mention that if you take the integral without the absolute value or the square root, you get no answer and the method does not converge. I am interested in why quadrature methods converge in these cases. I am not interested in that particular integral: if you do the same with 1/x-type functions and integrate through the singularity at 0, you get convergence with the same methods and the results agree with the analytical ones. There must be a reason why taking the square root and the absolute value in the denominator makes the numerical methods converge.

Applying the absolute value seems logical, since you don't want an imaginary number coming out of the square root. But the convergence of the method is what astonishes me. I would like to know why it converges, and whether anyone has a good source on these techniques for excluding singularities in numerical integration.
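A minimal sketch of the split from -1 to 0 and 0 to 2 described above (illustrative variable names); integral() handles the endpoint singularity in each piece:

Code:
f  = @(x) 1 ./ sin(sqrt(abs(x)));
I1 = integral(f, -1, 0);       % the singularity now sits at an endpoint
I2 = integral(f,  0, 2);
I_split = I1 + I2              % compare with integral(f, -1, 2) over the whole interval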
 
It's fairly easy to see that the integral is convergent. Expand ##\sin u = u - u^3/3! + u^5/5! - \cdots## with ##u = |x|^{1/2}## in the denominator and you get $$\frac{1}{|x|^{1/2}} \frac {1}{1 - |x|/3! + |x|^2/5! - \cdots}$$

So close to ##x = 0## the function behaves like ##|x|^{-1/2}##, whose singularity is integrable.
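Concretely, the ##|x|^{-1/2}## behavior contributes only a finite amount near the origin: $$\int_{0}^{\epsilon} x^{-1/2}\,dx = 2\sqrt{\epsilon},$$ so the area near the singularity is finite and adaptive quadrature can still converge there.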

If you want to remove the singularity before you do the numerical integration, one way is to write the function as ##f(x) = s(x) + g(x)##, where ##s(x)## contains the singular part and can be integrated explicitly.

Then integrate ##s(x)## analytically and ##g(x)## numerically.

For this example you could take ##s(x) = |x|^{-1/2}## and ##g(x) = f(x) - s(x)## (where ##f(x)## is the function the OP wants to integrate).
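The singular part then has a simple closed-form integral over the interval: $$\int_{-1}^{2} |x|^{-1/2}\,dx = \int_{0}^{1} u^{-1/2}\,du + \int_{0}^{2} u^{-1/2}\,du = 2 + 2\sqrt{2}.$$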

You know that ##g(0) = 0## and ##g(x)## is well behaved near ##x = 0##, so any numerical integration method should converge quickly.
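A minimal MATLAB sketch of this subtraction, assuming the integrand from the original post and the built-in integral() mentioned there (the names gfun, I_s, I_g are only illustrative):

Code:
% Singularity subtraction for the integral of 1/sin(sqrt(|x|)) over [-1, 2].
% Analytic integral of the singular part s(x) = |x|^(-1/2) over [-1, 2]:
I_s = 2 + 2*sqrt(2);

% Numerical integral of the smooth remainder g = f - s (singularity removed).
I_g = integral(@gfun, -1, 2);

I_total = I_s + I_g

function y = gfun(x)
    % g(x) = f(x) - s(x); the 0/0 at x = 0 is removable and the limit is 0.
    y = 1 ./ sin(sqrt(abs(x))) - 1 ./ sqrt(abs(x));
    y(x == 0) = 0;             % patch the removable point explicitly
end

If your MATLAB release does not allow local functions in scripts, put gfun in its own file.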
 
Good job Aleph, it makes sense now. Thanks!
 