Loss of Significance: Why Does Bisection Method Fail for Large Intervals?

  • #1
Silviu

Homework Statement


Hello! I am not sure if this is the best place to ask this question, but I need some help. I use the bisection method to find the zeros of a function. When I do it for sin(x), with the initial interval ##[10^6 \pi, (10^6+1) \pi]##, the program enters an infinite loop if the stopping criterion is ##\frac{b-a}{2}<10^{-10}##, where a and b are the limits of the interval. I need to explain why this happens, while if I use, say, ##10^{-8}## instead of ##10^{-10}##, it works. I am aware it is related to loss of significance, as ##10^6 \pi## and ##(10^6+1) \pi## are very close to each other relative to their magnitude, so many digits are lost when subtracting, but I am not sure how to explain it completely. Any idea? Thank you!

Homework Equations

The Attempt at a Solution

 
  • #2
Have you implemented the method and looked at how the values of a and b vary at each iteration?
 
  • #3
Yes, the right end of the interval gets smaller and smaller until it becomes equal to the left one (at least up to the significant digits displayed). I have a = 3141592.653589793 and b = 3141592.6535897935. And after this, they both remain at these values. But based on this, the loop should stop, as they are equal to more than 10 significant digits, right?
 
  • #4
Silviu said:
And after this, they both remain at these values. But based on this, the loop should stop, as they are equal to more than 10 significant digits, right?
You are actually not looking at a certain number of significant digits (which would correspond to a relative precision), but rather at a certain number of decimals (absolute precision).

You have to compare the desired precision with machine accuracy.
 
  • #5
Oh, so let me see if I understand this right. The precision for Python floats is 53 digits in base 2. So for base 10 I have ##2^{-53}=10^{-n}## and from here I get ##n=15.95...##. So I can't have more than 16 (or 15?) significant digits in the representation of my numbers. Now ##10^6 \pi = 3141592.6535897932384626...##, which with 16 digits would be ##3141592.653589793##. But this number has only 9 digits after the decimal point, so I will never get to ##10^{-10}##? But even so, won't my 2 numbers become equal within the 16 digits, so their difference will be 0, which is still smaller than ##10^{-10}##?
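This arithmetic can be checked directly in Python; `sys.float_info` reports the parameters of the IEEE 754 double that Python floats use (a quick sanity check, not part of the original posts):

```python
import math
import sys

# A Python float is an IEEE 754 double with a 53-bit mantissa.
print(sys.float_info.mant_dig)   # 53

# Solving 2^-53 = 10^-n gives n = 53 * log10(2) decimal digits.
n = 53 * math.log10(2)
print(n)                          # ≈ 15.95

# So only about 16 significant decimal digits survive:
print(1e6 * math.pi)
```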
 
  • #6
I think that your analysis is correct. As to why the algorithm doesn't stop, you have to look at how it is implemented. You wrote that
Silviu said:
I have a = 3141592.653589793 and b = 3141592.6535897935. And after this, they both remain at these values.
so instead of converging towards ##a \approx b## (equal within ε), both reach fixed points that are more than ##10^{-10}## apart.
 
  • #7
DrClaude said:
I think that your analysis is correct. As to why the algorithm doesn't stop, you have to look at how it is implemented. You wrote that

so instead of converging towards ##a \approx b## (equal within ε), both reach fixed points that are more than ##10^{-10}## apart.
I understand this; what confuses me is that a = 3141592.653589793 has 16 digits while b = 3141592.6535897935 has 17. So why, when halving the interval, is the last digit of b disregarded? If they both had the same number of digits, your statement would make sense to me, but now I am still a bit confused.
 
  • #8
It is a question of decimal representation of floating-point binary numbers. The two numbers you get are one bit apart (in hexadecimal, they are 0x4147F7EC53A8D491 and 0x4147F7EC53A8D492).

(I used http://www.binaryconvert.com/convert_double.html to check this out.)
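The same check can be reproduced with the standard `struct` module, without the online converter (a small verification along the lines of the link above):

```python
import struct

a = 3141592.653589793
b = 3141592.6535897935

# Raw IEEE 754 bit patterns of the two doubles, as hex strings.
ha = struct.pack('>d', a).hex()
hb = struct.pack('>d', b).hex()
print(ha, hb)

# Their integer bit patterns differ by exactly 1: one bit apart.
print(int(hb, 16) - int(ha, 16))
```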
 
  • #9
DrClaude said:
It is a question of decimal representation of floating-point binary numbers. The two numbers you get are one bit apart (in hexadecimal, they are 0x4147F7EC53A8D491 and 0x4147F7EC53A8D492).

(I used http://www.binaryconvert.com/convert_double.html to check this out.)
Oh, so being one bit apart, they can't get any closer, right?
 
  • #10
Silviu said:
Oh, so being one bit apart, they can't get any closer, right?
Again, it depends on the actual implementation of the algorithm. Consider the pseudo code
Code:
dx ← b - a
da ← 0.5 * dx
a ← a + da
with the goal of shifting a towards b by bisection. If a and b are one bit (one ulp) apart, dx will be exactly one ulp and da half an ulp. da itself is representable, but a + da falls halfway between the adjacent floats a and b, so the sum rounds back to one of the two endpoints; the bracket can therefore never shrink below one ulp, and a stops changing.
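This behaviour can be verified numerically (Python 3.9+ for `math.ulp` and `math.nextafter`; a small check, not from the original post):

```python
import math

a = 3141592.653589793
b = math.nextafter(a, math.inf)   # the next representable float above a

dx = b - a
da = 0.5 * dx
print(dx == math.ulp(a))          # the gap is exactly one ulp
print(da > 0.0)                   # da itself is representable, not zero

m = a + da
print(m == a or m == b)           # but the sum rounds to an endpoint
```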
 
  • #11
DrClaude said:
Again, it depends on the actual implementation of the algorithm. Consider the pseudo code
Code:
dx ← b - a
da ← 0.5 * dx
a ← a + da
with the goal of shifting a towards b by bisection. If a and b are one bit (one ulp) apart, dx will be exactly one ulp and da half an ulp. da itself is representable, but a + da falls halfway between the adjacent floats a and b, so the sum rounds back to one of the two endpoints; the bracket can therefore never shrink below one ulp, and a stops changing.
This is my actual code
Code:
def sign(x):
    # helper assumed here: sign() is not a Python built-in
    return (x > 0) - (x < 0)

def bisection(f, a, b, tol=1e-10):

    # sanity checks to cover us from user distraction (n.b. usually user=me)
    if a > b:
        c = b
        b = a
        a = c
    if a == b:
        raise ValueError("Bisection called with a==b\n")
    if f(a)*f(b) >= 0.:
        raise ValueError("The interval does not bracket a zero! f({})={}; f({})={}\n".format(a, f(a), b, f(b)))

    while (b-a) > tol:
        midpoint = (a+b)/2.
        print(a, b)
        sfm = sign(f(midpoint))
        if sfm == sign(f(a)):
            a = midpoint
        elif sfm == sign(f(b)):
            b = midpoint
        elif sfm == 0:
            # lucky case: we found an exact zero!
            return midpoint
        else:
            raise ValueError("Something went horribly bad: sign(f(midpoint))={}\n".format(sfm))

    return (a+b)/2.
 
  • #12
In your case, since b - a is one ulp, the computed midpoint (a+b)/2 rounds to one of the two endpoints, so the bracket stays put. You can check this by writing out the actual value.
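Plugging in the concrete values from post #3:

```python
a = 3141592.653589793
b = 3141592.6535897935   # one ulp above a

midpoint = (a + b) / 2.0
# No double exists strictly between a and b, so the midpoint
# rounds to one of the endpoints and the bracket stops shrinking.
print(midpoint == a or midpoint == b)

# The gap itself is about 4.7e-10, already above the 1e-10 tolerance:
print(b - a)
```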
 

What is the bisection method and how does it work?

The bisection method is a numerical algorithm used to approximate the root of a function. It involves repeatedly dividing an interval in half and checking which half the root falls in, until the interval becomes small enough to give an accurate estimate of the root.

What is loss of significance and how does it relate to the bisection method?

Loss of significance is a term used to describe the loss of accuracy in a calculation due to the limited precision of computer arithmetic. In the bisection method, this can occur when the interval endpoints are large in magnitude: the spacing between adjacent floating-point numbers grows with the size of the numbers, so the endpoints (or the computed midpoint) round to the same value, the interval stops shrinking, and the algorithm fails to reach a small absolute tolerance.

Why does the bisection method fail for large intervals?

The bisection method fails in this situation because of the limited resolution of floating-point numbers at large magnitudes. Near ##3\times 10^6##, the spacing between adjacent double-precision values is about ##4.7\times 10^{-10}##, so the computed midpoint eventually coincides with one of the endpoints and the bracket can never become narrower than one ulp. Any absolute stopping tolerance smaller than that spacing (such as ##10^{-10}##) is then unreachable, and the loop never terminates.
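The growth of the float spacing with magnitude is easy to see (Python 3.9+ for `math.ulp`; the numbers are illustrative):

```python
import math

# Spacing between adjacent doubles (one ulp) scales with magnitude:
u1 = math.ulp(1.0)          # ≈ 2.2e-16 near 1
ub = math.ulp(3141592.65)   # ≈ 4.7e-10 near 3e6, already above 1e-10
print(u1, ub)
```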

What are some ways to avoid loss of significance in the bisection method?

One way to avoid this failure mode is to use a relative stopping criterion, for example ##b-a \le \varepsilon \max(|a|,|b|)##, or to stop as soon as the computed midpoint equals one of the endpoints, since the bracket cannot shrink further. Other options are to use a higher-precision data type, or to rescale the problem so that the root lies near the origin, where the floating-point grid is finer. Note that simply switching to another bracketing method such as regula falsi does not by itself remove the problem, since it uses the same floating-point arithmetic.
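A relative stopping criterion of the kind described above could be sketched as follows (illustrative code, not from the thread; the function name and parameters are made up). It uses cos rather than sin so that the root lies safely inside the interval:

```python
import math

def bisect_rel(f, a, b, rtol=1e-15, max_iter=200):
    """Bisection with a relative tolerance and a can't-shrink guard (a sketch)."""
    fa = f(a)
    for _ in range(max_iter):
        m = a + 0.5 * (b - a)
        # Stop when the bracket is small relative to its magnitude,
        # or when it can no longer shrink at all (m hits an endpoint).
        if (b - a) <= rtol * max(abs(a), abs(b)) or m == a or m == b:
            return m
        if f(m) * fa > 0.0:
            a, fa = m, f(m)
        else:
            b = m
    return a + 0.5 * (b - a)

# cos changes sign exactly once in [1e6*pi, (1e6+1)*pi], at (1e6+0.5)*pi.
root = bisect_rel(math.cos, 1e6 * math.pi, (1e6 + 1) * math.pi)
print(root)
```

With an absolute tolerance of ##10^{-10}## this same interval would loop forever; the relative criterion terminates because it asks only for precision the floating-point grid can actually deliver.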

Are there any other limitations or drawbacks of the bisection method?

Yes. In addition to loss of significance, the bisection method converges slowly (linearly, gaining one bit of accuracy per iteration), and it requires the function to be continuous with opposite signs at the endpoints of the interval. It also cannot handle roots of even multiplicity (where there is no sign change) or complex roots.
