Why is (a+b)/2 not recommended in the Bisection method?

CandidFlakes
TL;DR Summary
I am confused by two different explanations of the bisection method in two different books.
[Attached image: doubt.png — textbook excerpt that advises against (a+b)/2]

The image attached above, from a textbook, explains that we should refrain from using (a+b)/2 when applying the bisection method, but I am unable to understand why it says to do so.
[Attached image: doubt.png — excerpt from a second textbook that uses (a+b)/2]

The image above is from another textbook, and this book uses (a+b)/2.
I am really confused by the two different instructions. Please help me clear up the confusion.
Thanks in advance!
 
The two methods are equivalent. In the first (5.1) they calculate ##m = a + \frac{b - a} 2##, which is the midpoint of the interval [a, b]. Then they check to see if f(a) and f(m) are of the same sign or not. If not, the function zero must be between a and m.

In the second method, they calculate ##c = \frac{a + b} 2##, the average of a and b. If you do the algebra you should see that m and c are equal for any given pair of values a and b. After c is calculated, they calculate f(a)f(c) and determine whether the sign of this product is positive or negative. If it's negative, f(a) and f(c) lie on opposite sides of the horizontal axis, so the function zero must be between a and c.
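Here's a rough Python sketch of what both books are describing (the test function, bracket, and tolerance below are just placeholders I picked, not anything from either book):

```python
def bisect(f, a, b, tol=1e-12, max_iter=200):
    """Find a zero of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = a + (b - a) / 2          # midpoint, written as in the first book (5.1)
        fm = f(m)
        if fm == 0 or (b - a) / 2 < tol:
            return m
        if fa * fm < 0:              # sign change on [a, m], so the zero is there
            b = m
        else:                        # otherwise the zero is on [m, b]
            a, fa = m, fm
    return a + (b - a) / 2

# Example: the positive zero of x^2 - 2 is sqrt(2) = 1.41421356...
print(bisect(lambda x: x * x - 2, 1.0, 2.0))
```

Writing the midpoint as ##c = \frac{a + b} 2## instead changes nothing mathematically; the difference only shows up in finite-precision arithmetic, as discussed below.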
 
Mark44 said:
The two methods are equivalent.

Thanks so much!
But why does the book say that the formula (a+b)/2 gives a "midpoint" of 0.7 for the interval [0.67, 0.69]?
Also, can you please give me a hint about how a + b can overflow in extreme cases?
 
CandidFlakes said:
But why does the book say that the formula (a+b)/2 gives a "midpoint" of 0.7 for the interval [0.67, 0.69]?
Because they are specifying 2-digit decimal arithmetic. The result of adding a and b is 1.36 in this case, which has to be rounded to two significant digits, giving 1.4. When that is divided by 2, you get 0.7, which isn't in the interval [0.67, 0.69].
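You can reproduce the book's arithmetic with Python's decimal module, which lets you force every operation to round to 2 significant digits (the values 0.67 and 0.69 are the book's; the rest is just my own illustration):

```python
from decimal import Decimal, getcontext

getcontext().prec = 2  # every operation rounds its result to 2 significant digits

a, b = Decimal("0.67"), Decimal("0.69")

mid1 = (a + b) / 2      # a + b -> 1.36 rounds to 1.4, so mid1 = 0.7 (outside [a, b])
mid2 = a + (b - a) / 2  # b - a -> 0.02, /2 -> 0.01, a + 0.01 -> 0.68 (inside [a, b])

print(mid1, mid2)       # 0.7 0.68
```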
CandidFlakes said:
Also, can you please give me a hint about how a + b can overflow in extreme cases?
They're talking about finite-precision arithmetic. If you're dealing with 8-bit integers, the largest such unsigned integer is 255. If a = 130 and b = 140, then a + b results in an overflow; i.e., the sum can't be represented in 8 bits. I think that's what they're talking about.
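A quick way to see it, emulating unsigned 8-bit arithmetic in Python (the u8 helper is just my own way of forcing every result back into 8 bits):

```python
def u8(x):
    # Emulate unsigned 8-bit arithmetic: keep only the low 8 bits (wraps mod 256).
    return x & 0xFF

a, b = 130, 140                 # both values fit in 8 bits (0..255)

naive = u8(u8(a + b) // 2)      # a + b = 270 wraps to 14, so the "midpoint" is 7
safe  = u8(a + u8(b - a) // 2)  # b - a = 10 fits, so the midpoint is 135

print(naive, safe)              # 7 135
```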
 
Thanks so much!
 
They have to be talking about very old or limited computers. With any common computer of the last few decades, a reasonable tolerance value will stop the computation long before the midpoint calculation has an error. Assuming that you are working with 32- or 64-bit floating-point calculations, you can ignore their worry about the midpoint calculation.
 
Amusingly, the only time I have ever had a bug because of an issue like overflow is when I wrote some code to multiply two integers to see if they were the same sign, and got an integer overflow.
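In a fixed-width integer language that product can overflow even when both operands fit comfortably; comparing the signs directly avoids forming the product at all. A rough sketch of the idea (Python's own ints never overflow, so this is only illustrative):

```python
def same_sign(x, y):
    # Compare signs without multiplying; ignores the x == 0 or y == 0 case for brevity.
    return (x > 0) == (y > 0)

print(same_sign(2_000_000_000, 2_000_000_000))  # True; no huge product is ever formed
print(same_sign(2_000_000_000, -1))             # False
```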

This book is not a good source for learning good programming practice in 2022.
 
