What is the sum of existing partial products in Booth's Algorithm?

  • #1
domabo
TL;DR Summary
Trying to understand Booth's Algorithm. Particularly, what is meant by "sum of existing partial products" as Booth defines it in his original paper.
Trying to understand Booth's Algorithm. There are some YouTube videos online, but I decided I'd try reading Booth's original paper: http://bwrcs.eecs.berkeley.edu/Classes/icdesign/ee241_s00/PAPERS/archive/booth51.pdf
Can anyone explain what is meant by "sum of existing partial products"? (Page 3 where the actual algorithm is described)

I don't see where this was defined and it's tripping me up as I'm trying to follow along the example at the bottom.
Thanks for your time.
 
  • #2
The "sum of existing partial products" starts as zero and is changed each time through the loop.
It is repetitively changed by the process described in the paragraph at the top of that third page - and the 5 numbered clauses that are included in the paragraph.

Note that this was written in 1950 - before the language for processing "binary numbers" had evolved very far. The term "bit" had been coined by Bell Labs worker John Tukey just three years earlier.
 
  • #3
.Scott said:
The "sum of existing partial products" starts as zero and is changed each time through the loop.
It is repetitively changed by the process described in the paragraph at the top of that third page - and the 5 numbered clauses that are included in the paragraph.

Note that this was written in 1950 - before the language for processing "binary numbers" had evolved very far. The term "bit" had been coined by Bell Labs worker John Tukey just three years earlier.
How did you come to that conclusion? You seem to be right as it agrees with other things I've read on this algo, but I don't see where it says to take the sum of existing partial products as zero. Thanks for the reply.
 
  • #4
I will give you the answer - but I will lead it with an explanation on why you should not try to mimic the language Booth is using.

As I noted before, it was written in 1950. My professional career as a "Software Engineer" started 21 years after that paper was written and more than a decade before "Software Engineer" was a recognized job title.

The author is addressing an audience that doesn't exist today.

So here it is: If you are adding up a sum of anything, you need to start at zero.
Also, if you had ever seen one of the "calculating machines" that he mentions in actual operation, the notion that it would start at zero and then process "n" steps (starting at n=1) would be more apparent.

Those machines were mechanical and rather fun to watch.
 
  • #5
.Scott said:
I will give you the answer - but I will lead it with an explanation on why you should not try to mimic the language Booth is using.

As I noted before, it was written in 1950. My professional career as a "Software Engineer" started 21 years after that paper was written and more than a decade before "Software Engineer" was a recognized job title.

The author is addressing an audience that doesn't exist today.

So here it is: If you are adding up a sum of anything, you need to start at zero.
Also, if you had ever seen one of the "calculating machines" that he mentions in actual operation, the notion that it would start at zero and then process "n" steps (starting at n=1) would be more apparent.

Those machines were mechanical and rather fun to watch.
So why is it that if I'm adding up a sum of anything, I need to start at zero exactly? You've said that this must be the case but not why. Thanks for your patience.
 
  • #6
Zero is the "additive identity" for the numbering system.
He refers to it as a "partial sum". If you want the final sum to be the addition of "n" values, you had better start with zero.
 
  • #7
Or in words:

PartialSum = 0
PartialSum = PartialSum + CurrentPartial
this results in PartialSum being the CurrentPartial

PartialSum = 7.62 (just an arbitrary value for an uninitialized variable)
PartialSum = PartialSum + CurrentPartial
this results in PartialSum NotEqual CurrentPartial
∴ PartialSum is never equal to the sum of the CurrentPartials
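The same idea as a runnable Python sketch (the list of values and the 7.62 junk value are just illustrative, as above):

```python
current_partials = [4, 9, 2]  # arbitrary example values

# Start at the additive identity: the final value equals the true sum.
partial_sum = 0
for current_partial in current_partials:
    partial_sum = partial_sum + current_partial
# partial_sum == 15, the sum of the CurrentPartials

# Start from uninitialized junk: the junk contaminates every result.
partial_sum_bad = 7.62  # arbitrary uninitialized value
for current_partial in current_partials:
    partial_sum_bad = partial_sum_bad + current_partial
# partial_sum_bad == 22.62, off by exactly the junk starting value
```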

Cheers,
Tom
 
  • #8
domabo said:
So why is it that if I'm adding up a sum of anything, I need to start at zero exactly? You've said that this must be the case but not why.
Here's why. When you add a list of numbers, you accumulate the intermediate results. For example, to add 10 + 20 + 30 + 40, you start with 0.
Then add 10 to the previous result, so that you now have a partial sum (an accumulated value) of 10.
Then add 20 to the previous result, making the new partial sum 30.
Then add 30 to the previous result -- partial sum is now 60.
Finally add 40 to the previous result -- partial sum is now 100. And, since you've run out of numbers to add, this is also the final result.
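In Python, the running partial sums from that example look like this (a minimal sketch):

```python
numbers = [10, 20, 30, 40]
partial_sum = 0   # start at zero before accumulating anything
partial_sums = []
for n in numbers:
    partial_sum += n               # add the next number to the previous result
    partial_sums.append(partial_sum)
# partial_sums == [10, 30, 60, 100]; the last entry is the final result
```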

If you open a bank savings account, your very first balance is $0. Any deposit you then make gets added to that initial balance of 0. If your balance started off at, say, $50, the bank would have lost money. If your balance started at, say, -$100, you would very likely not want to do business with that bank.
 
  • #9
Thanks all for the responses. I think everything is clear and makes sense now.

One final question: Out of curiosity, can anyone elaborate a little more on the mechanical process outlined by Booth? I don't, for example, know what he's saying when he talks about the short cutting method or what the bars over the decimal numbers mean. He seems to be saying the process he's outlined in his algorithm is equivalent to this process.
 
  • #10
domabo said:
Thanks all for the responses. I think everything is clear and makes sense now.

One final question: Out of curiosity, can anyone elaborate a little more on the mechanical process outlined by Booth? I don't, for example, know what he's saying when he talks about the short cutting method or what the bars over the decimal numbers mean. He seems to be saying the process he's outlined in his algorithm is equivalent to this process.
As I said, those machines were fun to watch.
But basically, if the digit you are multiplying by is "8", it is quicker for the machine to subtract twice (-2) and then do one more add on the next decimal place (+10) than it is to do 8 adds.
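That shortcut can be written as recoding the multiplier into signed digits, where a "barred" digit means subtract instead of add -- which is exactly the idea Booth carried over to binary. Here is a hypothetical Python sketch (the function name and the choice of the digit range -5..5 are my own, not from the paper):

```python
def signed_digit_recode(n):
    """Recode a non-negative integer so every decimal digit is in -5..5.
    A negative ("barred") digit means cranking the machine backwards;
    e.g. 8 becomes (1, -2): add once at the next place, subtract twice here."""
    digits = [int(d) for d in str(n)][::-1]  # least significant digit first
    recoded = []
    carry = 0
    for d in digits:
        d += carry
        if d > 5:
            recoded.append(d - 10)  # barred (negative) digit
            carry = 1               # compensate one decimal place higher
        else:
            recoded.append(d)
            carry = 0
    if carry:
        recoded.append(1)
    return recoded[::-1]            # most significant digit first

# 8   -> [1, -2]       i.e. 10 - 2
# 279 -> [3, -2, -1]   i.e. 300 - 20 - 1
```

Multiplying by 8 this way costs three crank operations (subtract twice, add once a place higher) instead of eight additions, which is the point above.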

I looked for a good video. Here is one: Friden Calculator repair
This wasn't the most advanced machine. I could not find one of those in action.
While in High School, I had a summer job at the IRS in Andover, MA. They had lots of those machines - including the most advanced ones. And how can one resist such toys?
 

What is Booth's Algorithm?

Booth's Algorithm is a multiplication algorithm used to multiply two signed binary numbers. It is an efficient method for performing multiplication using shifts and additions rather than using the traditional method of repeated addition.

How does Booth's Algorithm work?

Booth's Algorithm works by breaking down a multiplication problem into smaller sub-problems and then using a series of shifts and additions to solve each sub-problem. It utilizes the concept of partial products and signed digit representation to efficiently perform multiplication.
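As a concrete illustration, here is a minimal Python sketch of the radix-2 algorithm (it uses Python's unbounded integers rather than the fixed-width shift registers of Booth's paper, and the variable names are my own). The `product` accumulator is the "sum of existing partial products" from the thread above -- note it starts at zero:

```python
def booth_multiply(m, r, n):
    """Multiply two n-bit two's-complement integers m and r with Booth's
    recoding: scan the multiplier's bits (with an appended 0 below the LSB)
    in pairs; 01 adds the shifted multiplicand, 10 subtracts it."""
    product = 0                    # the "sum of existing partial products"
    prev = 0                       # the appended 0 below the least significant bit
    rbits = r % (1 << n)           # two's-complement bit pattern of r
    for i in range(n):
        bit = (rbits >> i) & 1
        if (bit, prev) == (0, 1):
            product += m << i      # 01: add the multiplicand at this position
        elif (bit, prev) == (1, 0):
            product -= m << i      # 10: subtract the multiplicand
        # 00 and 11: add nothing, just move to the next bit pair
        prev = bit
    return product
```

For example, `booth_multiply(3, -4, 4)` returns -12: the multiplier -4 is 1100 in 4-bit two's complement, so the only action is one subtraction of the multiplicand shifted left by two places.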

What are the advantages of using Booth's Algorithm?

Booth's Algorithm offers several advantages over traditional multiplication methods. It reduces the number of steps required to perform multiplication, which results in faster computation times. It also requires less space and hardware, making it a more efficient method for multiplication.

Are there any limitations to using Booth's Algorithm?

One limitation of Booth's Algorithm is that it is only applicable to signed binary numbers. It also requires additional steps to handle negative numbers, which can make it slightly more complex to implement. Additionally, it may not be as efficient for small multiplication problems as the traditional method.

How is Booth's Algorithm used in modern computing?

Booth's Algorithm is commonly used in digital signal processing, where multiplication is a crucial operation. It is also used in computer architecture and hardware design, as it offers a more efficient method for multiplication. However, with the advancements in technology, other multiplication algorithms such as Karatsuba and Toom-Cook are gaining popularity due to their ability to handle larger numbers.
