What is the sum of existing partial products in Booth's Algorithm?

AI Thread Summary
The discussion revolves around understanding Booth's Algorithm, particularly the concept of "sum of existing partial products" as described in Booth's original 1950 paper. Participants clarify that this sum starts at zero and is updated through a series of steps outlined in the algorithm. The importance of beginning with zero is emphasized, as it is the additive identity in arithmetic, ensuring accurate accumulation of values. The conversation also touches on the historical context of the paper, highlighting the primitive state of binary processing language at the time. Additionally, there is curiosity about the mechanical processes referenced by Booth, particularly the "short cutting method," which allows for more efficient calculations by minimizing the number of additions required. Participants share insights into the operation of early calculating machines, illustrating the practical implications of Booth's algorithm in mechanical computation.
domabo
TL;DR Summary
Trying to understand Booth's Algorithm. Particularly, what is meant by "sum of existing partial products" as Booth defines it in his original paper.
Trying to understand Booth's Algorithm. There are some YouTube videos online, but I decided I'd try reading Booth's original paper http://bwrcs.eecs.berkeley.edu/Classes/icdesign/ee241_s00/PAPERS/archive/booth51.pdf
Can anyone explain what is meant by "sum of existing partial products"? (Page 3 where the actual algorithm is described)

I don't see where this was defined and it's tripping me up as I'm trying to follow along the example at the bottom.
Thanks for your time.
 
The "sum of existing partial products" starts as zero and is changed each time through the loop.
It is repeatedly changed by the process described in the paragraph at the top of that third page - and the 5 numbered clauses that are included in the paragraph.

Note that this was written in 1950 - before the language for processing "binary numbers" had evolved very far. The term "bit" had been coined by Bell Labs worker John Tukey just three years earlier.
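If it helps to see that bookkeeping in modern terms, here is a minimal Python sketch of the accumulation (my own rendering, not Booth's notation; the function name and the choice to treat the multiplier as an n_bits-wide two's-complement value are my assumptions):

def booth_multiply(multiplicand, multiplier, n_bits):
    partial_sum = 0      # the "sum of existing partial products" starts at zero
    prev_bit = 0         # Booth appends an extra 0 to the right of the multiplier
    for i in range(n_bits):
        bit = (multiplier >> i) & 1
        if bit == 0 and prev_bit == 1:
            partial_sum += multiplicand << i   # 01 pair: add the shifted multiplicand
        elif bit == 1 and prev_bit == 0:
            partial_sum -= multiplicand << i   # 10 pair: subtract the shifted multiplicand
        # 00 and 11 pairs: leave the sum alone
        prev_bit = bit
    return partial_sum

print(booth_multiply(7, -3, 4))   # prints -21

Every pass through the loop either adds the shifted multiplicand to the running sum, subtracts it, or leaves it alone, which is why that running sum has to begin at zero.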
 
.Scott said:
The "sum of existing partial products" starts as zero and is changed each time through the loop.
It is repeatedly changed by the process described in the paragraph at the top of that third page - and the 5 numbered clauses that are included in the paragraph.

Note that this was written in 1950 - before the language for processing "binary numbers" had evolved very far. The term "bit" had been coined by Bell Labs worker John Tukey just three years earlier.
How did you come to that conclusion? You seem to be right as it agrees with other things I've read on this algo, but I don't see where it says to take the sum of existing partial products as zero. Thanks for the reply.
 
I will give you the answer - but I will lead it with an explanation on why you should not try to mimic the language Booth is using.

As I noted before, it was written in 1950. My professional career as a "Software Engineer" started 21 years after that paper was written and more than a decade before the term "Software Engineer" was a recognized job title.

The author is addressing an audience that doesn't exist today.

So here it is: If you are adding up a sum of anything, you need to start at zero.
Also, if you had ever seen one of those "calculating machines" that he mentions in actual operation - the notion that it would start at zero and then process "n" steps (starting at n=1) would be more apparent.

Those machines were mechanical and rather fun to watch.
 
.Scott said:
I will give you the answer - but I will lead it with an explanation on why you should not try to mimic the language Booth is using.

As I noted before, it was written in 1950. My professional career as a "Software Engineer" started 21 years after that paper was written and more than a decade before the term "Software Engineer" was a recognized job title.

The author is addressing an audience that doesn't exist today.

So here it is: If you are adding up a sum of anything, you need to start at zero.
Also, if you had ever seen one of those "calculating machines" that he mentions in actual operation - the notion that it would start at zero and then process "n" steps (starting at n=1) would be more apparent.

Those machines were mechanical and rather fun to watch.
So why is it that if I'm adding up a sum of anything, I need to start at zero exactly? You've said that this must be the case but not why. Thanks for your patience.
 
Zero is the "additive identity" for the numbering system.
He refers to it as a "partial sum". If you want the final sum to be the addition of "n" values, you had better start with zero.
 
Or in words:

PartialSum = 0
PartialSum = PartialSum + CurrentPartial
this results in PartialSum being equal to CurrentPartial

PartialSum = 7.62 (just an arbitrary value for an uninitialized variable)
PartialSum = PartialSum + CurrentPartial
this results in PartialSum NotEqual CurrentPartial
∴ PartialSum is never equal to the sum of the CurrentPartials

Cheers,
Tom
 
domabo said:
So why is it that if I'm adding up a sum of anything, I need to start at zero exactly? You've said that this must be the case but not why.
Here's why. When you add a list of numbers, you accumulate the intermediate results. For example, to add 10 + 20 + 30 + 40, you start with 0.
Then add 10 to the previous result, so that you now have a partial sum (an accumulated value) of 10.
Then add 20 to the previous result, making the new partial sum 30.
Then add 30 to the previous result -- partial sum is now 60.
Finally add 40 to the previous result -- partial sum is now 100. And, since you've run out of numbers to add, this is also the final result.

If you open a bank savings account, your very first balance is $0. Any deposit you then make gets added to that initial balance of 0. If your balance started off at, say, $50, the bank would have lost money. If your balance started at, say, -$100, you would very likely not want to do business with that bank.
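The same bookkeeping in a few lines of Python, using the 10 + 20 + 30 + 40 example above:

values = [10, 20, 30, 40]
partial_sum = 0              # start from the additive identity
for value in values:
    partial_sum += value     # fold the next number into the running partial sum
    print(partial_sum)       # prints 10, 30, 60, 100; the last value is the final sum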
 
Thanks all for the responses. I think everything is clear and makes sense now.

One final question: Out of curiosity, can anyone elaborate a little more on the mechanical process outlined by Booth? I don't, for example, know what he's saying when he talks about the short cutting method or what the bars over the decimal numbers mean. He seems to be saying the process he's outlined in his algorithm is equivalent to this process.
 
domabo said:
Thanks all for the responses. I think everything is clear and makes sense now.

One final question: Out of curiosity, can anyone elaborate a little more on the mechanical process outlined by Booth? I don't, for example, know what he's saying when he talks about the short cutting method or what the bars over the decimal numbers mean. He seems to be saying the process he's outlined in his algorithm is equivalent to this process.
As I said, those machines were fun to watch.
But basically, if the digit you are multiplying by is "8", it is quicker for the machine to subtract twice (-2) and then do one more add on the next decimal place (+10) than it is to do 8 adds.
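As a small worked illustration of that shortcut (my own example, not one from the paper): since 8 = 10 - 2, the machine can replace eight additions with two subtractions in this place and one addition in the next decimal place.

x = 456
slow = x + x + x + x + x + x + x + x   # the long way: eight separate additions
fast = 10 * x - 2 * x                  # the shortcut: one add in the tens place, two subtractions here
print(slow, fast)                      # both print 3648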

I looked for a good video. Here is one: Friden Calculator repair
This wasn't the most advanced machine. I could not find one of those in action.
While in High School, I had a summer job at the IRS in Andover, MA. They had lots of those machines - including the most advanced ones. And how can one resist such toys?
 