What is the sum of existing partial products in Booth's Algorithm?

  • Thread starter: domabo
  • Tags: Algorithm

Discussion Overview

The discussion revolves around Booth's Algorithm, specifically focusing on the concept of the "sum of existing partial products" as described in Booth's original paper. Participants seek clarification on this term and its implications within the algorithm, exploring both theoretical and historical contexts.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • One participant seeks clarification on the term "sum of existing partial products" and its initial value in Booth's Algorithm.
  • Another participant asserts that the sum starts at zero and is modified through the algorithm's iterative process, referencing the original paper's language and context.
  • Some participants discuss the historical context of Booth's writing, noting the evolution of terminology and audience understanding since 1950.
  • There are claims that starting the sum at zero is necessary for accurate accumulation of values, with references to the concept of additive identity.
  • One participant provides an example of summing numbers to illustrate the necessity of starting at zero, while another questions the reasoning behind this requirement.
  • A later reply discusses the mechanical processes referenced by Booth, including a comparison to how early calculating machines operated, and mentions the efficiency of certain operations in that context.

Areas of Agreement / Disagreement

Participants generally agree on the necessity of starting the sum at zero for accurate calculations, but there is ongoing discussion about the clarity of Booth's original language and the implications of the algorithm's mechanical processes. Some questions remain unresolved regarding specific terms and methods used in the algorithm.

Contextual Notes

There are references to historical terminology and the evolution of computing language, which may affect the understanding of Booth's Algorithm. Some participants express uncertainty about specific terms and processes described in the original paper.

Who May Find This Useful

This discussion may be useful for individuals studying computer science, particularly those interested in algorithms, historical computing methods, and the evolution of terminology in the field.

domabo
TL;DR
Trying to understand Booth's Algorithm. Particularly, what is meant by "sum of existing partial products" as Booth defines it in his original paper.
Trying to understand Booth's Algorithm. There are some YouTube videos online, but I decided I'd try reading Booth's original paper: http://bwrcs.eecs.berkeley.edu/Classes/icdesign/ee241_s00/PAPERS/archive/booth51.pdf
Can anyone explain what is meant by "sum of existing partial products"? (Page 3 where the actual algorithm is described)

I don't see where this was defined and it's tripping me up as I'm trying to follow along the example at the bottom.
Thanks for your time.
 
The "sum of existing partial products" starts as zero and is changed each time through the loop.
It is repeatedly updated by the process described in the paragraph at the top of that third page - and the 5 numbered clauses that are included in the paragraph.

Note that this was written in 1950 - before the language for processing "binary numbers" had evolved very far. The term "bit" had been coined by Bell Labs worker John Tukey just three years earlier.
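To put that description in modern terms, here is a minimal Python sketch of the loop. This is an illustrative reconstruction, not Booth's own notation: the variable `P` plays the role of the "sum of existing partial products", starting at zero and being modified on every pass, with the add/subtract decision driven by each pair of adjacent multiplier bits.

```python
def booth_multiply(m, r, bits=8):
    """Multiply two signed integers via Booth's recoding (illustrative sketch).

    P is the 'sum of existing partial products': it starts at zero and is
    updated once per pass through the loop.
    """
    mask = (1 << bits) - 1
    P = 0                                # the sum of partial products starts at zero
    r_ext = (r & mask) << 1              # two's-complement multiplier with an extra 0 bit appended on the right
    for i in range(bits):
        pair = (r_ext >> i) & 0b11       # examine bits r_i and r_(i-1)
        if pair == 0b01:                 # 01: add the shifted multiplicand
            P += m << i
        elif pair == 0b10:               # 10: subtract the shifted multiplicand
            P -= m << i
        # 00 and 11: leave the partial-product sum unchanged
    return P
```

The key point for the original question: `P = 0` before the loop is exactly the "sum of existing partial products" at step zero, before any partials exist.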
 
.Scott said:
The "sum of existing partial products" starts as zero and is changed each time through the loop.
It is repeatedly updated by the process described in the paragraph at the top of that third page - and the 5 numbered clauses that are included in the paragraph.

Note that this was written in 1950 - before the language for processing "binary numbers" had evolved very far. The term "bit" had been coined by Bell Labs worker John Tukey just three years earlier.
How did you come to that conclusion? You seem to be right as it agrees with other things I've read on this algo, but I don't see where it says to take the sum of existing partial products as zero. Thanks for the reply.
 
I will give you the answer - but I will lead it with an explanation on why you should not try to mimic the language Booth is using.

As I noted before, it was written in 1950. My professional career as a "Software Engineer" started 21 years after that paper was written and more than a decade before the term "Software Engineer" was a recognized job title.

The author is addressing an audience that doesn't exist today.

So here it is: If you are adding up a sum of anything, you need to start at zero.
Also, if you had ever seen one of the "calculating machines" that he mentions in actual operation - the notion that it would start at zero and then process "n" steps (starting at n=1) would be more apparent.

Those machines were mechanical and rather fun to watch.
 
.Scott said:
I will give you the answer - but I will lead it with an explanation on why you should not try to mimic the language Booth is using.

As I noted before, it was written in 1950. My professional career as a "Software Engineer" started 21 years after that paper was written and more than a decade before the term "Software Engineer" was a recognized job title.

The author is addressing an audience that doesn't exist today.

So here it is: If you are adding up a sum of anything, you need to start at zero.
Also, if you had ever seen one of the "calculating machines" that he mentions in actual operation - the notion that it would start at zero and then process "n" steps (starting at n=1) would be more apparent.

Those machines were mechanical and rather fun to watch.
So why is it that if I'm adding up a sum of anything, I need to start at zero exactly? You've said that this must be the case but not why. Thanks for your patience.
 
Zero is the "additive identity" for the numbering system.
He refers to it as a "partial sum". If you want the final sum to be the addition of "n" values, you had better start with zero.
 
Or in words:

PartialSum = 0
PartialSum = PartialSum + CurrentPartial
this results in PartialSum being equal to CurrentPartial

PartialSum = 7.62 (just an arbitrary value for an uninitialized variable)
PartialSum = PartialSum + CurrentPartial
this results in PartialSum NotEqual CurrentPartial
∴ PartialSum is never equal to the sum of the CurrentPartials

Cheers,
Tom
 
domabo said:
So why is it that if I'm adding up a sum of anything, I need to start at zero exactly? You've said that this must be the case but not why.
Here's why. When you add a list of numbers, you accumulate the intermediate results. For example, to add 10 + 20 + 30 + 40, you start with 0.
Then add 10 to the previous result, so that you now have a partial sum (an accumulated value) of 10.
Then add 20 to the previous result, making the new partial sum 30.
Then add 30 to the previous result -- partial sum is now 60.
Finally add 40 to the previous result -- partial sum is now 100. And, since you've run out of numbers to add, this is also the final result.

If you open a bank savings account, your very first balance is $0. Any deposit you then make gets added to that initial balance of 0. If your balance started off at, say, $50, the bank would have lost money. If your balance started at, say, -$100, you would very likely not want to do business with that bank.
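The accumulation described above can be written as a short loop. This illustrative snippet just restates the 10 + 20 + 30 + 40 example, with the partial sum seeded at zero, the additive identity:

```python
def running_sums(values):
    """Return each intermediate partial sum while accumulating a list of numbers."""
    partial = 0               # start at zero, the additive identity
    history = []
    for v in values:
        partial += v          # fold the next value into the partial sum
        history.append(partial)
    return history

# running_sums([10, 20, 30, 40]) → [10, 30, 60, 100]
```

Seeding `partial` with anything other than zero would shift every entry, and the final value would no longer equal the sum of the inputs.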
 
Thanks all for the responses. I think everything is clear and makes sense now.

One final question: Out of curiosity, can anyone elaborate a little more on the mechanical process outlined by Booth? I don't, for example, know what he's saying when he talks about the short cutting method or what the bars over the decimal numbers mean. He seems to be saying the process he's outlined in his algorithm is equivalent to this process.
 
domabo said:
Thanks all for the responses. I think everything is clear and makes sense now.

One final question: Out of curiosity, can anyone elaborate a little more on the mechanical process outlined by Booth? I don't, for example, know what he's saying when he talks about the short cutting method or what the bars over the decimal numbers mean. He seems to be saying the process he's outlined in his algorithm is equivalent to this process.
As I said, those machines were fun to watch.
But basically, if the digit you are multiplying by is "8", it is quicker for the machine to subtract twice (-2) and then do one more add on the next decimal place (+10) than it is to do 8 adds.

I looked for a good video. Here is one: Friden Calculator repair
This wasn't the most advanced machine. I could not find one of those in action.
While in High School, I had a summer job at the IRS in Andover, MA. They had lots of those machines - including the most advanced ones. And how can one resist such toys?
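To make the short-cut concrete: treating a digit of 8 as 10 - 2 replaces eight additions with two subtractions in the current place plus one addition in the next place. A tiny illustrative sketch:

```python
def times_8_shortcut(x):
    """Multiply x by 8 the way the mechanical short-cut would: 8 = 10 - 2."""
    total = 0
    total -= x          # first subtraction in this decimal place
    total -= x          # second subtraction
    total += 10 * x     # one addition carried into the next decimal place
    return total        # three operations instead of eight additions
```

Booth's binary recoding (the 01/10 add-subtract rules) is the base-2 analogue of exactly this trick: a run of 1-bits is handled as one subtraction at the bottom of the run and one addition just above the top.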
 
