Decimal Precision - Multiplying, Rounding, and Adding

  • Thread starter: pob
  • Tags: Precision
Hi,

I have $150. This value is multiplied by 2 or more factors. Each factor is less than or equal to 1 and greater than or equal to 0. Each factor has a maximum of 9 decimal places. The sum of the factors equals 1. For example, the following set of factors meets these 3 conditions:

0.500033333
0.499966667

So does this set of factors:

0.099703630
0.095107000
0.035644140
0.264757680
0.050352750
0.144806740
0.145405230
0.110790870
0.053431960

Suppose I perform the following operations:
  1. Multiply the $150 by each of the factors, storing each product to up to 13 decimal places
  2. Round each product from #1 to 2 decimal places
  3. Sum the rounded products
Is it always true that the sum of the rounded products will equal $150? Or is it possible to create a value such as $150.01?

So far, I've worked the problem backwards by supposing I had 2 factors that resulted in the following unrounded products:

75.005
74.995

The corresponding rounded products sum to $150.01:

75.01
75.00

However, I find it impossible to produce the products 75.005 and 74.995 using factors limited to 9 decimal places. The closest I can get is with the following factors:

0.500033333
0.499966667

These factors, when multiplied by $150, yield the following products. When rounded to 2 decimal places, these products also sum to $150:

75.00499995
74.99500005

Based on this example, I believe that, given 2 factors, the sum of the rounded products is always equal to $150. However, I'm trying to generalize this to any number of factors.
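
For reference, the procedure above can be sketched in Python like this (a minimal sketch using the decimal module and assuming round-half-up rounding to cents; the rounding mode isn't specified above):

Code:
from decimal import Decimal, ROUND_HALF_UP

def sum_of_rounded_products(total, factors):
    # Step 1: multiply and keep up to 13 decimal places
    products = [(total * f).quantize(Decimal("1e-13")) for f in factors]
    # Step 2: round each product to 2 decimal places (half-up assumed)
    rounded = [p.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP) for p in products]
    # Step 3: sum the rounded products
    return sum(rounded)

total = Decimal("150")
factors = [Decimal("0.500033333"), Decimal("0.499966667")]
print(sum_of_rounded_products(total, factors))  # prints 150.00 for this pair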

Thank you,
pob
 
You'll need at least three numbers, I think.

0.33337 -> 50.0055 -> rounded to 50.01
0.33337 -> 50.0055 -> rounded to 50.01
0.33326 -> 49.9890 -> rounded to 49.99
Sum: 150.01.

All numbers are exact, I didn't add trailing zeroes.
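
For anyone who wants to verify this, a quick sketch (assuming round-half-up to cents, consistent with the arithmetic above):

Code:
from decimal import Decimal, ROUND_HALF_UP

total = Decimal("150")
factors = [Decimal("0.33337"), Decimal("0.33337"), Decimal("0.33326")]
assert sum(factors) == 1  # the factors sum to exactly 1
rounded = [(total * f).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
           for f in factors]
print(rounded, sum(rounded))  # prints the three rounded values and their total, 150.01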
 
What about this:
0.9999 x 150 = 149.985, rounds to 149.99
0.0001 x 150 = 0.015, rounds to 0.02
 
That's a good option I missed.
scottdave said:
0.0001 x 150 = 0.015, rounds to 0.02
It's noteworthy that we can use this 10,000 times and produce a sum of $200.

We can also use 0.000033334, which gives us a single cent each time, 29,999 times. The remainder is not enough to give another cent, but the sum is $299.99. This is the maximum we can get. The minimum is $0, of course: just split it up into enough pieces.
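
As a quick check of that count (a sketch assuming round-half-up to cents; the remainder below is just 1 minus 29,999 copies of the factor):

Code:
from decimal import Decimal, ROUND_HALF_UP

total = Decimal("150")
small = Decimal("0.000033334")
n = 29999
remainder = 1 - n * small            # 0.000013334, still a valid factor
cent = (total * small).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
last = (total * remainder).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(cent, last, n * cent + last)   # prints 0.01 0.00 299.99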
 
And you could round to more than 2 decimal places and still find examples of inaccuracies. The fact is that any amount of rounding can lead to inaccurate results, given the proper conditions. Anyone who has worked with floating-point arithmetic in computer programming should be familiar with this.
 
Thank you for the answers
 
pob said:
Suppose I perform the following operations:
  1. Multiply the $150 by each of the factors, storing each product to up to 13 decimal places
  2. Round each product from #1 to 2 decimal places
  3. Sum the rounded products
Once upon a time when I was just out of school, still wet behind the ears...

I was coding for an accounting application. We had a list of values and a percentage chargeback rate. We wanted to allocate the chargeback to the individual list entries in such a way that things still added up -- the sum of the individual chargebacks had to match the mandated percentage of the total.

Danged accountants.

The approach that I took was to first total up the line items and apply the chargeback percentage to obtain a desired total chargeback. Then I went through the list a second time, computing each item's rounded fair share of the remaining chargeback total, then deducting that item's amount from the remaining total and its share from the remaining chargeback before proceeding to the next item.

By the time one comes to the last item in the list, the fair share is 100% and the totals are guaranteed to match to the penny.

Nobody ever talked to me about the possibility that the percentages on the individual line items could be out of whack in pathological cases. The books balanced, and random data tends not to be pathological, so nobody noticed. Remediation for that defect would be an interesting task.

[I was using double precision floating point for the totals, but I scaled everything up so that the floats encoded integer numbers of pennies. Floating point adds and subtracts are free from round-off error in that environment. It was 32-bit hardware, so the vanilla integer data type would have risked overflow.]
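
A rough sketch of that allocation in Python (not the original code: the names are illustrative, it uses the decimal module with integer pennies rather than scaled doubles, and it assumes half-up rounding for each fair share):

Code:
from decimal import Decimal, ROUND_HALF_UP

def allocate_chargeback(line_items_pennies, rate):
    # Desired total chargeback, rounded to a whole number of pennies.
    total_chargeback = int((Decimal(sum(line_items_pennies)) * rate)
                           .quantize(Decimal(1), rounding=ROUND_HALF_UP))
    remaining_items = sum(line_items_pennies)
    remaining_chargeback = total_chargeback
    allocations = []
    for item in line_items_pennies:
        # Each item's rounded fair share of whatever chargeback remains.
        share = int((Decimal(item) / remaining_items * remaining_chargeback)
                    .quantize(Decimal(1), rounding=ROUND_HALF_UP))
        allocations.append(share)
        remaining_items -= item
        remaining_chargeback -= share
    return allocations  # the last item takes 100% of what remains, so the pennies add up

items = [3333, 3333, 3334]                      # line items in pennies
print(allocate_chargeback(items, Decimal("0.07")))  # e.g. [233, 233, 234], summing to 700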
 
jbriggs444 said:
The approach that I took was to first total up the line items and apply the chargeback percentage to obtain a desired total chargeback. Then I went through the list a second time, computing each item's rounded fair share of the remaining chargeback total, then deducting that item's amount from the remaining total and its share from the remaining chargeback before proceeding to the next item.

I like that approach. We decided to sum the rounded products and subtract that sum from the true total, then add the difference to the largest rounded product. It's the same as yours, except that our largest value makes up the difference whereas your last entry does.
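
In code, that "largest value absorbs the difference" approach looks roughly like this (a sketch in Python with illustrative names, assuming round-half-up to cents):

Code:
from decimal import Decimal, ROUND_HALF_UP

def allocate_largest_absorbs(total, factors):
    cents = Decimal("0.01")
    rounded = [(total * f).quantize(cents, rounding=ROUND_HALF_UP)
               for f in factors]
    # Push any rounding difference onto the largest rounded product.
    difference = total - sum(rounded)
    i = rounded.index(max(rounded))
    rounded[i] += difference
    return rounded

parts = allocate_largest_absorbs(Decimal("150"),
                                 [Decimal("0.33337"), Decimal("0.33337"), Decimal("0.33326")])
print(parts, sum(parts))  # the adjusted parts sum to exactly 150.00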
 
pob said:
It's the same as yours, except that our largest value makes up the difference whereas your last entry does.
The method I used is slightly more subtle than "last entry makes up for all". If the total chargeback so far is randomly too low, the effect is to increase the chargeback on all of the remaining items, not just the last one. Similarly if the running total is randomly too high.

In retrospect, sorting the list from low to high would have been a nice touch as well.
 
