That is a tough question. I just did a quick check: when we multiply the errors, in this case 1x10^-1 and 1x10^-2, the result is 1x10^-3, which is even smaller than either of them. That would make the result look more precise than it can actually be. Is that the reason why?
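To make that concrete, here is a minimal Python sketch using interval arithmetic on made-up measurements (the values 2.0 +/- 0.1 and 3.00 +/- 0.01 and the helper name product_interval are just for illustration, not anything standard). It shows that the uncertainty of a product is much bigger than the product of the two errors, and that it is roughly the sum of the relative errors:

```python
def product_interval(x, dx, y, dy):
    """Midpoint and half-width of the product of two measured values
    treated as intervals [x-dx, x+dx] and [y-dy, y+dy]."""
    corners = [(x + sx * dx) * (y + sy * dy) for sx in (-1, 1) for sy in (-1, 1)]
    lo, hi = min(corners), max(corners)
    return (lo + hi) / 2, (hi - lo) / 2

mid, half = product_interval(2.0, 0.1, 3.00, 0.01)
print(mid, half)   # ~6.0 and ~0.32, far larger than 0.1 * 0.01 = 0.001
print(half / mid)  # ~0.053, close to 0.1/2.0 + 0.01/3.00 (relative errors add)
```

So under this assumption the less precise factor (fewer significant digits, i.e. larger relative error) dominates the product's relative error, which is what the sig-fig rule for multiplication is approximating.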
I do have a few ideas on this topic. I think significant digits are there to indicate how precise the given data is. I believe the reason we keep the least number of decimal places when adding and subtracting is that we cannot have a result that is more precise than...
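Here is the same kind of sketch for addition, again with made-up numbers (12.1 taken as +/- 0.05 with one decimal place, 0.234 as +/- 0.0005 with three; the helper sum_interval is just illustrative):

```python
def sum_interval(x, dx, y, dy):
    """Midpoint and half-width of the sum of two measured intervals."""
    lo = (x - dx) + (y - dy)
    hi = (x + dx) + (y + dy)
    return (lo + hi) / 2, (hi - lo) / 2

mid, half = sum_interval(12.1, 0.05, 0.234, 0.0005)
print(mid, half)   # 12.334 +/- 0.0505: the +/-0.05 term dominates,
                   # so only one decimal place is really trustworthy -> 12.3
```

Under this assumption the absolute errors add, so the term with the fewest decimal places (largest absolute error) dominates, which is what the decimal-place rule for addition and subtraction is approximating.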
This question popped into my head in the middle of a shower :), and I cannot resist finding out the answer. Question: Why do we keep the least number of significant digits when multiplying/dividing, and the least number of decimal places when adding/subtracting? Thank you!