My textbook explains the rules for carrying the inherent uncertainty of measurements through mathematical calculations (the result of an addition must have as many decimal places as the term with the fewest decimal places, significant figures, and so on). At the very end of this detailed explanation, it warns us not to round off results in intermediate steps, because doing so will affect the final answer. That is what I don't get at all; it sounds contradictory to me, perhaps because of the wording the book uses.

Say we are solving a long problem and one intermediate step (out of many) is an addition, e.g. 75.382 + 31.2 (disregard units). Following the rules for carrying uncertainty through calculations, we should write 106.6 as the result, right? And then carry this result into the next step of the solution. To me, that is precisely what "carrying the uncertainty" means; it's not rounding off for the sake of rounding off! If we instead take 106.582 as the result of the addition and feed it into the next step, won't we lose track of the uncertainty?

Could someone please make this clear? Going by the book's wording, it looks like a paradox to me.
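To make the question concrete, here is a small Python sketch comparing the two approaches. The follow-on step (multiplying the sum by 2.54) is made up purely for illustration; it is not from the textbook:

```python
# Compare rounding at the intermediate step vs. rounding only at the end.
# The values 75.382 and 31.2 are from the example above; the factor 2.54
# is a hypothetical next step in the calculation.

a = 75.382
b = 31.2
factor = 2.54  # made-up follow-on multiplication

# Approach 1: round the intermediate sum to 1 decimal place
# (matching 31.2, the term with the fewest decimal places).
intermediate_rounded = round(a + b, 1)                 # 106.6
final_early = round(intermediate_rounded * factor, 1)  # 270.8

# Approach 2: keep full precision and round only the final answer.
final_late = round((a + b) * factor, 1)                # 270.7

print(final_early, final_late)  # the two answers differ in the last digit
```

So rounding early can shift the last significant digit of the final answer, which (if I understand it) is what the book is warning about; the decimal-places rule then only tells you how to *report* the final result, not what to carry internally.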