According to the rules for working with approximate data, the final result of a multiplication or division involving approximate data is rounded off so that it has as many significant digits as the given datum with the fewest significant digits. Why is this the rule, and how is it established? For example, suppose that a side of a square is measured to be 2.57 m. Then, according to the rule, the area of the square (2.57^2 = 6.6049 m^2) must be rounded to 6.60 m^2. Why must the area be rounded to 3 significant digits (the number of significant digits in the given measurement)?
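To make the example concrete, here is a minimal sketch of the computation being asked about. The helper functions `sig_digits` and `round_sig` are my own names, not from any standard library; the interval at the end shows the spread implied by a measurement of 2.57 m (assumed accurate to the nearest 0.005 m), which is the kind of analysis usually used to justify the rule.

```python
from math import floor, log10

def sig_digits(x_str: str) -> int:
    """Count significant digits in a decimal string like '2.57'."""
    return len(x_str.replace("-", "").replace(".", "").lstrip("0"))

def round_sig(x: float, n: int) -> float:
    """Round x to n significant digits."""
    if x == 0:
        return 0.0
    return round(x, n - 1 - floor(log10(abs(x))))

side = "2.57"                      # measured side: 3 significant digits
area = float(side) ** 2            # 6.6049 exactly
n = sig_digits(side)
print(f"{round_sig(area, n):.{1}f}0")   # rounded area, kept to 3 sig digits

# The measurement 2.57 m means the true side lies in [2.565, 2.575),
# so the true area lies between:
lo, hi = 2.565 ** 2, 2.575 ** 2    # 6.579225 and 6.630625
print(lo, hi)                       # digits beyond the third are unreliable
```

Note that the bounds 6.579… and 6.631… already disagree in the third significant digit, which is part of why the question of how the rule is justified is not trivial.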