I was reading about decimal approximations in one of my math books, and I liked the author's explanation of why we round up when the next digit is 5 or more and round down otherwise (whichever gives the value closer to the truth). I even understood the explanation of how to calculate the possible error in a series of approximated decimal numbers. However, this paragraph confused me:

"...Thus, by the method of article 34, 23/24 = .95833+. Expressed to four decimal places the real value of this fraction lies between .9583 and .9584; .9583 is .00003+ less than the true value, and .9584 is .00006+ greater. Therefore, .9583 is nearer the correct value and is said to be correct to four decimal places. Similarly, .958 is correct to three places and .96 to two."

I understand that .95833 - .00003 = .9583. But shouldn't .9584 be .00007+ greater than .95833?
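In case it helps, here is a quick sanity check of the two differences from the quoted passage, using Python's fractions module so nothing is lost to rounding (my own sketch; the variable names are mine, not the book's):

```python
from fractions import Fraction

# 23/24 = 0.958333... (the 3 repeats forever), which the book writes as .95833+
true_value = Fraction(23, 24)

# The two four-place candidates that bracket the true value
low  = Fraction("0.9583")
high = Fraction("0.9584")

print(true_value - low)    # 1/30000 = 0.0000333... -> the book's .00003+
print(high - true_value)   # 1/15000 = 0.0000666... -> the book's .00006+
```

Note that both differences here are taken from the exact value 23/24, matching the book's phrase "the true value", rather than from the five-place figure .95833.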