Often you will have an error in a particular measurement, call it $\sigma_x$, but you want to know the error in a function of that measurement, $\sigma_f$. To approximate how this error "propagates" through, treat the error terms as differentials and use the linear approximation (i.e., throw away all of the higher-order terms) to solve for $df$, treating $\sigma_x$ as approximately equal to $dx$. Remember that this is not exact; it holds only as an approximation.
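Concretely, for a differentiable function $f$ of a single measured quantity $x$, the linearization gives

$$df \approx f'(x)\,dx \quad\Longrightarrow\quad \sigma_f \approx \left|f'(x)\right|\,\sigma_x.$$

For instance, if the function were the volume of a sphere, $V(r) = \frac{4}{3}\pi r^3$ (chosen here purely as an illustration, not necessarily the function in your problem), then $\sigma_V \approx |V'(r)|\,\sigma_r = 4\pi r^2\,\sigma_r$.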
In your problem, the value of the measurement was given with explicit bounds, so the problem wants you to calculate the maximum error in the calculated volume. In real life, results are usually reported as a mean value plus or minus some multiple of a standard deviation, and we can never be completely sure that the true result lies within the reported interval.
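As a quick numerical check, here is a minimal sketch comparing the linearized error against the exact maximum error over the bounds. It assumes a sphere volume and a made-up measurement $r = 2.0 \pm 0.05$; both are hypothetical, so substitute your actual function and bounds:

```python
import math

# Hypothetical measurement with explicit bounds: r = 2.0 +/- 0.05 (assumed values)
r, dr = 2.0, 0.05

def V(r):
    """Volume of a sphere of radius r (illustrative choice of f)."""
    return (4.0 / 3.0) * math.pi * r**3

# Linearized (propagated) error: sigma_V ~ |dV/dr| * sigma_r = 4*pi*r^2 * sigma_r
linear_err = 4.0 * math.pi * r**2 * dr

# Exact maximum error of V over the interval [r - dr, r + dr]
exact_err = max(abs(V(r + dr) - V(r)), abs(V(r - dr) - V(r)))

print(f"linearized error: {linear_err:.4f}")  # ~2.5133
print(f"exact max error:  {exact_err:.4f}")   # ~2.5765
```

The two agree closely when the bounds are small relative to the measurement, which is exactly the regime where dropping the higher-order terms is justified.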
It is unfortunate that this is called "error," since the word can be confusing: there are many different ways of reporting the concept. Perhaps a better word for what you are doing here is estimating the "uncertainty."