# Homework Help: Machine accuracy vs. smallest representable number vs. smallest normalized number

1. Jan 27, 2013

### Imanbk

1. The problem statement, all variables and given/known data
"Explain the difference between the machine accuracy, the smallest representable number, and the smallest normalized number in a floating point system".

2. Relevant equations

There is the bit representation of floating-point numbers: (-1)^S * M * b^(E-e). Once the exponent E hits its minimum, the mantissa M can still be shifted to the right, which lets us represent even smaller (denormalized) numbers, but this comes at the cost of reduced precision, since each shift discards one significant bit.
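To make the trade-off concrete, here is a small Python sketch of my own (not from any textbook, assuming IEEE 754 double precision): below the smallest normalized number, values are still representable, but they gradually lose significant bits until even halving fails.

```python
import sys

# Smallest positive normalized double: exponent at its minimum,
# mantissa with its full 52 bits plus the implicit leading 1.
smallest_normal = sys.float_info.min        # 2**-1022 ~ 2.2e-308

# Shifting the mantissa right 52 times gives the smallest subnormal:
# only 1 significant bit left.
smallest_subnormal = smallest_normal / 2**52  # 2**-1074 ~ 5e-324
print(smallest_subnormal > 0.0)               # still representable

# One more shift and the last bit is gone: the result rounds to zero.
print(smallest_subnormal / 2 == 0.0)
```

So the denormalized range extends how small a number can be, but each step down that range trades away precision, which is exactly the cost mentioned above.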

3. The attempt at a solution

What I came to conclude was the following: machine accuracy is due to the machine's limited precision, the smallest relative gap it can resolve, so that 1.0 + x can still equal 1.0 for a small nonzero x. The latter two are due to the limits of the variable type's exponent range rather than its precision. I also know the latter two have to do with the [itex](-1)^S \cdot M \cdot b^{E-e}[/itex] representation of floating-point numbers.
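If it helps, the three quantities can be told apart in a few lines of Python (my own sketch, assuming IEEE 754 doubles): machine epsilon is about the *gap* next to 1.0, while the other two are about how close to zero you can get.

```python
import sys

# Machine accuracy (machine epsilon): gap between 1.0 and the
# next larger representable double.
eps = sys.float_info.epsilon     # 2**-52
print(1.0 + eps != 1.0)          # eps is just big enough to register
print(1.0 + eps / 2 == 1.0)      # anything smaller rounds away

# Smallest normalized number: limited by the exponent range.
print(sys.float_info.min)        # 2**-1022 ~ 2.2e-308

# Smallest representable (subnormal) number: exponent pinned at its
# minimum, mantissa shifted down to a single significant bit.
print(5e-324)                    # 2**-1074
```

Note how different the scales are: epsilon is about 1e-16 (a precision limit), while the other two are around 1e-308 and 1e-324 (range limits).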

I've been searching for 6 hours for an answer to this question via internet resources and textbooks. I have "Numerical Recipes", which doesn't go into much detail on any of the above topics for a beginner like myself. I've also been pointed to "What Every Computer Scientist Should Know About Floating-Point Arithmetic", but I left it more confused than I came in. I'd appreciate any help on this question.

Thanks a bunch,
imanbk

Last edited: Jan 27, 2013