Machine accuracy vs. smallest representable number vs. smallest normalized number

  • Thread starter Imanbk
Homework Statement


"Explain the difference between the machine accuracy, the smallest representable number, and the smallest normalized number in a floating point system".


Homework Equations



There is the bit representation of floating-point numbers: (-1)^S * M * b^(E-e). Because the mantissa can be shifted to the right, we can represent even smaller (un-normalized) numbers, but this comes at the cost of reduced precision.
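If it helps to make the three quantities concrete, here is a minimal Python sketch (assuming IEEE 754 double precision, which is what CPython's float uses); the 2**-52 factor is the width of the double mantissa:

[code]
import sys

# Machine accuracy (machine epsilon): the gap between 1.0 and the
# next representable double.
eps = sys.float_info.epsilon           # 2.220446049250313e-16

# Smallest normalized number: the full 52-bit mantissa precision is
# still available at this magnitude.
smallest_normal = sys.float_info.min   # 2.2250738585072014e-308

# Smallest representable (denormalized) number: the mantissa is shifted
# right, trading precision for extra range below the normalized limit.
smallest_denormal = smallest_normal * 2.0 ** -52  # 5e-324

print(eps, smallest_normal, smallest_denormal)
[/code]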

The Attempt at a Solution



What I came to conclude was the following: machine accuracy is due to a machine limitation. The latter two are due to the limits of variable types and reduce the precision of any floating-point number, such that 1.0 might not be equal to 1.0 after it has been declared. I also know the latter two have to do with the (-1)^S * M * b^(E-e) representation of floating-point numbers.
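As a quick check on the "1.0 might not equal 1.0" idea, the comparisons below (again assuming IEEE 754 doubles) suggest it is machine epsilon, not the smallest representable number, that governs when an addition to 1.0 is lost:

[code]
# Anything added to 1.0 that is smaller than about eps/2 rounds away:
print(1.0 + 1e-16 == 1.0)  # True: 1e-16 < eps/2, the sum rounds back to 1.0
print(1.0 + 1e-15 == 1.0)  # False: 1e-15 > eps, the sum is distinguishable

# ...whereas numbers far below the smallest normalized value still exist:
print(5e-324 > 0.0)        # True: denormals fill the gap down to zero
[/code]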

I've been searching for 6 hours for an answer to this question via internet resources and textbooks. I have "Numerical Recipes", which doesn't go into much detail on any of the above topics for a beginner like myself. I've also been pointed to "What Every Computer Scientist Should Know About Floating-Point Arithmetic", but I came away from it more confused than when I started. I'd appreciate any help on this question.

Thanks a bunch,
imanbk
 