
Machine accuracy vs. smallest representable number vs. smallest normalized number

  1. Jan 27, 2013 #1
    1. The problem statement, all variables and given/known data
    "Explain the difference between the machine accuracy, the smallest representable number, and the smallest normalized number in a floating point system".


    2. Relevant equations

    There is the bit representation of floating-point numbers, (-1)^S * M * b^(E-e). The fact that the mantissa can be shifted to the right means we can represent smaller (un-normalized) numbers, but this comes at the cost of reduced precision. A small sketch illustrating this trade-off follows below.
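
    A minimal sketch of the precision trade-off, assuming a C compiler with IEEE-754 doubles (the constants come from the standard <float.h> header and nextafter() from <math.h>; on Linux, link with -lm). Once a value drops below the smallest normalized number, the relative spacing between neighbouring representable values grows sharply:

    [code]
    #include <stdio.h>
    #include <float.h>
    #include <math.h>

    int main(void)
    {
        double normal = DBL_MIN;          /* smallest normalized double            */
        double denorm = DBL_MIN / 1.0e8;  /* shifting below DBL_MIN denormalizes it */

        /* nextafter() returns the neighbouring representable value, so the gap
           divided by the value is the relative precision at that magnitude.    */
        printf("relative spacing at DBL_MIN:    %g\n",
               (nextafter(normal, 1.0) - normal) / normal);
        printf("relative spacing at a denormal: %g\n",
               (nextafter(denorm, 1.0) - denorm) / denorm);
        return 0;
    }
    [/code]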

    3. The attempt at a solution

    What I came to conclude was the following: machine accuracy is due to machine limitations. The latter two are due to the limits of the variable types and reduce the precision of any floating-point number, such that 1.0 might not be exactly equal to 1.0 after it has been declared. I also know the latter two have to do with the [itex](-1)^S \cdot M \cdot b^{E-e}[/itex] representation of floating-point numbers.
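
    As a rough sketch of how the three quantities differ in magnitude (again assuming IEEE-754 doubles and a C compiler; the smallest denormal is obtained here with nextafter(), which is not part of the original problem statement), printing the relevant <float.h> constants side by side may help:

    [code]
    #include <stdio.h>
    #include <float.h>
    #include <math.h>

    int main(void)
    {
        /* Machine accuracy: DBL_EPSILON is the gap between 1.0 and the next
           representable double, i.e. the relative precision of the format.    */
        printf("machine epsilon (DBL_EPSILON):     %g\n", DBL_EPSILON);

        /* Smallest normalized number: smallest value with a full-length mantissa. */
        printf("smallest normalized (DBL_MIN):     %g\n", DBL_MIN);

        /* Smallest representable positive number: the first denormal above zero. */
        printf("smallest representable (denormal): %g\n", nextafter(0.0, 1.0));
        return 0;
    }
    [/code]

    On a typical machine this prints roughly 2.2e-16, 2.2e-308, and 4.9e-324, which shows that the three quantities are very different things.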

    I've been searching for 6 hours for an answer to this question via internet resources and textbooks. I have "Numerical Recipes", which doesn't go into much detail on any of the above topics for a beginner like myself. I've also been pointed to "What Every Computer Scientist Should Know About Floating-Point Arithmetic", but I left it more confused than when I came in. I'd appreciate any help on this question.

    Thanks a bunch,
    imanbk
     
    Last edited: Jan 27, 2013