
Why Machine epsilon is defined this way?

  1. Aug 14, 2013 #1
    Hi. I'm studying numerical methods, so this subforum seemed like the right place for this question.
    The machine epsilon for a computer is defined as the smallest number e such that 1 + e is different from 1.
    I just wonder: why 1 + e, and not 2 + e, for instance?

    Thanks!
     
  3. Aug 14, 2013 #2

    jim mcnamara


    Staff: Mentor

    For each floating point datatype there are quantities that programmers call ULPs (units in the last place). A ULP is the limit of precision of a given floating point implementation: the gap between a stored value and its nearest representable neighbour. As the number stored in the variable becomes larger in magnitude, the size of the ULP changes in terms of how it is represented in decimal. ULPs are not fixed.

    The EPSILON value is the decimal representation of a single ULP when the value stored in the floating point variable is equal to one. Since a ULP's magnitude changes with the stored value, the decision was made to base the definition on a fixed reference number, namely one (1). This is arbitrary in a sense.
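    To make that concrete, here is a minimal C sketch (assuming the platform uses IEEE 754 doubles and provides C99's nextafter from math.h; link with -lm). It shows that DBL_EPSILON is exactly the gap between 1.0 and the next representable double, i.e. one ULP at 1.0.

    Code:
    /* Gap between 1.0 and the next representable double vs. DBL_EPSILON. */
    #include <stdio.h>
    #include <float.h>
    #include <math.h>

    int main(void)
    {
        double next = nextafter(1.0, 2.0);  /* smallest double strictly greater than 1.0 */
        printf("DBL_EPSILON        = %g\n", DBL_EPSILON);
        printf("nextafter(1,2)-1.0 = %g\n", next - 1.0);
        return 0;
    }

    On an IEEE 754 machine both lines print 2.22045e-16.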

    Here is what you should know, presented in a long and somewhat rigorous way, courtesy of the ACM and David Goldberg.

    http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
     
  4. Aug 14, 2013 #3

    rcgldr

    Homework Helper

    In most floating point formats, 1.0 is stored as 1.0 x 2^0, 2.0 is stored as 1.0 x 2^1, 4.0 as 1.0 x 2^2, ... . (In the IEEE 32 bit and 64 bit formats, the leading 1 is not stored but assumed to be there.) So it makes the most sense to define epsilon relative to 1.0, since that is stored as a number multiplied by 2^0, which is the same as a number multiplied by 1.
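    A small C sketch of this point (again assuming IEEE 754 doubles and C99's nextafter): the gap to the next larger double doubles each time the exponent goes up by one, and it equals DBL_EPSILON exactly at 1.0.

    Code:
    /* The ULP (gap to the next larger double) doubles with each power of two. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        for (double x = 1.0; x <= 8.0; x *= 2.0) {
            double ulp = nextafter(x, INFINITY) - x;
            printf("ulp(%g) = %g\n", x, ulp);  /* prints eps, 2*eps, 4*eps, 8*eps */
        }
        return 0;
    }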
     
  5. Aug 14, 2013 #4

    D H

    Staff Emeritus
    Science Advisor

    Here's a diagram of the IEEE double precision floating point representation:

    [Image: IEEE 754 double-precision floating point format (1 sign bit, 11 exponent bits, 52 fraction bits)]

    This format is specialized for expressing numbers in the form 1.<fractional_part> x 2^<exponent>. Note that the 1 that precedes the binary point is not stored. With an infinite amount of storage, every computable number could be expressed in this binary format. Computers don't have an infinite amount of storage, so the floating point representations are a poor man's alternative to the pure mathematical form.

    Suppose the fractional part of some number is all zeros except for the very last bit, which is one. What's the difference between that number and the corresponding number in which the fractional part is all zeros? That's the ULP that Jim wrote about. Of course, if the exponent is very large, the ULP is going to be very large also. It makes more sense to talk about the ULP when the exponent is zero than at any other exponent, and that's how the machine epsilon is defined.
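    A hands-on way to see this thought experiment (a sketch that assumes 64-bit IEEE 754 doubles and a uint64_t of the same size): set only the last fraction bit of 1.0 and the result is exactly 1.0 + DBL_EPSILON, i.e. one ULP at exponent zero.

    Code:
    /* Setting the lowest fraction bit of 1.0 adds exactly one ULP. */
    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>
    #include <float.h>

    int main(void)
    {
        double one = 1.0;
        uint64_t bits;
        memcpy(&bits, &one, sizeof bits);  /* reinterpret the 8 bytes as an integer */
        bits |= 1;                         /* turn on the last bit of the fraction */

        double bumped;
        memcpy(&bumped, &bits, sizeof bumped);
        printf("bumped - 1.0 = %g\n", bumped - 1.0);  /* prints 2.22045e-16 */
        printf("DBL_EPSILON  = %g\n", DBL_EPSILON);
        return 0;
    }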
     
  6. Aug 14, 2013 #5
    Thank you all! I think I understand now
     
  7. Aug 14, 2013 #6
    D H, how do you express zero with the format you mentioned? Is that the normalized floating point representation?
    Is the leading 1 stored or not?
     
    Last edited: Aug 14, 2013
  8. Aug 14, 2013 #7

    D H

    Staff Emeritus
    Science Advisor

    +0 is all-bits zero. -0 (the IEEE floating point standard has +0 and -0) is all bits zero except for the sign bit.

    The value stored as the exponent in the IEEE format is the true exponent plus a bias, 1023 in the case of doubles, so the stored exponent for the double that represents 1.0 is 1023. The special cases (zero is one of them) have a stored exponent that is either all bits zero or all bits one. Everything else is treated as a normalized number.

    The other special cases (see the sketch after this list):
    • Denormalized numbers. These are numbers where the implied number before the binary point is zero rather than one. The denormalized numbers have a stored exponent of zero. Thus zero is just a special case of this special case.

    • Infinity. Infinities are represented with a fractional part of all bits zero and a stored exponent of all bits one. There are only two possibilities here, the sign bit clear (positive infinity) or set (negative infinity).

    • Not-a-number. What's 0/0? It's not a number. NaNs have the exponent all bits one and a fractional part that is not all bits zero. This means there are many possible representations of NaN, but in practice implementations typically use only a few of them.
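    For anyone who wants to poke at these special cases directly, here is a short C sketch (assuming 64-bit IEEE 754 doubles; the field masks follow the diagram above) that prints the sign, stored exponent, and fraction fields of 1.0 and the special values described above.

    Code:
    /* Dump the sign / exponent / fraction fields of some IEEE 754 doubles. */
    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>
    #include <math.h>

    static void show(const char *label, double x)
    {
        uint64_t bits;
        memcpy(&bits, &x, sizeof bits);
        printf("%-5s sign=%llu exponent=0x%03llx fraction=0x%013llx\n",
               label,
               (unsigned long long)(bits >> 63),
               (unsigned long long)((bits >> 52) & 0x7FF),
               (unsigned long long)(bits & 0xFFFFFFFFFFFFFULL));
    }

    int main(void)
    {
        show("1.0",  1.0);       /* stored exponent 0x3ff = 1023, fraction all zeros */
        show("+0.0", 0.0);       /* all bits zero */
        show("-0.0", -0.0);      /* only the sign bit set */
        show("+inf", INFINITY);  /* exponent all ones, fraction all zeros */
        show("NaN",  NAN);       /* exponent all ones, fraction nonzero */
        return 0;
    }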
     