Why is machine epsilon defined this way?

  • Thread starter: Hernaner28
  • Tags: Epsilon, Machine
AI Thread Summary
Machine epsilon is defined as the smallest number e such that 1 + e is distinguishable from 1, anchored at 1 rather than at a larger reference value like 2, because the definition ties into how numbers are represented in floating point formats. The concept of a ULP (Unit in the Last Place) is crucial: a ULP's size varies with the magnitude of the stored number, which makes defining epsilon relative to 1 the most practical choice. In IEEE floating point representation, 1.0 is stored as 1.0 x 2^0, which simplifies reasoning about precision limits. Special cases like zero, denormalized numbers, and NaNs are also defined within this framework, highlighting the limitations of floating point storage. This approach provides a consistent basis for evaluating numerical precision in computational contexts.
Hernaner28
Hi. I'm studying numerical methods, so this subforum seemed the most appropriate for this question.
The machine epsilon for a computer is defined as the smallest positive number e such that 1 + e evaluates to something different from 1.
I just wonder, why 1 + e? And not 2 + e, for instance?

Thanks!
 
For each floating point datatype there are quantities that programmers call ULPs (units in the last place). A ULP is the value of the least significant bit of the stored number, and it is the ultimate limit of precision for a given floating point implementation. As the number stored in the variable becomes larger in magnitude, the size of the ULP grows when expressed in decimal form. ULPs are not fixed.

The EPSILON value is the decimal representation of a single ULP when the value stored in the floating point variable is exactly one. Since a ULP's magnitude changes with the stored value, the definition had to be anchored at some reference value, and one (1) was chosen. This is arbitrary in a sense.
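
For instance, here is a minimal sketch of that idea in Python (assuming CPython's float, which is an IEEE double): halve a candidate epsilon until adding it to 1.0 no longer changes 1.0, then compare against the value the runtime itself reports.

import sys

# Halve a candidate epsilon until 1.0 + eps/2 rounds back to 1.0.
eps = 1.0
while 1.0 + eps / 2 != 1.0:
    eps /= 2

print(eps)                     # 2.220446049250313e-16 (2^-52)
print(sys.float_info.epsilon)  # the same value, as reported by Python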

Here is what you should know, presented the long, somewhat rigorous way, courtesy of the ACM and David Goldberg:

http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
 
In most floating point formats, 1.0 is stored as 1.0 x 2^0, 2.0 is stored as 1.0 x 2^1, 4.0 as 1.0 x 2^2, and so on. (In the IEEE 32-bit and 64-bit formats, the leading 1 is not stored but assumed to be there.) So it makes the most sense to define epsilon relative to 1.0, since that is stored as a number multiplied by 2^0, which is the same as a number multiplied by 1.
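
You can watch this happen with math.ulp (Python 3.9+); a rough sketch: the gap between adjacent doubles doubles every time the exponent goes up by one, and the gap at 1.0 is exactly the machine epsilon.

import math

# The gap to the next representable double doubles with the exponent.
for x in (1.0, 2.0, 4.0, 1024.0):
    print(x, math.ulp(x))

# 1.0     2.220446049250313e-16   <- machine epsilon (2^-52)
# 2.0     4.440892098500626e-16
# 4.0     8.881784197001252e-16
# 1024.0  2.2737367544323206e-13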
 
Here's a diagram of the IEEE double precision floating point representation:

[Diagram: IEEE 754 double precision floating point format (1 sign bit, 11 exponent bits, 52 fraction bits)]


This format is specialized for expressing numbers of the form 1.<fractional_part> x 2^<exponent>. Note that the 1 that precedes the binary point is not stored. With an infinite amount of storage, every computable number could be expressed in this binary format. Computers don't have an infinite amount of storage, so the floating point representations are a poor man's alternative to the pure mathematical form.

Suppose the fractional part of some number is all zeros except for the very last bit, which is one. What's the difference between that number and the corresponding number in which the fractional part is all zeros? That's the ULP that Jim wrote about. Of course, if the exponent is very large, the ULP is going to be very large also. It makes more sense to talk about the ULP when the exponent is zero than at any other exponent, and that's how the machine epsilon is defined.
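
Here's a small sketch of exactly that experiment in Python, using struct to set the last fraction bit of 1.0 directly (struct with an explicit byte order always packs IEEE 754 doubles):

import struct

# Raw 64-bit pattern of 1.0: sign 0, stored exponent 1023, fraction 0.
bits = struct.unpack('<Q', struct.pack('<d', 1.0))[0]

# Set the last bit of the fraction, giving the next double after 1.0.
next_up = struct.unpack('<d', struct.pack('<Q', bits | 1))[0]

print(next_up - 1.0)  # 2.220446049250313e-16, one ULP at exponent 0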
 
Thank you all! I think I understand now.
 
D H, how do you express zero in the format you mentioned? Is that the normalized floating point representation?
Is the leading 1.0 stored or not?
 
+0 is all bits zero. -0 (the IEEE floating point standard has both +0 and -0) is all bits zero except for the sign bit.

The value that is stored as the exponent in the IEEE format is the true exponent plus a bias, 1023 in the case of doubles. So the stored exponent of the IEEE double that represents 1.0 is 1023, corresponding to a true exponent of zero. The special cases (zero is one of them) have a stored exponent that is either all bits zero or all bits one. Everything else is treated as a normalized number.

The other special cases (a short decoding sketch follows this list):
  • Denormalized numbers. These are numbers where the implied number before the binary point is zero rather than one. The denormalized numbers have a stored exponent of zero. Thus zero is just a special case of this special case.
  • Infinity. Infinities are represented with a fractional part of all bits zero and a stored exponent of all bits one. There are only two possibilities here, the sign bit clear (positive infinity) or set (negative infinity).
  • Not-a-number. What's 0/0? It's not a number. NaNs have a stored exponent of all bits one and a fractional part that is not all bits zero. This means there are lots of representations of NaN available, but the only ones that are used in practice have a fractional part that is all bits one.
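
A short decoding sketch in Python (assuming IEEE doubles; the helper name fields is just for illustration) that pulls out the three bit fields and reproduces the cases above:

import struct

def fields(x):
    # Split a double into (sign, stored exponent, fraction).
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    return bits >> 63, (bits >> 52) & 0x7FF, bits & ((1 << 52) - 1)

for x in (1.0, 0.0, -0.0, float('inf'), float('nan'), 5e-324):
    print(x, fields(x))

# Typical output (the NaN fraction may vary by platform):
# 1.0     (0, 1023, 0)        normalized; true exponent = 1023 - 1023 = 0
# 0.0     (0, 0, 0)           all bits zero
# -0.0    (1, 0, 0)           only the sign bit set
# inf     (0, 2047, 0)        stored exponent all ones, fraction zero
# nan     (0, 2047, nonzero)  stored exponent all ones, fraction nonzero
# 5e-324  (0, 0, 1)           denormalized: stored exponent zero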
 