Floating point: (+inf - nan) = ? What is the most correct way of handling this?

  • Thread starter: uart
  • Tags: Floating Point

Discussion Overview

The discussion revolves around the handling of floating point special values, specifically the operation of adding or subtracting NaN (Not a Number) from +inf (positive infinity). Participants explore the implications of different software implementations and standards, particularly focusing on the IEEE standard for floating point arithmetic.

Discussion Character

  • Debate/contested
  • Technical explanation
  • Exploratory

Main Points Raised

  • Some participants note that different math software handles the operation +inf - NaN differently, with one returning +inf and another returning NaN.
  • One participant cites the IEEE standard, stating that NaN combined with any operation results in NaN, which is implemented in modern CPUs.
  • Another participant mentions that the standard is IEEE 754-2008 and emphasizes that any binary operation involving NaN should yield NaN.
  • A participant reflects on their initial belief that NaN is the most correct result, reasoning that NaN represents an indeterminate value, making +inf - NaN also indeterminate.
  • Discussion includes a specific example involving GNU Octave and MATLAB, highlighting differences in how they handle complex numbers and NaN propagation.
  • One participant points out that comparisons involving NaN yield unordered results, which complicates the handling of NaN in calculations.
  • Concerns are raised about the performance implications of NaNs and Infs in long-running applications, suggesting that floating point exceptions can help identify errors early.

Areas of Agreement / Disagreement

Participants express differing views on the correct handling of +inf - NaN, with some supporting the idea that NaN is the correct result while others argue for +inf. The discussion remains unresolved, with multiple competing interpretations of the standard and software behavior.

Contextual Notes

Participants highlight limitations in the implementation of floating point standards in various CPUs, noting that not all CPUs fully adhere to the IEEE standard, which can lead to performance issues and unexpected behavior in calculations involving NaNs and Infs.

uart
Science Advisor
Messages
2,797
Reaction score
21
Two pieces of maths software that I have handle these floating point "special values" differently. One returns +inf and the other returns nan. (For the calculation of +inf - nan).

I can see some logic to both of those answers. I guess in the first case it is following a rule that +inf - (anything other than +inf) = +inf (and +inf - +inf = nan), while in the second case it's following the rule that nan (+ - * /) anything = nan.

Does anyone know of any "standard" for dealing with these special FP values that would make one of the above interpretations more correct than the other?

Thanks. :)
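Both candidate rules are easy to try directly from any language whose floats are the platform's IEEE 754 doubles; a quick Python sketch of what conforming hardware actually does:

```python
import math

inf = float("inf")
nan = float("nan")

# Under IEEE 754, any arithmetic operation with a NaN operand propagates NaN:
print(inf - nan)    # nan
print(nan * 0.0)    # nan

# inf - inf is itself indeterminate and also yields NaN:
print(inf - inf)    # nan

# But inf minus any finite value stays inf:
print(inf - 1e308)  # inf
```

So the second rule (NaN propagates through everything) is the one the hardware follows.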
 
Hi uart! :smile:

The IEEE standard says: nan (+ - * /) anything = nan.
This is implemented in modern CPUs, making it hard for a math package to behave differently.
Which math package does that?
 
The standard is IEEE 754-2008. Most CPU chips implement this in hardware (or at least, they implement one of the several options the standard allows for rounding floating point arithmetic operations, etc.)

IIRC, the standard says that any binary operation where one operand is NaN should produce NaN.

And if x = NaN, then "x == x" is false, not true!
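That reflexivity failure is easy to demonstrate; a short Python sketch (the `is_nan` helper is just illustrative — `math.isnan` is the idiomatic check):

```python
import math

nan = float("nan")

# NaN is the only floating point value that compares unequal to itself:
print(nan == nan)  # False
print(nan != nan)  # True

# which gives the (somewhat notorious) self-comparison test for NaN:
def is_nan(x):
    return x != x

print(is_nan(nan))      # True
print(is_nan(0.0))      # False
print(math.isnan(nan))  # True -- the standard-library way
```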
 
Yeah I posted this last night just before I went to bed and within about a minute I had already convinced myself that nan really was the most correct way to handle it.

In the particular case in question the nan originally arose from performing the subtraction +inf - +inf, but in any case where you've got nan it means that the number is indeterminate: it could be –inf, could be +inf, or anything in between. So it makes perfect sense that +inf – nan = nan, since that nan is completely indeterminate and thus could itself correspond to +inf.

I like Serena said:
Which math package does that?

Hi ILS. The software is GNU Octave (a MATLAB clone). The problem arose from differences between the Matlab and Octave implementations of some fairly simple code that repeatedly squared a complex number until its magnitude exceeded some constant. The code didn't check for overflows, and it was causing a bug where very large complex numbers seemed to return an absolute value less than 2.0. It's an interesting problem because with real numbers the FP implementation of +inf and –inf usually allows this type of overflow to be handled fairly gracefully, but as this problem demonstrated, not so with complex numbers.

Take some simple code that repeats z = z^2 to demonstrate the problem. If the starting z is complex then eventually you'll end up with something like z = inf + j inf (here "j" is the engineer's sqrt(-1), btw).

Square it again and it's trying to do something like (inf^2 – inf^2) + j(2 inf^2), which both Matlab and Octave correctly return as z = nan + j inf. At this point there's no problem, and abs(z) returns inf in both packages.

Square it one more time, however, and it's trying to do something like (nan^2 – inf^2) + j(2 * nan * inf). Matlab returns nan + j nan for this while Octave returns –inf + j nan. If you keep going like this, the Matlab result will obviously just stay at nan + j nan, whereas the Octave scheme will always keep an inf in the real component (+inf after the next iteration).

The upshot is that although the Matlab way of handling it is more correct, it has the unfortunate side effect that these very large complex numbers start to fail the test abs(z) > 2.0, because nan compares false in every ordered comparison.
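The whole iteration can be reproduced in Python, whose complex arithmetic sits on IEEE 754 doubles and propagates NaN the way Matlab does (the starting value here is chosen so that the very first squaring already overflows):

```python
import math

# Repeatedly square a complex number until it overflows, mirroring the
# Matlab/Octave experiment described above.
z = complex(1e200, 1e200)

z = z * z
# real part: 1e200*1e200 - 1e200*1e200 = inf - inf = nan
# imag part: 1e200*1e200 + 1e200*1e200 = inf + inf = inf
print(z)       # (nan+infj)
print(abs(z))  # inf -- hypot(nan, inf) is defined to be inf

z = z * z
# Now every component product involves a NaN operand, so everything is NaN:
print(z)       # (nan+nanj)
print(abs(z))  # nan

# ...and the escape test silently fails, since nan > 2.0 is False:
print(abs(z) > 2.0)  # False
```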
 
Interesting! :smile:

I found this on the wiki page for NaN:

"The NaN 'toolbox' for GNU Octave and MATLAB goes one step further and skips all NaNs. NaNs are assumed to represent missing values and so the statistical functions ignore NaNs in the data instead of propagating them. Every computation in the NaN toolbox is based on the non-NaN data only."

It might explain the bug: perhaps Octave handles an addition with NaN by ignoring the NaN.
uart said:
The upshot is that although the Matlab way of handling it is more correct, it has the unfortunate side effect that these very large complex numbers start to fail the test abs(z) > 2.0, because nan compares false in every ordered comparison.

Actually any comparison with NaN (even with itself) yields an unordered result; the ordered comparison abs(z) > 2.0 therefore evaluates to false (only != comes out true).
But yes, that is a pitfall when ignoring NaN results.
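One way around that pitfall (an illustrative helper, not something from the thread): invert the comparison so a NaN magnitude lands on the "escaped" side of the test rather than iterating forever:

```python
import math

nan = float("nan")

# Every ordered comparison with NaN is False:
print(nan > 2.0, nan < 2.0, nan == 2.0)  # False False False

def escaped(z, bound=2.0):
    # Treat NaN as escaped: `not (r <= bound)` is True when r is NaN,
    # whereas the naive `r > bound` would be False.
    r = abs(z)
    return not (r <= bound)

print(escaped(complex(nan, nan)))  # True -- NaN counts as escaped
print(escaped(complex(1.0, 1.0)))  # False -- magnitude is about 1.414
```

The design point is simply to decide in advance which branch a NaN should take, and phrase the comparison so it goes there.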
 
AlephZero said:
The standard is IEEE 754-2008. Most CPU chips implement this in hardware (or at least, they implement one of the several options the standard allows for rounding floating point arithmetic operations, etc.)
Most CPUs do not implement this in hardware, at least not in full. I have some applications that run for a long time. I can tell when those applications start polluting themselves with NaNs and Infs: they come to a crawl, with the output suddenly accumulating much slower than nominal.

What's happening is that the floating point implementation in the math coprocessor is not complete with respect to the standard. Oddities such as inf-nan are dealt with as a shared effort between the math coprocessor and the CPU. The math coprocessor merely signals to the CPU that there's trouble right here in River City. The CPU has to stop in its tracks and deal with those signals. That signaling slows things down a lot.

The solution is almost always to enable floating point exceptions. Now the first Inf, NaN, or unnormalizable number will cause the program to dump core, usually right at the site of the erroneous expression. (My experience is that those Infs, NaNs, and unnormalized numbers are almost always caused by a programmer error.)
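CPython doesn't expose hardware floating point traps directly (in C one would call `feenableexcept(FE_INVALID | FE_OVERFLOW)` from `<fenv.h>` on glibc); a minimal software analogue of "fail at the first bad value" is just a guard that raises as soon as a non-finite intermediate appears:

```python
import math

def checked(x):
    # Software stand-in for a trapped floating point exception:
    # raise at the first Inf/NaN instead of letting it propagate
    # silently through millions of later operations.
    if not math.isfinite(x):
        raise FloatingPointError(f"non-finite intermediate: {x!r}")
    return x

x = 1e308
try:
    x = checked(x * 10.0)  # overflows to inf, so this raises
except FloatingPointError as e:
    print("trapped:", e)
```

The point is the same as with hardware exceptions: the error surfaces at the expression that produced it, not thousands of iterations later.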
 
