Floating point: (+inf - nan) = ? What is the most correct way of handling this?

Summary
The discussion centers on how two different math software packages, GNU Octave and MATLAB, handle floating point special values, specifically the operation of +inf - nan. One software returns +inf while the other returns nan, leading to a debate on which approach is more correct. The IEEE 754 standard dictates that any operation involving NaN should yield NaN, which is implemented in modern CPUs and affects how math packages operate. The conversation highlights an issue with complex numbers where overflow can lead to unexpected results, particularly when squaring complex numbers repeatedly. MATLAB's handling of NaN is deemed more accurate, but it can lead to issues in comparisons, as NaN returns false in all comparisons, causing large complex numbers to fail certain tests. The discussion also touches on the performance implications of NaNs and Infs in computations, suggesting that enabling floating point exceptions can help identify errors in programming that lead to these special values.
uart
Two pieces of maths software that I have handle these floating point "special values" differently. One returns +inf and the other returns nan. (For the calculation of +inf - nan).

I can see some logic to both of those answers. In the first case it seems to be following a rule that +inf - (anything other than +inf) = +inf (with +inf - +inf = nan), while in the second case it's following the rule that nan (+ - * /) anything = nan.

Does anyone know of any "standard" for dealing with these special FP values that would make one of the above interpretations more correct than the other?

Thanks. :)
 
Hi uart! :smile:

The IEEE standard says: nan (+ - * /) anything = nan.
This is implemented in modern CPUs, making it hard for a math package to implement it differently.
Which math package does that?
 
The standard is IEEE 754-2008. Most CPU chips implement this in hardware (or at least, they implement one of the several different options in the standard for rounding floating point arithmetic operations, etc.)

IIRC, the standard says that any binary operation where one operand is NaN should produce NaN.

And if x = NaN, then "x == x" is false, not true!
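The self-comparison rule is easy to verify in any IEEE 754 environment; here is a quick check in Python (used here just as a convenient IEEE 754 calculator, not as either package under discussion):

```python
import math

x = float('nan')

# NaN compares unequal to everything, including itself.
print(x == x)         # False
print(x != x)         # True

# This is why isnan() exists: it is the reliable way to detect a NaN,
# since the naive test "x == float('nan')" is always false.
print(math.isnan(x))  # True
```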
 
Yeah I posted this last night just before I went to bed and within about a minute I had already convinced myself that nan really was the most correct way to handle it.

In the particular case in question the nan originally arose from performing the subtraction +inf - +inf, but whenever you've got nan it means that the number is indeterminate: it could be -inf, could be +inf, or anything in between. So it makes perfect sense that +inf - nan = nan, since the nan on the left-hand side is completely indeterminate and thus could itself correspond to +inf.
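Both steps can be checked directly in any language that exposes IEEE 754 doubles; a quick sketch in Python:

```python
import math

inf = float('inf')
nan = float('nan')

# The original source of the NaN: inf - inf is indeterminate.
print(inf - inf)              # nan
# And the NaN then propagates through further arithmetic.
print(inf - nan)              # nan
print(math.isnan(inf - nan))  # True
```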

I like Serena said:
Which math package does that?

Hi ILS. The software is GNU Octave (a MATLAB clone). The problem arose from differences between the Matlab and Octave implementations of some fairly simple code that was repetitively squaring a complex number until its magnitude exceeded some constant. The code didn't check for overflows, and this was causing a bug where very large complex numbers seemed to be returning an absolute value less than 2.0. It's an interesting problem because with real numbers the FP implementation of +inf and -inf usually allows this type of overflow to be handled fairly gracefully, but as this problem demonstrated, not so with complex numbers.

Take some simple code that repeats z = z^2 to demonstrate the problem. If the starting z is complex then eventually you'll end up with something like z = inf + j inf (here "j" is the engineer's sqrt(-1), btw).

Square it again and it's going to be trying to do something like (inf^2 - inf^2) + j(2 inf^2), which both Matlab and Octave correctly return as z = nan + j inf. At this point there's no problem, and abs(z) returns inf in both packages.

Square it one more time, however, and it's trying to do something like (nan^2 - inf^2) + j(2 * nan * inf). Matlab returns nan + j nan for this while Octave returns -inf + j nan. If you keep going like this the Matlab result will obviously just stay at nan + j nan, whereas the Octave scheme will always keep an inf in the real component (+inf after the next iteration).
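Working the component formula (a + jb)^2 = (a^2 - b^2) + j(2ab) by hand on IEEE doubles reproduces the Matlab side of this sequence; a sketch in Python (again just as an IEEE 754 calculator):

```python
import math

inf = float('inf')

# First squaring of z = inf + j*inf:
a, b = inf, inf
re, im = a*a - b*b, 2*a*b    # inf - inf -> nan ; 2*inf*inf -> inf
print(re, im)                # nan inf

# Square again, now z = nan + j*inf:
a, b = re, im
re, im = a*a - b*b, 2*a*b    # nan - inf -> nan ; 2*nan*inf -> nan
print(re, im)                # nan nan
```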

The upshot is that although the Matlab way of handling it is more correct it has the unfortunate side effect that these very large complex numbers start to fail the test abs(z) > 2.0 because nan returns false in all comparisons.
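The failing test can be reproduced with plain IEEE doubles. In Python, at least, abs() of a complex number with an infinite component returns inf even when the other component is NaN, matching the behaviour described above; once both components are NaN, every ordered comparison goes false:

```python
z = complex(float('nan'), float('inf'))
print(abs(z))        # inf -- an infinite part still dominates a NaN part

w = complex(float('nan'), float('nan'))
print(abs(w))        # nan

# The pitfall: every ordered comparison involving NaN is false,
# so an overflowed z silently fails the abs(z) > 2.0 escape test.
print(abs(w) > 2.0)  # False
print(abs(w) <= 2.0) # False
```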
 
Interesting! :smile:

I found this on the wiki page for NaN:

"The NaN 'toolbox' for GNU Octave and MATLAB goes one step further and skips all NaNs. NaNs are assumed to represent missing values and so the statistical functions ignore NaNs in the data instead of propagating them. Every computation in the NaN toolbox is based on the non-NaN data only."

It might explain the bug: Octave handling an addition with NaN as if the NaN should be ignored.
uart said:
The upshot is that although the Matlab way of handling it is more correct it has the unfortunate side effect that these very large complex numbers start to fail the test abs(z) > 2.0 because nan returns false in all comparisons.

Actually any comparison with NaN (even with itself) compares as unordered, which is why an ordered comparison like abs(z) > 2.0 returns false (only != returns true for NaN).
But yes, that is a pitfall when ignoring NaN results.
 
AlephZero said:
The standard is IEEE 754-2008. Most CPU chips implement this in hardware (or at least, they implement one of the several different options in the standard for rounding floating point arithmetic operations, etc.)
Most CPUs do not implement this in hardware, at least not in full. I have some applications that run for a long time. I can tell when those applications start polluting themselves with NaNs and Infs: they come to a crawl, with the output suddenly accumulating much slower than nominal.

What's happening is that the floating point implementation in the math coprocessor is not complete with respect to the standard. Oddities such as inf-nan are dealt with as a shared effort between the math coprocessor and the CPU. The math coprocessor merely signals to the CPU that there's trouble right here in River City. The CPU has to stop in its tracks and deal with those signals. That signaling slows things down a lot.

The solution is almost always to enable floating point exceptions. Now the first Inf, NaN, or unnormalizable number will cause the program to dump core, usually right at the site of the erroneous equation. (My experience is that those Infs, NaNs, and unnormalized numbers are almost always caused by a programmer error.)
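On Linux/C this is typically done with feenableexcept() from fenv.h; from a high-level language, NumPy's error state gives a similar fail-fast effect. A sketch, assuming NumPy is available (the array operation and errstate flag are NumPy's, not part of the thread's MATLAB/Octave code):

```python
import numpy as np

# Turn the 'invalid operation' FP flag into a hard error
# instead of letting a NaN silently propagate.
with np.errstate(invalid='raise'):
    try:
        np.array([np.inf]) - np.array([np.inf])  # would quietly yield nan
    except FloatingPointError as err:
        print("trapped:", err)  # fails right at the offending operation
```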
 
