What type of floating point representation is this?


Discussion Overview

The discussion revolves around the characteristics and standards of floating point representation in computing, particularly comparing non-standard representations with IEEE 754. Participants explore historical contexts, variations in formats, and personal experiences with different systems.

Discussion Character

  • Exploratory
  • Technical explanation
  • Historical

Main Points Raised

  • One participant describes a floating point representation where the mantissa is stored in 2's complement and the exponent in excess notation, questioning its standardization compared to IEEE 754.
  • Another participant initially recalls seeing 2's complement widely in network messages, later clarifying that those were fixed-point data, and that they cannot speak to floating point representation.
  • A participant asserts that the described floating point representation is non-standard and highlights the historical variability in how different computers represented real numbers before standardization.
  • Discussion includes a link to an article discussing early floating point formats, noting the lack of standards in the early days of computing.
  • Personal anecdotes are shared about converting data between different systems (PDP-10 to VAX and CDC-6600 to IBM-PC), emphasizing the challenges posed by differing floating point formats and word lengths.
  • One participant recounts issues with round-off errors when transitioning from a 60-bit word system to a 32-bit system, leading to a modification of the regression algorithm used in their program.

Areas of Agreement / Disagreement

Participants express a range of views on the standardization of floating point representations, with some acknowledging the historical lack of standards while others focus on specific non-standard representations. The discussion remains unresolved regarding the implications of these differences on computational accuracy.

Contextual Notes

Participants reference specific systems and their floating point representations, highlighting the limitations and challenges of transitioning between different formats without resolving the underlying technical discrepancies.

agent1594
In some lecture hand-outs I found the following:
  • Mantissa is stored in 2's complement
  • Exponent is in excess notation
  • 8-bit exponent field
  • Pure range is 0 – 255
  • Subtract 127 to get the correct value
  • Range: -127 to +128

In IEEE 754, we just put the magnitude bits of negative fractions in the mantissa without converting to 2's complement, don't we?
If so, what standard of FP representation is the one above?
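For what it's worth, pulling the fields out of a single-precision value in Python (assuming the platform float is IEEE 754 binary32, as on virtually all current hardware) shows that only the sign bit changes under negation:

```python
import struct

def ieee754_single_fields(x: float):
    # Pack as an IEEE 754 single and split into sign / exponent / fraction.
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

# The fraction bits are identical for x and -x; only the sign bit flips,
# i.e. the significand is sign-magnitude, not 2's complement.
print(ieee754_single_fields(0.15625))   # (0, 124, 2097152)
print(ieee754_single_fields(-0.15625))  # (1, 124, 2097152)
```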

Thanks.
 
(See correction below) I see 2's complement a lot in network messages. There are no rules there. Also, I believe that the representation of floating point in IEEE 754 is only defined for the number of bits that normally occur in the processor hardware (16, 32, 64). So I don't think you can count on 754 in other situations.

CORRECTION: I take it back. Those were fixed point data in network communications where I saw all the 2's complements. I don't know about floating point.
 
agent1594 said:
If then, what is the above standard of FP representation?
Simple: It's non-standard.

How different computers represented the reals varied widely and wildly up until various standards organizations stepped in to clean up that mess. Nowadays you're hard pressed to find a computer that doesn't comply with one of the ISO floating point standards.
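For illustration, here is a Python sketch of decoding a value under the hand-out's scheme. The hand-out specifies the 8-bit excess-127 exponent but not the mantissa width, so the 8-bit mantissa here is an assumption:

```python
def decode_handout_float(mant_bits: int, exp_bits: int, mant_width: int = 8) -> float:
    """Decode a float under the hand-out's scheme: 2's-complement
    fractional mantissa, excess-127 exponent. The mantissa width is
    an assumption; the hand-out doesn't specify one."""
    # Reinterpret the mantissa field as a signed 2's-complement integer.
    if mant_bits >= 1 << (mant_width - 1):
        mant_bits -= 1 << mant_width
    mantissa = mant_bits / (1 << (mant_width - 1))  # fraction in [-1, 1)
    return mantissa * 2.0 ** (exp_bits - 127)

# Mantissa 0b11000000 is -0.5 in 2's complement; exponent field 130 means +3.
print(decode_handout_float(0b11000000, 130))  # -4.0
```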
 
The article at the following link discusses some of the floating point formats developed during the early days of computers:

http://www.quadibloc.com/comp/cp0201.htm

In the early days, there were no standards to follow. If you designed and built a computer, you also designed your own floating point and fixed point formats. Even the word lengths of computers varied, because the 8-bit byte had not yet been universally adopted. Some computers used 16-bit words, some 32 bits, some 36 bits, and Control Data Corp. went all the way to 60-bit words.
 
While I was a grad student, my physics department migrated from a Digital Equipment PDP-10 (36-bit words) to a Digital Equipment VAX 11/780 (32-bit words). Complicating matters was that the PDP-10 used 7-track magnetic tapes while the VAX used 9-track tapes. Someone rigged up a 9-track tape drive to run on the PDP-10, or a 7-track drive to run on the VAX; I don't remember which.

I had to write a FORTRAN program to convert my group's data tapes from PDP-10 floating point (and integer) to VAX floating point (and integer), running on whichever machine had both kinds of tape drives.

Notice that VAX floating point had the exponent in the middle of the 32-bit word, with the fraction split into two pieces, one on each end of the word!
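For the curious, decoding VAX F_floating amounts to swapping the two 16-bit words and then reading an excess-128 exponent with a 0.1f-style significand. A rough Python sketch (it ignores the reserved-operand case, where the sign bit is set with a zero exponent):

```python
import struct

def vax_f_to_float(b: bytes) -> float:
    """Decode 4 bytes of VAX F_floating to a Python float.
    A sketch: treats any zero exponent as zero, ignoring the
    reserved-operand encoding."""
    # The value is stored as two little-endian 16-bit words; the FIRST
    # word carries the sign, the excess-128 exponent, and the HIGH 7
    # fraction bits, so the exponent sits "in the middle" of the 32-bit
    # quantity, with fraction bits on both sides of it.
    w0, w1 = struct.unpack('<HH', b)
    bits = (w0 << 16) | w1
    sign, exp, frac = bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF
    if exp == 0:
        return 0.0
    # Significand is 0.1fff... (in [0.5, 1)); unlike IEEE there is no
    # implicit leading 1 before the binary point.
    mant = 0.5 + frac / (1 << 24)
    return (-mant if sign else mant) * 2.0 ** (exp - 128)

print(vax_f_to_float(b'\x80\x40\x00\x00'))  # 1.0
```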
 
jtbell said:
Notice that VAX floating point had the exponent in the middle of the 32-bit word, with the fraction split into two pieces, one on each end of the word!
The nice thing about that was that a mismatched FORTRAN subroutine parameter (a single passed where a double was expected) would only mess up the less significant digits. I converted a lot of code from VAXes to Unix machines. Programs that worked fine on the VAX would suddenly have their exponents messed up on the Unix box. We saw a lot of HUGE forces and moments.
 
I recall converting a Fortran program designed and written to run on the CDC-6600 series, with its 60-bit words.

This particular program needed to solve a regression equation to determine some coefficients for subsequent calculations. I converted it to run in an IBM-PC type environment where the REAL data type was 32 bits instead of the 60 bits used on the CDC machine. When I ran a test case for which I had known results, the program promptly blew up on the PC; I traced the cause to the solution of the regression equation. The original programmers had formed the normal equations as you would when doing regression by hand, and the CDC's 60-bit words were far less sensitive to the resulting round-off errors than the PC's 32-bit words. To solve this problem without resorting to DOUBLE PRECISION, I re-wrote the regression routine so that it used the much more stable QR algorithm instead of forming and solving the normal equations. As I recall, the QR routine worked and the program spit out the correct results on the PC.
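The trouble with the normal equations is that forming AᵀA squares the condition number of the design matrix, while a QR factorization solves the least-squares problem at the original conditioning. A pure-Python sketch of the QR route, using modified Gram-Schmidt for illustration only (in practice you would call a library routine such as LAPACK's):

```python
def qr_lstsq(A, y):
    """Solve min ||A c - y|| via modified Gram-Schmidt QR,
    avoiding the normal equations A^T A c = A^T y."""
    m, n = len(A), len(A[0])
    # Copy the columns of A (column-major) and orthogonalize in place.
    Q = [[A[i][j] for i in range(m)] for j in range(n)]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for k in range(j):
            R[k][j] = sum(Q[k][i] * Q[j][i] for i in range(m))
            for i in range(m):
                Q[j][i] -= R[k][j] * Q[k][i]
        R[j][j] = sum(q * q for q in Q[j]) ** 0.5
        Q[j] = [q / R[j][j] for q in Q[j]]
    # Solve R c = Q^T y by back substitution.
    b = [sum(Q[j][i] * y[i] for i in range(m)) for j in range(n)]
    c = [0.0] * n
    for j in range(n - 1, -1, -1):
        c[j] = (b[j] - sum(R[j][k] * c[k] for k in range(j + 1, n))) / R[j][j]
    return c

# Fit y = c0 + c1*x through (0,2), (1,5), (2,8):
print(qr_lstsq([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]], [2.0, 5.0, 8.0]))  # ≈ [2.0, 3.0]
```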
 
