What type of floating point representation is this?

In summary: IEEE 754 specifies 16-, 32-, and 64-bit (and larger) floating point formats, each stored as a sign bit, a biased exponent, and a magnitude fraction. Layouts such as the one in the question, or the VAX's with the exponent in the middle of the word, are non-standard; 2's complement fixed point representations also turn up in network communications.
  • #1
agent1594
In some lecture hand-outs I found the following (a decoding sketch follows the list):
  • Mantissa is stored in 2's complement
  • Exponent is in excess notation:
    ◦ 8-bit exponent field
    ◦ Pure range is 0 – 255
    ◦ Subtract 127 to get the correct value
    ◦ Range: -127 to +128
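To make the hand-out's rules concrete, here is a minimal decoding sketch in Python. The hand-out only states the exponent rules, so the word layout is an assumption: a 32-bit word with the 8-bit excess-127 exponent in the top byte and a 24-bit 2's complement fractional mantissa below it.

```python
def decode_handout(word: int) -> float:
    """Decode a 32-bit word under the hand-out's (assumed) layout."""
    exp_field = (word >> 24) & 0xFF        # 8-bit exponent field, pure range 0-255
    exponent = exp_field - 127             # "subtract 127 to get correct value"
    mantissa = word & 0xFFFFFF             # 24-bit mantissa, stored in 2's complement
    if mantissa & 0x800000:                # top bit set -> negative mantissa
        mantissa -= 1 << 24                # undo the 2's complement
    fraction = mantissa / float(1 << 23)   # treat it as a fraction in [-1, 1)
    return fraction * 2.0 ** exponent

# Exponent field 0x82 (130 - 127 = 3), mantissa 0x600000 (= +0.75):
print(decode_handout(0x82600000))          # 6.0
```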

In IEEE 754, we just store the magnitude of a negative fraction in the mantissa, with a separate sign bit, without converting to 2's complement, don't we? (A sketch contrasting the two conventions is below.)
If so, what standard of FP representation is the one above?

Thanks.
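For comparison, a quick sketch of the point about IEEE 754: binary32 keeps a separate sign bit and stores the fraction as a magnitude, never in 2's complement, so +0.75 and -0.75 differ in exactly one bit.

```python
import struct

pos = struct.unpack('>I', struct.pack('>f',  0.75))[0]  # bits of +0.75
neg = struct.unpack('>I', struct.pack('>f', -0.75))[0]  # bits of -0.75
print(f'{pos:032b}')        # 00111111010000000000000000000000
print(f'{neg:032b}')        # 10111111010000000000000000000000
print(f'{pos ^ neg:032b}')  # 10000000000000000000000000000000 -- only the sign bit
```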
 
  • #2
(See correction below) I see 2's complement a lot in network messages. There are no rules there. Also, I believe that the representation of floating point in IEEE 754 is only defined for the number of bits that normally occur in the processor hardware (16, 32, 64). So I don't think you can count on 754 in other situations.

CORRECTION: I take it back. Those were fixed point data in network communications where I saw all the 2's complements. I don't know about floating point.
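For the fixed point case, here is a sketch of the sort of field I mean; the signed Q16.16 scaling is a made-up example, not any particular protocol:

```python
import struct

SCALE = 1 << 16   # 16 fractional bits (Q16.16)

def encode_fixed(value: float) -> bytes:
    # Signed 32-bit big-endian ("network order") integer with an implied scale
    return struct.pack('>i', round(value * SCALE))

def decode_fixed(payload: bytes) -> float:
    (raw,) = struct.unpack('>i', payload)
    return raw / SCALE

msg = encode_fixed(-1.5)
print(msg.hex())            # fffe8000 -- the 2's complement shows up on the wire
print(decode_fixed(msg))    # -1.5
```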
 
  • #3
agent1594 said:
If so, what standard of FP representation is the one above?
Simple: It's non-standard.

How different computers represented the reals varied widely and wildly until various standards organizations stepped in to clean up that mess. Nowadays you're hard-pressed to find a computer that doesn't comply with one of the ISO/IEC floating point standards.
 
  • #4
The article at the following link discusses some of the floating point formats developed during the early days of computers:

http://www.quadibloc.com/comp/cp0201.htm

In the early days, there were no standards to follow. If you designed and built a computer, you also designed your own floating point and fixed point formats. Even the word lengths of computers varied, because the 8-bit byte had not yet been universally adopted. Some computers used 16-bit words, some 32 bits, some 36 bits, and Control Data Corp. went all the way to 60-bit words.
 
  • #5
While I was a grad student, my physics department migrated from a Digital Equipment PDP-10 (36-bit words) to a Digital Equipment VAX 11/780 (32-bit words). Complicating matters was that the PDP-10 used 7-track magnetic tapes while the VAX used 9-track tapes. Someone rigged up a 9-track tape drive to run on the PDP-10, or a 7-track drive to run on the VAX; I don't remember which.

I had to write a FORTRAN program to convert my group's data tapes from PDP-10 floating point (and integer) to VAX floating point (and integer), running on whichever machine had both kinds of tape drives.

Notice that VAX floating point had the exponent in the middle of the 32-bit word, with the fraction split into two pieces, one on each end of the word!
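For the curious, here is a decoding sketch based on the usual description of VAX F_floating: viewed as a 32-bit longword, the low fraction bits occupy bits 31:16, the sign bit 15, the excess-128 exponent bits 14:7, and the high fraction bits 6:0, with a hidden bit worth 0.5.

```python
def decode_vax_f(longword: int) -> float:
    """Decode VAX F_floating from its 32-bit longword view (a sketch)."""
    sign = (longword >> 15) & 0x1
    exponent = (longword >> 7) & 0xFF              # excess-128
    frac_hi = longword & 0x7F                      # 7 high fraction bits...
    frac_lo = (longword >> 16) & 0xFFFF            # ...16 low bits at the other end
    if exponent == 0:
        return 0.0 if sign == 0 else float('nan')  # sign + zero exponent was a reserved operand
    fraction = (frac_hi << 16) | frac_lo
    significand = 0.5 + fraction / float(1 << 24)  # hidden bit represents 0.5
    value = significand * 2.0 ** (exponent - 128)
    return -value if sign else value

print(decode_vax_f(0x00004080))   # 1.0
print(decode_vax_f(0x0000C140))   # -3.0
```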
 
  • #6
jtbell said:
Notice that VAX floating point had the exponent in the middle of the 32-bit word, with the fraction split into two pieces, one on each end of the word!
The nice thing about that was that a mismatched FORTRAN subroutine parameter (a single passed where a double was expected) would only mess up the less significant digits. I converted a lot of code from VAXes to Unix machines. Programs that worked fine on the VAX would suddenly have their exponents messed up on the Unix box. We saw a lot of HUGE forces and moments.
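A sketch of why the same mismatch behaves so badly under IEEE 754: a double has an 11-bit exponent field where a single has 8, so reading the first 32 bits of a double as a single garbles the exponent. (VAX D_floating shared F_floating's sign and exponent layout, so only low-order digits were lost there.)

```python
import struct

x = 123.456
first_word = struct.pack('>d', x)[:4]           # high 32 bits of the IEEE double
(as_single,) = struct.unpack('>f', first_word)  # misread them as a single
print(as_single)                                # ~3.48 -- the exponent is mangled
```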
 
  • #7
I recall converting a Fortran program designed and written to run on the CDC-6600 series with the 60-bit words.

This particular program needed to solve a regression equation to determine some coefficients for subsequent calculations. I converted the program to run in an IBM-PC type environment where the REAL data type was 32 bits instead of the 60 bits used on the CDC machine. When I went to run a test case for which I had results, the program promptly blew up on the PC; I traced the cause to the solution of the regression equation. The original programmers had formed the normal equations as you would when doing regression by hand, and the CDC's 60-bit words were far less sensitive to round-off error than the PC's 32-bit words. To solve this problem without resorting to DOUBLE PRECISION, I rewrote the regression routine to use the much more stable QR algorithm instead of forming and solving the normal equations. As I recall, the QR routine worked and the program spat out the correct results on the PC.
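A sketch of the fix, in single precision to mimic the PC's 32-bit REALs (the polynomial fit here is an illustration, not the original program): forming the normal equations squares the condition number of the design matrix, while QR works on it directly.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50, dtype=np.float32)
A = np.vander(x, 8).astype(np.float32)           # ill-conditioned design matrix
b = (np.sin(4 * x) + 1e-3 * rng.standard_normal(50)).astype(np.float32)

# Normal equations: cond(A^T A) = cond(A)^2 -- hopeless in 32-bit arithmetic
coef_ne = np.linalg.solve(A.T @ A, A.T @ b)

# QR: solve R c = Q^T b, keeping the original cond(A)
Q, R = np.linalg.qr(A)
coef_qr = np.linalg.solve(R, Q.T @ b)

print(np.linalg.norm(A @ coef_ne - b))   # noticeably worse residual
print(np.linalg.norm(A @ coef_qr - b))   # residual down at the noise level
```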
 

1. What is a floating point representation?

A floating point representation is a way of storing real numbers in a computer system, typically as a sign, an exponent, and a fraction (mantissa). It can represent both whole numbers and fractional values, over a very wide range of magnitudes.

2. How does floating point representation differ from other number systems?

Floating point representation differs from fixed point representation in that it can cover a far larger range of values with the same number of bits. The cost is that precision is relative rather than absolute: values are stored to a roughly constant number of significant digits, not to a fixed number of decimal places.
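A small illustration of the trade, using numpy's float32 as the example: the range reaches about 3.4e38, far beyond any 32-bit integer, but above 2**24 a float32 can no longer represent every whole number.

```python
import numpy as np

print(np.finfo(np.float32).max)                # ~3.4028235e+38 -- huge range
print(np.iinfo(np.int32).max)                  # 2147483647 -- the int32 ceiling
print(np.float32(16_777_216) + np.float32(1))  # 16777216.0 -- the +1 is rounded away
```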

3. What types of floating point representation are there?

The two most common types are single precision, which uses 32 bits per number, and double precision, which uses 64 bits. IEEE 754 also defines half precision (16 bits) and quadruple precision (128 bits) formats.
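In numbers, as reported by numpy's finfo (roughly 7 significant decimal digits for single precision, 16 for double):

```python
import numpy as np

for t in (np.float32, np.float64):
    info = np.finfo(t)
    print(t.__name__, info.bits, 'bits: eps =', info.eps, ' max =', info.max)
# float32 32 bits: eps = 1.1920929e-07  max = 3.4028235e+38
# float64 64 bits: eps = 2.220446049250313e-16  max = 1.7976931348623157e+308
```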

4. How does a computer perform calculations using floating point representation?

A computer stores each operand as a sign, an exponent, and a fraction, and its floating point hardware operates on those fields directly. For addition, it first shifts the fraction of the operand with the smaller exponent until the exponents match, then adds the fractions, renormalizes, and rounds the result.
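A toy sketch of that alignment step, using Python's frexp/ldexp to expose the fraction and exponent (real hardware also rounds and renormalizes, which this sketch leaves to the underlying doubles):

```python
import math

def fp_add(a: float, b: float) -> float:
    fa, ea = math.frexp(a)             # a = fa * 2**ea with 0.5 <= |fa| < 1
    fb, eb = math.frexp(b)
    if ea < eb:                        # make `a` the operand with the larger exponent
        (fa, ea), (fb, eb) = (fb, eb), (fa, ea)
    fb = math.ldexp(fb, eb - ea)       # align: shift b's fraction down to a's exponent
    return math.ldexp(fa + fb, ea)     # add the fractions, reattach the exponent

print(fp_add(1.5, 0.25))   # 1.75
```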

5. Are there any drawbacks to using floating point representation?

One potential drawback of floating point representation is rounding error: most decimal fractions (0.1, for example) have no exact binary representation, so computed results can differ from the exact decimal answer in the last digits, and the errors can accumulate across many operations. Programmers should be aware of this and, for instance, compare floating point results using a tolerance rather than exact equality.
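The classic example: 0.1 and 0.2 have no exact binary representation, so their sum is not exactly 0.3, and the remedy is to compare with a tolerance rather than with ==.

```python
import math

print(0.1 + 0.2 == 0.3)              # False
print(0.1 + 0.2)                     # 0.30000000000000004
print(math.isclose(0.1 + 0.2, 0.3))  # True -- tolerance-based comparison
```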
