In IEEE 754, we just store the bits of a negative fraction's magnitude in the mantissa, without converting it to two's complement, don't we?
If so, which standard actually defines this style of floating-point representation?
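
For what it's worth, this is easy to check directly: IEEE 754 is a sign-magnitude format, so negating a number flips only the sign bit and leaves the exponent and mantissa untouched. A small Python sketch using the standard `struct` module to pull apart a single-precision float:

```python
import struct

def fields(x):
    """Return the sign, exponent, and mantissa bits of an IEEE 754
    single-precision float as binary strings."""
    b = format(struct.unpack(">I", struct.pack(">f", x))[0], "032b")
    return b[0], b[1:9], b[9:]

# Only the sign bit differs between 0.5 and -0.5; the mantissa holds the
# magnitude in both cases -- sign-magnitude, not two's complement.
print(fields(0.5))   # ('0', '01111110', '00000000000000000000000')
print(fields(-0.5))  # ('1', '01111110', '00000000000000000000000')
```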

(See correction below) I see 2's complement a lot in network messages. There are no rules there. Also, I believe that the representation of floating point in IEEE 754 is only defined for the number of bits that normally occur in the processor hardware (16, 32, 64). So I don't think you can count on 754 in other situations.

CORRECTION: I take it back. Those were fixed point data in network communications where I saw all the 2's complements. I don't know about floating point.

How different computers represented the reals varied widely and wildly up until various standards organizations stepped in to clean up that mess. Nowadays you're hard-pressed to find a computer that doesn't comply with IEEE 754 (also published as ISO/IEC 60559).

In the early days, there were no standards to follow. If you designed and built a computer, you also designed your own floating-point and fixed-point formats. Even word lengths varied, because the 8-bit byte had not yet been universally adopted. Some computers used 16-bit words, some 32-bit, some 36-bit, and Control Data Corp. went all the way to 60-bit words.

While I was a grad student, my physics department migrated from a Digital Equipment PDP-10 (36-bit words) to a Digital Equipment VAX 11/780 (32-bit words). Complicating matters was that the PDP-10 used 7-track magnetic tapes while the VAX used 9-track tapes. Someone rigged up a 9-track tape drive to run on the PDP-10, or a 7-track drive to run on the VAX; I don't remember which.

I had to write a FORTRAN program to convert my group's data tapes from PDP-10 floating point (and integer) to VAX floating point (and integer), running on whichever machine had both kinds of tape drives.

Notice that VAX floating point had the exponent in the middle of the 32-bit word, with the fraction split into two pieces, one on each end of the word!
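
That split layout comes from the VAX storing F_floating as two little-endian 16-bit words: the first word carries the sign, the excess-128 exponent, and the high 7 fraction bits, while the second word carries the low 16 fraction bits. A rough Python sketch of decoding it (an illustration of the layout under those assumptions, not production conversion code; real VAXes treat sign=1/exponent=0 as a reserved operand, which this ignores):

```python
def vax_f_to_float(raw: bytes) -> float:
    """Decode 4 bytes of VAX F_floating (as stored in memory) to a float."""
    # Two little-endian 16-bit words: word 0 has sign, exponent, and the
    # high fraction bits; word 1 has the low fraction bits.  Read as one
    # 32-bit value, the exponent sits in the middle with the fraction
    # split around it.
    w0 = raw[0] | (raw[1] << 8)
    w1 = raw[2] | (raw[3] << 8)
    sign = -1.0 if (w0 >> 15) & 1 else 1.0
    exp = (w0 >> 7) & 0xFF
    frac = ((w0 & 0x7F) << 16) | w1
    if exp == 0:
        return 0.0  # true zero (a real VAX traps on sign=1, exp=0)
    # Hidden bit: the mantissa is 0.1fff... binary, i.e. 0.5 <= m < 1.
    mantissa = 0.5 + frac / (1 << 24)
    return sign * mantissa * 2.0 ** (exp - 128)

print(vax_f_to_float(b"\x80\x40\x00\x00"))  # 1.0
```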

The nice thing about that was that a mismatched FORTRAN subroutine parameter (a single-precision actual passed to a double-precision dummy) would only mess up the less significant digits. I converted a lot of code from VAXes to Unix machines. Programs that worked fine on the VAX would suddenly have their exponents messed up on the Unix box. We saw a lot of HUGE forces and moments.

I recall converting a Fortran program designed and written to run on the CDC-6600 series, with its 60-bit words.

This particular program needed to solve a regression equation to determine some coefficients for subsequent calculations. I converted the program to run in an IBM-PC type environment, where the REAL data type was 32 bits instead of the 60 bits used on the CDC machine. When I ran a test case for which I had known results, the program promptly blew up on the PC; I traced the cause to the solution of the regression equation. The original programmers had formed the normal equations the way you would for doing regression by hand, and the CDC's 60-bit words were far less sensitive to the resulting round-off errors than the PC's 32-bit words.

To solve this problem without resorting to DOUBLE PRECISION, I rewrote the regression routine to use the much more stable QR algorithm instead of forming and solving the normal equations. As I recall, the QR routine worked and the program spit out the correct results on the PC.
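
The numerical issue here is that forming A^T A squares the condition number of the problem, so the normal equations lose roughly twice as many digits as a QR-based solve. A small NumPy illustration (not the original FORTRAN program, just a modern demonstration of the same effect on an ill-conditioned polynomial fit):

```python
import numpy as np

# An ill-conditioned least-squares problem: a polynomial (Vandermonde)
# basis whose columns become nearly collinear.
t = np.linspace(0.0, 1.0, 100)
A = np.vander(t, 8)
x_true = np.ones(8)
b = A @ x_true

# Normal equations: solving A^T A x = A^T b squares the condition number.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# QR factorization works on A directly, so only cond(A) matters.
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

print("cond(A)     =", np.linalg.cond(A))
print("cond(A^T A) =", np.linalg.cond(A.T @ A))
print("normal-equations error:", np.max(np.abs(x_normal - x_true)))
print("QR error:              ", np.max(np.abs(x_qr - x_true)))
```

In 64-bit arithmetic both answers are usable here, but the error gap shows why the same normal-equations code that survived 60-bit CDC words fell over in 32-bit REALs.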