
Why are Ints 32 bits?

  1. Jul 1, 2013 #1
    In C#, why did they decide to use 32 bits for storing integers?
    I understand that 32 = 2^5, but why is this significant? Why not have 30 or whatever number of bits?

    If the answer is that the registers in 32 bit operating systems are 32 bits long, then why did they decide to have 32 bit registers?
    Last edited: Jul 1, 2013
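    For comparison: C# fixes its int (System.Int32) at exactly 32 bits in the language specification, while C leaves the width of plain int up to the implementation. A minimal C sketch of that difference, using the standard fixed-width types:

    ```c
    #include <stdio.h>
    #include <stdint.h>
    #include <limits.h>

    int main(void)
    {
        /* In C, plain int must be at least 16 bits, but its exact
           width is implementation-defined (32 on most desktops). */
        printf("int:     %zu bits\n", sizeof(int) * CHAR_BIT);

        /* The fixed-width types pin the size down on every platform,
           the way C#'s int and long always do. */
        printf("int32_t: %zu bits\n", sizeof(int32_t) * CHAR_BIT);
        printf("int64_t: %zu bits\n", sizeof(int64_t) * CHAR_BIT);

        /* The 32-bit signed range is -2^31 .. 2^31 - 1. */
        printf("INT32_MAX = %d\n", INT32_MAX);   /* prints 2147483647 */
        return 0;
    }
    ```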
  3. Jul 1, 2013 #2


    Homework Helper

    32 bit registers just became a standard for many types of computers. 64 bits is becoming the standard now.

    The first electronic general purpose computer, ENIAC, held ten decimal digits in its "registers". Early computers had 16 bit, 24 bit, 32 bit, 60 bit, and 64 bit registers. (I don't know if any had 48 bit registers.) Some microprocessors have 8 and 16 bit registers. Some devices also have 4 bit registers, but these are usually grouped to form 8 to 32 bit registers.
  4. Jul 1, 2013 #3
    A lot of those lengths are powers of 2 (2^5 = 32, 2^6 = 64, etc.). What is the reason for that? Just a coincidence?
  5. Jul 1, 2013 #4

    D H

    Staff Emeritus
    Science Advisor

    It has less to do with registers than memory. Memory is addressed in terms of bytes, which on nearly all machines are now 8 bits. A word is some integral multiple of 8 bits. For a number of reasons it makes a lot of sense to double each stage in the size hierarchy. Thus you'll see 16 bit integers, 32 bit integers, 64 bit integers, and now on some machines, 128 bit integers.

    You can also see some machines on which a "word" is 36 bits (with 9-bit chars, four to a word). There's no universal rule saying the number of bits in a byte or a word has to be some power of 2.
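    The doubling in that size hierarchy is easy to see with the standard fixed-width types: each doubling of the width squares the number of representable values. A minimal C sketch:

    ```c
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Doubling the width squares the count of values:
           2^16 * 2^16 = 2^32, and 2^32 * 2^32 = 2^64. */
        printf("16-bit values: %llu\n",
               (unsigned long long)UINT16_MAX + 1);   /* prints 65536 */
        printf("32-bit values: %llu\n",
               (unsigned long long)UINT32_MAX + 1);   /* prints 4294967296 */
        printf("64-bit max:    %llu\n",
               (unsigned long long)UINT64_MAX);       /* 18446744073709551615 */
        return 0;
    }
    ```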
  6. Jul 3, 2013 #5
    It gets back to the history of computers:

    First came 8 bits = 1 byte, which basically boils down to the definition of the ASCII character set (which is only 7 bits). But why use up every bit you have for the character set? That would be bad engineering, leaving yourself no room for growth. So they added an extra bit, making 8 bits (or 1 byte) the standard unit for storing characters.

    So now 8 bits is the minimum unit your computer needs to handle, and you need at least an 8-bit machine to process characters efficiently. But what if you want to scale the computer up? For efficiency's sake, the width now really needs to grow in multiples of 8 bits, not just in powers of 2.

    So if I want to bump up my architecture, it makes much more sense to go to a 16-bit architecture than, say, a 12-bit one. A 16-bit architecture can operate on 2 characters or 1 16-bit integer at a time. A 12-bit architecture could still only operate on 1 character or 1 12-bit integer (not as efficient).

    If I want to scale it again then 32 bits is the next design step (4 characters, 2 16 bit integers, or 1 32 bit integer at a time).

    Scale it again and you get 64 bit (more or less where we are at now).
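    The packing argument above can be sketched in C: a 32-bit word carries four 8-bit characters, moved in and out with shifts and masks. (The particular string is just an illustration.)

    ```c
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* A 32-bit word holds four 8-bit characters at once. */
        const char text[4] = {'W', 'o', 'r', 'd'};
        uint32_t word = 0;

        /* Pack byte i into bit positions 8*i .. 8*i+7 (little-endian order). */
        for (int i = 0; i < 4; i++)
            word |= (uint32_t)(unsigned char)text[i] << (8 * i);

        printf("packed word: 0x%08X\n", word);   /* prints 0x64726F57 */

        /* Unpack one byte at a time. */
        for (int i = 0; i < 4; i++)
            putchar((int)((word >> (8 * i)) & 0xFF));   /* prints Word */
        putchar('\n');
        return 0;
    }
    ```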
  7. Jul 3, 2013 #6
    Isn't the reason they added an extra bit so they could use parity checking, not just to leave "room for growth"?
  8. Jul 3, 2013 #7

    D H

    Staff Emeritus
    Science Advisor

    Before the switch to ASCII, some computers used 6 bits to represent a character. Some used 9. Some used 7, some used 8. Even after the switch to ASCII, some computers used 7 bits to represent a character and packed 5 such 7 bit characters into a 36 bit word. The extreme was the CDC machines Seymour Cray designed, which had 60 bit words. The word was the smallest and largest addressable unit on those machines. Everything was shoved into a 60 bit word, even if the thing shoved into it only needed 6 or 7 bits. They were tuned for fast numerical processing. There weren't very many integers, and even fewer characters, involved in the types of problems people bought those machines to solve.

    The migration to 8 bits per byte as a de facto standard has a lot more to do with finance, personality, and randomness than logic. The same goes for many technologies. Victory rarely goes to the best technology. It is much more likely to go to the best-financed company, the best-managed company, and sometimes the luckiest company.

    Microsoft, for example. By all rights we should be using something based on CP/M rather than a DOS derivative. Gary Kildall didn't think meeting with IBM was important, he didn't like IBM's nondisclosure agreement, he didn't like IBM's offer for CP/M, and he had no clue how to negotiate a better one. So IBM went to Bill Gates, their second choice at the time. That Gates was second choice was a combination of IBM cluelessness and sheer dumb luck on Gates' part. Gates didn't have an operating system to sell to IBM. For some clueless reason, IBM thought he did. Gates was smart and shrewd. He sold IBM a non-existent OS and then went out and bought one to make good on the deal.
  9. Jul 3, 2013 #8


    Gold Member

    In the early days of mainframe computing, systems from DEC, UNIVAC, IBM, and others had word lengths of 8, 12, 16, 18, 32, or 36 bits (double word lengths in some instances), so power-of-2 word lengths were not ubiquitous. There were several reasons. Memory was an expensive commodity, not like now when gigabytes and terabytes are common. Scientific calculations were done to a precision of about 10 decimal digits rather than in standardized floating point. As it was, these computers were just as efficient at processing data as any based on powers of 2.
    IBM used EBCDIC, an 8-bit character code that grew out of the earlier 6-bit BCD codes, before ASCII was introduced.

    With the advent of ASCII by committee, and the subsequent government decree that all government-purchased computers must support ASCII, anything other than powers of 2 for register widths slowly died out.
  10. Jul 4, 2013 #9
    The architectures I have seen that included parity for memory had a hidden extra bit where the parity was stored. The parity checks were implemented at the hardware level, and if a check failed a hardware interrupt was issued. It would be awfully inefficient for software to read out the 8th bit and do a parity check on every character, especially at a time when computing power was scarce.
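    For concreteness, here is what an even-parity check on a 7-bit ASCII character computes, done in software purely to illustrate what that hardware did for free (the function name is made up for this sketch):

    ```c
    #include <stdio.h>
    #include <assert.h>

    /* Even-parity bit for a 7-bit value: returns 1 if the count of
       1-bits is odd, so that appending it makes the total even. */
    static unsigned parity_bit(unsigned char c)
    {
        unsigned bit = 0;
        for (unsigned v = c & 0x7F; v != 0; v >>= 1)
            bit ^= v & 1;
        return bit;
    }

    int main(void)
    {
        unsigned char ch = 'A';   /* 0x41 = 100 0001, two 1-bits */
        unsigned char stored = (unsigned char)(ch | (parity_bit(ch) << 7));

        /* 'A' already has an even number of 1-bits, so its parity bit is 0. */
        printf("char 0x%02X stored as 0x%02X\n", ch, stored);   /* 0x41 as 0x41 */

        /* The checking side recomputes: the stored 8th bit must match. */
        assert(parity_bit(stored & 0x7F) == (unsigned)(stored >> 7));
        return 0;
    }
    ```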