Why Are Integers 32 Bits in C#?


Discussion Overview

The discussion centers around the choice of using 32 bits for storing integers in C#, exploring the historical, technical, and architectural reasons behind this decision. Participants delve into the significance of bit lengths in computing, the evolution of register sizes, and the implications of memory addressing.

Discussion Character

  • Exploratory
  • Technical explanation
  • Historical
  • Debate/contested

Main Points Raised

  • Some participants suggest that the use of 32 bits is related to the standardization of 32-bit registers in many computers.
  • Others argue that the choice of bit lengths, such as 32 and 64 bits, is influenced by the need for efficiency in processing data and memory addressing.
  • A participant notes that the progression of bit lengths follows powers of 2, questioning whether this is merely coincidental.
  • One participant emphasizes the historical context, explaining that the transition from 8 bits to larger sizes was driven by the need for growth and efficiency in computing.
  • Another participant raises the point that the addition of an extra bit in the byte may have been for parity checking, rather than solely for future expansion.
  • Several contributions highlight the variability in word lengths across different computing systems, noting that not all architectures adhered to powers of 2.
  • Some participants discuss the influence of historical decisions and market dynamics on the adoption of certain standards, such as ASCII over other character encoding systems.
  • There is mention of how early computing systems had diverse word lengths, and that the shift towards powers of 2 became more prevalent with the adoption of ASCII.

Areas of Agreement / Disagreement

Participants express a range of views on the reasons for the 32-bit integer standard, with no consensus reached. The discussion includes competing explanations and acknowledges the complexity of historical and technical factors involved.

Contextual Notes

Limitations in the discussion include varying definitions of "word" across different architectures, the historical context of computing standards, and the lack of universal rules governing bit lengths and memory addressing.

frenzal_dude
In C#, why did they decide to use 32 bits for storing integers?
I understand that 32 = 2^5, but why is this significant? Why not have 30 or whatever number of bits?

If the answer is that the registers in 32 bit operating systems are 32 bits long, then why did they decide to have 32 bit registers?
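For concreteness: in C#, `int` is simply an alias for `System.Int32`, so it is 32 bits on every platform, whatever the native register width happens to be. A minimal sketch:

```csharp
using System;

class IntSize
{
    static void Main()
    {
        // int is an alias for System.Int32: always 32 bits in C#,
        // regardless of whether the OS or CPU is 32- or 64-bit.
        Console.WriteLine(sizeof(int) * 8);  // 32
        Console.WriteLine(int.MinValue);     // -2147483648 = -2^31
        Console.WriteLine(int.MaxValue);     // 2147483647  =  2^31 - 1
    }
}
```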
 
32-bit registers just became a standard for many types of computers. 64 bits is becoming the standard now.

The first electronic general-purpose computer, ENIAC, held ten decimal digits in its "registers". Early computers had 16-bit, 24-bit, 32-bit, 60-bit, and 64-bit registers. (I don't know if any had 48-bit registers.) Some microprocessors have 8- and 16-bit registers. Some devices also have 4-bit registers, but these are usually grouped to form 8- to 32-bit registers.
 
A lot of those lengths are powers of 2 (2^5 = 32, 2^6 = 64, etc.). What is the reason for that? Just a coincidence?
 
It has less to do with registers than with memory. Memory is addressed in terms of bytes, which on many machines is now 8 bits. A word is some integral multiple of 8 bits. For a number of reasons it makes a lot of sense to double each stage in the size hierarchy; thus you'll see 16-bit integers, 32-bit integers, 64-bit integers, and now, on some machines, 128-bit integers.

You can also see some machines on which a "word" is 36 bits (and so is a char). There's no universal rule saying the number of bits in a byte has to be some power of 2.
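The doubling hierarchy described above is visible directly in C#'s fixed-width integral types; a quick sketch:

```csharp
using System;

class Widths
{
    static void Main()
    {
        // Each step in C#'s signed integral hierarchy doubles the previous width.
        Console.WriteLine(sizeof(sbyte));  // 1 byte  =  8 bits
        Console.WriteLine(sizeof(short));  // 2 bytes = 16 bits
        Console.WriteLine(sizeof(int));    // 4 bytes = 32 bits
        Console.WriteLine(sizeof(long));   // 8 bytes = 64 bits
    }
}
```

(.NET 7 and later also provide a 128-bit `Int128`, continuing the doubling.)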
 
It gets back to the history of computers:

First came 8 bits = 1 byte, which basically boils down to the definition of the ASCII character set (which is only 7 bits). But why fully use all the bits you have for the character set? That would be bad engineering, leaving yourself no room for growth. So they added an extra bit, making 8 bits (or 1 byte) the standard unit for storing characters.

So now 8 bits is the minimum computation size your computer needs to handle, and you need at least an 8-bit computer to efficiently process characters. But what if we want to scale the computer up? For efficiency's sake, your computer now really needs to scale in multiples of 8 bits, not just powers of 2.

So if I want to bump up my architecture, it makes much more sense to go to a 16-bit architecture than, say, a 12-bit architecture. A 16-bit architecture can operate on 2 characters or 1 16-bit integer at a time. A 12-bit architecture can still only operate on 1 character or 1 12-bit integer (not as efficient).

If I want to scale it again, then 32 bits is the next design step (4 characters, 2 16-bit integers, or 1 32-bit integer at a time).

Scale it again and you get 64 bits (more or less where we are now).
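The "2 characters per 16-bit word" point above can be sketched by packing two 8-bit characters into one `ushort` (purely illustrative):

```csharp
using System;

class PackTwo
{
    static void Main()
    {
        // A 16-bit word holds exactly two 8-bit characters;
        // a 12-bit word would hold only one and waste 4 bits.
        byte hiByte = (byte)'H', loByte = (byte)'i';
        ushort word = (ushort)((hiByte << 8) | loByte);

        // Unpack them again.
        char first = (char)(word >> 8);
        char second = (char)(word & 0xFF);
        Console.WriteLine($"{first}{second}");  // Hi
    }
}
```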
 
Floid said:
It gets back to the history of computers:

First came 8 bits = 1 byte, which basically boils down to the definition of the ASCII character set (which is only 7 bits). But why fully use all the bits you have for the character set? That would be bad engineering, leaving yourself no room for growth. So they added an extra bit, making 8 bits (or 1 byte) the standard unit for storing characters.

Isn't the reason they added an extra bit so they could use parity checking, not just to leave "room for growth"?
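For illustration, here is a minimal sketch of how an even-parity bit over 7-bit ASCII would be computed in software (real machines did this in hardware; the `AddParity` helper here is hypothetical):

```csharp
using System;

class Parity
{
    // Even parity: set the 8th bit so the total count of 1 bits is even.
    public static byte AddParity(byte sevenBit)
    {
        int ones = 0;
        for (int i = 0; i < 7; i++)
            if ((sevenBit & (1 << i)) != 0) ones++;
        return (byte)(ones % 2 == 0 ? sevenBit : sevenBit | 0x80);
    }

    static void Main()
    {
        // 'A' = 0x41 = 100_0001: two 1 bits, already even, parity bit stays 0.
        // 'C' = 0x43 = 100_0011: three 1 bits, odd, so the parity bit is set.
        Console.WriteLine(AddParity((byte)'A') == (byte)'A');  // True
        Console.WriteLine(AddParity((byte)'C') == 0xC3);       // True
    }
}
```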
 
Floid said:
It gets back to the history of computers:

First came 8 bits = 1 byte, which basically boils down to the definition of the ASCII character set (which is only 7 bits). But why fully use all the bits you have for the character set? That would be bad engineering, leaving yourself no room for growth. So they added an extra bit, making 8 bits (or 1 byte) the standard unit for storing characters.
Before that switch to ASCII, some computers used 6 bits to represent a character. Some used 9. Some used 7, some used 8. Even after the switch to ASCII, some computers used 7 bits to represent a character and packed five such 7-bit characters into a 36-bit word. The extreme was the Control Data machines designed by Seymour Cray, which had 60-bit words. This was both the smallest and the largest addressable unit on those machines. Everything was shoved into a 60-bit word, even if the thing shoved into a word only needed 6 or 7 bits. They were tuned for fast numerical processing. There weren't very many integers, and even fewer characters, involved in the types of problems people bought those machines to solve.
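The "five 7-bit characters in a 36-bit word" layout can be sketched in C#, using a `ulong` to stand in for the 36-bit word (the `Pack`/`Unpack` helpers are illustrative, not any historical API):

```csharp
using System;

class SevenBitPacking
{
    // Pack five 7-bit characters into the low 35 bits of a word.
    public static ulong Pack(string five)
    {
        ulong word = 0;
        foreach (char ch in five)
            word = (word << 7) | ((ulong)ch & 0x7F);
        return word;
    }

    // Recover the five characters, last one in the lowest 7 bits.
    public static string Unpack(ulong word)
    {
        char[] chars = new char[5];
        for (int i = 4; i >= 0; i--)
        {
            chars[i] = (char)(word & 0x7F);
            word >>= 7;
        }
        return new string(chars);
    }

    static void Main()
    {
        ulong w = Pack("HELLO");
        Console.WriteLine(Unpack(w));  // HELLO
    }
}
```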

The migration to 8 bits per byte as an ad hoc standard has a lot more to do with finance, personality, and randomness than with logic. The same goes for many technologies. Victory rarely goes to the best technology; it is much more likely to go to the best-financed company, the best-managed company, and sometimes the luckiest company.

Microsoft, for example. By all rights we should be using something based on CP/M rather than a DOS derivative. Gary Kildall didn't think meeting with IBM was important, he didn't like IBM's nondisclosure agreement, he didn't like IBM's offer for CP/M, and he had no clue how to negotiate a better one. So IBM went to Bill Gates, their second choice at the time. That Gates was the second choice was a combination of IBM cluelessness and sheer dumb luck on Gates' part. Gates didn't have an operating system to sell to IBM; for some clueless reason, IBM thought he did. Gates was smart and shrewd. He sold IBM a nonexistent OS and then went out and bought one to make good on the deal.
 
In the early days of mainframe computing, systems from DEC, UNIVAC, IBM, and others had word lengths of 8, 12, 16, 18, 32, or 36 bits (double word lengths in some instances), so power-of-2 word lengths were far from ubiquitous. There were several reasons. Memory was an expensive commodity, not like now, when gigabytes and terabytes are common. Scientific calculations were done to a precision of ten decimal digits rather than in binary floating point. As it was, these computers were just as efficient at processing data as any based on powers of 2.
IBM used EBCDIC, an 8-bit code (extended from its earlier 6-bit BCD codes), to represent letters and numerals before ASCII was introduced.

With the advent of ASCII by committee, and the subsequent government decree that all government-purchased computers must support ASCII, anything other than powers of 2 for registers slowly died out.
 
The architectures I have seen that included parity for memory had a hidden bit where the parity was stored. The parity checks were implemented at the hardware level, and if a check failed, a hardware interrupt was issued. It would be awfully inefficient if software read out the 8th bit and did a parity check on every character, especially at a time when computing power was low.
 
