Why Are Ints 32 Bits in C#? Exploring the Reasons

In summary, the decision to use 32 bits for integers in C# follows from the history of computing and the need to use memory and registers efficiently. The progression through 8-bit, 16-bit, 32-bit, and now 64-bit architectures grew out of the standardization on the 8-bit byte and the practice of doubling word sizes at each step. That standardization was not driven purely by technical logic; finance, personality, and plain luck played large roles, as illustrated by the way IBM ended up licensing DOS, which was originally not even Microsoft's own product, instead of CP/M.
  • #1
frenzal_dude
In C#, why did they decide to use 32 bits for storing integers?
I understand that 32 = 2^5, but why is that significant? Why not 30 or some other number of bits?

If the answer is that the registers in 32-bit operating systems are 32 bits long, then why did they decide on 32-bit registers in the first place?
 
  • #2
32-bit registers just became a standard for many types of computers. 64 bits is becoming the standard now.

The first electronic general-purpose computer, ENIAC, held ten decimal digits in its "registers". Early computers had 16-bit, 24-bit, 32-bit, 60-bit, and 64-bit registers. (I don't know if any had 48-bit registers.) Some microprocessors have 8- and 16-bit registers. Some devices also have 4-bit registers, but these are usually grouped to form 8- to 32-bit registers.
 
  • #3
A lot of those lengths are powers of 2 (2^5 = 32, 2^6 = 64, etc.). What's the reason for that? Is it just a coincidence?
 
  • #4
It has less to do with registers than with memory. Memory is addressed in terms of bytes, which on most machines are now 8 bits. A word is some integral multiple of 8 bits. For a number of reasons it makes a lot of sense to double each step in the size hierarchy. Thus you'll see 16-bit integers, 32-bit integers, 64-bit integers, and now, on some machines, 128-bit integers.

You can also find machines on which a "word" is 36 bits (and so is a char). There's no universal rule saying the number of bits in a byte has to be a power of 2.
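To see that doubling hierarchy from inside C# itself, here is a minimal sketch using the language's built-in sizeof operator (the class name is arbitrary; the widths are the ones the C# specification fixes for these types):
Code:
using System;

class SizeHierarchy
{
    static void Main()
    {
        // Each step in the hierarchy doubles the width of the previous one.
        Console.WriteLine($"byte  : {sizeof(byte) * 8} bits");   //  8
        Console.WriteLine($"short : {sizeof(short) * 8} bits");  // 16
        Console.WriteLine($"int   : {sizeof(int) * 8} bits");    // 32
        Console.WriteLine($"long  : {sizeof(long) * 8} bits");   // 64
    }
}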
 
  • #5
It gets back to the history of computers:

First came 8 bits = 1 byte, which basically boils down to the definition of the ASCII character set (which is only 7 bits). But why use every bit you have for the character set? That would be bad engineering, leaving yourself no room for growth. So an extra bit was added, making 8 bits (or 1 byte) the standard unit for storing characters.

So now 8 bits is the minimum unit your computer needs to handle, and you need at least an 8-bit computer to process characters efficiently. But what if we want to scale the computer up? For efficiency's sake the architecture now really needs to scale in multiples of 8 bits, not just in powers of 2.

So if I want to bump up my architecture, it makes much more sense to go to a 16-bit architecture than to, say, a 12-bit one. A 16-bit architecture can operate on 2 characters or one 16-bit integer at a time; a 12-bit architecture could still only operate on 1 character or one 12-bit integer, which is less efficient.

If I want to scale again, 32 bits is the next design step (4 characters, two 16-bit integers, or one 32-bit integer at a time).

Scale it again and you get 64 bits (more or less where we are now).
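As a minimal sketch of the "4 characters in one 32-bit integer" idea, the shifts below pack four ASCII codes into a single int and pull one back out (purely illustrative; this is not how the .NET runtime actually stores strings):
Code:
using System;

class PackChars
{
    static void Main()
    {
        // Pack the ASCII codes of 'A', 'B', 'C', 'D' into one 32-bit int.
        int packed = ('A' << 24) | ('B' << 16) | ('C' << 8) | 'D';
        Console.WriteLine($"0x{packed:X8}");          // 0x41424344

        // Pull the second character back out of the word.
        char second = (char)((packed >> 16) & 0xFF);
        Console.WriteLine(second);                    // B
    }
}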
 
  • #6
Floid said:
It gets back to the history of computers:

First came 8 bits = 1 byte, which basically boils down to the definition of the ASCII character set (which is only 7 bits). But why use every bit you have for the character set? That would be bad engineering, leaving yourself no room for growth. So an extra bit was added, making 8 bits (or 1 byte) the standard unit for storing characters.

Isn't the reason they added an extra bit so they could use parity checking, and not just to leave "room for growth"?
 
  • #7
Floid said:
It gets back to the history of computers:

First came 8 bits = 1 byte, which basically boils down to the definition of the ASCII character set (which is only 7 bits). But why use every bit you have for the character set? That would be bad engineering, leaving yourself no room for growth. So an extra bit was added, making 8 bits (or 1 byte) the standard unit for storing characters.
Before the switch to ASCII, some computers used 6 bits to represent a character. Some used 9, some used 7, some used 8. Even after the switch to ASCII, some computers used 7 bits to represent a character and packed five such 7-bit characters into a 36-bit word. The extreme was the Cray machines, which had 60-bit words. That was both the smallest and the largest addressable unit on a Cray: everything was shoved into a 60-bit word, even if the thing being stored only needed 6 or 7 bits. Crays were tuned for fast numerical processing; there weren't many integers, and even fewer characters, involved in the kinds of problems people bought Crays to solve.

The migration to 8 bits per byte as an ad hoc standard had a lot more to do with finance, personality, and randomness than with logic. The same goes for many technologies. The win rarely goes to the best technology; it is far more likely to go to the best-financed company, the best-managed company, and sometimes simply the luckiest company.

Microsoft, for example. By all rights we should be using something based on CP/M rather than a DOS derivative. Gary Kildall didn't think meeting with IBM was important, he didn't like IBM's nondisclosure agreement, he didn't like IBM's offer for CP/M, and he had no clue how to negotiate a better one. So IBM went to Bill Gates, their second choice at the time. That Gates was the second choice was a combination of IBM's cluelessness and sheer dumb luck on Gates' part. Gates didn't have an operating system to sell to IBM, yet for some reason IBM thought he did. Gates was smart and shrewd: he sold IBM a non-existent OS and then went out and bought one to make good on the deal.
 
  • #8
In the early days of mainframe computing, systems from DEC, UNIVAC, IBM, and others had word lengths of 8, 12, 16, 18, 32, or 36 bits (with double-word lengths in some cases), so a power-of-2 word length was far from ubiquitous. There were several reasons. Memory was an expensive commodity, not like today when gigabytes and terabytes are common. Scientific calculations were done to a precision of about 10 decimal digits rather than in standardized floating point. As it was, these computers were just as efficient at processing data as any machine based on powers of 2.
IBM used EBCDIC, an eight-bit code that grew out of its earlier binary-coded-decimal character sets, to represent letters and numerals rather than ASCII.

With the advent of ASCII by committee, and the subsequent government decree that all government-purchased computers must support ASCII, anything other than a power of 2 for register sizes slowly died out.
 
  • #9
The architectures I have seen that included parity for memory had a hidden bit where the parity was stored. The parity checks were implemented at the hardware level, and if a check failed a hardware interrupt was raised. It would be awfully inefficient for software to read out the 8th bit and do a parity check on every character, especially at a time when computing power was scarce.
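For anyone curious, here is a minimal sketch of an even-parity check like the one that hardware performed, written in C# purely for illustration (the layout, with the parity carried in the top bit of an 8-bit character, is an assumption of the example):
Code:
using System;

class Parity
{
    // Even parity: set the 8th bit so the total number of 1 bits is even.
    static byte AddEvenParity(byte sevenBitChar)
    {
        int ones = 0;
        for (int b = 0; b < 7; b++)
            if ((sevenBitChar & (1 << b)) != 0) ones++;
        return (byte)(sevenBitChar | ((ones % 2) << 7));
    }

    static void Main()
    {
        byte c = (byte)'E';                             // 0x45 has three 1 bits
        Console.WriteLine($"0x{AddEvenParity(c):X2}");  // 0xC5: parity bit set
    }
}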
 

1. Why are Ints 32 bits in C#?

In C#, int is an alias for System.Int32, a signed 32-bit integer whose size is fixed by the language specification on every platform. Thirty-two bits is the default because it maps neatly onto the registers and memory alignment of the processors the language targets, giving a good balance between numeric range and memory use.

2. Is 32 bits the only possible size for Ints in C#?

No, 32 bits is not the only integer size in C#. The language also provides sbyte and byte (8 bits), short and ushort (16 bits), uint (unsigned 32 bits), and long and ulong (64 bits), so you can choose the range that fits your data.
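A minimal sketch showing that these keywords are simply aliases for fixed-width .NET types (the class name is arbitrary):
Code:
using System;

class IntegerAliases
{
    static void Main()
    {
        // Each C# integer keyword is an alias for a fixed-width struct in System.
        Console.WriteLine(typeof(short) == typeof(Int16)); // True, 16 bits
        Console.WriteLine(typeof(int)   == typeof(Int32)); // True, 32 bits
        Console.WriteLine(typeof(long)  == typeof(Int64)); // True, 64 bits
    }
}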

3. What are the advantages of using 32-bit Ints in C#?

There are several advantages to using 32-bit ints in C#: they are processed efficiently by both 32-bit and 64-bit hardware, they use a modest amount of memory, and their size matches the default integer type of many other languages and platform APIs, which simplifies interoperability.

4. Are there any limitations to using 32-bit Ints in C#?

One limitation of a 32-bit int in C# is its range: a signed 32-bit value runs from -2,147,483,648 to 2,147,483,647 (that is, 2^31 - 1). Applications that need larger numbers must switch to long or to an arbitrary-precision type such as System.Numerics.BigInteger.
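A minimal sketch of what hitting that ceiling looks like in practice (illustrative only):
Code:
using System;

class Limits
{
    static void Main()
    {
        Console.WriteLine(int.MaxValue);        // 2147483647 = 2^31 - 1

        int max = int.MaxValue;
        Console.WriteLine(unchecked(max + 1));  // -2147483648: wraps around silently

        // A checked context turns the silent wrap into an exception.
        try { Console.WriteLine(checked(max + 1)); }
        catch (OverflowException) { Console.WriteLine("overflow detected"); }

        // When larger values are needed, long (Int64) goes up to 2^63 - 1.
        Console.WriteLine(long.MaxValue);       // 9223372036854775807
    }
}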

5. Can Ints in C# be converted to other data types?

Yes, ints in C# can be converted to other data types. Widening conversions such as int to long or int to double happen implicitly, while narrowing conversions require an explicit cast and can lose information. This gives flexibility in manipulating data while keeping the compiler's type checks in place.
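A few representative conversions, as a minimal sketch (variable names are arbitrary):
Code:
using System;

class Conversions
{
    static void Main()
    {
        int i = 123;

        long widened = i;             // implicit widening: no information lost
        double asDouble = i;          // implicit: an int always fits in a double

        long big = 5_000_000_000;
        int narrowed = (int)big;      // explicit narrowing cast: value wraps to 705032704

        Console.WriteLine($"{widened} {asDouble} {narrowed}");
    }
}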
