The discussion revolves around a C program that multiplies two integers, 1000 and 1000, and the unexpected output of 16960 when the result is printed as a float. The cause is that the compiler uses two-byte integers, which cannot hold the value 1,000,000: the product overflows the largest value a two-byte signed integer can store (32,767), and only the low 16 bits of the result survive, giving 16960. The suggested fix is to force the multiplication to be done with long integers by appending 'L' to the constants, and to include the header that declares printf. The conversation also touches on the differences in integer sizes across compilers and the importance of understanding type conversion and initialization in C and C++.
#1
sunakshi
Hello,
I want to know the output of the following program and the reason behind it, as soon as possible.
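The original listing is not reproduced in the thread. Judging from the replies (an int product of 1000 * 1000 assigned to a float named j and printed with printf, with no stdio.h include), the program was presumably something along these lines; the exact names and layout are assumptions:

/* Hypothetical reconstruction - the thread does not show the actual listing. */
main()
{
    float j;

    j = 1000 * 1000;     /* multiplication done in int arithmetic */
    printf("%f\n", j);
}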
OK, I see what's going on. I'm pretty sure that your compiler uses two-byte (probably signed) ints, which can hold a largest signed value of 32,767. More modern compilers use four bytes for ints.
The compiler does the multiplication using two-byte quantities, getting a value of 16960.
This is easier to see if you work with hex (base-16) numbers. In hex, each pair of digits is one byte, so a four-digit hex number fits in two bytes.
1000 * 1000 in base 10 is the same as 3E8 * 3E8 in hex, which is F4240 in hex (or 1,000,000 in decimal).
Since the multiplication is done using two-byte integers, there is no room for the F digit, so the result is truncated to the lower two bytes, leaving 4240 (hex), which is 16960 (decimal).
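On a modern compiler with 4-byte ints you can reproduce the effect by keeping only the low 16 bits of the full product; here is a small sketch (not code from the thread) that prints the same 0x4240 / 16960:

#include <stdio.h>

int main(void)
{
    long full = 1000L * 1000L;                  /* 1000000, i.e. 0xF4240 */
    unsigned short low = (unsigned short)full;  /* keep only the low two bytes */

    printf("full product: %ld (0x%lX)\n", full, full);
    printf("low 16 bits : %u (0x%X)\n", low, low);   /* 16960 (0x4240) */
    return 0;
}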
This value is stored in the float variable j, and is later displayed by the printf statement. A float can store much larger numbers, but the damage has already been done, and you get the result that you did.
You can force the multiplication to be done using long ints by appending L to the two constants, as in the following code.
Also, I have #included "stdio.h" to provide the prototype for printf.
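The corrected code itself did not survive in the thread; a version consistent with that description (L suffixes on both constants plus the stdio.h include), keeping the variable name j from the discussion, would be:

#include "stdio.h"   /* prototype for printf */

int main(void)
{
    float j;

    /* The L suffixes make both constants long, so the multiplication
       is carried out in long arithmetic and the full 1000000 survives. */
    j = 1000L * 1000L;
    printf("%f\n", j);   /* prints 1000000.000000 */
    return 0;
}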
Don't be so hard on yourself. We've been working with compilers that use 4-byte ints for what, 14 or 15 years?
1000 * 1000 == 16960 is sort of a surprising result. It was something of a fluke that it occurred to me that maybe truncation was causing this output. Once I started down that path, I discovered that 0xF4240 - 0xF0000 = 0x4240, which is 16960 in decimal.
Once you posted about 2 bytes, I started to think in terms of 1000 * 1000 mod 65536, which gives the same result: 1,000,000 mod 65,536 = 16,960.
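That modulo view is easy to check on any compiler; a one-line verification (again, not code from the thread):

#include <stdio.h>

int main(void)
{
    /* 1000 * 1000 reduced modulo 2^16, the wrap-around of a 2-byte int */
    printf("%ld\n", (1000L * 1000L) % 65536L);   /* prints 16960 */
    return 0;
}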
#9
airborne18
I think you guys are being too hard on yourselves. Off the top of my head I believe you will find a pragma or compiler option that controls implicit casts and addresses this.
This is one of those things that really wasn't underscored for me until I played with C++, where the distinction between assignment and initialization has to be made, along with the whole scope of operator overloading and the casting operator. (I wrote a ton of printf debugging programs to understand how the C++ compiler treated expressions with implicit casts.)
In C++ it is not straightforward, and it is compiler dependent.