# Int, float, unsigned char

hello everyone, will you help me out with some problems please? i will very much appreciate this kindness. to cut a long story short: in the case of "int" there are four bytes, and there are 8 bits in one byte, so there are (4 x 8) = 32 bits in total. then if i raise 2^32 i get 4,294,967,296, which equals the total number of values an int can hold (2,147,483,648 negative values + 2,147,483,647 positive values + 1 for zero). but this formula does not work in the case of "float" and "double". can you tell me why this is so please?

and what are those digits of precision, 7 and 15, in the case of "float" and "double" respectively? does this mean the decimal point is followed by 7 and 15 digits respectively? tell me please.

"unsigned int" means all integers excluding -ve integers. but what are "unsigned characters"? please shed some light.

okay, last question. i have seen questions asking how many bytes would be occupied by "int" and "float" on 32-bit and 16-bit systems. what does this mean? i know you can help me here.

i am very much grateful for this help. many many thanks.

cheers

rcgldr
Homework Helper
unsigned character - usually an 8-bit unsigned integer. There were some old machines that had 6-bit characters, and I'm not sure how C would be implemented on those machines.
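
To make the range concrete, here is a minimal C sketch, assuming the common case of 8-bit chars (so the range is 0 to 255):

```c
#include <limits.h>
#include <stdio.h>

int main(void) {
    unsigned char c = 0;
    c = c - 1;  /* converting -1 back to unsigned char wraps modulo 2^CHAR_BIT */

    printf("bits per char:  %d\n", CHAR_BIT);   /* 8 on typical machines */
    printf("UCHAR_MAX:      %d\n", UCHAR_MAX);  /* 255 with 8-bit chars  */
    printf("0 - 1 wraps to: %d\n", c);          /* prints 255            */
    return 0;
}
```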

floating point - float is normally 32 bits and double is 64 bits on most modern machines, and most implementations use the IEEE 754 standard.

http://en.wikipedia.org/wiki/IEEE_754-2008

many thanks rcgldr.

sorry, due to my limited knowledge it was not possible for me to completely understand your kind reply.

please elaborate a bit on why the formula in my first post does not work for float and double.

is my understanding of precision correct?

are there also negative integers included in the character set of ascii?

how many bytes will float and int occupy on a 64-bit system?

i would very much appreciate your help with this stuff. many many thanks for this consideration.

cheers

> please elaborate a bit on why the formula in my first post does not work for float and double.
>
> is my understanding of precision correct?

I'm afraid that understanding floating point numbers and how they're represented on computers will need a little research and study from you. All the information you need is in the IEEE 754-2008 link rcgldr gave you. It's not super complicated if you take your time to understand it.

Basically, no, your understanding of precision as applied to floats/doubles isn't totally correct, nor does the same formula apply. A float is stored as described in that link, in three parts: a sign bit, an exponent, and a significand. The number of bits used in the significand is the "precision". A float has a 24-bit significand, and 2^24 is about 1.7 x 10^7, so it can distinguish roughly 7 significant decimal digits in total, not 7 digits after the decimal point; a double has a 53-bit significand, which gives roughly 15-16 significant digits. The sign and the exponent take up bits too, and only together do you get a proper number.

Anyway, once you understand how floats are represented, you will see why the formula for integers does not work for floating point numbers: all 2^32 bit patterns are still there, but they are split between sign, exponent, and significand instead of mapping to one contiguous range of integers. It's all described in that link.
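
As a concrete illustration, here is a small C sketch (assuming IEEE 754 single precision, which is what most modern compilers use for float) that splits a float into its three parts:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    float f = -6.25f;  /* -6.25 = -1.5625 * 2^2 -> sign 1, exponent 129, significand 0x480000 */
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);  /* reinterpret the float's 32 bits as an integer */

    unsigned sign        = (bits >> 31) & 0x1u;   /*  1 bit            */
    unsigned exponent    = (bits >> 23) & 0xFFu;  /*  8 bits, bias 127 */
    unsigned significand = bits & 0x7FFFFFu;      /* 23 bits stored    */

    /* for normal numbers: value = (-1)^sign * 1.significand * 2^(exponent - 127) */
    printf("sign = %u, exponent = %u (unbiased %d), significand = 0x%06X\n",
           sign, exponent, (int)exponent - 127, significand);
    return 0;
}
```

With 23 stored bits plus the implicit leading 1 (24 bits total), 2^24 is about 1.7 x 10^7, which is where the roughly 7 decimal digits of precision come from.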

> are there also negative integers included in the character set of ascii?

Strictly speaking, no, because ASCII goes from 0 to 127, which requires only 7 bits. So even in a signed char, none of the ASCII characters will be negative.
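
A quick sketch to check this (assuming an ASCII execution character set, which is near-universal today):

```c
#include <stdio.h>

int main(void) {
    /* ASCII codes run from 0 to 127, so every one fits in a signed char */
    signed char a = 'A';      /* code 65 */
    signed char tilde = '~';  /* code 126, the highest printable ASCII character */

    printf("'A' = %d, '~' = %d\n", a, tilde);  /* both print as non-negative */
    return 0;
}
```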

> how many bytes will float and int occupy on a 64-bit system?

That's language dependent, and it's not defined as precisely as you'd think in C/C++. But that's a long story. In brief, an int on a 64-bit system can be 32 or 64 bits long, but will generally be 32 bits.

Single precision floating point numbers (float) are 32 bits long, and double precision (double) are 64 bits long.
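
The easiest way to see what your own system uses is to ask the compiler with sizeof; a minimal sketch (the comments show typical values, not guarantees):

```c
#include <stdio.h>

int main(void) {
    /* sizes are in bytes and depend on the compiler and platform */
    printf("int:    %zu bytes\n", sizeof(int));    /* typically 4, even on 64-bit systems */
    printf("long:   %zu bytes\n", sizeof(long));   /* 4 or 8, depending on the platform   */
    printf("float:  %zu bytes\n", sizeof(float));  /* typically 4                         */
    printf("double: %zu bytes\n", sizeof(double)); /* typically 8                         */
    return 0;
}
```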

EDIT: Another page with info on floating point number representation:
http://en.wikipedia.org/wiki/Floating_point

Mark44
Mentor
When I first learned C, using compilers on PC-type computers, an int was the same size as a short (also known as a short int) - 16 bits. A long (aka long int) in those days was 32 bits.

Sometime after the introduction of 32-bit processors such as the Intel 80386, the size of an int was changed to 32 bits. A short was still 16 bits, but an int and a long were both 32 bits.

If I recall correctly, there's nothing in the standards that decrees what size the various integral types should be. Compiler vendors have the latitude to define the size of an int so that it matches the width of the registers in the CPU. That's a little confusing now, because the registers on the CPUs from Intel and AMD are 64 bits wide, but those CPUs have instructions that can work with 64-bit, 32-bit, and 16-bit registers.

> If I recall correctly, there's nothing in the standards that decrees what size the various integral types should be. Compiler vendors have the latitude to define the size of an int so that it matches the width of the registers in the CPU. That's a little confusing now, because the registers on the CPUs from Intel and AMD are 64 bits wide, but those CPUs have instructions that can work with 64-bit, 32-bit, and 16-bit registers.

As I recall, the only guarantees are that sizeof(char) is 1, and

sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)

(the standard also pins down minimum ranges in limits.h, e.g. an int must be able to represent at least -32767 to 32767).
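
To see those guarantees on your own compiler, here is a minimal sketch using limits.h:

```c
#include <limits.h>
#include <stdio.h>

int main(void) {
    /* CHAR_BIT is the number of bits in a byte; the standard requires at least 8 */
    printf("CHAR_BIT = %d\n", CHAR_BIT);

    /* the ordering guarantee, checked on this implementation */
    printf("sizeof: char=%zu short=%zu int=%zu long=%zu\n",
           sizeof(char), sizeof(short), sizeof(int), sizeof(long));

    /* exact ranges come from limits.h rather than from fixed sizes */
    printf("SHRT_MAX=%d INT_MAX=%d LONG_MAX=%ld\n", SHRT_MAX, INT_MAX, LONG_MAX);
    return 0;
}
```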