# In binary can we have a value with "deci" "centi" "mili" or more lower valued prefix?

Tags: binary, byte
 P: 13 I have learnt that 1 kilobyte = 2^10 bytes, 1 megabyte = 2^20 bytes, . . . , 1 yottabyte = 2^24 bytes. Now my question is: can we have DECI, CENTI, MILLI bytes? I assume the answer could be negative, but why? Can anyone explain this thing to me? Please! I will be grateful to you!
P: 3,390
 Quote by sadaf2605 I have learnt that 1 kilobyte = 2^10 bytes, 1 megabyte = 2^20 bytes, . . . , 1 yottabyte = 2^24 bytes
1 yottabyte = 2^80 bytes

Bits aside, Bytes are the smallest value possible as far as I'm aware. You can't have a milli/deci/centibyte as they are all less than 1 byte.
P: 13
 Quote by jarednjames 1 yottabyte = 2^80 bytes. Bits aside, Bytes are the smallest value possible as far as I'm aware. You can't have a milli/deci/centibyte as they are all less than 1 byte.
Oh, thanks, I was thinking about it in a more complicated way :P Thanks a lot!

Mentor
P: 19,704


A byte is the smallest unit of addressable computer storage, but there is terminology for parts of a byte. The modern byte is made up of eight bits, but older computer architectures worked with bytes made up of six bits. Half a byte is a nibble (sometimes spelled nybble or less often nyble), or four bits.

Since it is meaningless to think about fractional parts of a bit, which can be either 0 or 1, the decimal fraction prefixes deci-, centi-, etc. aren't used.
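The nibble terminology maps directly onto simple bit operations; here is a minimal Python sketch (the variable names are mine, chosen for illustration):

```python
# Split an 8-bit byte into its two 4-bit nibbles.
value = 0xAB  # 171 decimal, binary 1010 1011

high_nibble = value >> 4    # shift the top four bits down: 0xA
low_nibble = value & 0x0F   # mask off all but the bottom four bits: 0xB

print(hex(high_nibble), hex(low_nibble))  # 0xa 0xb

# Reassembling the two nibbles recovers the original byte.
assert (high_nibble << 4) | low_nibble == value
```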
 Admin P: 21,687 I don't see a problem with millibytes, it is just a matter of application. If I am sending one SMS (180 characters) per hour, transmission speed is on average 50 millibytes per second.
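The arithmetic behind that figure, as a quick sketch (assuming one character occupies one byte):

```python
# Average transmission rate for one 180-character SMS per hour.
bytes_per_message = 180
seconds_per_hour = 3600

# Multiply by 1000 before dividing so the result is exact.
rate_millibytes_per_sec = bytes_per_message * 1000 / seconds_per_hour
print(rate_millibytes_per_sec)  # 50.0 millibytes per second
```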
P: 13
 Quote by Mark44 The modern byte is made up of eight bits, but older computer architectures worked with bytes made up of six bits.
Why did they have to add three more bits to six? What kinds of problems could we possibly have faced if six-bit bytes were still in use?
P: 3,390
 Quote by sadaf2605 Why did they have to add three more bits to six? What kinds of problems could we possibly have faced if six-bit bytes were still in use?
They added two more bits, not three.

The fact that it still works is irrelevant. My old computer with a 20 GB HD and 256 KB of RAM still works, but there's a reason I'm sitting here at my shiny new HP.
Mentor
P: 19,704
 Quote by sadaf2605 Why did they have to add three more bits to six? What kinds of problems could we possibly have faced if six-bit bytes were still in use?
They added two more bits to get eight, not three. The reason for six-bit bytes on some old computers (such as Digital Equipment Corporation's PDP-5; see http://en.wikipedia.org/wiki/Programmed_Data_Processor) was that the CPU registers held twelve bits, which is two six-bit bytes.

Later PDP versions such as the PDP-11 had 16-bit and 32-bit processors, with 8-bit bytes. Other PDP versions had processors that could handle 18 bits or 36 bits. Most personal computers these days have either 32-bit processors or 64-bit processors.

Increasing the number of bits a processor handles enables it to work with larger numbers, and can speed things up by performing in one cycle an operation that would otherwise take two or more.
 P: 1 I think the reason for 8 bits in a byte is primarily that it let people use one byte to represent one English character, when people started wanting to handle text on computers. Also, a 3-digit octal expression can be used to represent the value of an 8-bit byte, which may have had some influence. But six bits was not enough to represent the necessary letters, numbers, and punctuation, and still have characters terminate on word boundaries in memory. (Not terminating on a word boundary would make data retrieval from disk inefficient, or result in wasted part-words.)

The PDP-10 actually used 7-bit bytes, so in a 36-bit word there were 5 characters and a left-over bit. That bit was used to check parity on DECtapes. If the parity was wrong, the tape reader knew that the tape was damaged.

Actually, one character per byte is just about obsolete, because modern computers need to be able to handle lots of foreign characters, so now multiple bytes are often used to represent a single letter. But 8 bits has become synonymous with a "byte", and I don't see that changing in the near future.
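The character-count argument above is easy to verify; a back-of-the-envelope sketch in Python (taking "necessary characters" to mean the 95 printable ASCII characters):

```python
import string

# How many distinct codes do 6 and 7 bits provide?
codes_6_bit = 2 ** 6   # 64
codes_7_bit = 2 ** 7   # 128

# Printable ASCII: letters, digits, punctuation, and the space character.
printable = string.ascii_letters + string.digits + string.punctuation + " "
print(len(printable))  # 95

# Six bits fall short; seven suffice.
assert codes_6_bit < len(printable) <= codes_7_bit

# PDP-10 style packing: five 7-bit characters per 36-bit word, one bit left over.
assert 36 - 5 * 7 == 1
```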
HW Helper
P: 6,164
 Quote by sadaf2605 I have learnt that 1 kilobyte = 2^10 bytes, 1 megabyte = 2^20 bytes, . . . , 1 yottabyte = 2^24 bytes. Now my question is: can we have DECI, CENTI, MILLI bytes? I assume the answer could be negative, but why? Can anyone explain this thing to me? Please! I will be grateful to you!
This is not quite right. The proper terms (IEC prefixes) are:
1 KiB = 1 kibibyte = 2^10 bytes = 1024 bytes
1 MiB = 1 mebibyte = 2^20 bytes = 1048576 bytes
. . . .
1 YiB = 1 yobibyte = 2^80 bytes

Furthermore we have the SI prefixes:
1 kB = 1 kilobyte = 1000 bytes
1 MB = 1 megabyte = 10^6 bytes
. . . .
1 YB = 1 yottabyte = 10^24 bytes

The SI prefixes also work for amounts smaller than a byte:
1 mB = 1 millibyte = 0.001 byte

I guess you could also use smaller IEC prefixes, although there would be a problem with their names.
We could for instance define 1 miB as 1/1024 byte, but its logical name, "mibibyte", would be all but indistinguishable from "mebibyte" (1 MiB).
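The two prefix families above can be tabulated directly; a small Python sketch (a handful of representative entries, not the full prefix tables):

```python
# Binary (IEC) prefixes: powers of 2^10.
iec = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "YiB": 2**80}

# Decimal (SI) prefixes: powers of 10, including the sub-byte millibyte.
si = {"kB": 10**3, "MB": 10**6, "YB": 10**24, "mB": 10**-3}

for name, value in {**iec, **si}.items():
    print(f"1 {name} = {value} bytes")

# kB and KiB differ by 2.4%, and the gap grows with each prefix step.
assert iec["KiB"] == 1024 and si["kB"] == 1000
assert iec["MiB"] == 1048576
```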
 P: 3 Another sphere where we meet fractions of bit is a data compression. Here you can often see expressions like "0.78 output bits per one input byte" meaning compression ratio.
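That kind of figure is easy to measure; a sketch using Python's standard zlib module (the input text here is arbitrary, chosen only because repetition compresses well):

```python
import zlib

# Highly repetitive input compresses far below 8 output bits per input byte.
data = b"the quick brown fox " * 500   # 10,000 input bytes
compressed = zlib.compress(data)

bits_per_input_byte = len(compressed) * 8 / len(data)
print(f"{bits_per_input_byte:.3f} output bits per input byte")

assert bits_per_input_byte < 8  # better than storing the input verbatim
```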
 HW Helper P: 6,761 Other 6-bit character machines:
CDC 3000 series - 24-bit words
CDC 6000 series - 60-bit words
Baudot is/was a 5-bit code: http://en.wikipedia.org/wiki/Baudot_code
ASCII is/was a 7-bit code, but stored as 8-bit codes on most computers.
EBCDIC is an 8-bit code mostly used by IBM computers.
Unicode is a 16-bit code used on Windows-based systems.
HW Helper
P: 6,164
 Quote by rcgldr Unicode is a 16-bit code used on Windows-based systems
Unicode is a set of encodings for the Universal Character Set that is used on most systems nowadays.
It can be encoded as 16-bit units (UTF-16), 32-bit units (UTF-32), or a variable number of bytes (UTF-8).
There are also a number of other encodings.

I believe that UTF-8 is fast becoming the de facto standard.
It means that ASCII is just ASCII, and all other characters are encoded with a non-ASCII prefix.

Windows historically used UCS-2, a fixed 16-bit encoding that cannot represent all Unicode characters; modern UTF-16 covers the full range, but only by pairing two 16-bit units (a surrogate pair) for characters beyond U+FFFF.
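The differences between the encodings are easy to see in Python; a sketch using the treble clef character U+1D11E, which lies outside the 16-bit Basic Multilingual Plane:

```python
# "A" is plain ASCII; U+1D11E (musical symbol G clef) needs more than 16 bits.
ascii_char = "A"
clef = "\U0001D11E"

print(len(ascii_char.encode("utf-8")))   # 1 byte: ASCII is just ASCII in UTF-8
print(len(clef.encode("utf-8")))         # 4 bytes: prefix byte plus continuation bytes
print(len(clef.encode("utf-16-le")))     # 4 bytes: a surrogate pair of two 16-bit units
print(len(clef.encode("utf-32-le")))     # 4 bytes: one fixed-width 32-bit unit

# A single 16-bit unit cannot hold this code point, which is why
# fixed-width UCS-2 is incomplete.
assert ord(clef) > 0xFFFF
```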
 P: 30 In the seventies I worked with HELL scanners that worked with 4K 18-bit core stores that were used to store 12K 6-bit words.
 PF Patron P: 4,945 As I recall, some of the early Univac computers had byte = word = 66 bits (or maybe it was 60 bits), and I believe most minicomputers in the late 1960s and early 1970s had byte = word = 16 bits.
