What are all the possible 5-digit binary code combinations?
#1
Dec12-09, 04:19 AM

P: 54

Hi all. I have recently been thinking of developing compression software with extreme speed and compression ratio.
The simple principle behind this software is that it first converts the program into binary code. Then, each set of five 0s and 1s is converted into an alphanumeric character. For example, 00101 becomes 2 (a number); 01101 becomes, say, y (a lowercase letter); 01111 becomes, say, # (a special character); 11111 becomes J (a capital letter); and so on. For now, I need a list of all possible five-digit combinations of 0s and 1s. I'll manage with 5-digit sets of binary code for the time being; I could later improve the compression efficiency using ten digits, and so on.
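Since each of the five positions is independently 0 or 1, there are 2^5 = 32 such combinations, and they can be generated simply by counting from 0 to 31 in binary. A minimal Python sketch (the code and the name `combos` are illustrative, not from the thread):

```python
# All 2**5 = 32 five-digit binary strings, generated by counting 0..31
# and zero-padding each number to five binary digits.
combos = [format(i, "05b") for i in range(32)]
print(len(combos))              # 32
print(combos[:3], combos[-1])   # ['00000', '00001', '00010'] 11111
```

The same one-liner generalizes to ten digits (`"010b"` over `range(1024)`) and beyond.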


#2
Dec12-09, 04:36 AM

Sci Advisor
P: 1,750

There are 2^{5} = 32 of them; just count from 00000 to 11111 in binary. Cheers, sylas


#3
Dec14-09, 11:59 PM

P: 595

In fact, for any n-digit binary number, the number of possible combinations is always 2^{n}; for n = 5, that gives 2^{5} = 32.


#4
Dec15-09, 07:48 AM

P: 54

I see. I hope this new algorithm works.
Let's say we replace certain sets of digits with single 0s or 1s. That gives us only two symbols to substitute with. Take this to be the code of a program:

0111100000111010101010000000101001010010010101001000000000010100101001000000111

Which five-digit sets occur here most often? 00000 and 10101. Let us replace each 00000 with 0 and each 10101 with 1. The source code is now:

011110111010100010100101001001001001010010100100111

So we have reduced, that is, in essence, compressed the binary code from 79 digits to 51, a saving of 28 digits. Now we can apply secondary, tertiary and further compression passes in the same way, each time replacing the most common sets with 1 and 0.
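The pass described above can be sketched in Python. The `substitute` helper is hypothetical (the post does not specify its scan order; a left-to-right scan is assumed here), and, as later replies point out, the mapping is not reversible:

```python
def substitute(bits: str, a: str, b: str) -> str:
    """One compression pass: scan left to right, replacing each
    occurrence of pattern a with '0' and pattern b with '1'.
    (Hypothetical helper; this mapping cannot be undone.)"""
    out = []
    i = 0
    while i < len(bits):
        if bits.startswith(a, i):
            out.append("0")
            i += len(a)
        elif bits.startswith(b, i):
            out.append("1")
            i += len(b)
        else:
            out.append(bits[i])  # copy unmatched digits through verbatim
            i += 1
    return "".join(out)

src = ("0111100000111010101010000000101001010010"
       "010101001000000000010100101001000000111")
step1 = substitute(src, "00000", "10101")
print(len(src), len(step1))  # 79 51, a saving of 28 digits
```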


#5
Dec15-09, 08:02 AM

Sci Advisor
P: 1,750

The method you are approaching is called Huffman coding (wikipedia link). Basically, you calculate the frequency of each of your "characters" and then assign each one a number of bits so that you can recognize them unambiguously while also getting the maximum compression (or minimum entropy). Cheers, sylas
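For reference, a minimal Python sketch of Huffman coding (not from the thread; the `huffman_code` helper and its dictionary-merging strategy are illustrative):

```python
import heapq
from collections import Counter

def huffman_code(text: str) -> dict:
    """Assign each symbol a bit string so that frequent symbols get
    short codes and no code is a prefix of another, which is what
    makes decoding unambiguous."""
    freq = Counter(text)
    if len(freq) == 1:  # degenerate case: only one distinct symbol
        return {sym: "0" for sym in freq}
    # Heap entries: (total frequency, tie-breaker, {symbol: code so far}).
    heap = [(n, i, {sym: ""}) for i, (sym, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)   # two least frequent subtrees
        n2, i2, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (n1 + n2, i2, merged))
    return heap[0][2]

print(huffman_code("this is an example"))
```

With real text, the most frequent characters receive the shortest codes, and the prefix property lets a decoder tell exactly where each code word ends.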


#6
Dec15-09, 08:32 AM

P: 54

I forgot to add: in this compression method, the set being replaced is not fixed in advance; the computer works out the most common sets in the binary code at each pass. Following the primary pass, say that we have the binary code (as previously mentioned):

011110111010100010100101001001001001010010100100111

What's most common here? 10010 and 00101. We'll replace them with 0 and 1 respectively. So now it's:

0111101110101000101101001110111

We have compressed it by 20 digits. Let us do it again. Here, 000 and 0111 are most common; replace them with 0 and 1. I understand that they aren't five digits, but the set length can vary when there aren't many possibilities.

11101010101101001111

It has been reduced by a further 11 digits. And again: 010 becomes 0 and 111 becomes 1, the two most common sets.

1010110011

I shall show in the next post how I continue this until I get down to about 7 characters at most. That is about enough, since I have already reduced the binary code from 79 digits to a mere 10, approximately an 87% cut off the original size. An excellent job, I must say!
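The "computer works out the most common sets" step can be sketched as follows. The `most_common_pair` helper is hypothetical, and it counts overlapping substrings, which may differ slightly from a non-overlapping scan:

```python
from collections import Counter

def most_common_pair(bits: str, k: int = 5):
    """Return the two most frequent k-digit substrings of bits
    (hypothetical helper; counts overlapping occurrences)."""
    grams = Counter(bits[i:i + k] for i in range(len(bits) - k + 1))
    (a, _), (b, _) = grams.most_common(2)
    return a, b

bits = "011110111010100010100101001001001001010010100100111"
print(most_common_pair(bits))
```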


#7
Dec15-09, 08:44 AM

Sci Advisor
P: 1,750

When you see the compressed string, how do you tell whether the first bit (which is either 0 or 1) is one of your common strings, or part of a longer, less common string? Answer: you can't. You have to come up with a code that maps every input to a unique output, one where you can tell where each compressed letter starts and finishes. Check the wiki page I cited previously. Cheers, sylas
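This ambiguity is easy to demonstrate: under the substitution scheme from the earlier posts, two different inputs can compress to the same output, so no decompressor can recover both. A sketch using Python's chained `str.replace`, which approximates the described scan:

```python
def compress(bits: str) -> str:
    # The thread's scheme: each 00000 becomes 0, each 10101 becomes 1,
    # and every other digit is copied through verbatim.
    return bits.replace("00000", "0").replace("10101", "1")

# Two different inputs, one output: the mapping cannot be inverted.
assert compress("00000") == "0"
assert compress("0") == "0"
```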


#8
Dec23-09, 04:53 PM

P: 2,292

I agree. Properly compressing a binary string absolutely requires that decompression can accurately expand the compressed string back to its original form.
There are many ways to do this, and it has been done and in use for many years. Interestingly enough, some high-end compression techniques are based on the idea that compression/decompression does not have to be 100% accurate in some cases, or even needed at all. This allows for extraordinarily high compression ratios: 2:1, 50:1 and beyond.

Take JPEG, for example, a common compression format for digital photos on the Internet, in storage, etc. One can choose the larger lossless compression, or a smaller file and much faster transfer which does not reproduce every bit of the original picture 100% accurately, but does give an "acceptable and usable" overall sense of it. The tradeoff of this technique is that it requires accepting that the partially accurate image is sufficient for the recipient's purposes.

For example, if a live video stream is transmitted in grayscale rather than, say, 32-bit color, the data stream is much more compact. So, if the recipient needs no color information for whatever reason, grayscale transmission is much faster when bandwidth-limited.

