Huffman file Decompression algorithm

DODGEVIPER13
I am going to be completely honest: I don't know anything about Huffman coding, except that I think I understand the Huffman tree algorithm. Can anyone assist me in getting started? The problem states:

The preceding exercise compresses a file to generate
two files filename.huf and filename.new. Write a program that decompresses
the file. The program prompts the user to enter the file name and
decompresses it into a file named filename

Sorry, I don't have any code; the copy-and-paste feature always jumbles up the formatting and I don't wish to enter it manually. If you have the book, it is Introduction to Java Programming, 8th ed., by Daniel Liang.
 
DODGEVIPER13 said:
...the copy and paste feature always jumbles up the format...
Put your code inside [code] ... [/code] tags.
 
You have to know the Huffman coding scheme. It may be a fixed scheme, or it could be defined by the initial data in the file. Generally there's some minimal pattern of leading bits that determines the actual length of a "codeword".

In this wiki article

http://en.wikipedia.org/wiki/Huffman_coding

The example table on the right side of the page uses the three 3-bit values 000, 010, and 111 as 3-bit codewords. If the first 3 bits are 001, 011, 100, 101, or 110, it's a longer codeword, and you need to include a 4th bit and use the leading 4 bits to determine whether the code is 4 bits or 5 bits.

That example table could have used Huffman codes ordered numerically (so-called canonical Huffman codes): 3-bit codes {000, 001, 010}, 4-bit codes {0110, 0111, 1000, 1001, 1010, 1011, 1100}, 5-bit codes {11010, 11011, 11100, 11101, 11110, 11111}, but this won't have much effect on the software used to encode or decode a Huffman data stream.
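As a quick sanity check (my own sketch, not from the book or the thread), the property that makes a code set like the one above decodable at all is that it is prefix-free: no codeword is a prefix of a longer one. A few lines of Java can verify this for the numerically ordered set:

```java
// Check that a set of codewords is prefix-free, which is what lets a decoder
// determine codeword boundaries without any separators. Toy sketch; the
// class and method names are my own.
public class PrefixCheck {
    static boolean isPrefixFree(String[] codes) {
        for (int i = 0; i < codes.length; i++)
            for (int j = 0; j < codes.length; j++)
                // If one codeword starts another, the code is ambiguous.
                if (i != j && codes[j].startsWith(codes[i])) return false;
        return true;
    }

    public static void main(String[] args) {
        String[] codes = {
            "000", "001", "010",                                     // 3-bit codes
            "0110", "0111", "1000", "1001", "1010", "1011", "1100",  // 4-bit codes
            "11010", "11011", "11100", "11101", "11110", "11111"     // 5-bit codes
        };
        System.out.println(isPrefixFree(codes)); // true
    }
}
```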
 
Hey DODGEVIPER13 and welcome to the forums.

One of the most important properties of coding mechanisms like Huffman codes is that the codes themselves are unambiguous.

In other words, the codes are uniquely decodable: there is a bijection between the compressed and uncompressed data, so every encoded stream maps back to exactly one source string.

If you construct codes with this property in a way that also yields an optimal code alphabet for the symbol probabilities of your source file, the result is essentially the Huffman coding system.

Once this is understood, it will be a lot easier to understand the algorithm and its implementation.
 
Thanks guys. I am sure that if I knew what I was doing I could understand what you gave me, but I don't, so I really can't do anything. That's OK, I need a tutor.
 
The first step is being able to extract bit fields from memory. You'll have to keep track of the bit and byte offset as you "read" Huffman codes. As an example, pretend that all codes are 3 bits long; using groups of letters to represent the codes, memory would look like this (spaces used to separate the bytes):

aaabbbcc cdddeeef ffggghhh iiijjjkk klllmmmn nnoooppp qqq...

so numbering bits and bytes left to right, aaa starts at byte 0, bit 0, bbb starts at byte 0, bit 3, ccc starts at byte 0, bit 6, and ends at byte 1 bit 0. ddd starts at byte 1, bit 1, ...
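The byte/bit bookkeeping above can be sketched in a few lines of Java. This is a minimal sketch of my own (names are hypothetical, not from the book): bit 0 is the most significant bit of byte 0, matching the left-to-right numbering in the example, and a field may span a byte boundary the way ccc does:

```java
// Extract a bit field of the given width starting at an absolute bit offset
// in a packed byte array, with bits numbered left to right within each byte.
public class BitExtract {
    static int getBits(byte[] data, int bitOffset, int width) {
        int value = 0;
        for (int i = 0; i < width; i++) {
            int b = bitOffset + i;
            // data[b / 8] is the byte; 7 - (b % 8) converts left-to-right
            // bit numbering into a right-shift count.
            int bit = (data[b / 8] >> (7 - (b % 8))) & 1;
            value = (value << 1) | bit;
        }
        return value;
    }

    public static void main(String[] args) {
        // Two bytes holding the 3-bit codes 101, 110, 011; the third code
        // spans the boundary between byte 0 and byte 1:
        //   10111001 1.......
        byte[] data = { (byte) 0b10111001, (byte) 0b10000000 };
        System.out.println(getBits(data, 0, 3)); // 5  (101)
        System.out.println(getBits(data, 3, 3)); // 6  (110)
        System.out.println(getBits(data, 6, 3)); // 3  (011), crosses into byte 1
    }
}
```

For fixed 3-bit codes, the n-th code simply starts at bit offset 3*n, i.e. byte 3*n/8, bit 3*n%8; with real Huffman codes the offset advances by a varying amount per code.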

Using the wiki example, the longest field is 5 bits. What you could do is get 5 bits from the current byte and bit postion, then use that 5 bit value to index a 32 entry table that indicates the actual number of bits used for that code, based on the upper bits of the code, and the character that the code represents. You would then update the byte and bit offset pointers by the actual number of bits in the code you just retrieved, and then continue the process.
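That peek-and-advance loop can be sketched as follows. This is my own toy example, not the wiki table: I use a 3-symbol-deep code (a=0, b=10, c=110, d=111, longest code 3 bits), so the lookup table has 2^3 = 8 entries instead of 32, and I assume the total number of encoded bits is known (in a real .huf file that count, or the symbol count, would come from the header):

```java
// Table-driven prefix-code decoder: peek MAX bits, look up the symbol and
// its true length, then advance the bit offset by that length.
public class TableDecode {
    static final int MAX = 3;              // longest code length in this toy code
    static final char[] SYM = new char[8]; // symbol for each 3-bit prefix
    static final int[]  LEN = new int[8];  // actual code length for each prefix
    static {
        for (int p = 0; p < 8; p++) {
            if ((p >> 2) == 0)      { SYM[p] = 'a'; LEN[p] = 1; } // prefix 0..
            else if ((p >> 1) == 2) { SYM[p] = 'b'; LEN[p] = 2; } // prefix 10.
            else if (p == 6)        { SYM[p] = 'c'; LEN[p] = 3; } // 110
            else                    { SYM[p] = 'd'; LEN[p] = 3; } // 111
        }
    }

    static int peek(byte[] data, int bitOffset, int width) {
        int v = 0;
        for (int i = 0; i < width; i++) {
            int b = bitOffset + i;
            int bit = b < data.length * 8
                    ? (data[b / 8] >> (7 - b % 8)) & 1
                    : 0;                   // pad with zeros past the end
            v = (v << 1) | bit;
        }
        return v;
    }

    static String decode(byte[] data, int totalBits) {
        StringBuilder out = new StringBuilder();
        int bit = 0;
        while (bit < totalBits) {
            int p = peek(data, bit, MAX);  // grab MAX bits at the current offset
            out.append(SYM[p]);
            bit += LEN[p];                 // advance by the code's real length
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // "abcd" encodes to 0 10 110 111 = 010110111, padded to two bytes.
        byte[] data = { (byte) 0b01011011, (byte) 0b10000000 };
        System.out.println(decode(data, 9)); // abcd
    }
}
```

The same structure scales directly to the 5-bit, 32-entry table described above; only MAX and the table contents change.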
 
