Acmlm's Board - I3 Archive - Programming - Let's Rant About LZ!!!
Guy Perfect | Since: 11-18-05 | Last post: 6433 days | Last view: 6431 days
I've just coded up a simple LZ compressor in C that searches through 64KB of data looking for matches. It's a fairly standard app: it scans the backbuffer for an occurrence of the next byte to be encoded, then measures the match length when it finds one. After finding the longest match (by seeking through the entire backbuffer), it encodes the data appropriately to save space.
It works great. With the 64KB buffer, a 740KB, 24-bit .bmp image compresses to 175KB, where ZIP (which also adds Huffman coding to the mix) only gives 148KB. A mere 27KB difference from ZIP ain't too bad, right? Thing is, ZIP compresses in a mere instant, and my app takes upwards of a minute! ZIP does more work per round than my app, yet mine takes significantly longer to encode. What kinds of optimizations are available? Comparing multiple bytes simultaneously by using larger variables? Keeping track of which bytes are in the backbuffer so that bytes without significant matches aren't checked? It's kinda bugging me how to improve performance. It's not too important that I do, but I'd like to know how if anyone has any tips.
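For reference, the brute-force search described in the post can be sketched like this in C. The function and variable names are my own illustration, not the original program; it simply scans the whole backbuffer for the longest run matching the data at the current position:

```c
#include <stddef.h>

/* Brute-force longest-match search, as described in the post:
 * scan every position before `pos` and measure how long the data
 * there matches the data at `pos`. Names are hypothetical. */
static size_t longest_match(const unsigned char *buf, size_t pos,
                            size_t len, size_t *match_pos)
{
    size_t best_len = 0;
    for (size_t start = 0; start < pos; start++) {
        size_t run = 0;
        /* Overlapping matches are allowed, as in standard LZ77. */
        while (pos + run < len && buf[start + run] == buf[pos + run])
            run++;
        if (run > best_len) {
            best_len = run;
            *match_pos = start;
        }
    }
    return best_len;
}
```

This is O(n) candidate positions per byte encoded, each with an O(n) comparison in the worst case, which is exactly why the whole-backbuffer scan ends up orders of magnitude slower than zlib's hash-chain search.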
Kyoufu Kawa | Intends to keep Rom Hacking in one piece until the end | Since: 11-18-05 | From: Catgirl Central Station | Last post: 6431 days | Last view: 6431 days
If you want it, you can have EliteMap's compression module, which I think is fairly fast considering it's in VB... it should be rewritable, considering it was based on C code...
Guy Perfect | Since: 11-18-05 | Last post: 6433 days | Last view: 6431 days
If I wanted source code, I would have asked for source code. zlib is open source, and it's faster and offers better compression for the most part. But that's not what I want.
I specifically want to optimize the code I conjured up all by myself. After all, you can't improve at your art if you just slap in someone else's work.
Dwedit | Rope フクト オン フォニクス | Since: 11-17-05 | From: Chicago! | Last post: 6432 days | Last view: 6432 days
I used the STL multimap class in C++ to make an LZ compressor once. For each byte value, you build a list of all the locations where that byte occurs. So if you're looking for FF 46 A4 or something, you go through the FF list and see which of those positions are followed by 46, then A4. That way you don't have to scan through the file to find where the FFs are; you've already stored those locations in a table.
It takes a few minutes to do a 16MB file, though, but 64KB takes no time at all. It's not zlib-fast. I'm sure it would be a lot faster with the simple 64KB-buffer approach, where the table stores the locations of two-byte combos, so looking up a two-byte combo is instantaneous. The problem with that method is that you might not find optimal matches.
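A minimal C sketch of the two-byte-combo table described above, assuming hash chains rather than an STL multimap: every position is indexed by the byte pair it starts with, so the match search only visits candidates that already agree on the first two bytes. All names here are illustrative assumptions, not anyone's actual code:

```c
#include <stdint.h>
#include <stdlib.h>

#define NBUCKETS 65536  /* one bucket per two-byte combination */

/* head[pair] holds the most recent position starting with that pair;
 * prev[p] chains back to the previous position with the same pair.
 * Structure names are hypothetical, for illustration only. */
typedef struct {
    int32_t head[NBUCKETS];
    int32_t *prev;
} PairIndex;

static void index_init(PairIndex *ix, size_t len)
{
    for (size_t i = 0; i < NBUCKETS; i++)
        ix->head[i] = -1;  /* -1 marks an empty chain */
    ix->prev = malloc(len * sizeof *ix->prev);
}

/* Record position `pos` under the two-byte value it starts with.
 * The caller inserts each position as encoding advances past it. */
static void index_insert(PairIndex *ix, const unsigned char *buf, size_t pos)
{
    unsigned key = (unsigned)buf[pos] << 8 | buf[pos + 1];
    ix->prev[pos] = ix->head[key];
    ix->head[key] = (int32_t)pos;
}

/* Longest match for buf[pos..], visiting only earlier positions
 * that share the same first two bytes. */
static size_t indexed_match(const PairIndex *ix, const unsigned char *buf,
                            size_t pos, size_t len, size_t *match_pos)
{
    size_t best = 0;
    if (pos + 1 >= len)
        return 0;
    unsigned key = (unsigned)buf[pos] << 8 | buf[pos + 1];
    for (int32_t cand = ix->head[key]; cand >= 0; cand = ix->prev[cand]) {
        size_t run = 2;  /* first two bytes already known to match */
        while (pos + run < len && buf[cand + run] == buf[pos + run])
            run++;
        if (run > best) {
            best = run;
            *match_pos = (size_t)cand;
        }
    }
    return best;
}
```

Compared with the whole-backbuffer scan, this skips every position that can't possibly yield a match of length 2 or more, which is where most of the time goes on typical data. As noted above, greedy lookup in such a table can miss optimal parses; zlib mitigates that with lazy matching.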