Acmlm's Board - I3 Archive - Programming - Let's Rant About LZ!!!
Dwedit
Posts: 107/116
I used the STL multimap class in C++ to make an LZ compressor once. For each byte value, you build a list of every location where that value occurs. So if you're looking for FF 46 A4 or something, you go through the FF list and check which of those positions are followed by 46, then A4. That way you don't have to scan through the file to find where the FFs are; you've already stored those locations in a table.
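Roughly, the idea looks like this (a minimal C++ sketch; the names are made up, since the actual code isn't posted):

#include <cstdint>
#include <cstddef>
#include <map>
#include <vector>

// Index: for each byte value, every position where it occurs in the buffer.
static std::multimap<uint8_t, size_t> build_index(const std::vector<uint8_t>& data)
{
    std::multimap<uint8_t, size_t> index;
    for (size_t i = 0; i < data.size(); i++)
        index.insert({data[i], i});
    return index;
}

// Longest earlier match for the data starting at 'pos', visiting only the
// positions whose first byte already matches (the "FF list" above).
static size_t longest_match(const std::vector<uint8_t>& data, size_t pos,
                            const std::multimap<uint8_t, size_t>& index,
                            size_t* match_pos)
{
    size_t best_len = 0;
    auto range = index.equal_range(data[pos]);
    for (auto it = range.first; it != range.second; ++it) {
        size_t cand = it->second;
        if (cand >= pos)           // positions were stored in increasing order
            break;
        size_t len = 0;
        while (pos + len < data.size() && data[cand + len] == data[pos + len])
            len++;
        if (len > best_len) { best_len = len; *match_pos = cand; }
    }
    return best_len;
}

The compressor would then call longest_match at each position and emit a (distance, length) pair whenever the match is long enough to be worth encoding.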

It takes a few minutes to do a 16 MB file, though; a 64 KB file takes no time at all. It's not zlib-fast.

I'm sure it would be a lot faster if you did the simple 64K-table way, where the table holds the locations of every two-byte combo, so looking up a two-byte combo is instantaneous. The problem with that method is that you might not find optimal matches.
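That two-byte-table variant might look something like this (again just a sketch, not code from either post): every 16-bit pair maps straight to its own bucket of positions, so finding candidate matches is a single array lookup.

#include <cstdint>
#include <cstddef>
#include <vector>

// One bucket per two-byte combination (65536 of them), holding the positions
// where that pair occurs. Candidate matches come from a single table lookup.
struct PairIndex {
    std::vector<std::vector<size_t>> buckets;

    void build(const std::vector<uint8_t>& data) {
        buckets.assign(65536, {});
        for (size_t i = 0; i + 1 < data.size(); i++) {
            uint16_t key = (uint16_t)((data[i] << 8) | data[i + 1]);
            buckets[key].push_back(i);
        }
    }

    // All positions whose next two bytes equal data[pos], data[pos + 1].
    const std::vector<size_t>& candidates(const std::vector<uint8_t>& data, size_t pos) const {
        uint16_t key = (uint16_t)((data[pos] << 8) | data[pos + 1]);
        return buckets[key];
    }
};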
Guy Perfect
Posts: 430/451
If I wanted source code, I would have asked for source code. zlib's source is freely available, and it's faster and offers better compression for the most part. But that's not what I want.

I specifically want to optimize the code I conjured up all by myself. After all, you can't improve in your art if you just slap in someone else's work.
Kyoufu Kawa
Posts: 1321/1353
If you want it, you can have EliteMap's compression module, which I think is fairly fast considering it's in VB... should be rewritable, considering it was based on C code...
Guy Perfect
Posts: 429/451
I've just coded up a simple LZ compressor in C that searches through a 64 KB window of data looking for matches. It's a fairly standard approach: it scans the backbuffer for the next byte to be encoded, and when it finds an occurrence it measures the match length. After finding the longest match (by scanning the entire backbuffer), it encodes the data appropriately to save space.
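In outline (just a sketch of that kind of brute-force search, not the actual code from the post), the inner loop would be something like:

#include <cstdint>
#include <cstddef>

// Scan every position in the backbuffer and measure how long the match runs;
// keep the longest one. This is the O(window * length) part that hurts.
static size_t find_longest_match(const uint8_t* data, size_t data_len, size_t pos,
                                 size_t window, size_t* match_pos)
{
    size_t start = (pos > window) ? pos - window : 0;
    size_t best_len = 0;
    for (size_t cand = start; cand < pos; cand++) {
        size_t len = 0;
        while (pos + len < data_len && data[cand + len] == data[pos + len])
            len++;
        if (len > best_len) { best_len = len; *match_pos = cand; }
    }
    return best_len;
}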

It works great. With the 64 KB buffer it compresses a 740 KB, 24-bit .bmp image to 175 KB, where ZIP compression (which also adds Huffman coding to the mix) gets it down to 148 KB. A mere ~30 KB difference from ZIP ain't too bad, right?

Thing is, ZIP compresses in a mere instant, and my app takes upwards of a minute! ZIP does more work per pass than my app, yet mine takes significantly longer to encode. What kinds of optimizations are available? Comparing multiple bytes simultaneously by using larger variables? Keeping track of which bytes are in the backbuffer, so bytes that don't have significant matches won't get checked?
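For the first of those ideas, a common trick (sketch only, assuming unaligned reads are done through memcpy) is to compare a machine word at a time and fall back to bytes once the words differ:

#include <cstdint>
#include <cstddef>
#include <cstring>

// Compare data[a..] against data[b..] eight bytes at a time.
// memcpy sidesteps alignment/aliasing issues; compilers emit plain loads.
static size_t match_length(const uint8_t* data, size_t a, size_t b, size_t limit)
{
    size_t len = 0;
    while (len + 8 <= limit) {
        uint64_t wa, wb;
        std::memcpy(&wa, data + a + len, 8);
        std::memcpy(&wb, data + b + len, 8);
        if (wa != wb)
            break;                 // mismatch is somewhere in these 8 bytes
        len += 8;
    }
    while (len < limit && data[a + len] == data[b + len])
        len++;
    return len;
}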

It's kinda bugging me how to improve the performance. It's not too important that I do, but I'd like to know how, if anyone has any tips.