Diffstat
| -rw-r--r-- | algorithm.txt (renamed from algorithm.doc) | 122 |
1 file changed, 115 insertions, 7 deletions
diff --git a/algorithm.doc b/algorithm.txt
index 01902af..cdc830b 100644
--- a/algorithm.doc
+++ b/algorithm.txt
| @@ -1,6 +1,6 @@ | |||
| 1 | 1. Compression algorithm (deflate) | 1 | 1. Compression algorithm (deflate) |
| 2 | 2 | ||
| 3 | The deflation algorithm used by zlib (also zip and gzip) is a variation of | 3 | The deflation algorithm used by gzip (also zip and zlib) is a variation of |
| 4 | LZ77 (Lempel-Ziv 1977, see reference below). It finds duplicated strings in | 4 | LZ77 (Lempel-Ziv 1977, see reference below). It finds duplicated strings in |
| 5 | the input data. The second occurrence of a string is replaced by a | 5 | the input data. The second occurrence of a string is replaced by a |
| 6 | pointer to the previous string, in the form of a pair (distance, | 6 | pointer to the previous string, in the form of a pair (distance, |
| @@ -35,12 +35,12 @@ parameter of deflateInit). So deflate() does not always find the longest | |||
| 35 | possible match but generally finds a match which is long enough. | 35 | possible match but generally finds a match which is long enough. |
| 36 | 36 | ||
| 37 | deflate() also defers the selection of matches with a lazy evaluation | 37 | deflate() also defers the selection of matches with a lazy evaluation |
| 38 | mechanism. After a match of length N has been found, deflate() searches for a | 38 | mechanism. After a match of length N has been found, deflate() searches for |
| 39 | longer match at the next input byte. If a longer match is found, the | 39 | a longer match at the next input byte. If a longer match is found, the |
| 40 | previous match is truncated to a length of one (thus producing a single | 40 | previous match is truncated to a length of one (thus producing a single |
| 41 | literal byte) and the longer match is emitted afterwards. Otherwise, | 41 | literal byte) and the process of lazy evaluation begins again. Otherwise, |
| 42 | the original match is kept, and the next match search is attempted only | 42 | the original match is kept, and the next match search is attempted only N |
| 43 | N steps later. | 43 | steps later. |
| 44 | 44 | ||
| 45 | The lazy match evaluation is also subject to a runtime parameter. If | 45 | The lazy match evaluation is also subject to a runtime parameter. If |
| 46 | the current match is long enough, deflate() reduces the search for a longer | 46 | the current match is long enough, deflate() reduces the search for a longer |
| @@ -57,6 +57,8 @@ but saves time since there are both fewer insertions and fewer searches. | |||
| 57 | 57 | ||
| 58 | 2. Decompression algorithm (inflate) | 58 | 2. Decompression algorithm (inflate) |
| 59 | 59 | ||
| 60 | 2.1 Introduction | ||
| 61 | |||
| 60 | The real question is, given a Huffman tree, how to decode fast. The most | 62 | The real question is, given a Huffman tree, how to decode fast. The most |
| 61 | important realization is that shorter codes are much more common than | 63 | important realization is that shorter codes are much more common than |
| 62 | longer codes, so pay attention to decoding the short codes fast, and let | 64 | longer codes, so pay attention to decoding the short codes fast, and let |
| @@ -91,8 +93,114 @@ interesting to see if optimizing the first level table for other | |||
| 91 | applications gave values within a bit or two of the flat code size. | 93 | applications gave values within a bit or two of the flat code size. |
| 92 | 94 | ||
| 93 | 95 | ||
| 96 | 2.2 More details on the inflate table lookup | ||
| 97 | |||
| 98 | Ok, you want to know what this cleverly obfuscated inflate tree actually | ||
| 99 | looks like. You are correct that it's not a Huffman tree. It is simply a | ||
| 100 | lookup table for the first, let's say, nine bits of a Huffman symbol. The | ||
| 101 | symbol could be as short as one bit or as long as 15 bits. If a particular | ||
| 102 | symbol is shorter than nine bits, then that symbol's translation is duplicated | ||
| 103 | in all those entries that start with that symbol's bits. For example, if the | ||
| 104 | symbol is four bits, then it's duplicated 32 times in a nine-bit table. If a | ||
| 105 | symbol is nine bits long, it appears in the table once. | ||
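
As a sketch of how that duplication can be done when the table is built (illustrative code only, not the actual builder in inftrees.c; it assumes the next table bits of input are used directly as the index, with the code in the high bits as in the written-out example further below, whereas a real deflate stream packs its bits in a different order):

    /* Minimal sketch, not zlib's real builder: copy a symbol whose canonical
       code is `code` and is `len` bits long (len <= TABLE_BITS) into a
       first-level table indexed by the next TABLE_BITS input bits, read
       most-significant-first.  The code fixes the top `len` bits of the
       index, so the entry is repeated in the 1 << (TABLE_BITS - len) slots
       whose remaining low bits are "don't care". */
    #include <stddef.h>

    #define TABLE_BITS 9

    struct entry { int sym; int bits; };     /* translation and bits to gobble */

    static void fill(struct entry *table, unsigned code, int len, int sym)
    {
        size_t copies = (size_t)1 << (TABLE_BITS - len);
        size_t first  = (size_t)code << (TABLE_BITS - len);

        for (size_t i = 0; i < copies; i++) {
            table[first + i].sym  = sym;   /* e.g. a 4-bit symbol lands in 32 slots */
            table[first + i].bits = len;
        }
    }
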
| 106 | |||
| 107 | If the symbol is longer than nine bits, then that entry in the table points | ||
| 108 | to another similar table for the remaining bits. Again, there are duplicated | ||
| 109 | entries as needed. The idea is that most of the time the symbol will be short | ||
| 110 | and there will only be one table lookup. (That's the whole idea behind data | ||
| 111 | compression in the first place.) For the less frequent long symbols, there | ||
| 112 | will be two lookups. If you had a compression method with really long | ||
| 113 | symbols, you could have as many levels of lookups as is efficient. For | ||
| 114 | inflate, two is enough. | ||
| 115 | |||
| 116 | So a table entry either points to another table (in which case nine bits in | ||
| 117 | the above example are gobbled), or it contains the translation for the symbol | ||
| 118 | and the number of bits to gobble. Then you start again with the next | ||
| 119 | ungobbled bit. | ||
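
In code, one possible shape for such an entry and for a single decode step is sketched below. This is illustrative only; the field names and the peek() interface are made up, and zlib packs the same information differently. With the nine-bit first level described above, root_bits would be 9; in the scaled-down example below it is 3.

    #include <stddef.h>

    /* An entry either carries a symbol plus the bits to gobble, or it links
       to a second-level table together with that table's index width. */
    struct entry {
        const struct entry *sub;  /* non-NULL: second-level table for the rest */
        int bits;                 /* bits to gobble, or index width of sub */
        int sym;                  /* decoded symbol when sub == NULL */
    };

    /* One decode step.  peek(skip, n) must return the n ungobbled bits that
       start skip bits ahead, most-significant-first, without consuming them.
       The symbol is returned and the total bits to gobble stored in *used. */
    static int decode(const struct entry *root, int root_bits,
                      unsigned (*peek)(int skip, int n), int *used)
    {
        const struct entry *e = &root[peek(0, root_bits)];

        if (e->sub == NULL) {              /* short code: one lookup */
            *used = e->bits;
            return e->sym;
        }
        /* long code: the first level gobbles root_bits, then the remaining
           bits of the code index the sub-table */
        e = &e->sub[peek(root_bits, e->bits)];
        *used = root_bits + e->bits;
        return e->sym;
    }
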
| 120 | |||
| 121 | You may wonder: why not just have one lookup table for however many bits the | ||
| 122 | longest symbol is? The reason is that if you do that, you end up spending | ||
| 123 | more time filling in duplicate symbol entries than you do actually decoding. | ||
| 124 | At least for deflate's output, which generates new trees every several tens | ||
| 125 | of kbytes. You can imagine that filling in a 2^15 entry table for a 15-bit code | ||
| 126 | would take too long if you're only decoding several thousand symbols. At the | ||
| 127 | other extreme, you could make a new table for every bit in the code. In fact, | ||
| 128 | that's essentially a Huffman tree. But then you spend too much time | ||
| 129 | traversing the tree while decoding, even for short symbols. | ||
| 130 | |||
| 131 | So the number of bits for the first lookup table is a trade-off between the | ||
| 132 | time to fill out the table and the time spent looking at the second level and | ||
| 133 | above of the table. | ||
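
To put rough numbers on that trade (purely illustrative arithmetic; the 5000-symbol block size is a made-up figure):

    /* Rough arithmetic only: entries that would have to be filled in each
       time deflate switches to a new set of Huffman codes, relative to a
       hypothetical block of 5000 symbols. */
    #include <stdio.h>

    int main(void)
    {
        int symbols = 5000;                        /* hypothetical block size */
        printf("flat 15-bit table: %d entries, %.1f filled per symbol\n",
               1 << 15, (double)(1 << 15) / symbols);
        printf("9-bit first level: %d entries, %.2f filled per symbol\n",
               1 << 9, (double)(1 << 9) / symbols); /* plus a few small sub-tables */
        return 0;
    }
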
| 134 | |||
| 135 | Here is an example, scaled down: | ||
| 136 | |||
| 137 | The code being decoded, with 10 symbols, from 1 to 6 bits long: | ||
| 138 | |||
| 139 | A: 0 | ||
| 140 | B: 10 | ||
| 141 | C: 1100 | ||
| 142 | D: 11010 | ||
| 143 | E: 11011 | ||
| 144 | F: 11100 | ||
| 145 | G: 11101 | ||
| 146 | H: 11110 | ||
| 147 | I: 111110 | ||
| 148 | J: 111111 | ||
| 149 | |||
| 150 | Let's make the first table three bits long (eight entries): | ||
| 151 | |||
| 152 | 000: A,1 | ||
| 153 | 001: A,1 | ||
| 154 | 010: A,1 | ||
| 155 | 011: A,1 | ||
| 156 | 100: B,2 | ||
| 157 | 101: B,2 | ||
| 158 | 110: -> table X (gobble 3 bits) | ||
| 159 | 111: -> table Y (gobble 3 bits) | ||
| 160 | |||
| 161 | Each entry is what the bits decode to and how many bits that is, i.e. how | ||
| 162 | many bits to gobble. Or the entry points to another table, with the number of | ||
| 163 | bits to gobble implicit in the size of the table. | ||
| 164 | |||
| 165 | Table X is two bits long since the longest code starting with 110 is five bits | ||
| 166 | long: | ||
| 167 | |||
| 168 | 00: C,1 | ||
| 169 | 01: C,1 | ||
| 170 | 10: D,2 | ||
| 171 | 11: E,2 | ||
| 172 | |||
| 173 | Table Y is three bits long since the longest code starting with 111 is six | ||
| 174 | bits long: | ||
| 175 | |||
| 176 | 000: F,2 | ||
| 177 | 001: F,2 | ||
| 178 | 010: G,2 | ||
| 179 | 011: G,2 | ||
| 180 | 100: H,2 | ||
| 181 | 101: H,2 | ||
| 182 | 110: I,3 | ||
| 183 | 111: J,3 | ||
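
Written out as code, the three tables above and a decode of the bit sequence for A B E J A might look like this. It is a self-contained sketch for illustration only: bits are taken from a string of '0' and '1' characters in the order written above, which is not how a real deflate stream stores them, and the entry layout is the simple one sketched earlier, not the one inftrees.c builds.

    #include <stdio.h>
    #include <string.h>

    struct entry {
        const struct entry *sub;  /* non-NULL: second-level table */
        int bits;                 /* bits to gobble, or index width of sub */
        char sym;                 /* decoded symbol when sub == NULL */
    };

    static const struct entry table_x[4] = {   /* codes starting with 110 */
        {0, 1, 'C'}, {0, 1, 'C'}, {0, 2, 'D'}, {0, 2, 'E'}
    };
    static const struct entry table_y[8] = {   /* codes starting with 111 */
        {0, 2, 'F'}, {0, 2, 'F'}, {0, 2, 'G'}, {0, 2, 'G'},
        {0, 2, 'H'}, {0, 2, 'H'}, {0, 3, 'I'}, {0, 3, 'J'}
    };
    static const struct entry root[8] = {      /* first level, three bits */
        {0, 1, 'A'}, {0, 1, 'A'}, {0, 1, 'A'}, {0, 1, 'A'},
        {0, 2, 'B'}, {0, 2, 'B'}, {table_x, 2, 0}, {table_y, 3, 0}
    };

    /* Return the next n bits starting at pos, most-significant-first,
       reading missing bits past the end of the string as zeros. */
    static unsigned peek(const char *bits, size_t len, size_t pos, int n)
    {
        unsigned v = 0;
        for (int i = 0; i < n; i++) {
            v <<= 1;
            if (pos + (size_t)i < len && bits[pos + i] == '1')
                v |= 1;
        }
        return v;
    }

    int main(void)
    {
        const char *in = "0" "10" "11011" "111111" "0";   /* A B E J A */
        size_t len = strlen(in), pos = 0;

        while (pos < len) {
            const struct entry *e = &root[peek(in, len, pos, 3)];
            if (e->sub != NULL) {              /* long code: two lookups */
                pos += 3;                      /* the first level gobbles 3 bits */
                e = &e->sub[peek(in, len, pos, e->bits)];
            }
            putchar(e->sym);
            pos += e->bits;                    /* gobble the rest of the code */
        }
        putchar('\n');                         /* prints ABEJA */
        return 0;
    }

Note that the root lookup always peeks three bits, even for the one-bit code A; the duplicated entries are what make that work.
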
| 184 | |||
| 185 | So what we have here are three tables with a total of 20 entries that had to | ||
| 186 | be constructed. That's compared to 64 entries for a single table. Or | ||
| 187 | compared to 16 entries for a Huffman tree (six two entry tables and one four | ||
| 188 | entry table). Assuming that the code ideally represents the probability of | ||
| 189 | the symbols, it takes on the average 1.25 lookups per symbol. That's compared | ||
| 190 | to one lookup for the single table, or 1.66 lookups per symbol for the | ||
| 191 | Huffman tree. | ||
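
The 1.25 figure can be checked directly. Under the ideal-probability assumption a symbol whose code is len bits long appears with probability 2^-len, and only A and B (codes of three bits or fewer) resolve in a single lookup; everything else takes two. A quick check, for illustration:

    #include <stdio.h>

    int main(void)
    {
        int len[] = {1, 2, 4, 5, 5, 5, 5, 5, 6, 6};    /* code lengths of A..J */
        double avg = 0.0;

        for (int i = 0; i < 10; i++) {
            double p = 1.0 / (1 << len[i]);            /* ideal probability 2^-len */
            avg += p * (len[i] <= 3 ? 1 : 2);          /* one or two lookups */
        }
        printf("average lookups per symbol: %.2f\n", avg);   /* 1.25 */
        return 0;
    }
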
| 192 | |||
| 193 | There, I think that gives you a picture of what's going on. For inflate, the | ||
| 194 | meaning of a particular symbol is often more than just a letter. It can be a | ||
| 195 | byte (a "literal"), or it can be a length or a distance, which indicates a | ||
| 196 | base value and a number of extra bits to fetch after the code, whose value is | ||
| 197 | added to the base value. Or it might be the special end-of-block code. The | ||
| 198 | data structures created in inftrees.c try to encode all that information | ||
| 199 | compactly in the tables. | ||
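
As a rough picture of the kind of information each entry therefore has to carry (an illustrative layout only, not the actual structure built by inftrees.c, which packs the equivalent fields more compactly):

    /* Illustrative only: an entry says what the decoded symbol means, how many
       code bits to gobble, and either the literal byte, a base value plus a
       count of extra input bits to fetch and add to it, or a link to a
       second-level table. */
    enum kind { LITERAL, LENGTH, DISTANCE, END_OF_BLOCK, LINK };

    struct inflate_entry {
        enum kind kind;        /* what the decoded symbol means */
        unsigned char bits;    /* code bits to gobble (index width for LINK) */
        unsigned char extra;   /* extra input bits to fetch and add to base */
        unsigned short base;   /* literal byte, base length/distance, or
                                  offset of the second-level table */
    };
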
| 200 | |||
| 201 | |||
| 94 | Jean-loup Gailly Mark Adler | 202 | Jean-loup Gailly Mark Adler |
| 95 | gzip@prep.ai.mit.edu madler@alumni.caltech.edu | 203 | jloup@gzip.org madler@alumni.caltech.edu |
| 96 | 204 | ||
| 97 | 205 | ||
| 98 | References: | 206 | References: |
