java - Deflate data into a fixed length buffer


For a project I work with large volumes of encrypted data that is read-heavy. Since decryption takes longer than inflating, I am willing to deflate the data before encrypting and storing it.

The difficulty I'm facing is that the data is stored in fixed-length chunks or pages. These chunks need to remain fixed-length on disk to allow fast page lookup. So I'm trying to deflate as much data as possible into a fixed-size page.

At the moment I'm trying to find a good approach to do so. However, I'm a bit stuck on tracking the compressed size each time data is added once the uncompressed size gets near the page limit (since the data can in theory grow a bit under compression if its entropy is high). Currently I'm trying the following approach:

    import java.io.ByteArrayOutputStream;
    import java.util.zip.Deflater;
    import java.util.zip.DeflaterOutputStream;
    import java.util.zip.Inflater;

    final Deflater deflater = new Deflater(); // Deflater.HUFFMAN_ONLY
    final Inflater inflater = new Inflater();

    long start;
    long duration;
    int freeSpace = size;
    int fill = 0;
    byte[] page;
    final byte[] buf = new byte[8];

    deflater.reset();
    // syncFlush = true: flush() on the stream will sync-flush the deflater
    try (ByteArrayOutputStream boas = new ByteArrayOutputStream(size);
            DeflaterOutputStream dos = new DeflaterOutputStream(boas, deflater, size, true)) {
        start = System.currentTimeMillis();
        while (true) {
            long compressable = random.nextLong(30) + 100;

            fill += ByteTools.longToByteArray(compressable, buf, 0, 8);
            dos.write(buf);
            // compressed bytes emitted so far; input may still be buffered in the deflater
            freeSpace = size - boas.size();

            if (freeSpace < 16) {
                System.out.println(boas.size());
                dos.finish();
                System.out.println(boas.size());
                page = boas.toByteArray();
                break;
            }
        }
        duration = System.currentTimeMillis() - start;
    }

The above code is functional for deflating, but the length of the output increases dramatically upon dos.finish(). That is not surprising, but is there a way of determining the resulting output size beforehand, or is there another compression scheme more appropriate for this task?

Since padding can be applied, there is no need for a 100% accurate output size; a range of 95%-100% would be perfect and performant enough. Of course, going over 100% should be prevented at all times.
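As a point of reference for the output-size question: deflate output does have a hard worst-case bound, because incompressible data falls back to stored blocks with about 5 bytes of header per 16 KiB. Java has no equivalent of zlib's deflateBound(), but the bound can be approximated by hand. A minimal sketch, assuming the constants used by zlib's deflateBound(); the helper name deflateWorstCase is hypothetical, not a JDK method:

    // Hypothetical helper approximating zlib's deflateBound(): a conservative
    // upper bound on the compressed size of n input bytes. Incompressible data
    // is emitted as stored blocks (~5 bytes of header per 16 KiB); the constant
    // covers the zlib wrapper and the final block written by finish().
    static int deflateWorstCase(int n) {
        return n + (n >> 12) + (n >> 14) + (n >> 25) + 13;
    }

Reserving that bound for the bytes written since the last sync flush, instead of a fixed 16 bytes, should guarantee that dos.finish() can never push the page past size.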

Based on trial and error I adapted the routine a bit, which gives me nice results, but I do not feel comfortable with this solution yet:

    while (true) {
        long compressable = random.nextLong(30) + 100;

        block += ByteTools.longToByteArray(compressable, buf, 0, 8);
        dos.write(buf);

        if (block >= check) {
            //check /= 2;
            dos.flush(); // sync flush: all input so far is now in boas
            fill += block;
            block = 0;
            // write up to the remaining free space (minus a margin) before the next flush
            check = (size - boas.size()) - 8;
            System.out.println(check);
        }

        if (check < 16) {
            fill += block;
            dos.finish();
            page = boas.toByteArray();
            break;
        }
    }

This solution has a compression ratio not far from the original compression ratio (of one single block) and stays within 8 bytes of the required output size. The check size decreases in the following steps:

    16384
    8088
    4259
    2207
    1110
    540
    246
    94
    32
    3

resulting in 9 flushes during page generation and 1 finish.

Deflate is not suited for this, but it can be coerced into getting very close to filling a block if you let it try a few times. Take a look at fitblk (in zlib's examples directory), which does exactly what you are asking for: three compression passes, including two decompressions in between them.

The idea is to compress more than a block's worth of data, decompress just the block size, and then recompress only what was decompressed. Doing that twice gets you very close to, or a lot of the time exactly, filling the block.
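For reference, here is a minimal Java sketch of that three-pass idea. The EXCESS and MARGIN constants are borrowed from zlib's fitblk.c; the helper names (deflateUpTo, inflateAll, fit) are assumptions of this sketch, raw deflate is used, and the whole input is held in memory for brevity:

    import java.io.ByteArrayOutputStream;
    import java.util.Arrays;
    import java.util.zip.DataFormatException;
    import java.util.zip.Deflater;
    import java.util.zip.Inflater;

    public class FitBlock {
        static final int EXCESS = 256; // first pass may overshoot the block by this much
        static final int MARGIN = 8;   // headroom so the final pass can finish in-block

        /** Raw-deflate the input, truncating the output at limit bytes. */
        static byte[] deflateUpTo(byte[] input, int limit) {
            Deflater def = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
            def.setInput(input);
            def.finish();
            byte[] out = new byte[limit];
            int n = 0;
            while (!def.finished() && n < limit) {
                n += def.deflate(out, n, limit - n);
            }
            def.end();
            return n == limit ? out : Arrays.copyOf(out, n);
        }

        /** Inflate as much of a (possibly truncated) raw deflate stream as possible. */
        static byte[] inflateAll(byte[] compressed) throws DataFormatException {
            Inflater inf = new Inflater(true);
            inf.setInput(compressed);
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            while (!inf.finished() && !inf.needsInput()) {
                int n = inf.inflate(buf);
                if (n == 0) break; // truncated stream: nothing more can be decoded
                bos.write(buf, 0, n);
            }
            inf.end();
            return bos.toByteArray();
        }

        /** Compress input so the result is close to, but never over, size bytes. */
        static byte[] fit(byte[] input, int size) throws DataFormatException {
            // Pass 1: compress with EXCESS bytes of slack; if the whole input
            // already fits in the block, we are done.
            byte[] first = deflateUpTo(input, size + EXCESS);
            if (first.length <= size) return first;

            // Pass 2: keep only the data recoverable from the truncated first
            // pass, and recompress it into at most size - MARGIN bytes.
            byte[] second = deflateUpTo(inflateAll(first), size - MARGIN);
            if (second.length < size - MARGIN) return second; // completed early

            // Pass 3: decompress and recompress once more; the recovered data
            // is now small enough that the stream completes within the block.
            return deflateUpTo(inflateAll(second), size);
        }
    }

The property exploited here is that inflating a truncated deflate stream still recovers exactly the data that was fully encoded in those bytes, so recompressing that recovered data lands very close to the truncation point; MARGIN absorbs the small variation between the passes.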

