Compression utility benchmarks on Linux

To test the performance of Linux compression utilities, I compressed and decompressed a 336MB directory. Here are the results.
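The post doesn't show the exact commands used, but a run like the ones timed below can be reproduced with the shell's `time` keyword. This is a minimal sketch; `testdir` is a stand-in for the 336MB test directory (here a tiny sample directory is created so the commands run as written):

```shell
# Create a small stand-in for the benchmark directory.
mkdir -p testdir && echo "sample data" > testdir/file.txt

# Time compression; the "real" figure corresponds to the Time column below.
time tar czf testdir.tar.gz testdir

# Check the resulting archive size (the Size column below).
ls -l testdir.tar.gz

# Time decompression the same way.
time tar xzf testdir.tar.gz
```

The same pattern applies to the other utilities, e.g. `time jar cf testdir.jar testdir` or `time zip -rq testdir.zip testdir`.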

Compression

Version    Command          Time        Size (MB)
1.4.1_01   jar cf           14m41.860s  206
0.93       fastjar cf       4m7.200s    206
2.3-12     zip -rq          3m43.058s   206
1.3.19     tar cf           0m42.199s   315
1.3.19     tar czf (gzip)   3m34.422s   162
1.3.19     tar cjf (bzip2)  8m11.907s   153
Decompression

Version    Command          Time
1.4.1_01   jar xf           2m2.162s
0.93       fastjar xf       1m16.303s
5.50       unzip -qq        1m10.796s
1.3.19     tar xf           1m20.216s
1.3.19     tar xzf (gzip)   1m6.717s
1.3.19     tar xjf (bzip2)  3m15.317s

Comments

  1. I remember watching you perform this exercise a while back. What I didn't know is that [jar] is basically just zip. Since .jar files are passed over the network so often, I wonder why they didn't go for something a bit tighter and faster. Would it interfere with how Java pulls classes from the jar file?

    Comment by jason on March 27, 2003 @ 2:34 pm
  2. One reason is that there is no standard tar (that I know of) on Mac and Windows, while zip is more universal. The only difference between jar files and zip files is that jar adds a manifest.

    Comment by dan on March 27, 2003 @ 7:36 pm
  3. Hello sir,
    I want to know whether there is any data loss during compression and decompression. Is it always safe with regard to data loss?

    Comment by dushyant on June 11, 2004 @ 1:56 am
  4. Dushyant: There is no data loss. They would be pretty useless utilities if you lost your data in the process.

    Comment by dan on June 11, 2004 @ 10:22 am
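That losslessness is easy to verify from the shell with a round trip: compress a file, decompress it to a new name, and compare the result byte-for-byte. A minimal sketch (filenames are illustrative, not from the original benchmark):

```shell
# Make a test file.
echo "important data" > original.txt

# Compress to a new file, then decompress to a third file,
# leaving the original untouched for comparison.
gzip -c original.txt > original.txt.gz
gunzip -c original.txt.gz > roundtrip.txt

# cmp exits 0 only if the files are byte-identical.
cmp original.txt roundtrip.txt && echo "files are byte-identical"
```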
