<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Compressing Big Data: when the rate of convergence to the entropy matters</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Filippo Mignosi</string-name>
          <email>Filippo.Mignosi@univaq.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Computer Science Department, University of L'Aquila</institution>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this talk we discuss the rate of convergence to the entropy of dictionary-based compressors. A faster rate of convergence to the theoretical compression limit should correspond to better compression in practice, but constants also matter. Therefore, an analysis of the rate of convergence must also cover the “transient” phase. Concerning dictionary-based compressors, it is known that LZ78-like compressors converge faster than LZ77-like compressors when the texts to be compressed are generated by a memoryless source. In practice, however, LZ77-like compressors seem to perform better. This appears to be due to the effect of an Optimal Parsing strategy (which can be applied in both the LZ77 and the LZ78 case) rather than to the texts being generated by a memoryless source. To the best of our knowledge, there are no theoretical results concerning the rate of convergence to the entropy in either the LZ77 or the LZ78 case when an Optimal Parsing strategy is used. We discuss some experimental results on LZ78 showing that the rate of convergence to the entropy exhibits a kind of wave effect that becomes bigger and bigger as the entropy of the memoryless source decreases. It can be a tsunami for a zero-entropy source.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
  </body>
  <back>
    <ref-list />
  </back>
</article>