<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>First Attempt of Rapid Compression of 2D Images Based on Histograms Analysis</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>
            <given-names>Danuta</given-names>
            <surname>Jama</surname>
          </string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute of Mathematics, Silesian University of Technology</institution>
          ,
          <addr-line>Kaszubska 23, 44-100 Gliwice</addr-line>
          ,
          <country country="PL">Poland</country>
        </aff>
      </contrib-group>
      <fpage>9</fpage>
      <lpage>14</lpage>
      <abstract>
        <p>Expanding technological advances and digitization generate more and more data, so more and more space is needed to store it. To minimize the amount of stored data, efficient compression algorithms are needed. In this paper, the idea of using histograms to analyze 2D images for the purpose of compression is presented. Performance tests were carried out, presented and discussed in terms of advantages and disadvantages.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>Digital imagery has a huge impact on our lives. In every
place on earth, at any time, a person has access to their data
using a laptop or a cell phone connected to the Internet. Thus,
the exchange of data between two points on the globe is
no longer a problem. On the other hand, the Internet has caused
a wave of popularity of various social networking sites where
people publish photos of their lives. This is just one
of the phenomena of recent years that has multiplied the
amount of data stored and exchanged in the network.</p>
      <p>These issues have caused several problems. First, data
traffic burdens the entire network, so upload and download
speeds can drop at peak times. Secondly, the amount
of data grows every day; files must be stored somewhere, so
the number of servers must also increase. These are problems
that cannot be fully solved, but they can be minimized.</p>
      <p>Data compression algorithms rely on converting the way
data is written so that its weight becomes smaller. In other
words, the input file should be processed so that it can be
saved in a smaller number of bits.</p>
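For illustration only (run-length encoding is not the method of this paper), a minimal encoder shows how rewriting the form of data can reduce the number of bytes needed to store it:

```python
def rle_encode(data: bytes) -> bytes:
    """Rewrite runs of repeated bytes as (count, value) pairs."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        # Count how many times data[i] repeats (capped at 255 per pair).
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

# A highly repetitive 12-byte input shrinks to 4 bytes.
encoded = rle_encode(b"aaaaaaaabbbb")
```

On non-repetitive data the same scheme doubles the size, which is why practical lossless coders (Huffman, LZW) model the data more carefully.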
      <p>Compression algorithms can be divided into two types:
lossless and lossy. Lossless methods do not change the
contents of the file; only the form in which the file is written is changed.
The best-known examples are Huffman and Shannon-Fano
coding, LZW, or the PNG format. On the other hand, lossy algorithms
manipulate not only the record but also the quality
of the file, e.g. DPCM or JPEG.</p>
    </sec>
    <sec id="sec-2">
      <title>II. RELATED WORK</title>
      <p>In recent years, data compression algorithms have been
experiencing a renaissance. The demand for methods of reducing the
size of digital data is just one of the reasons for developing this
subject. The second reason is the considerable development
of related branches of computer science, such as artificial
intelligence methods, which expand the possibilities of
compression algorithms.</p>
      <p>Copyright © 2016 held by the author.</p>
      <p>
        In 2012, Vikas Goyal [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] introduced the use of wavelet
coding for the purpose of minimizing the size of image files.
In addition, an analysis of the EZW algorithm with different
wavelets (including Haar and Daubechies) was presented.
Similarly, the authors of [
Similarly, the authors of [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] presented an algorithm based on
the combination of wavelet theory and chaos theory (fractals).
For comparison, Venugopal et al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] proposed the use of the
Arnold transform with a chaos encoding technique, showing the
effectiveness of creating hybrid algorithms. Around the same
time, Zhou et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] presented a hybrid compression-encryption
algorithm based on a key-controlled measurement matrix.
      </p>
      <p>
        The rapid development of artificial intelligence methods
has allowed compression to take more randomized directions,
as can be seen in the example of heuristic
algorithms [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]–[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], or fuzzy logic [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], which helped gain a
significant advantage over other existing algorithms in terms of
the weight of the compressed files. And in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], the authors introduced a novel
predictor using causal block matching and 3D collaborative
filtering.
      </p>
      <p>
        Increasingly, modern computer science uses the latest
achievements, but that does not mean that older
mathematical theories such as Fourier transforms are no longer used
– in [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]–[
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], new approaches to the use of these transforms
are described. Again, in [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], Fracastoro et al. pointed to the use
of a graph-based transform.
      </p>
      <p>In this paper, a lossy compression algorithm for 2D images
based on histogram analysis is presented.</p>
    </sec>
    <sec id="sec-3">
      <title>III. COMPRESSION ALGORITHM</title>
      <p>To optimize the amount of compressed material from
two-dimensional images, specific areas are analyzed using a
grid mechanism. A grid of size n pixels takes an area
of the image and processes it. Then, the grid is shifted by n
pixels and the operation is repeated until the entire image
has been covered by the grid.</p>
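The grid traversal described above can be sketched as follows (a minimal outline, assuming the image is a height × width array; blocks at the right and bottom edges may be smaller when n does not divide the dimensions):

```python
import numpy as np

def iter_grid(image: np.ndarray, n: int):
    """Yield n-by-n blocks (smaller at the edges) covering the whole image."""
    h, w = image.shape[:2]
    for y in range(0, h, n):       # shift the grid by n pixels vertically
        for x in range(0, w, n):   # ...and by n pixels horizontally
            yield image[y:y + n, x:x + n]

# A 6x6 image with a 3-pixel grid is covered by exactly 4 blocks.
blocks = list(iter_grid(np.zeros((6, 6)), 3))
```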
      <p>The processing of a grid area consists of calculating its histogram
and checking whether the histogram exceeds the threshold value
(see Fig. 2, where a sample histogram is cut by the threshold
value). In the case where the histogram of the area covered by the
grid exceeds the threshold value, all the colors of the pixels are
scaled using the following formula</p>
      <p>( K if K &lt; 256
CK = ; (1)</p>
      <p>255 if K &gt; 255
where K means a specific color component from the RGB
model (R, G or B) and is a given parameter in the range of
h0; 2i.</p>
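Formula (1) multiplies each component by the parameter α and clamps the result at 255. A minimal sketch of that scaling (the symbol α was lost in typesetting; its role as a multiplier is taken from the surrounding text):

```python
def scale_component(k: int, alpha: float) -> int:
    """Eq. (1): C_K = alpha*K if alpha*K < 256, else 255."""
    scaled = alpha * k
    return int(scaled) if scaled < 256 else 255

# With alpha = 0.4 a component of 200 is dimmed to 80;
# with alpha = 2.0 the same component saturates at 255.
```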
      <p>The parameter α manipulates the brightness of the image,
which can be represented as</p>
      <p>8&gt;h0; 1)
= &lt;1
&gt;:(1; 2i
dim image
normal image
brighter image
:
(2)</p>
      <p>The implementation of the described technique is presented
in Algorithm 1.</p>
      <p>Algorithm 1: The color-change algorithm based on a histogram</p>
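The listing of Algorithm 1 is not reproduced here; the following is a sketch reconstructed from the description in this section (grid traversal, histogram check against a threshold, scaling by α), not the authors' exact listing:

```python
import numpy as np

def compress_blocks(image: np.ndarray, n: int,
                    threshold: int, alpha: float) -> np.ndarray:
    """For each n-by-n grid area whose histogram exceeds the threshold,
    scale all color components by alpha and clamp at 255 (Eq. (1))."""
    out = image.astype(np.float64).copy()
    h, w = image.shape[:2]
    for y in range(0, h, n):
        for x in range(0, w, n):
            block = out[y:y + n, x:x + n]
            # One bin per possible component value 0..255.
            hist, _ = np.histogram(block, bins=256, range=(0, 256))
            if hist.max() > threshold:  # histogram peak exceeds threshold
                block[:] = np.minimum(alpha * block, 255.0)
    return out.astype(np.uint8)
```

The interpretation of "the histogram exceeds the threshold" as a test on the tallest histogram bin is an assumption; the paper's Fig. 2 (a histogram cut by the threshold value) suggests this reading.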
    </sec>
    <sec id="sec-4">
      <title>IV. EXPERIMENTS</title>
      <p>The grid size was set to 9 and the threshold value to 81
pixels; α was set to 0.4.</p>
      <p>The performed tests showed that for small values of the
parameters, the image is darker but smaller in size. The
measurement results can be seen in Figures 5, 6 and 7. The
weight of each file was measured before and after
compression. A graphical comparison of the sizes is shown in Figure 3.
The effectiveness of the proposed method was measured using
the arithmetic mean. For some parameters, the compressed
image has on average 3.1 times lower weight. The dependence
of the average running time of the algorithm on the size
of the image is presented in Fig. 4. The larger the image file,
the more time is needed to perform all operations. The time
required for compression increases linearly for files composed
of at most 150 000 pixels. For larger files, the time
increases fourfold.</p>
      <p>The proposed method of compression of graphic images is
able to reduce the image size by more than 50% with a good
selection of threshold values. The biggest drawback of the
algorithm is the large decrease in the visibility of the image
caused by the modification of colors, but the described technique
allows a reverse algorithm to be designed if the input parameters
are known.</p>
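The reverse algorithm is not specified in the paper; a sketch of the natural inverse of Eq. (1), dividing by α, illustrates why the method is lossy (the function name and structure here are illustrative assumptions):

```python
def unscale_component(c: int, alpha: float) -> float:
    """Approximate inverse of Eq. (1).

    Exact only where no clamping occurred: a stored value of 255 may
    correspond to any original K with alpha*K > 255, so that
    information is lost permanently.
    """
    return c / alpha

# A component dimmed from 200 with alpha = 0.4 (stored as 80)
# is recovered as 80 / 0.4 = 200.
```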
      <p>Fig. 4: The dependence of the average running time on the
size of the image.</p>
      <p>In future work, the design of the reverse algorithm is
planned. This method, combining lossy compression of the file
with the ability to recover the lost data, would enable a much
better flow of data on the Internet, because both sides could
have the original file, while the transferred file would be not
only of lower quality but also of much smaller size.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>V.</given-names>
            <surname>Goyal</surname>
          </string-name>
          , “
          <article-title>A performance and analysis of EZW encoder for image compression</article-title>
          ,” <source>GESJ: Computer Science and Telecommunications</source>, no.
          <issue>2</issue>
          , p.
          <fpage>34</fpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Al-Fahoum</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Harb</surname>
          </string-name>
          , “
          <article-title>A combined fractal and wavelet angiography image compression approach</article-title>
          ,” <source>The Open Medical Imaging Journal</source>
          , vol.
          <volume>7</volume>
          , pp.
          <fpage>9</fpage>
          -
          <lpage>18</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D.</given-names>
            <surname>Venugopal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gunasekaran</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Sivanatharaja</surname>
          </string-name>
          , “
          <article-title>Secured color image compression and efficient reconstruction using Arnold transform with chaos encoding technique</article-title>
          ,” <source>signal</source>
          , vol.
          <volume>4</volume>
          , no.
          <issue>5</issue>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>N.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Zheng</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Gong</surname>
          </string-name>
          , “
          <article-title>Novel image compression-encryption hybrid algorithm based on key-controlled measurement matrix in compressive sensing</article-title>
          ,” <source>Optics &amp; Laser Technology</source>
          , vol.
          <volume>62</volume>
          , pp.
          <fpage>152</fpage>
          -
          <lpage>160</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.-H.</given-names>
            <surname>Horng</surname>
          </string-name>
          , “
          <article-title>Vector quantization using the firefly algorithm for image compression</article-title>
          ,” <source>Expert Systems with Applications</source>
          , vol.
          <volume>39</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>1078</fpage>
          -
          <lpage>1091</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>R.</given-names>
            <surname>Ramanathan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kalaiarasi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Prabha</surname>
          </string-name>
          , “
          <article-title>Improved wavelet based compression with adaptive lifting scheme using artificial bee colony algorithm</article-title>
          ,”
          <source>International Journal of Advanced Research in Computer Engineering &amp; Technology (IJARCET)</source>
          , vol.
          <volume>2</volume>
          , no.
          <issue>4</issue>
          , p.
          <fpage>1549</fpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Jiao</surname>
          </string-name>
          , “
          <article-title>Improved bandelet with heuristic evolutionary optimization for image compression</article-title>
          ,” <source>Engineering Applications of Artificial Intelligence</source>
          , vol.
          <volume>31</volume>
          , pp.
          <fpage>27</fpage>
          -
          <lpage>34</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>V. S.</given-names>
            <surname>Thakur</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Thakur</surname>
          </string-name>
          , “
          <article-title>Design and implementation of a highly efficient gray image compression codec using fuzzy based soft hybrid JPEG standard</article-title>
          ,” in
          <source>Electronic Systems, Signal Processing and Computing Technologies (ICESC), 2014 International Conference on. IEEE</source>
          ,
          <year>2014</year>
          , pp.
          <fpage>484</fpage>
          -
          <lpage>489</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Tsolakis</surname>
          </string-name>
          and
          <string-name>
            <given-names>G. E.</given-names>
            <surname>Tsekouras</surname>
          </string-name>
          , “
          <article-title>A fuzzy-soft competitive learning approach for grayscale image compression</article-title>
          ,” in <source>Unsupervised Learning Algorithms</source>. Springer,
          <year>2016</year>
          , pp.
          <fpage>385</fpage>
          -
          <lpage>404</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>R.</given-names>
            <surname>Crandall</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Bilgin</surname>
          </string-name>
          , “
          <article-title>Lossless image compression using causal block matching and 3D collaborative filtering</article-title>
          ,” in
          <source>Image Processing (ICIP), 2014 IEEE International Conference on. IEEE</source>
          ,
          <year>2014</year>
          , pp.
          <fpage>5636</fpage>
          -
          <lpage>5640</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Gupta</surname>
          </string-name>
          and
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Garg</surname>
          </string-name>
          , “
          <article-title>Analysis of image compression algorithm using DCT</article-title>
          ,”
          <source>International Journal of Engineering Research and Applications</source>
          , vol.
          <volume>2</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>515</fpage>
          -
          <lpage>521</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>C.</given-names>
            <surname>Rawat</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Meher</surname>
          </string-name>
          , “
          <article-title>A hybrid image compression scheme using dct and fractal image compression</article-title>
          ,” <source>Int. Arab J. Inf. Technol.</source>
          , vol.
          <volume>10</volume>
          , no.
          <issue>6</issue>
          , pp.
          <fpage>553</fpage>
          -
          <lpage>562</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>T.</given-names>
            <surname>Anitha</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Ramachandran</surname>
          </string-name>
          , “
          <article-title>Novel algorithms for 2-d fft and its inverse for image compression</article-title>
          ,” in
          <source>Signal Processing, Image Processing &amp; Pattern Recognition (ICSIPR)</source>
          ,
          <source>2013 International Conference on. IEEE</source>
          ,
          <year>2013</year>
          , pp.
          <fpage>62</fpage>
          -
          <lpage>65</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>W.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Cheung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ortega</surname>
          </string-name>
          , and
          <string-name>
            <given-names>O. C.</given-names>
            <surname>Au</surname>
          </string-name>
          , “
          <article-title>Multiresolution graph Fourier transform for compression of piecewise smooth images</article-title>
          ,” <source>IEEE Transactions on Image Processing</source>, vol.
          <volume>24</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>419</fpage>
          -
          <lpage>433</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>G.</given-names>
            <surname>Fracastoro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Verdoja</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Grangetto</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E.</given-names>
            <surname>Magli</surname>
          </string-name>
          , “
          <article-title>Superpixel-driven graph transform for image compression</article-title>
          ,” in
          <source>Image Processing (ICIP)</source>
          ,
          <source>2015 IEEE International Conference on. IEEE</source>
          ,
          <year>2015</year>
          , pp.
          <fpage>2631</fpage>
          -
          <lpage>2635</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>