<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Parameterization of the nonlinear interpolator invariant to four directions contours for multidimensional digital signals</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>A I Maksimov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>M V Gashnikov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Samara National Research University</institution>
          ,
          <addr-line>Moskovskoe Shosse 34А, Samara, Russia, 443086</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2018</year>
      </pub-date>
      <fpage>21</fpage>
      <lpage>28</lpage>
      <abstract>
        <p>In this paper, a parameterization of the nonlinear interpolator invariant to contours of four directions is described for multidimensional digital signals. The interpolator automatically selects one of several calculation methods for each signal sample, based on the presence of a contour in the vicinity of the sample. A training procedure is performed to calculate the optimal value of the interpolator parameter; a criterion minimizing the energy of the post-interpolation residues and a criterion minimizing their entropy are used to find this value. The proposed interpolator is applied to the tasks of signal compression and of combining heterogeneous signals. To examine the interpolators, computational experiments are carried out on a test set. The advantage of the suggested interpolator over its prototypes in terms of the quadratic error is shown.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>During recent years, the size of processed multidimensional digital signals has been growing faster
than storage capacity. That is why compression methods are nowadays in great demand.</p>
      <p>
        Today, a large number of compression methods have been developed. The JPEG method [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] is perhaps
the most common of them. It uses statistical coding [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ] of transformants obtained by the
two-dimensional discrete cosine transform [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Other popular compression methods are based on
two-dimensional orthogonal transformations [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], fractals [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and wavelets [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>The listed methods share one common flaw: they are resource-intensive, which makes them difficult
to use in real-time systems, for example, in the on-board imaging systems of satellites, drones and
other aircraft. Such systems need compression methods of low computational complexity.
Methods that process the signal in the spatial domain, without transitioning to spectral
components, are well suited for such applications.</p>
      <p>
        Methods based on differential pulse-code modulation (DPCM) [
        <xref ref-type="bibr" rid="ref10 ref7 ref8 ref9">7-10</xref>
        ] satisfy these requirements.
Decorrelation of the signal in DPCM compression is achieved via a difference representation.
However, the scope of DPCM is not limited to real-time systems: it also acts as a stage
in other compression methods; for example, the JPEG algorithm uses DPCM compression at one of its
stages of operation. Thus, the task of enhancing DPCM and increasing its efficiency is topical.
In this paper, we propose a new parameterized interpolator that makes it possible to increase the
efficiency of DPCM compression by switching between different interpolation methods for each
processed sample, depending on the presence of a contour in its vicinity.
      </p>
      <p>The paper is organized as follows. First, the general differential pulse-code modulation
method and the main types of DPCM interpolators and quantizers are presented. The next section outlines
the interpolation and training procedures of the proposed interpolator. The third section shows the
results of an experimental study of the proposed interpolator.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Differential pulse-code modulation</title>
      <p>DPCM compression is a sequence of the following procedures: interpolation, quantization,
reconstruction, and coding. A generalization of this method is presented below.</p>
      <sec id="sec-2-1">
        <title>2.1. General DPCM compression</title>
        <p>Let f(x̄) be the original (compressible) multidimensional digital signal, where x̄ is the vector of its arguments. Differential pulse-code modulation can be represented as the following sequential procedure:
1. First, a reference value f̄(x̄) is interpolated.
2. Then, the differential signal d(x̄) is calculated as the difference between the original and the reference value of the signal (1):
d(x̄) = f(x̄) − f̄(x̄), (1)
where f(x̄) is the original value of the signal.
3. The resulting differential signal is quantized (2):
dq(x̄) = Q(d(x̄)), (2)
where Q is the quantization function.
4. The reconstructed signal value f̂(x̄) is calculated as the sum of the quantized differential signal and the reference value. The reconstructed value (3) is used for further interpolation:
f̂(x̄) = f̄(x̄) + dq(x̄); (3)
5. After processing the entire signal, the quantized differential signal dq(x̄) is statistically encoded.</p>
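<p>As an illustration, the five steps above can be sketched for a one-dimensional signal with the simplest (previous-sample) interpolator; the function names below are ours, not the paper's.</p>

```python
# Hedged sketch of the DPCM loop (steps 1-5) for a 1-D signal with the
# simplest interpolator: the previous reconstructed sample as reference.
def dpcm_encode(signal, quantize):
    reconstructed = 0          # f^ for the sample preceding the first one
    residuals = []
    for f in signal:
        ref = reconstructed    # step 1: interpolate the reference value
        d = f - ref            # step 2: differential signal, eq. (1)
        dq = quantize(d)       # step 3: quantization, eq. (2)
        reconstructed = ref + dq   # step 4: reconstruction, eq. (3)
        residuals.append(dq)
    return residuals           # step 5: these values are entropy-coded

def dpcm_decode(residuals):
    reconstructed, out = 0, []
    for dq in residuals:       # the decoder repeats steps 1 and 4 only
        reconstructed += dq
        out.append(reconstructed)
    return out
```

<p>With a lossless (identity) quantizer the decoder reproduces the input exactly; a lossy quantizer bounds the error per sample instead.</p>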
      </sec>
      <sec id="sec-2-2">
        <title>2.2. DPCM compression: Interpolation</title>
        <p>
          High DPCM compression ratios are achieved when the difference signal is close to zero. Thus, it is
necessary to interpolate the reference value as close to the original value as possible. There are three
main types of DPCM interpolators: averaging [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ], nonlinear [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] and adaptive [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ].
        </p>
        <p>Averaging (linear) interpolators calculate the reference value as the average over several previous
signal samples. The simplest linear interpolator is the one that chooses the previous sample as the
reference value. Let us describe the work of the interpolators for the case of a two-dimensional original
(compressible) signal f(x̄) = f(m, n).</p>
        <p>
          Figure 1 shows the work of the interpolator, averaging over the two previous samples [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. The
interpolated value in this case is formed as follows:
f̄(m, n) = [f̂(m − 1, n) + f̂(m, n − 1) + f̂(m − 1, n − 1) + f̂(m + 1, n − 1)] / 4. (4)
        </p>
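<p>The averaging rule can be sketched as follows; here rec is assumed to map coordinates (m, n) to already reconstructed values, with n − 1 denoting the previous row (an assumption about the index convention).</p>

```python
# Averaging interpolator (4): mean of the four previous reconstructed samples.
def average_reference(rec, m, n):
    # rec: dict mapping (m, n) -> reconstructed value f^(m, n)
    return (rec[(m - 1, n)] + rec[(m, n - 1)]
            + rec[(m - 1, n - 1)] + rec[(m + 1, n - 1)]) / 4
```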
        <p>Interpolators of this type are robust to noise, since they average over several samples, but
they can have a large error on the contours.</p>
        <p>Non-linear interpolators have a small error on the contours. The main idea of such interpolators is
to interpolate the value of the signal "along" the contour using the values of the previous signal
samples.</p>
        <p>
          Graham interpolator [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] is a non-linear one. It is able to interpolate "along" horizontal and vertical
contours. As the reference value, the sample corresponding to the smaller of the differences (6), (7) is taken (5):</p>
        <p>f̄(m, n) = f̂(m, n − 1), if Δv &gt; Δh; f̄(m, n) = f̂(m − 1, n), if Δv ≤ Δh, (5)
where
Δh = |f̂(m, n − 1) − f̂(m − 1, n − 1)|, (6)
Δv = |f̂(m − 1, n) − f̂(m − 1, n − 1)|, (7)
where Δh is the horizontal difference and Δv is the vertical difference. Figure 2 shows how this
interpolator works.</p>
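<p>A minimal sketch of Graham's rule, under the same assumed index convention (rec maps (m, n) to reconstructed values, n − 1 being the previous row):</p>

```python
# Graham interpolator (5)-(7): pick the neighbour lying "along" the contour,
# i.e. across the smaller of the two local differences.
def graham_reference(rec, m, n):
    dh = abs(rec[(m, n - 1)] - rec[(m - 1, n - 1)])  # horizontal difference (6)
    dv = abs(rec[(m - 1, n)] - rec[(m - 1, n - 1)])  # vertical difference (7)
    return rec[(m, n - 1)] if dv > dh else rec[(m - 1, n)]  # rule (5)
```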
        <p>
          The interpolator invariant to the four directions contours [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] is an enhancement of the Graham
interpolator. It is able to interpolate "along" horizontal, vertical and two diagonal contours. To select
the reference value, 12 differences between the 8 previous reconstructed samples are calculated, and the
reference value f̄(m, n) is chosen among f̂(m, n − 1), f̂(m − 1, n), f̂(m − 1, n − 1) and f̂(m + 1, n − 1)
by comparing these differences with lower and upper threshold values (expressions (8)-(17)).
        </p>
        <p>The disadvantage of this interpolation method is the need for a preliminary training procedure to
calculate the thresholds which allow switching between interpolation methods.</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. DPCM compression: Quantization</title>
        <p>
          Quantization [
          <xref ref-type="bibr" rid="ref11 ref8">8,11</xref>
          ] is the process of dividing the signal value range into a finite number of ranges,
each of which is represented by one specific value.
        </p>
        <p>
          Quantization allows increasing the compression ratio of the processed signal at the cost of introducing an
error into it. The quantization of the difference signal has a special feature, a small number of
quantization levels; therefore, they should be chosen carefully. In the present study, two quantization
scales are used: the uniform [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] and Max [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] quantization scales.
        </p>
        <p>
          All intervals of the uniform quantization scale are equal; its representative values are located in the
middle of the intervals and at equal distances from each other. The uniform quantization scale is a
scale with a controllable error [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. The maximum error introduced into the quantized signal is
determined by the width of the scale interval. Uniform-scale quantization can be written as follows:
dq(x̄) = sign(d(x̄)) × ⌊(|d(x̄)| + ε_max) / (2 ε_max + 1)⌋, (18)
where ε_max is the maximum quantization error.
        </p>
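<p>Uniform quantization with a controllable error can be sketched as follows; the exact rounding used in the paper may differ, so this is an assumption-laden reconstruction with interval width 2ε + 1:</p>

```python
# Sketch of uniform quantization of the difference signal: intervals of width
# 2*eps + 1 guarantee |d - dq| <= eps for integer residues (our reconstruction
# of the rule; the paper's exact rounding may differ).
def uniform_quantize(d, eps):
    step = 2 * eps + 1
    sign = -1 if d < 0 else 1
    q = sign * ((abs(d) + eps) // step)   # quantizer index
    return q * step                       # representative value
```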
        <p>
          The advantage of this quantization scale is the proportionality between the levels of the quantized signal and the original one.
        </p>
        <p>
          The intervals of the Max quantization scale are not equal. Its levels are constructed on the basis of the
mean-square quantization error minimization criterion [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ]. With a known distribution density of the
difference signal, expression (19) for the mean-square quantization error can be written as follows:
ℰ² = Σ_{q=0}^{M−1} ∫_{t_q}^{t_{q+1}} (d − r_q)² p(d) dd, (19)
where ℰ² is the quantization mean-square error; M is the number of quantization levels; t_q is a border value; r_q is a representative value; p(d) is the distribution density of the differential signal. Since the difference signal is discrete, (19) can also be written as:
ℰ² = Σ_{q=0}^{M−1} Σ_{d=t_q}^{t_{q+1}} (d − r_q)² P(d), (20)
where P(d) is the probability of the difference value d. To construct the Max scale, it is necessary to choose the borders t_q and the representatives r_q that minimize the error (20); expressions (21), (22) for them are found by taking the partial derivatives of expression (20):
r_q = Σ_{d=t_q}^{t_{q+1}} d P(d) / Σ_{d=t_q}^{t_{q+1}} P(d), (21)
t_q = (r_{q−1} + r_q) / 2. (22)
        </p>
        <p>In the adaptive Graham interpolator, switching between interpolation methods is governed by a contour criterion with a lower threshold and an upper threshold, Cmax being the maximum value of the signal; Figure 4 shows the work of the adaptive Graham interpolator.</p>
        <p>In practice, the Max scale is constructed from the uniform one iteratively. Its border values are
calculated from its representative values, and vice versa. The resulting Max scale levels are
denser in the areas of the signal value range where the values appear most often.</p>
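<p>The iterative construction just described can be sketched as follows; hist is an assumed histogram approximation of the difference-signal distribution, and the stopping rule (a fixed number of iterations) is our simplification:</p>

```python
# Hedged sketch of the iterative Max-scale construction: representatives are
# conditional means over their intervals, borders are midpoints between
# adjacent representatives; hist approximates the residue distribution P(d).
def max_scale(hist, borders, iters=50):
    reps = []
    for _ in range(iters):
        reps = []
        for lo, hi in zip(borders[:-1], borders[1:]):
            vals = [v for v in hist if lo <= v < hi]
            mass = sum(hist[v] for v in vals)
            reps.append(sum(v * hist[v] for v in vals) / mass
                        if mass else (lo + hi) / 2)               # conditional mean
        inner = [(a + b) / 2 for a, b in zip(reps[:-1], reps[1:])]  # midpoints
        borders = [borders[0]] + inner + [borders[-1]]
    return borders, reps
```

<p>For a symmetric two-valued histogram the scale converges immediately to the obvious two-level solution.</p>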
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Parameterization of the nonlinear interpolator invariant to four directions contours</title>
      <p>We propose a new adaptive interpolator. It is a parameterization of the interpolator invariant to the
four-direction contours. Interpolation and training procedures are developed for the proposed interpolator.</p>
      <p>During the training procedure, a preliminary pass through the original signal is required to calculate the
threshold value, upon which the interpolation method is chosen.</p>
      <p>The contour criterion for the given parameterized interpolator is the absolute difference between the
reference values obtained by the interpolator averaging over the four previous samples (4)
and by the interpolator invariant to the four-direction contours (8). If this absolute difference
between the averaging and nonlinear methods is small, averaging interpolation is chosen, since it is
noise-resistant. If the absolute difference is large, a contour is detected and nonlinear
interpolation is chosen. Let us describe the work of the interpolator for the case of a two-dimensional
original signal f(x̄) = f(m, n).</p>
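<p>The switching rule can be sketched as follows; since the four-direction invariant interpolator is not reproduced in full here, Graham's rule stands in for the nonlinear branch (our simplification):</p>

```python
# Sketch of the proposed switching: averaging in flat areas, nonlinear on
# contours. Graham's rule is used as a stand-in for the four-direction
# invariant interpolator (an assumption made for illustration only).
def adaptive_reference(rec, m, n, threshold):
    avg = (rec[(m - 1, n)] + rec[(m, n - 1)]
           + rec[(m - 1, n - 1)] + rec[(m + 1, n - 1)]) / 4   # averaging, eq. (4)
    dh = abs(rec[(m, n - 1)] - rec[(m - 1, n - 1)])
    dv = abs(rec[(m - 1, n)] - rec[(m - 1, n - 1)])
    nonlin = rec[(m, n - 1)] if dv > dh else rec[(m - 1, n)]
    # contour criterion: small |avg - nonlin| -> no contour -> averaging
    return avg if abs(avg - nonlin) <= threshold else nonlin
```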
      <sec id="sec-3-1">
        <title>3.1. Interpolation procedure</title>
        <p>The interpolation procedure for the proposed interpolator can be represented as follows: if the contour criterion Δ(m, n) does not exceed the threshold S, the reference value is calculated by averaging over the four previous samples,
f̄(m, n) = [f̂(m − 1, n) + f̂(m, n − 1) + f̂(m − 1, n − 1) + f̂(m + 1, n − 1)] / 4;
otherwise, the reference value of the interpolator invariant to the four-direction contours (8) is taken (23). During training, the post-interpolation residues (the differences between the signal sample and the interpolated value) of both interpolators are accumulated over the set D(S) of samples for which the criterion equals S:
Δ(1, S) = Σ_{(m,n)∈D(S)} |f(m, n) − f̄₁(m, n)|, Δ(2, S) = Σ_{(m,n)∈D(S)} |f(m, n) − f̄₂(m, n)|,
where f̄₁(m, n) and f̄₂(m, n) are the reference values of the averaging interpolator and of the invariant interpolator (expressions (24)-(29)). The optimal threshold minimizes the entropy of the quantized post-interpolation residues:
H(S) = −Σ_q (Nq(S) / N) log₂ (Nq(S) / N), (35)
where Nq(S) is the amount of the quantized post-interpolation residuals with a value q and N is the total number of samples.</p>
        <p>To solve the optimization problem (35), a three-dimensional matrix is used:</p>
        <p>N(i,)q , i 1, 2, 0    Cmax .
(36)
Each element of N(i,)q contains the amount of quantized post-interpolation residues (18) with a value
q, given the feature value (24)  and interpolator number i, which is taken from (23). i =0 means that
the first row of (23) is taken as the interpolator, for i =1 all other rows of (23) are taken as interpolator.</p>
        <p>Matrix N(i,)q is used to calculate the amount of quantized post-interpolation residuals (18) Nq 
,
with value q for all possible values of the threshold </p>
        <p>Cmax 1
Nq 0   N(1,)q
0</p>
        <p>, Nq  1, q  Nq , q  N(1,)q  N(2,q) .</p>
        <p>The number of the quantized post-interpolation residues (18) equal to q makes it possible
to calculate a one-dimensional array of entropy values (35) for all possible values of the threshold S.
The length of this array is small (Cmax elements, i.e. equal to the maximum value of the signal).
The threshold with the minimum entropy value (35) can be found by direct search:
S* = arg min_S H(S). (38)</p>
        <p>Thus, the optimization problem (35) is solved.</p>
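<p>The direct search over thresholds can be sketched as follows, assuming the per-threshold residue histograms have already been accumulated (for example from the matrix (36)):</p>

```python
import math

# Sketch of the direct search: for each candidate threshold S a histogram
# of quantized residues N_q(S) is assumed precomputed; the threshold with
# the minimum residue entropy is chosen.
def entropy(hist):
    n = sum(hist.values())
    return -sum(c / n * math.log2(c / n) for c in hist.values() if c)

def best_threshold(hists_per_threshold):
    # hists_per_threshold[S]: dict residue value q -> count N_q(S)
    return min(range(len(hists_per_threshold)),
               key=lambda s: entropy(hists_per_threshold[s]))
```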
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experimental study of DPCM interpolators</title>
      <p>For the developed interpolator based on criterion (29), a series of computational experiments was
carried out, during which the dependences of the root-mean-square error (RMS) on the compression
ratio of the quantization scale were obtained. The uniform scale (18) and the Max scale (21, 22) were used.
The mathematical expression for the root-mean-square error is as follows:</p>
      <p>ℰ = (1 / σ) √( (1 / |D|) Σ_{x̄∈D} (f(x̄) − f̃(x̄))² ), (39)
where σ is the original signal variance; D is the set of signal samples; f(x̄) are the original signal
values; f̃(x̄) are the processed (decompressed) signal values.</p>
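<p>The error measure can be sketched as follows; here the normalizing σ is taken as the standard deviation of the original signal, which is our reading of the normalization:</p>

```python
import math
import statistics

# Sketch of the normalized RMS error; taking sigma as the original signal's
# standard deviation is our assumption about the normalization constant.
def normalized_rmse(original, decompressed):
    mse = sum((f - g) ** 2
              for f, g in zip(original, decompressed)) / len(original)
    return math.sqrt(mse) / statistics.pstdev(original)
```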
      <p>
        The results obtained for the developed interpolator were compared with the others. For comparison,
the prototypes on the basis of which the parameterized interpolator was developed were taken: the
interpolator averaging over the four previous samples (4) and the interpolator invariant to the contours of the four directions (8).
As the test signal set, Waterloo Grayscale Set 1 [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] was used. Figure 6 shows the test set.
The computational experiment was carried out in the following way: the test signals were
DPCM-compressed [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] and decompressed with a constant compression ratio of a uniform quantization scale
and a selected interpolator, after which the error was calculated. The results for the selected
interpolator were averaged over the whole test set. Then the procedure was repeated for the next compression
ratio of the scale. Thus, the dependences of the root-mean-square error on the compression ratio of a
uniform quantization scale were constructed for all three interpolators. A similar series of experiments
was carried out for the Max quantization scale. Figures 7 and 8 show the results of the computational
experiments.
      </p>
      <p>As can be seen from the presented dependences, the developed parameterization outperforms its
prototypes. During the computational experiments, the type of signal for which the developed
parameterization gives the smallest RMS was identified. This result is shown in Figures 9 and 10.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>This article presents a new adaptive interpolator for DPCM signal compression, based on the
parameterization of the interpolator invariant to four-direction contours. Its interpolation
and training procedures are presented. The training procedure includes a preliminary pass through the
signal.</p>
      <p>A series of computational experiments was carried out. The developed parameterization
outperformed its prototypes. During the computational experiments, the type of signals on which the
proposed interpolator shows the best results was determined.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgements</title>
      <p>The reported study was funded by RFBR according to the research projects № 18-01-00667,
№ 18-07-01312.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name><surname>Gupta</surname> <given-names>V</given-names></string-name>
          <year>2014</year>
          <article-title>Enhanced Image Compression Using Wavelets</article-title>
          <source>International Journal of Research in Engineering and Science (IJRES)</source>
          <volume>2</volume>(<issue>5</issue>)
          <fpage>55</fpage>-<lpage>62</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name><surname>Sayood</surname> <given-names>K</given-names></string-name>
          <year>2012</year>
          <source>Introduction to Data Compression (The Morgan Kaufmann Series in Multimedia Information and Systems)</source>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name><surname>Wallace</surname> <given-names>G</given-names></string-name>
          <year>1991</year>
          <article-title>The JPEG Still Picture Compression Standard</article-title>
          <source>Communications of the ACM</source>
          <volume>34</volume>(<issue>4</issue>)
          <fpage>30</fpage>-<lpage>44</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name><surname>Huffman</surname> <given-names>D A</given-names></string-name>
          <year>1952</year>
          <article-title>A Method for the Construction of Minimum Redundancy Codes</article-title>
          <source>Proc. IRE</source>
          <volume>40</volume>
          <fpage>1098</fpage>-<lpage>1101</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name><surname>MacKay</surname> <given-names>D</given-names></string-name>
          <year>2003</year>
          <source>Information Theory, Inference, and Learning Algorithms</source>
          (Cambridge Univ. Press)
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name><surname>Plonka</surname> <given-names>G M</given-names></string-name>
          <year>2005</year>
          <article-title>Fast and numerically stable algorithms for discrete cosine transforms</article-title>
          <source>Linear Algebra and its Applications</source>
          <volume>394</volume>(<issue>1</issue>)
          <fpage>309</fpage>-<lpage>345</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name><surname>Woon</surname> <given-names>W M</given-names></string-name>
          <year>2000</year>
          <article-title>Achieving high data compression of self-similar satellite images using fractal</article-title>
          <source>Proceedings of IEEE International Geoscience and Remote Sensing Symposium</source>
          <fpage>609</fpage>-<lpage>611</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name><surname>Pratt</surname> <given-names>W</given-names></string-name>
          <year>2007</year>
          <source>Digital Image Processing</source>
          (Wiley)
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name><surname>Gonzalez</surname> <given-names>R</given-names></string-name>
          <year>2008</year>
          <source>Digital Image Processing</source>
          (Pearson Education)
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name><surname>Soifer</surname> <given-names>V A</given-names></string-name>
          <year>2010</year>
          <source>Computer Image Processing. Part II: Methods and Algorithms</source>
          (VDM Verlag)
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name><surname>Gashnikov</surname> <given-names>M V</given-names></string-name>
          <year>2017</year>
          <article-title>Parameterized adaptive predictor for digital image compression based on the differential pulse code modulation</article-title>
          <source>Proceedings of SPIE The International Society for Optical Engineering</source>
          1034110 DOI: 10.1117/12.2268530
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name><surname>Sayood</surname> <given-names>K</given-names></string-name>
          <year>2006</year>
          <source>Introduction to Data Compression</source>
          (Morgan Kaufmann Publishers)
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name><surname>Salomon</surname> <given-names>D</given-names></string-name>
          <year>2007</year>
          <source>Data Compression. The Complete Reference</source>
          (Springer-Verlag)
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name><surname>Lin</surname> <given-names>S</given-names></string-name>
          <year>2004</year>
          <source>Error Control Coding: Fundamentals and Applications</source>
          second edition (New Jersey: Prentice-Hall)
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          Image Repository: The Waterloo Fractal Coding and Analysis Group (Access mode: http://links.uwaterloo.ca/Repository.html) (accessed 17.11.<year>2017</year>)
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name><surname>Gashnikov</surname> <given-names>M V</given-names></string-name>
          <year>2016</year>
          <article-title>Parameterization of nonlinear Greham predictor for digital image compression</article-title>
          <source>Computer Optics</source>
          <volume>40</volume>(<issue>2</issue>)
          <fpage>225</fpage>-<lpage>231</lpage>
          DOI: 10.18287/2412-6179-2016-40-2-225-231
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>