<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>The technology of correction of dynamic distortions on mobile devices</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>E F Fatkhutdinova</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>V A Fursov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Image Processing Systems Institute - Branch of the Federal Scientific Research Centre “Crystallography and Photonics” of Russian Academy of Sciences</institution>
          ,
          <addr-line>Molodogvardeyskaya str. 151, Samara, Russia, 443001</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Samara National Research University</institution>
          ,
          <addr-line>Moskovskoe Shosse 34, Samara, Russia, 443086</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2018</year>
      </pub-date>
      <fpage>429</fpage>
      <lpage>437</lpage>
      <abstract>
<p>We propose a technology for image correction on mobile devices based on a parametric FIR filter. The filter is constructed from a frequency response specified as a parametric family of centrally symmetric frequency characteristics. A one-dimensional frequency-response model is used in the form of segments of three functions: a parabola, a constant, and an exponential function. Within the framework of the proposed filter model, a mobile application has been created that implements two tuning schemes: identification of the filter parameters from a reference image and adjustment without a reference. In addition, the frequency-response parameter characterizing the mid-frequency interval can be varied, which provides additional possibilities for adjusting the quality of the restoration. An example of processing test images is given.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
<p>Statistics of recent years show that mobile-device traffic on the Internet is growing rapidly. A mobile application is a program that runs on tablet PCs and smartphones. The number of smartphone owners around the world grows every day, and music, photo editing and social networks are the most popular types of applications among smartphone owners [1]. One of the most popular functions of mobile devices is image capture [2-3]. Its mass use is explained by the ability to quickly record images in unexpected and unique situations. In such situations one often has to deal with defocusing and/or blurring (when the recorded object moves rapidly relative to the camera) [4]. Distortions such as blurring and defocusing are usually called dynamic. The task of correcting dynamic distortions in images recorded by mobile devices, using digital image processing algorithms implemented directly in the mobile device itself, is therefore highly relevant.</p>
<p>The classical approaches to constructing filters for image restoration are inverse filtering and Wiener filtering. When constructing an inverse filter, one often finds that the inverse operator does not exist or that the corresponding transfer function has poles close to zero [5]; as a result, noise is amplified in the restored image. The Wiener filter overcomes this problem by taking the frequency characteristics of the noise into account in the filter transfer function [6]. In practice, however, these characteristics are often unavailable, and synthesizing the optimal Wiener filter becomes a serious problem. Unfortunately, other known methods of constructing filters face these problems in one form or another as well [5], [6]. Therefore, filters for the correction of dynamic distortions are often built by parametric identification in the class of filters with a finite impulse response (FIR filters). In [7], such a technology was considered, based on a parametric model of the frequency response in the form of segments of quadratic and exponential functions. That work considered two variants of the technology: identifying the parameters of the impulse response from reference examples, and tuning the filter by indirect characteristics without a reference image.</p>
<p>In [8], the model of the frequency response composed of segments of quadratic and exponential functions was generalized: in particular, the mid-frequency region was expanded by adding a constant interval to the frequency response. Model examples showed that this modification improves the quality of image reconstruction in comparison with the previous realization of the quadratic-exponential FIR filter. However, the implementation of a filter with an extended frequency-response region requires a modest increase in computational resources. In this paper, we present the results of developing a mobile application that implements both of the above options.</p>
<p>The paper is structured as follows. Section 2 describes the methods and algorithms for filter adjustment, both with a reference image and without one. Section 3 describes the libraries used for constructing the mobile application, the pseudo-code and the user interface. Section 4 provides examples of processing test images on a mobile device.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Method and algorithms</title>
<p>An FIR filter with a radially symmetric real frequency response [5, 6] is constructed on a support region D in the form of an M × M square centred at the point k1 = 0, k2 = 0. Assuming that the central sample of the impulse response h(0,0) of the support region D is located at the point (n1, n2), the samples of the restored image x̂(n1, n2) [5] can be represented as</p>
      <p>x̂(n1, n2) = Σ_{k1 = −(M−1)/2}^{(M−1)/2} Σ_{k2 = −(M−1)/2}^{(M−1)/2} h[r(k1, k2)] x(n1 − k1, n2 − k2),</p>
      <p>where r(k1, k2) = √(k1² + k2²), and h[r(k1, k2)] are the samples of a one-dimensional impulse response defined on a set of circles with radii r = r(k1, k2), k1, k2 ∈ D.</p>
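<p>For illustration only (a sketch with our own function names, not the authors' implementation), this restoration step is an ordinary two-dimensional FIR convolution whose kernel value at offset (k1, k2) depends only on r(k1, k2):</p>

```python
import numpy as np

def radial_convolve(image, h1d, M):
    """Convolve `image` with an M-by-M radially symmetric kernel whose
    sample at offset (k1, k2) is h1d(r), r = sqrt(k1**2 + k2**2)."""
    half = M // 2
    k1, k2 = np.mgrid[-half:half + 1, -half:half + 1]
    kernel = h1d(np.sqrt(k1 ** 2 + k2 ** 2))
    kernel = kernel / kernel.sum()   # preserve the average brightness level
    rows, cols = image.shape
    padded = np.pad(image.astype(float), half, mode="edge")
    out = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            out[i, j] = np.sum(kernel * padded[i:i + M, j:j + M])
    return out
```

<p>Because the kernel is symmetric about the centre, correlation and convolution coincide here.</p>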
<p>The one-dimensional frequency response S(ω) is specified for all values ω ≥ 0 as segments of a parabola, a constant and an exponential function:</p>
      <p>S(ω) = a ω², 0 ≤ ω &lt; ω1;  A = a ω1², ω1 ≤ ω ≤ ω2;  e^(c − αω), ω ≥ ω2,  (1)</p>
      <p>where the constant level A is determined by the continuity condition</p>
      <p>S(ω2) = a ω1² = e^(c − αω2).  (2)</p>
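<p>As a sketch (parameter names are ours, not taken from the paper's code), the piecewise response (1) together with the continuity condition (2) can be written as:</p>

```python
import math

def gse_response(w, w1, c, alpha, gamma=1.0):
    """Parabola / constant / exponential frequency response; the mid-band
    boundary is w2 = gamma * w1, and the parabola coefficient a follows
    from the continuity condition a * w1**2 = exp(c - alpha * w2)."""
    w2 = gamma * w1
    a = math.exp(c - alpha * w2) / w1 ** 2
    if w >= w2:
        return math.exp(c - alpha * w)   # exponential tail
    if w >= w1:
        return a * w1 ** 2               # constant mid-frequency level A
    return a * w ** 2                    # parabolic low-frequency segment
```

<p>With gamma = 1 the constant interval vanishes and the response reduces to the quadratic-exponential model of [7].</p>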
<p>The filter corresponding to the described frequency response is hereinafter referred to as a generalized square-law exponential (Generalized Square-Exponential, or briefly GSE) filter. The graph of this function is shown in figure 1.</p>
<p>By virtue of the radial symmetry property, the impulse response corresponding to this frequency response is obtained as a function of the spatial parameter r by the inverse Fourier transform:</p>
      <p>h(r) = (1/π) Re ∫₀^∞ S(ω) e^(jωr) dω = (1/π) Re [ ∫₀^ω1 a ω² e^(jωr) dω + ∫_ω1^ω2 A e^(jωr) dω + ∫_ω2^∞ e^(c−αω) e^(jωr) dω ].  (3)</p>
      <p>Evaluating the three integrals yields a closed-form expression for h(r) composed of terms in sin ω1 r and cos ω1 r together with an exponentially damped term (we excluded a and ω2 by means of (2), using the substitution a = e^(c − αω2)/ω1² and taking ω2 = γω1).</p>
<p>It is easy to see that when γ = 1 the impulse response (3) coincides with the impulse response of the quadratic-exponential filter considered in our previous paper [7]. In this paper we build a mobile application in which the user can set this parameter (or the range over which it varies), taking into account subjective requirements for the quality of restoration and the available computing resources.</p>
<p>The samples of the two-dimensional impulse response, as in [8], are determined by discretizing the continuous function (3) along all directions corresponding to the samples of the support region: for each sample (point (k1, k2)) of the support region, relation (3) is evaluated with the argument r = r(k1, k2) = √(k1² + k2²). Since (3) is indeterminate at r = 0, the value of the central sample is calculated as the sum of all the other samples:</p>
<p>h(0,0) = Σ h(k1, k2), ∀(k1, k2) ∈ D, (k1, k2) ≠ (0,0).  (4)</p>
<p>Then all the samples in the support region are normalized so that the average brightness level of the processed image is preserved:</p>
<p>Σ h(k1, k2) = 1, ∀(k1, k2) ∈ D.  (5)</p>
      <p>The filter adjustment algorithm consists of the following sequence of steps:
1. Assign initial estimates of the parameters ω̂1, ĉ, α̂ and of the criterion J0(ω̂1, ĉ, α̂) (with k = 0).
2. Calculate the impulse response samples for all points (k1, k2) of the support region using relations (3), (4), and normalize all the samples so that (5) is satisfied.</p>
<p>3. Process the distorted image and calculate the quality criterion Jk(ω̂1, ĉ, α̂).
4. If Jk(ω̂1, ĉ, α̂) &gt; Jk−1(ω̂1, ĉ, α̂), the estimates ω̂1, ĉ, α̂ are saved; otherwise a new variant of the estimates is formed according to some rule, and step 2 is repeated. Once all estimates from the range of admissible values have been examined, the algorithm terminates.</p>
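<p>Steps 1 and 2 can be sketched numerically as follows (a stand-in of ours that evaluates the integral (3) by trapezoidal quadrature instead of the closed-form expression; all names and numerical settings are assumptions):</p>

```python
import math

def impulse_sample(r, w1, c, alpha, gamma=1.0, w_max=40.0, n=4000):
    """h(r) = (1/pi) * Re of the integral of S(w)*exp(j*w*r) over w,
    truncated at w_max and evaluated by the trapezoidal rule."""
    w2 = gamma * w1
    a = math.exp(c - alpha * w2) / w1 ** 2
    def S(w):
        if w >= w2:
            return math.exp(c - alpha * w)
        if w >= w1:
            return a * w1 ** 2
        return a * w ** 2
    dw = w_max / n
    total = 0.0
    for k in range(n + 1):
        w = k * dw
        weight = 0.5 if k in (0, n) else 1.0
        total += weight * S(w) * math.cos(w * r)   # real part of S(w)*exp(jwr)
    return total * dw / math.pi

def impulse_kernel(M, w1, c, alpha, gamma=1.0):
    """M-by-M kernel: the central sample is the sum of the others, as in (4),
    and all samples are scaled so that they sum to one, as in (5)."""
    half = M // 2
    h = [[impulse_sample(math.hypot(k1, k2), w1, c, alpha, gamma)
          for k2 in range(-half, half + 1)] for k1 in range(-half, half + 1)]
    h[half][half] = sum(sum(row) for row in h) - h[half][half]   # rule (4)
    s = sum(sum(row) for row in h)
    return [[v / s for v in row] for row in h]
```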
<p>Based on the described technology, the filter parameters can be adjusted both from a reference image and without one. When tuning to a reference, the criterion Jk(ω̂1, ĉ, α̂) of the proximity of the restored image to the reference is the indicator</p>
      <p> MAX 2 
PSNR  10log10  </p>
      <p>
         MSE  , (
        <xref ref-type="bibr" rid="ref6">6</xref>
        )
where MAX – is the maximum value received by the image pixel, and MSE – is the root-mean-square error
(MSE) characterizing the proximity of the restored image to the reference image.
      </p>
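<p>A minimal sketch of indicator (6), assuming 8-bit images (MAX = 255) given as flat sequences of pixel values:</p>

```python
import math

def psnr(reference, processed, max_val=255.0):
    """PSNR = 10 * log10(MAX**2 / MSE), MSE being the mean squared error."""
    mse = sum((a - b) ** 2 for a, b in zip(reference, processed)) / len(reference)
    if mse == 0.0:
        return float("inf")   # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```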
<p>In the filter adjustment technology without a reference image, the MSE is calculated between the recovered image and the original distorted image. In this case, in contrast to tuning to a reference, an improvement in the quality of the restored image is accompanied by a decrease in the PSNR (6). Since, as the PSNR decreases, the quality of the resulting image can become either higher or lower, an additional condition is used: the variance of the processed image must increase. This requirement provides an increase in the average contrast, which is usually observed when the sharpness of the image increases. In addition, a restriction on the minimum allowable value of the PSNR is introduced, since at small PSNR values significant high-contrast distortions are possible.</p>
<p>Thus, at step 4 of the described technology, the check of the condition Jk(ω̂1, ĉ, α̂) &gt; Jk−1(ω̂1, ĉ, α̂) at the k-th iteration reduces to verifying that the following conditions are met:</p>
      <p>PSNR(ω̂1^k, ĉ^k, α̂^k) &lt; PSNR(ω̂1^{k−1}, ĉ^{k−1}, α̂^{k−1}),
D(ω̂1^k, ĉ^k, α̂^k) &gt; D(ω̂1^{k−1}, ĉ^{k−1}, α̂^{k−1}),
PSNR(ω̂1^k, ĉ^k, α̂^k) ≥ PSNR_adm.  (7)</p>
<p>Here PSNR(ω̂1^k, ĉ^k, α̂^k) is the value of indicator (6) calculated at the k-th iteration for the image reconstructed with the filter parameters ω̂1^k, ĉ^k, α̂^k, compared either with the reference (when tuning to a reference) or with the original distorted image (when tuning without a reference); D(ω̂1^k, ĉ^k, α̂^k) is the variance of the reconstructed image, and PSNR_adm is the minimum admissible value of indicator (6).</p>
    </sec>
    <sec id="sec-3">
      <title>3. Development of mobile application</title>
<p>The development of any mobile image processing application is divided into two parts: the development of the filter and the development of the main part. For the first part, the implementation of the image processing algorithm, the OpenCV library was used in its special version for the Android platform. OpenCV is a cross-platform open-source library that is free for commercial and academic use. OpenCV contains algorithms, written in C/C++ and exploiting parallelism and multi-core execution, for interpreting images, determining optical distortions, measuring similarity, analyzing the movement of an object, determining the shape of an object, and much more.</p>
<p>To use the OpenCV library on the Android platform there is also a ready-made toolset, the Android NDK, which allows parts of an Android app to be implemented in compiled programming languages such as C and C++ and contains libraries for managing activities and accessing the physical components of the device. The Android NDK is integrated with the tools of the software development kit (Android SDK), as well as with the Android Studio integrated development environment [9-11].</p>
<p>Thus, there are two ways to use the OpenCV library on the Android platform:
1. OpenCV Java API + Android SDK: functionality is written in the Java language;
2. OpenCV native interface + Android NDK: functionality is written in C++.</p>
      <p>We will use the first way.</p>
<p>The main library modules used are:
 the Core module, which contains basic operations, including arithmetic;
 the Imgproc module, which is responsible for image processing;
 the Utils module, which contains helper methods, e.g., conversion of the Bitmap image format into the OpenCV Mat format and back, and loading a Mat from a resource identifier.</p>
<p>Since we are working with a limited amount of memory, it is advisable to load an image at a lower resolution into memory. The reduced-resolution version should correspond to the size of the user-interface component that displays it: a high-resolution image occupies a large amount of memory without providing a visible advantage when displayed. The choice of version can be made by the user, taking into account the characteristics of the particular mobile device. Figure 2 shows the program's pseudo-code, which implements the algorithm described in Section 2, written with regard to the peculiarities of programming on mobile devices.</p>
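<p>A language-neutral sketch of this choice (it mirrors the standard inSampleSize recipe from the Android developer documentation; the function name is ours): pick the largest power-of-two reduction that still covers the component displaying the image.</p>

```python
def sample_factor(img_w, img_h, view_w, view_h):
    """Largest power-of-two downscale factor such that the reduced image
    is still at least as large as the view that displays it."""
    factor = 1
    while img_w // (factor * 2) >= view_w and img_h // (factor * 2) >= view_h:
        factor *= 2
    return factor
```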
<p>To work with bitmaps we use the Bitmap type, which makes it possible to keep the user interface responsive and to reduce memory consumption [12, 13]. For image processing, the Bitmap is then converted to the Mat type; Mat is a class for storing images that can be processed in C++.</p>
<p>Using the BitmapFactory class, the resolution and type of the graphics data can be determined before a Bitmap object is created and before memory is allocated to it. With this information about the resolution and size of the image, one can decide whether to load the full-size version of the image into memory or its reduced version. Some factors that need to be taken into account are listed below.</p>
<p>The pseudo-code in figure 2 proceeds as follows: for each combination of the parameters w, c and α taken from the given ranges (w_In, w_End), (c_In, c_End), (α_In, α_End) with steps Step_w, Step_c, Step_α, the impulse response samples h(i, j) are computed (function1 and function2, switching on the radius r), the convolution over the N × N support region is performed in two nested loops over i and j, and the current values of w, c, α and of the PSNR are kept.</p>
      <sec id="sec-3-2">
        <title>Implementation details</title>
        <p>1. The estimated memory consumption when loading the full-size version.
2. The amount of memory that can be spent on the image, given the total memory consumption of the application.
3. The resolution of the user-interface component that will display the loaded data.
4. The size and pixel density of the screen of the current device.</p>
<p>Methods of the BitmapFactory class should not be executed on the main thread of the user interface, so as not to reduce system responsiveness. It is impossible to predict how long loading and processing the data will take: this depends on various factors (the speed of reading from disk, the size of the image to be displayed, the processing power, etc.). If such a task blocks the main UI thread, the system marks the application as unresponsive and may prompt the user to close it.</p>
<p>Operations that take an appreciable time must therefore be performed in separate threads, called "background" or "worker" threads. To implement this, we use the AsyncTask class, which offers a simple and convenient mechanism for moving laborious operations into a background thread.</p>
<p>The AsyncTask class allows an operation to be performed asynchronously with respect to the user interface: minor image-loading operations, file operations, database operations, etc. are executed in the worker thread, and the results are then published on the main UI thread without the need to manage threads and/or handlers manually.</p>
        <p>Figure 3 shows an application mockup built using an online tool for prototyping applications
marvelapp.com.</p>
<p>On entering the application, the user lands on the main screen, where he can choose how to load an image: pick an existing one from the gallery or take a photo with the camera. On newer versions of Android, starting with version 6, the user must grant the application access to the camera. In the next step, the user sees the uploaded image in the user-interface component, decides whether to process the image with or without a reference, and taps the button that starts the processing. After processing, the user moves to a screen where the result is shown. The processed image is saved in a dedicated application folder, accessible through the built-in gallery application.</p>
        <p>To change filter parameters for better results during processing, the user can go to the "settings" section
of the application main menu.</p>
        <p>To get information about the application, the user needs to go to the "About the Program" section.</p>
        <p>The diagram of the main application classes that implement the described functionality is shown in
Figure 4.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experimental results</title>
<p>Figure 5 shows the result of the filter with the tuned parameters ω1, c and α. Figure 6 (a) shows an example of a halftone image distorted by modeling a low-pass Gaussian filter with blur σ = 3. Figure 6 (b) shows the image reconstructed using a filter tuned without a reference image, with a support region of size 7 × 7. The result achieved during the restoration: PSNR = 25.13.</p>
<p>To quantify the achievable quality of restoration, we used test images (undistorted and distorted). We emphasize that the original undistorted image was used neither to adjust the filter nor to restore the image: the tuning was carried out "blind".</p>
<p>Figure 6. Images of "Lena": (a) distorted with blur σ = 3, (b) processed.</p>
      <p>This algorithm for image restoration without using a reference image can also be used to restore color
images. In this case, the processing is performed using the same algorithms for each component.</p>
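<p>A trivial sketch of this per-component processing (names are ours): the same single-channel restoration is applied to each colour plane independently.</p>

```python
def process_color(channels, restore_channel):
    """Apply the same single-channel restoration filter to each
    colour component (e.g. R, G and B) independently."""
    return [restore_channel(ch) for ch in channels]
```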
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
<p>The proposed method of correcting dynamic distortions is based on the "blind" identification of the parameters of the restoring filter, which does not require a reference image. A program designed for image processing on mobile devices has been developed for the Android platform. An example of processing a test image obtained by modeling a low-pass Gaussian filter is given: for an image with blur σ = 3, the restoration achieves PSNR = 25.13. The results obtained may be of interest to users of tablets and smartphones. The authors plan to develop the application further in order to make it accessible to a wide range of users.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgements</title>
      <p>The work was supported by the Ministry of Education and Science of the Russian Federation and the
Russian Foundation for Basic Research, projects No. 17-29-03112, No. 16-07-00729-a.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] Mail.Ru Group Mobile Internet in Russia (Access mode: https://corp.mail.ru/media/files/40314-researchmobilemail.pdf)</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] Greisukh G I, Ezhov E G, Kazin S V and Stepanov S A 2017 Diffractive elements for imaging optics of mobile communication devices Computer Optics 41(4) 581-584 DOI: 10.18287/2412-6179-2017-41-4-581-584</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] Greisukh G I, Ezhov E G, Kazin S V and Stepanov S A 2017 Single-layer kinoforms for cameras and video cameras of mobile communication devices Computer Optics 41(2) 218-226 DOI: 10.18287/0134-2452-2017-41-2-218-226</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] Nikonorov A V, Petrov M V, Bibikov S A, Kutikova V V, Morozov A A and Kazanskiy N L 2017 Image restoration in diffractive optical systems using deep learning and deconvolution Computer Optics 41(6) 875-887 DOI: 10.18287/2412-6179-2017-41-6-875-887</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] Soifer V A 2010 Computer Image Processing, Part II: Methods and Algorithms (VDM Verlag) p 584</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] Pratt W 1982 Digital Image Processing (Moscow: Mir) vol 2 p 480</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] Fursov V A and Yakimov P Y 2017 Internet technology for correcting dynamic distortions on images in mobile devices Proceedings of the XIX All-Russian Scientific Conference (Moscow: Keldysh Institute of Applied Mathematics) 436-445 DOI: 10.20948/abrau-2017-09</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] Fursov V A 2018 Construction of quadratic-exponential FIR filters with an extended average frequency response region Computer Optics 42(2) 297-305 DOI: 10.18287/2412-6179-2018-42-2-297-305</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] Class AsyncTask (Access mode: http://developer.alexanderklimov.ru/android/theory/AsyncTask.php)</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] Android developers (Access mode: https://developer.android.com/index.html)</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] Paramonov I V 2013 Development of mobile applications for the Android platform: a tutorial (Yaroslavl: YarSU) p 88</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] Hardy B, Phillips B, Stuart K and Marsicano C 2016 Android Programming for Professionals (St. Petersburg: Peter) p 640</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] Howse J 2013 Android Application Programming with OpenCV (Packt Publishing Ltd.) 1-131</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>