<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Inpainting Using F-Transform for Cartoon-Like Images</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Pavel Vlašánek</string-name>
          <email>pavel.vlasanek@osu.cz</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Irina Perfilieva</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute for Research and Applications of Fuzzy Modeling, University of Ostrava</institution>
          ,
          <addr-line>30. dubna 22, Ostrava</addr-line>
          ,
          <country country="CZ">Czech Republic</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2017</year>
      </pub-date>
      <volume>1885</volume>
      <fpage>167</fpage>
      <lpage>173</lpage>
      <abstract>
        <p>We propose a modification of an image inpainting technique based on the F-transform, dedicated to cartoon images. These images have typical features that must be taken into consideration, because they make the original algorithm, which is isotropic by nature, ineffective. The proposed modification makes it anisotropic.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>
        Image restoration in the sense of object removal or
damage recovery, the so-called image inpainting, is a challenging
task in image processing. Let us consider an input image
I which contains unwanted pixels considered as
damage. In the process of image inpainting, the damaged area
should be erased and replaced by some proper part of I.
The selection of the proper part is crucial. One option is
to choose a square-shaped patch and replace the damaged
area by its copy. In that case, we are talking about
patch-based image inpainting [
        <xref ref-type="bibr" rid="ref1 ref6 ref7 ref8">1, 6, 7, 8</xref>
        ]. In this paper, as in
many others, we use the principle of techniques that take
the colors of individual pixels in the close neighborhood of
the damaged area into consideration [
        <xref ref-type="bibr" rid="ref2 ref3 ref4 ref5">2, 3, 4, 5</xref>
        ].
      </p>
      <p>The structure of the paper is as follows. Section 2 gives
preliminaries, including information about the F-transform and
details about its two types. Section 3 describes the basics of
the specific type of images used in this paper, and Section
4 gives information about mathematical morphology. A
detailed description of the proposed technique is in Section 5, and
the conclusion is given in Section 6.
Let us fix the following notation to use throughout the
paper. Image I is a 2D vector function such that I :
[0, M] × [0, N] → [0, 255]^3, where [0, 255]^3 stands for pixel
intensities in three color channels. We denote [0, M] =
{0, 1, 2, . . . , M}, [0, N] = {0, 1, 2, . . . , N} and [0, 255] =
{0, 1, 2, . . . , 255}. Therefore, M + 1 is the image width
and N + 1 is the image height. Image I is assumed to
be partially defined: it is defined (known) on the area Φ
and undefined (unknown, damaged) on the area Ω. The
border between these areas is denoted by δΩ and
assumed to be unknown. It is assumed that Φ ∩ Ω = ∅ and
Φ ∪ Ω ∪ δΩ = [0, M] × [0, N]. Mask S is a binary image
whose white pixels denote the unknown area Ω ∪ δΩ. The
mask is created by the user with respect to the areas intended for
deletion. The notation is illustrated in Fig. 1.</p>
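      <p>To make the notation concrete, the following sketch (a hypothetical toy setup of our own, not taken from the paper) builds an image I and a mask S in this convention:</p>

```python
import numpy as np

# Toy illustration of the notation: I is (N+1) x (M+1), the binary mask S
# marks the unknown (damaged) area with 1, and Phi is its complement.
M, N = 7, 5
I = np.random.randint(0, 256, size=(N + 1, M + 1))  # pixel intensities
S = np.zeros((N + 1, M + 1), dtype=bool)
S[2:4, 3:6] = True            # a user-marked damaged rectangle

Phi = ~S                      # known pixels
assert not np.any(Phi & S)    # the known and unknown areas are disjoint
print(S.sum(), "damaged pixels out of", S.size)
```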
      <p>We are focused on image restoration. By this we mean
that pixels from Ω ∪ δΩ should be replaced by pixels from
Φ. The resulting image should give the impression that the
damage is not present.</p>
    </sec>
    <sec id="sec-2">
      <title>F0-Transform</title>
      <p>
        Below, we recall the definition of a fuzzy partition [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
Fuzzy sets A0, . . . , Am, identified with their membership
functions (basic functions) A0, . . . , Am : [0, M] → [0, 1],
establish a fuzzy partition of [0, M] with nodes 0 = x0 &lt; x1 &lt;
· · · &lt; xm = M if the following conditions are fulfilled:
1) Ak : [0, M] → [0, 1], Ak(xk) = 1;
2) Ak(x) = 0 if x ∉ (xk−1, xk+1), k = 0, . . . , m;
      </p>
      <sec id="sec-2-1">
        <title>3) Ak(x) is continuous;</title>
        <p>4) Ak(x) strictly increases on [xk−1, xk],
k = 1, . . . , m, and strictly decreases on [xk, xk+1], k =
0, . . . , m − 1;</p>
        <p>5) ∑_{k=0}^{m} Ak(x) = 1, x ∈ [0, M].</p>
        <p>We say that the fuzzy partition given by A0, . . . , Am is
an h-uniform fuzzy partition if the nodes xk = hk, k = 0, . . . , m,
are equidistant with h = M/m, and two additional properties are
met:
6) Ak(xk − x) = Ak(xk + x), x ∈ [0, h], k = 0, . . . , m;
7) Ak(x) = Ak−1(x − h), k = 1, . . . , m, x ∈ [xk−1, xk+1].
Parameter h will be referred to as the radius.</p>
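        <p>The conditions above are satisfied, for example, by triangular basic functions. A small sketch (our own illustration, not the paper's code) that builds an h-uniform partition and checks condition 5):</p>

```python
import numpy as np

# h-uniform fuzzy partition of [0, M] with triangular basic functions:
# A_k(x) = max(0, 1 - |x - x_k| / h), nodes x_k = h*k, radius h = M / m.
def triangular_partition(M, m):
    xs = np.arange(M + 1, dtype=float)
    h = M / m
    nodes = h * np.arange(m + 1)
    return np.maximum(0.0, 1.0 - np.abs(xs[None, :] - nodes[:, None]) / h)

A = triangular_partition(M=12, m=4)     # shape (m+1, M+1)
# Condition 5): the basic functions sum to 1 at every point of [0, M]
assert np.allclose(A.sum(axis=0), 1.0)
```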
        <p>Assume that fuzzy sets A0, . . . , Am establish a fuzzy
partition of [0, M]. The following vector of real numbers
F_m[I] = (F_0, . . . , F_m) is the (direct) discrete F-transform of
I w.r.t. A0, . . . , Am, where the k-th component F_k^0 is defined
by</p>
        <p>F_k^0 = ( ∑_{x=0}^{M} Ak(x) I(x) ) / ( ∑_{x=0}^{M} Ak(x) ),  k = 0, . . . , m.  (1)</p>
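        <p>As a sketch (assuming the triangular partition above), the discrete F0-transform of a 1-D signal per formula (1) can be computed as:</p>

```python
import numpy as np

# Direct discrete F0-transform, Eq. (1):
# F_k = sum_x A_k(x) I(x) / sum_x A_k(x), k = 0..m.
def f0_transform(signal, A):
    return (A @ signal) / A.sum(axis=1)

xs = np.arange(13, dtype=float)
nodes = 3.0 * np.arange(5)
A = np.maximum(0.0, 1.0 - np.abs(xs[None, :] - nodes[:, None]) / 3.0)
F = f0_transform(xs, A)        # linear signal I(x) = x
# interior components of a linear signal coincide with the nodes x_k
assert np.allclose(F[1:4], nodes[1:4])
```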
        <p>Let us introduce the F-transform of a 2D grayscale image
I that is considered as a function I : [0, M] × [0, N] →
[0, 255].</p>
        <p>Let A0, . . . , Am and B0, . . . , Bn be basic functions such that
A0, . . . , Am : [0, M] → [0, 1] establish a fuzzy partition of [0, M] and
B0, . . . , Bn : [0, N] → [0, 1] establish a fuzzy partition of [0, N].</p>
        <p>We say that the (m + 1) × (n + 1) matrix of real numbers [F_kl^0]
is the (discrete) F-transform of I with respect to
{A0, . . . , Am} and {B0, . . . , Bn} if for all k = 0, . . . , m, l =
0, . . . , n,</p>
        <p>F_kl^0 = ( ∑_{y=0}^{N} ∑_{x=0}^{M} I(x, y) Ak(x) Bl(y) ) / ( ∑_{y=0}^{N} ∑_{x=0}^{M} Ak(x) Bl(y) ).  (2)</p>
        <p>
          In this section, we recall the (direct) F1-transform as it has
been presented in [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. Let {Ak × Bl | k = 0, . . . , m, l =
0, . . . , n} be a fuzzy partition of [0, M] × [0, N]. Let L2_1(Ak) ⊆
L2(Ak) (L2_1(Bl) ⊆ L2(Bl), see footnote 1) be the linear span of the set
consisting of two orthogonal polynomials
        </p>
        <p>P_k^0(x) = 1, P_k^1(x) = x − xk
(Q_l^0(y) = 1, Q_l^1(y) = y − yl),
where 1 denotes the respective constant function.</p>
        <p>Analogously, let L2_1(Ak × Bl) ⊆ L2(Ak × Bl) be the linear
span of the set consisting of three orthogonal polynomials</p>
        <p>S_kl^00(x, y) = 1, S_kl^10(x, y) = x − xk, S_kl^01(x, y) = y − yl.
Let I ∈ L2([0, M] × [0, N]), and let F_kl^1 be the orthogonal
projection of I restricted to [xk−1, xk+1] × [yl−1, yl+1] on the subspace L2_1(Ak × Bl),
k = 0, . . . , m, l = 0, . . . , n.</p>
        <p>We say that the matrix F1_mn[I] = (F_kl^1), k = 0, . . . , m, l =
0, . . . , n, is the F1-transform of I with respect to {Ak × Bl |
k = 0, . . . , m, l = 0, . . . , n}, and F_kl^1 is the corresponding
F1-transform component.</p>
        <p><sup>1</sup> L2(Ak) is a Hilbert space of square-integrable functions f :
[xk−1, xk+1] → R with the weighted inner product ⟨ f , g⟩_k given by</p>
        <p>⟨ f , g⟩_k = ∫_{xk−1}^{xk+1} f (x) g(x) Ak(x) dx,  (3)</p>
        <p>where the weight function is equal to Ak.</p>
        <p>The F1-transform components of I are linear
polynomials of the form</p>
        <p>F_kl^1(x, y) = c_kl^00 + c_kl^10 (x − xk) + c_kl^01 (y − yl),  (4)</p>
        <p>where the coefficients are given by</p>
        <p>c_kl^00 = ( ∑_{y=0}^{N} ∑_{x=0}^{M} I(x, y) Ak(x) Bl(y) ) / ( ∑_{y=0}^{N} ∑_{x=0}^{M} Ak(x) Bl(y) ),
c_kl^10 = ( ∑_{y=0}^{N} ∑_{x=0}^{M} I(x, y)(x − xk) Ak(x) Bl(y) ) / ( ∑_{y=0}^{N} ∑_{x=0}^{M} (x − xk)^2 Ak(x) Bl(y) ),
c_kl^01 = ( ∑_{y=0}^{N} ∑_{x=0}^{M} I(x, y)(y − yl) Ak(x) Bl(y) ) / ( ∑_{y=0}^{N} ∑_{x=0}^{M} (y − yl)^2 Ak(x) Bl(y) ).  (5)</p>
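        <p>A sketch of the coefficient computation in (5) for a single component (k, l), using hypothetical helper names of our own:</p>

```python
import numpy as np

# F1-transform coefficients for one component (k, l), following Eq. (5):
# c00 is the weighted mean, c10 and c01 the weighted slopes in x and y.
def f1_coeffs(I, Ak, Bl, xk, yl):
    X = np.arange(I.shape[1], dtype=float)[None, :] - xk
    Y = np.arange(I.shape[0], dtype=float)[:, None] - yl
    W = Bl[:, None] * Ak[None, :]              # A_k(x) * B_l(y)
    c00 = (I * W).sum() / W.sum()
    c10 = (I * X * W).sum() / ((X ** 2) * W).sum()
    c01 = (I * Y * W).sum() / ((Y ** 2) * W).sum()
    return c00, c10, c01

# On the plane I(x, y) = 2x + 3y the slopes are recovered exactly
xs = np.arange(7, dtype=float)
Ak = np.maximum(0.0, 1.0 - np.abs(xs - 3.0) / 3.0)
Bl = Ak.copy()
I = 2.0 * xs[None, :] + 3.0 * xs[:, None]
c00, c10, c01 = f1_coeffs(I, Ak, Bl, xk=3.0, yl=3.0)
assert np.allclose([c00, c10, c01], [15.0, 2.0, 3.0])
```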
        <sec id="sec-2-1-1">
          <title>F-Transform Image Inpainting</title>
          <p>
            In [
            <xref ref-type="bibr" rid="ref12">12</xref>
            ], the technique of F-transforms was proposed for
the inpainting. It uses two steps: direct and inverse of the
0th-degree F-transform. The direct step is described in the
previous section whereas the inverse is as follows
m n
O(x, y) = ∑ ∑ Fk0l Ak(x)Bl (y),
          </p>
          <p>
            k=0 l=0
where O is the output (reconstructed) image. In fact, the
algorithm computes the F-transform components of the
input image I and spreads the components afterwards to the
size of I. For details see [
            <xref ref-type="bibr" rid="ref12">12</xref>
            ].
          </p>
          <p>Let us recall the basics of the technique and illustrate its
update for cartoon images. The original technique works
under the assumption that damaged pixels of I should not
be included in a component value. For that purpose, the
binary mask S is used in the computation:</p>
          <p>F_kl^0 = ( ∑_{y=0}^{N} ∑_{x=0}^{M} I(x, y)(1 − S(x, y)) Ak(x) Bl(y) ) / ( ∑_{y=0}^{N} ∑_{x=0}^{M} (1 − S(x, y)) Ak(x) Bl(y) ).  (6)</p>
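          <p>A component restricted to known pixels can be sketched as follows (our own illustration; damaged pixels, where S = 1, simply receive zero weight):</p>

```python
import numpy as np

# One masked F0-transform component: only known pixels (S == 0) contribute.
def masked_f0_component(I, S, Ak, Bl):
    W = Bl[:, None] * Ak[None, :] * (1.0 - S)   # zero weight on damage
    return (I * W).sum() / W.sum()

xs = np.arange(7, dtype=float)
Ak = np.maximum(0.0, 1.0 - np.abs(xs - 3.0) / 3.0)
I = np.full((7, 7), 5.0)
I[3, 3] = 999.0                                  # damaged garbage value
S = np.zeros((7, 7)); S[3, 3] = 1.0              # mark it in the mask
assert np.isclose(masked_f0_component(I, S, Ak, Ak), 5.0)
```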
          <p>
            This approach works well for photos as was shown
in [
            <xref ref-type="bibr" rid="ref12 ref13 ref15 ref9">13, 15, 12, 9</xref>
            ]. For cartoon images, the quality of
reconstruction is not sufficient because of the isotropic nature of
the algorithm. The problem is that edges are not taken into
consideration during the computation.
3
          </p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Cartoon Images</title>
      <p>In this paper, we suggest an inpainting technique aimed to be
applied to images with two specific features:
• limited color palette,
• strong and thick uni-color edges.</p>
      <p>These features are usually included in simple cartoon
images as can be seen in Fig. 2.</p>
      <p>For testing purposes, we created a set of artificial images
with the same features. The set is in Fig. 3 (panels: (a) Boy, (b) Goat, (c) Homer).</p>
      <p>
        Application of mathematical morphology [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] is an
important step in the proposed method. Let us give a short
description of this technique.
      </p>
      <p>In mathematical morphology, a structuring element is
selected and applied to the input image. For our method,
a binary image is used. We recall three main operations:
erosion, dilation and closing.
If image I contains only a few colors and thick edges,
reconstruction using the original algorithm based on (6) is affected
by visible artifacts. An illustration is in Fig. 12. The newly
proposed algorithm is based on the assumption that similar
areas should be reconstructed independently. The main idea is
to separate these areas and reconstruct each of them with
respect to a particular color. In this paper, we propose to
separate edges (pixels with high gradient) from the rest of
the image, reconstruct their damaged parts and continue
with the other areas afterwards.</p>
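      <p>The three operations can be sketched in a few lines (a minimal, self-contained illustration with a 3 × 3 square structuring element; not the paper's implementation):</p>

```python
import numpy as np

# Binary morphology with a 3x3 square structuring element.
def dilate(img):
    p = np.pad(img, 1)                       # background border
    return np.max([p[i:i + img.shape[0], j:j + img.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

def erode(img):
    p = np.pad(img, 1, constant_values=1)    # foreground border
    return np.min([p[i:i + img.shape[0], j:j + img.shape[1]]
                   for i in range(3) for j in range(3)], axis=0)

def closing(img):
    return erode(dilate(img))                # fills small holes and gaps

square = np.ones((5, 5), dtype=int)
square[2, 2] = 0                             # one-pixel hole
assert np.array_equal(closing(square), np.ones((5, 5), dtype=int))
```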
      <p>For this purpose, another binary image V is taken into
consideration. The image V is created automatically
during the reconstruction process, and it influences the
computation of the F0-transform components: in (6), only
pixels that are both known and marked as valid in V
contribute to a component value.</p>
      <p>We can say that image V and mask S overlap image I.
Mask S coincides with the characteristic function of the area
Ω ∪ δΩ. Image V designates by 1 the so-called valid pixels.
The latter are used in the reconstruction process. Therefore,
the edges are reconstructed from pixels of the known part
of the edges only and, similarly, for pixels from the other,
non-edge areas. This feature changes the isotropic nature of the
original inpainting algorithm to anisotropic, because pixel
colors are not necessarily distributed to the whole
neighbourhood.</p>
      <p>Below, the proposed algorithm is illustrated on the
input from Fig. 4 (panels: (a) I, (b) S).</p>
      <sec id="sec-3-1">
        <title>1) Compute the F1-transform.</title>
        <p>Comment: At this step, we compute the coefficients
c00, c10, c01 of the F1-transform components of I in accordance
with (5). The output for the input in Fig. 4 is in Fig. 5
(panels: (a) c00, (b) c01, (c) c10).</p>
        <p>2) Upscale c10 and c01 to the size of image I and convert
them to gray-scale.</p>
        <p>3) Update c01 and c10 by subtracting mask S from them.
Comment: Performing this update, we eliminate false
edges. An illustration is in Fig. 6.</p>
        <p>4) Make shifted copies of c01 and c10.</p>
        <p>Comment: Edges are detected in the places with the
highest gradient. Because of our assumption about thick
edges in I, we copy c01 and shift the copy to the left, and copy c10 and shift
the copy upwards. Doing this, we reinforce the horizontal and vertical edges.</p>
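        <p>Step 4 can be sketched with simple array shifts (our own illustration; np.roll stands in for the copy-and-shift operation, with wrap-around ignored for this sketch):</p>

```python
import numpy as np

# Shifted copies of the slope maps: c01 shifted one pixel left, c10 one
# pixel up; the union (pointwise maximum) thickens the edge responses.
def shifted_union(c01, c10):
    c01_left = np.roll(c01, -1, axis=1)
    c10_up = np.roll(c10, -1, axis=0)
    return np.maximum.reduce([c01, c01_left, c10, c10_up])

c01 = np.zeros((4, 4)); c01[1, 2] = 1.0
c10 = np.zeros((4, 4))
U = shifted_union(c01, c10)
assert U[1, 2] == 1.0 and U[1, 1] == 1.0    # response widened to the left
```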
        <p>5) Create new image V as the union of c01, c10 and their
shifted copies. Threshold V to obtain the binary
image.</p>
        <p>Comment: After this step, we obtain the binary image V. Its
white pixels represent edges, whereas black pixels
represent areas without significant gradient. An illustration is in
Fig. 7.</p>
        <p>6) Apply morphological closing to V.</p>
        <p>Comment: The purpose of this step is to fill in all
imperfections of V. In step 3, we subtracted the mask, and that
created holes in the detected edge area. By closing, we
fix these holes and prolong (connect) the appropriate parts of the
image edges. An illustration is in Fig. 8.
7) Use the white pixels of V to find the edge area of I and, by
histogram analysis, determine its dominant color.</p>
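        <p>The histogram analysis of step 7 can be sketched as follows (our own illustration, assuming a grayscale I for simplicity):</p>

```python
import numpy as np

# Dominant intensity of the pixels of I lying under the white pixels of V.
def dominant_color(I, V):
    values = I[V.astype(bool)]
    return int(np.bincount(values, minlength=256).argmax())

I = np.zeros((4, 4), dtype=np.intp); I[0] = 10; I[1] = 10; I[2] = 200
V = np.zeros((4, 4), dtype=bool); V[:3] = True   # edge area covers rows 0-2
assert dominant_color(I, V) == 10                # 10 appears most often
```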
        <p>Further on, this color is called the edge color.
8) Based on the edge color, divide I into Vg and Vc and
subtract the mask from both. Turn Vg and Vc into binary
images and apply morphological closing.</p>
        <p>Image Vg represents the edges of I, whereas Vc represents the rest.
Image Vg contains holes because of the mask subtraction.
By closing, we fill the holes. An illustration of this step is in
Fig. 9.</p>
        <p>9) Find the intersection of Vg and the mask S.</p>
        <p>The intersection determines the places on the edge area
which are damaged. Let us name this intersection Sg.
An illustration is in Fig. 10.</p>
        <p>10) Use Sg as a mask and Vg as the valid pixel set for edge
reconstruction. Use S − Sg as a mask and Vc as the valid
pixel set for reconstruction of the rest.</p>
        <p>Because we separated the edges from the rest, we can
reconstruct these two parts independently. An illustration is in
Fig. 11 (panels: (a) edge reconstruction, (b) detail of (a), (c) reconstruction of the rest).</p>
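        <p>Steps 9 and 10 can be outlined as follows (a sketch with a hypothetical helper inpaint_f0 standing for the masked F-transform reconstruction described above; not the authors' code):</p>

```python
import numpy as np

# Steps 9-10: reconstruct the edge area and the rest independently, each
# restricted to its own valid-pixel set, then merge the results.
def reconstruct(I, S, Vg, Vc, inpaint_f0):
    Sg = Vg & S                         # step 9: damaged part of the edges
    edges = inpaint_f0(I, Sg, Vg)       # edges filled from edge pixels only
    rest = inpaint_f0(I, S & ~Sg, Vc)   # the rest from non-edge pixels
    out = I.copy()
    out[Sg] = edges[Sg]
    out[S & ~Sg] = rest[S & ~Sg]
    return out
```

<p>Known pixels are never overwritten; only the two disjoint damaged sets Sg and S − Sg are filled, each from its own source area.</p>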
        <sec id="sec-3-1-1">
          <title>Examples and Comparison</title>
          <p>Let us illustrate the proposed inpainting algorithm side by
side with the original technique based on the F-transform. The
images from Fig. 3 were damaged and reconstructed
afterwards. The results are in Fig. 12.</p>
          <p>Let us magnify the details to demonstrate the difference
at a higher resolution. The comparison is given in Fig. 13,
which shows details of the circle, lines and shape images.
The original technique blurs the lines, does not follow the edges
and mixes colors together. The reason is the isotropic nature of
the original formula. Thus, for cartoon images, we propose
to use the different approach described in this paper.</p>
          <p>In Fig. 14, the novel inpainting technique is illustrated
on the set of cartoon images.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Conclusion</title>
      <p>
        We propose a novel inpainting technique aimed at a specific
type of images. The original inpainting technique based on the
F-transform was applied to photos and introduced in [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>The main idea of the novel algorithm is the division of the
input image and the independent processing of its parts. In
this introductory paper, we suggest dividing the image into
two parts: the edges and the rest. The edges are separated using
the coefficients of the F1-transform. Their damaged (missing) parts
are connected together using mathematical morphology.
Based on that, the missing parts of the edges are identified and
reconstructed using the inpainting technique with the updated
formulas. The same is applied to the rest of the image.</p>
      <p>We illustrated our technique on the two sets of images
and compared it with the original one.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgment</title>
      <p>This work was supported by the project LQ1602
IT4Innovations excellence in science.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Ashikhmin</surname>
          </string-name>
          .
          <article-title>Synthesizing natural textures</article-title>
          .
          <source>In Proceedings of the 2001 symposium on Interactive 3D graphics</source>
          , pages
          <fpage>217</fpage>
          -
          <lpage>226</lpage>
          . ACM,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>C.</given-names>
            <surname>Ballester</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Caselles</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Verdera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bertalmio</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Sapiro</surname>
          </string-name>
          .
          <article-title>A variational model for filling-in gray level and color images</article-title>
          .
          <source>In Computer Vision</source>
          ,
          <year>2001</year>
          .
          <article-title>ICCV 2001</article-title>
          .
          <article-title>Proceedings</article-title>
          . Eighth IEEE International Conference on, volume
          <volume>1</volume>
          , pages
          <fpage>10</fpage>
          -
          <lpage>16</lpage>
          . IEEE,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Bertalmio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. L.</given-names>
            <surname>Bertozzi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Sapiro</surname>
          </string-name>
          .
          <article-title>Navier-stokes, fluid dynamics, and image and video inpainting</article-title>
          .
          <source>In Computer Vision and Pattern Recognition</source>
          ,
          <year>2001</year>
          .
          <article-title>CVPR 2001</article-title>
          .
          <article-title>Proceedings of the 2001</article-title>
          IEEE Computer Society Conference on, volume
          <volume>1</volume>
          ,
          pages I-
          <fpage>355</fpage>
          . IEEE,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Bertalmio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Sapiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Caselles</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Ballester</surname>
          </string-name>
          .
          <article-title>Image inpainting</article-title>
          .
          <source>In Proceedings of the 27th annual conference on Computer graphics and interactive techniques</source>
          , pages
          <fpage>417</fpage>
          -
          <lpage>424</lpage>
          . ACM Press/Addison-Wesley Publishing Co.,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>T. F.</given-names>
            <surname>Chan</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Shen</surname>
          </string-name>
          .
          <article-title>Nontexture inpainting by curvaturedriven diffusions</article-title>
          .
          <source>Journal of Visual Communication and Image Representation</source>
          ,
          <volume>12</volume>
          (
          <issue>4</issue>
          ):
          <fpage>436</fpage>
          -
          <lpage>449</lpage>
          ,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J. S.</given-names>
            <surname>De Bonet</surname>
          </string-name>
          .
          <article-title>Multiresolution sampling procedure for analysis and synthesis of texture images</article-title>
          .
          <source>In Proceedings of the 24th annual conference on Computer graphics and interactive techniques</source>
          , pages
          <fpage>361</fpage>
          -
          <lpage>368</lpage>
          . ACM Press/AddisonWesley Publishing Co.,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Efros</surname>
          </string-name>
          and
          <string-name>
            <given-names>W. T.</given-names>
            <surname>Freeman</surname>
          </string-name>
          .
          <article-title>Image quilting for texture synthesis and transfer</article-title>
          .
          <source>In Proceedings of the 28th annual conference on Computer graphics and interactive techniques</source>
          , pages
          <fpage>341</fpage>
          -
          <lpage>346</lpage>
          . ACM,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Efros</surname>
          </string-name>
          and
          <string-name>
            <given-names>T. K.</given-names>
            <surname>Leung</surname>
          </string-name>
          .
          <article-title>Texture synthesis by nonparametric sampling</article-title>
          .
          <source>In Computer Vision</source>
          ,
          <year>1999</year>
          .
          <source>The Proceedings of the Seventh IEEE International Conference on</source>
          , volume
          <volume>2</volume>
          , pages
          <fpage>1033</fpage>
          -
          <lpage>1038</lpage>
          . IEEE,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>P.</given-names>
            <surname>Vlašánek</surname>
          </string-name>
          and
          <string-name>
            <given-names>I.</given-names>
            <surname>Perfilieva</surname>
          </string-name>
          .
          <article-title>Interpolation techniques versus F-transform in application to image reconstruction</article-title>
          .
          <source>In Fuzzy Systems (FUZZ-IEEE)</source>
          ,
          <year>2014</year>
          IEEE International Conference on, pages
          <fpage>533</fpage>
          -
          <lpage>539</lpage>
          . IEEE,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>I.</given-names>
            <surname>Perfilieva</surname>
          </string-name>
          .
          <article-title>Fuzzy transforms: Theory and applications</article-title>
          .
          <source>Fuzzy sets and systems</source>
          ,
          <volume>157</volume>
          (
          <issue>8</issue>
          ):
          <fpage>993</fpage>
          -
          <lpage>1023</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>I.</given-names>
            <surname>Perfilieva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Hodáková</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Hurtík</surname>
          </string-name>
          .
          <article-title>Differentiation by the F-transform and application to edge detection</article-title>
          .
          <source>Fuzzy Sets and Systems</source>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>I.</given-names>
            <surname>Perfilieva</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>Vlašánek</surname>
          </string-name>
          .
          <article-title>Image reconstruction by means of F-transform</article-title>
          .
          <source>Knowledge-Based Systems</source>
          ,
          <volume>70</volume>
          :
          <fpage>55</fpage>
          -
          <lpage>63</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>I.</given-names>
            <surname>Perfilieva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Vlašánek</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Wrublová</surname>
          </string-name>
          .
          <article-title>Fuzzy transform for image reconstruction</article-title>
          .
          <source>In Uncertainty Modeling in Knowledge Engineering and Decision Making</source>
          , Singapore,
          <year>2012</year>
          . World Scientific.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>J.</given-names>
            <surname>Serra</surname>
          </string-name>
          .
          <article-title>Image analysis and mathematical morphology</article-title>
          , v. 1. Academic press,
          <year>1982</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>P.</given-names>
            <surname>Vlašánek</surname>
          </string-name>
          and
          <string-name>
            <given-names>I.</given-names>
            <surname>Perfilieva</surname>
          </string-name>
          .
          <article-title>Image reconstruction with usage of the F-transform</article-title>
          . In International Joint Conference CISIS'12-ICEUTE'12-SOCO'12 Special Sessions, pages
          <fpage>507</fpage>
          -
          <lpage>514</lpage>
          , Berlin,
          <year>2013</year>
          . Springer.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>