<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Inpainting Using F-Transform for Cartoon-Like Images</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Pavel</forename><surname>Vlašánek</surname></persName>
							<email>pavel.vlasanek@osu.cz</email>
							<affiliation key="aff0">
								<orgName type="department">Institute for Research and Applications of Fuzzy Modeling</orgName>
								<orgName type="institution">University of Ostrava</orgName>
								<address>
									<addrLine>30. dubna 22</addrLine>
									<settlement>Ostrava</settlement>
									<country key="CZ">Czech Republic</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Irina</forename><surname>Perfilieva</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Institute for Research and Applications of Fuzzy Modeling</orgName>
								<orgName type="institution">University of Ostrava</orgName>
								<address>
									<addrLine>30. dubna 22</addrLine>
									<settlement>Ostrava</settlement>
									<country key="CZ">Czech Republic</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Inpainting Using F-Transform for Cartoon-Like Images</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">661E3AECE8050E28C20AFA292B81148C</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T10:11+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>We propose a modification of an image inpainting technique based on the F-transform, dedicated to cartoon-like images. Such images have typical features which must be taken into consideration; these features make the original algorithm ineffective because of its isotropic nature. The proposed modification makes the algorithm anisotropic.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Image restoration in the sense of object removal or damage recovery, the so-called image inpainting, is a challenging task in image processing. Let us consider an input image I which contains unwanted pixels considered as damage. In the process of image inpainting, the damaged area should be erased and replaced by a proper part of I. The selection of the proper part is crucial. One option is to choose a square-shaped patch and replace the damaged area by its copy; in that case, we speak about patch-based image inpainting <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b5">6,</ref><ref type="bibr" target="#b6">7,</ref><ref type="bibr" target="#b7">8]</ref>. In this paper, as in many others, we use the principle of techniques that take the colors of individual pixels in the close neighborhood of the damaged area into consideration <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b2">3,</ref><ref type="bibr" target="#b3">4,</ref><ref type="bibr" target="#b4">5]</ref>.</p><p>The structure of the paper is as follows. Section 2 gives preliminaries, including information about the F-transform and details about its two types. Section 3 describes the basics of the specific type of images used in this paper, and Section 4 gives information about mathematical morphology. A detailed description of the proposed technique is in Section 5, and the conclusion is given in Section 6.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Preliminaries</head><p>Let us fix the following notation to be used throughout the paper. Image I is a 2D vector function</p><formula xml:id="formula_0">I : [0, M] × [0, N] → [0, 255] 3 ,</formula><p>where [0, 255] 3 stands for the pixel intensities in three color channels. We denote [0, M] = {0, 1, 2, . . . , M}, [0, N] = {0, 1, 2, . . . , N} and [0, 255] = {0, 1, 2, . . . , 255}. Therefore, M + 1 is the image width and N + 1 is the image height. Image I is assumed to be partially defined: it is defined (known) on the area Φ and undefined (unknown, damaged) on the area Ω. The border between these areas is denoted by δ Ω and assumed to be unknown. It is assumed that</p><formula xml:id="formula_1">Φ ∩ Ω = ∅ and Φ ∪ Ω ∪ δ Ω = [0, M] × [0, N].</formula><p>Mask S is a binary image whose white pixels denote the unknown area Ω ∪ δ Ω. The mask is created by the user with respect to the areas intended for deletion. The notation is illustrated in Fig. <ref type="figure" target="#fig_0">1</ref>. We focus on image restoration. By this we mean that pixels from Ω ∪ δ Ω should be replaced by pixels computed from Φ. The resulting image should give the impression that the damage is not present.</p></div>
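The notation above can be made concrete in a small sketch (our own illustration, not taken from the paper; we encode the mask with 255 for white, i.e. unknown, pixels):

```python
import numpy as np

# A small (N+1) x (M+1) grayscale image I, stored row-major as I[y, x].
M1, N1 = 6, 5                                  # width M+1 = 6, height N+1 = 5
I = np.random.default_rng(0).integers(0, 256, size=(N1, M1))

# Mask S: white (255) marks the unknown area (Omega and its border),
# black (0) marks the known area Phi, following the convention of Fig. 1.
S = np.zeros((N1, M1), dtype=np.uint8)
S[2:4, 2:5] = 255                              # area the user marked for deletion

phi = (S == 0)                                 # characteristic function of Phi
omega = ~phi                                   # unknown pixels to be inpainted
assert phi.sum() + omega.sum() == M1 * N1      # the two areas cover the domain
```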
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">F 0 -Transform</head><p>Below, we recall the definition of a fuzzy partition <ref type="bibr" target="#b9">[10]</ref>. Fuzzy sets A 0 , . . . , A m , identified with their membership functions (basic functions) A 0 , . . . ,</p><formula xml:id="formula_2">A m : [0, M] → [0, 1], establish a fuzzy partition of [0, M] with nodes 0 = x 0 &lt; x 1 &lt; • • • &lt; x m = M if the following conditions are fulfilled: 1) A k : [0, M] → [0, 1], A k (x k ) = 1; 2) A k (x) = 0 if x / ∈ (x k−1 , x k+1 ), k = 0, . . . , m; 3) A k (x) is continuous; 4) A k (x) strictly increases on [x k−1 , x k ], k = 1, . . . , m, and strictly decreases on [x k , x k+1 ], k = 0, . . . , m − 1; 5) ∑ m k=0 A k (x) = 1, x ∈ [0, M].</formula><p>We say that the fuzzy partition given by A 0 , . . . , A m is an h-uniform fuzzy partition if the nodes x k = hk, k = 0, . . . , m, are equidistant, h = M/m, and two additional properties are met:</p><formula xml:id="formula_3">6) A k (x k − x) = A k (x k + x), x ∈ [0, h], k = 0, . . . , m; 7) A k (x) = A k−1 (x − h), k = 1, . . . , m, x ∈ [x k−1 , x k+1 ].</formula><p>The parameter h will be referred to as a radius.</p><p>Assume that fuzzy sets A 0 , . . . , A m establish a fuzzy partition of [0, M]. The following vector of real numbers F 0 m [I] = (F 0 0 , . . . , F 0 m ) is the (direct) discrete F 0 -transform of I with respect to A 0 , . . . , A m , where the k-th component F 0 k is defined by</p><formula xml:id="formula_4">F 0 k = ∑ M x=0 A k (x)I(x) / ∑ M x=0 A k (x) , k = 0, . . . , m.<label>(1)</label></formula><p>Let us introduce the F-transform of a 2D grayscale image I that is considered as a function</p><formula xml:id="formula_5">I : [0, M] × [0, N] → [0, 255].</formula><p>Let A 0 , . . . , A m and B 0 , . . . , B n be basic functions, A 0 , . . . , A m : [0, M] → [0, 1] be a fuzzy partition of [0, M] and B 0 , . . . 
, B n :</p><formula xml:id="formula_6">[0, N] → [0, 1] be a fuzzy partition of [0, N].</formula><p>We say that the (m + 1) × (n + 1) matrix of real numbers</p><formula xml:id="formula_7">[F 0 kl ] is the (discrete) F-transform of I with respect to {A 0 , . . . , A m } and {B 0 , . . . , B n } if, for all k = 0, . . . , m, l = 0, . . . , n, F 0 kl = ∑ N y=0 ∑ M x=0 I(x, y)A k (x)B l (y) / ∑ N y=0 ∑ M x=0 A k (x)B l (y) .<label>(2)</label></formula><p>The coefficients F 0 kl are called the components of the F-transform.</p></div>
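As a minimal numeric illustration of (1) and (2), the following sketch computes the direct F 0 -transform of a grayscale image using triangular basic functions that form an h-uniform fuzzy partition (the function names are ours; this is an illustrative sketch, not the authors' implementation):

```python
import numpy as np

def triangular_partition(size, m):
    """h-uniform fuzzy partition of [0, size-1] by m+1 triangular basic
    functions A_0, ..., A_m; returns an (m+1, size) matrix of memberships."""
    h = (size - 1) / m                          # the radius of the partition
    nodes = h * np.arange(m + 1)                # equidistant nodes x_k = h*k
    x = np.arange(size)
    return np.maximum(0.0, 1.0 - np.abs(x[None, :] - nodes[:, None]) / h)

def f0_components(I, A, B):
    """Direct discrete F0-transform of a 2D image per Eq. (2).
    I has shape (N+1, M+1); A, B partition the x- and y-axis respectively."""
    num = np.einsum('yx,kx,ly->kl', I, A, B)    # sum of I(x,y) A_k(x) B_l(y)
    den = np.outer(A.sum(axis=1), B.sum(axis=1))
    return num / den

A = triangular_partition(9, 4)                  # partition of [0, 8] with h = 2
# Ruspini condition 5): the basic functions sum to 1 at every pixel.
assert np.allclose(A.sum(axis=0), 1.0)
```

Each component F 0 kl is thus a weighted average of the pixels covered by A k × B l, which is why the F-transform smooths the image.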
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">F 1 -Transform</head><p>In this section, we recall the (direct) F 1 -transform as it was presented in <ref type="bibr" target="#b10">[11]</ref>.</p><p>Let</p><formula xml:id="formula_8">{A k × B l | k = 0, . . . , m, l = 0, . . . , n}</formula><p>be a fuzzy partition of [0, M] × [0, N], and let L 1 2 (A k ) ⊆ L 2 (A k ) (L 1 2 (B l ) ⊆ L 2 (B l )) be the linear span of the set consisting of the two orthogonal polynomials</p><formula xml:id="formula_9">P 0 k (x) = 1, P 1 k (x) = x − x k , (Q 0 l (y) = 1, Q 1 l (y) = y − y l ),</formula><p>where 1 denotes the respective constant function. Here, L 2 (A k ) is the Hilbert space of square-integrable functions f : [x k−1 , x k+1 ] → R with the weighted inner product ⟨ f , g⟩ k given by</p><formula xml:id="formula_12">⟨ f , g⟩ k = ∫ x k−1 x k+1 f (x)g(x)A k (x) dx,<label>(3)</label></formula><p>where the weight function is equal to A k ; the space L 2 (B l ) is defined analogously.</p><p>Analogously, let</p><formula xml:id="formula_10">L 1 2 (A k × B l ) ⊆ L 2 (A k × B l )</formula><p>be the linear span of the set consisting of the three orthogonal polynomials</p><formula xml:id="formula_11">S 00 kl (x, y) = 1, S 10 kl (x, y) = x − x k , S 01 kl (x, y) = y − y l .</formula><p>Let I ∈ L 2 ([0, M] × [0, N]), and let F 1 kl be the orthogonal projection of I| [x k−1 ,x k+1 ]×[y l−1 ,y l+1 ] on the subspace L 1 2 (A k × B l ), k = 0, . . . , m, l = 0, . . . , n. We say that the matrix F 1 mn [I] = (F 1 kl ), k = 0, . . . , m, l = 0, . . . , n, is the F 1 -transform of I with respect to {A k × B l | k = 0, . . . , m, l = 0, . . . , n}, and F 1 kl is the corresponding F 1 -transform component.</p><p>The F 1 -transform components of I are linear polynomials of the form</p><formula xml:id="formula_13">F 1 kl (x, y) = c 00 kl + c 10 kl (x − x k ) + c 01 kl (y − y l ),<label>(4)</label></formula><p>where the coefficients are given by</p><formula xml:id="formula_14">c 00 kl = ∑ N y=0 ∑ M x=0 I(x, y)A k (x)B l (y) / ∑ N y=0 ∑ M x=0 A k (x)B l (y) , c 10 kl = ∑ N y=0 ∑ M x=0 I(x, y)(x − x k )A k (x)B l (y) / ∑ N y=0 ∑ M x=0 (x − x k ) 2 A k (x)B l (y) , c 01 kl = ∑ N y=0 ∑ M x=0 I(x, y)(y − y l )A k (x)B l (y) / ∑ N y=0 ∑ M x=0 (y − y l ) 2 A k (x)B l (y) .<label>(5)</label></formula></div>
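The coefficients in (5) are plain weighted least-squares sums and can be computed directly; below is a sketch under the same triangular-partition assumption as before (the names are ours, and the boundary components are less reliable because the boundary basic functions are not symmetric):

```python
import numpy as np

def triangular_partition(size, m):
    """h-uniform partition of [0, size-1]; returns memberships and nodes."""
    h = (size - 1) / m
    nodes = h * np.arange(m + 1)
    x = np.arange(size)
    A = np.maximum(0.0, 1.0 - np.abs(x[None, :] - nodes[:, None]) / h)
    return A, nodes

def f1_coefficients(I, A, B, nodes_x, nodes_y):
    """Coefficients c00, c10, c01 of the F1-transform components, Eq. (5)."""
    dx = np.arange(I.shape[1])[None, :] - nodes_x[:, None]   # x - x_k
    dy = np.arange(I.shape[0])[None, :] - nodes_y[:, None]   # y - y_l
    w = np.einsum('kx,ly->klyx', A, B)                       # A_k(x) B_l(y)
    c00 = np.einsum('klyx,yx->kl', w, I) / w.sum(axis=(2, 3))
    c10 = (np.einsum('klyx,yx,kx->kl', w, I, dx)
           / np.einsum('klyx,kx->kl', w, dx ** 2))
    c01 = (np.einsum('klyx,yx,ly->kl', w, I, dy)
           / np.einsum('klyx,ly->kl', w, dy ** 2))
    return c00, c10, c01

A, nx = triangular_partition(17, 4)
B, ny = triangular_partition(13, 3)
I = np.tile(np.arange(17.0), (13, 1))        # a pure horizontal ramp I(x, y) = x
c00, c10, c01 = f1_coefficients(I, A, B, nx, ny)
```

On the ramp, c10 estimates the derivative in x (here 1) and c01 the derivative in y (here 0); this derivative-like behaviour is what the proposed method later exploits for edge detection.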
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">F-Transform Image Inpainting</head><p>In <ref type="bibr" target="#b11">[12]</ref>, the technique of F-transforms was proposed for inpainting. It consists of two steps: the direct and the inverse F 0 -transform. The direct step is described in the previous section, whereas the inverse is as follows:</p><formula xml:id="formula_15">O(x, y) = m ∑ k=0 n ∑ l=0 F 0 kl A k (x)B l (y),<label>(6)</label></formula><p>where O is the output (reconstructed) image. In fact, the algorithm computes the F-transform components of the input image I and afterwards spreads the components back to the size of I. For details, see <ref type="bibr" target="#b11">[12]</ref>.</p><p>Let us recall the basics of the technique and illustrate its adaptation to cartoon images. The original technique works with the assumption that damaged pixels of I should not be included in a component value. For that purpose, the binary mask S is used in the computation:</p><formula xml:id="formula_16">F 0 kl = ∑ N y=0 ∑ M x=0 I(x, y)S(x, y)A k (x)B l (y) / ∑ N y=0 ∑ M x=0 S(x, y)A k (x)B l (y) .</formula><p>This approach works well for photos, as was shown in <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b14">15,</ref><ref type="bibr" target="#b11">12,</ref><ref type="bibr" target="#b8">9]</ref>. For cartoon images, the quality of the reconstruction is not sufficient because of the isotropic nature of the algorithm: edges are not taken into consideration during the computation.</p></div>
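Combining the masked components with the inverse transform (6) gives one whole pass of the original inpainting method. A runnable sketch of this pass (our own simplification: here S == 1 encodes the known area Φ, matching its use in the component formula, and the radius is assumed large enough that every component sees at least one known pixel):

```python
import numpy as np

def tri(size, m):
    """Triangular h-uniform partition of [0, size-1] as an (m+1, size) matrix."""
    h = (size - 1) / m
    nodes = h * np.arange(m + 1)
    x = np.arange(size)
    return np.maximum(0.0, 1.0 - np.abs(x[None, :] - nodes[:, None]) / h)

def ft_inpaint(I, S, m, n):
    """One F0-transform inpainting pass: components are computed from known
    pixels only (S == 1 on Phi), then spread back by the inverse transform (6)."""
    A, B = tri(I.shape[1], m), tri(I.shape[0], n)
    w = np.einsum('kx,ly->klyx', A, B) * S[None, None, :, :]
    F = np.einsum('klyx,yx->kl', w, I) / w.sum(axis=(2, 3))   # masked components
    O = np.einsum('kl,kx,ly->yx', F, A, B)                    # inverse, Eq. (6)
    return np.where(S == 1, I, O)            # damaged pixels get the estimate

I = np.full((16, 16), 100.0)                 # a flat gray image...
S = np.ones((16, 16)); S[6:10, 6:10] = 0     # ...with a 4x4 damaged hole
out = ft_inpaint(I * S, S, m=5, n=5)
```

For a flat image, the hole is refilled exactly; on real images, each damaged pixel receives a blend of the surrounding component averages, which is precisely the isotropic behaviour criticized above.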
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Cartoon Images</head><p>In this paper, we suggest an inpainting technique aimed at images with two specific features:</p><p>• a limited color palette,</p><p>• strong and thick uni-color edges.</p><p>These features are typical of simple cartoon images, as can be seen in Fig. <ref type="figure" target="#fig_2">2</ref>.</p><p>For testing purposes, we created a set of artificial images with the same features. The set is shown in Fig. <ref type="figure" target="#fig_3">3</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Mathematical Morphology</head><p>The application of mathematical morphology <ref type="bibr" target="#b13">[14]</ref> is an important step in the proposed method. Let us give a short description of this technique.</p><p>In mathematical morphology, a structuring element is selected and applied to the input image; in our method, it is applied to a binary image. We recall three main operations: erosion, dilation and closing.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Erosion</head><p>The erosion is defined as follows:</p><formula xml:id="formula_17">I ⊖ T = {z ∈ [0, M] × [0, N] | T z ⊆ I},</formula><p>where T is a structuring element and z is a translation vector. The operator of binary erosion is in fact a test of whether image I contains areas shaped like T .</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Dilation</head><p>The dilation is defined as follows:</p><formula xml:id="formula_18">I ⊕ T = ⋃ t∈T I t ,</formula><p>where T is a structuring element and I t is the translation of I by t.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3">Closing</head><p>The operator of closing is the erosion of the dilation, defined as follows:</p><formula xml:id="formula_19">I • T = (I ⊕ T ) ⊖ T.</formula><p>The effect of binary closing is the filling of small holes and imperfections in the image I.</p></div>
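The three operators can be sketched directly from the set definitions above. A minimal binary implementation with zero padding at the border (the structuring element T is given as a list of (dy, dx) offsets; the helper names are ours):

```python
import numpy as np

def shift(I, dy, dx):
    """Translate binary image I by (dy, dx), filling the border with 0."""
    out = np.zeros_like(I)
    H, W = I.shape
    out[max(dy, 0):H + min(dy, 0), max(dx, 0):W + min(dx, 0)] = \
        I[max(-dy, 0):H + min(-dy, 0), max(-dx, 0):W + min(-dx, 0)]
    return out

def dilate(I, T):
    """I + T (dilation): the union of the translates I_t over all t in T."""
    return np.logical_or.reduce([shift(I, dy, dx) for dy, dx in T])

def erode(I, T):
    """I - T (erosion): keep z only if the whole of T placed at z fits in I."""
    return np.logical_and.reduce([shift(I, -dy, -dx) for dy, dx in T])

def close(I, T):
    """Closing: erosion of the dilation; fills small holes and gaps in I."""
    return erode(dilate(I, T), T)

line = np.zeros((5, 9), dtype=bool)
line[2, :] = True
line[2, 4] = False                   # a one-pixel gap in a horizontal line
T = [(0, -1), (0, 0), (0, 1)]        # a 1x3 horizontal structuring element
```

Closing with the 1×3 element reconnects the broken line, which is exactly how the proposed method later prolongs interrupted edges.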
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Novel Inpainting Technique</head><p>If image I contains only a few colors and thick edges, a reconstruction using the original algorithm based on (<ref type="formula" target="#formula_15">6</ref>) is affected by visible artifacts. An illustration is in Fig. <ref type="figure" target="#fig_16">12</ref>. The newly proposed algorithm is based on the assumption that similar areas should be reconstructed independently. The main idea is to separate these areas and reconstruct each of them with respect to a particular color. In this paper, we propose to separate the edges (pixels with a high gradient) from the rest of the image, reconstruct their damaged parts, and continue with the other areas afterwards.</p><p>For this purpose, another binary image V is taken into consideration. The image V is created automatically during the reconstruction process, and it influences the computation of the F 0 -transform components as shown below:</p><formula xml:id="formula_20">F 0 kl = ∑ N y=0 ∑ M x=0 I(x, y)S(x, y)V (x, y)A k (x)B l (y) / ∑ N y=0 ∑ M x=0 S(x, y)V (x, y)A k (x)B l (y) .</formula><p>We can say that image V and mask S overlay image I. In this computation, mask S coincides with the characteristic function of the area Φ. Image V designates by 1 the so-called valid pixels; only these are used in the reconstruction process. Therefore, the edges are reconstructed from pixels of the known part of the edges only, and similarly for pixels from the other, non-edge areas. This feature changes the isotropic nature of the original inpainting algorithm to anisotropic, because pixel colors are not necessarily distributed to the whole neighbourhood.</p><p>Below, the proposed algorithm is illustrated on the input from Fig. <ref type="figure" target="#fig_5">4</ref>.</p><p>1) Compute the coefficients of the F 1 -transform of I.</p><p>Comment: At this step, we compute the coefficients c 00 , c 10 , c 01 of the F 1 -transform components of I in accordance with <ref type="formula" target="#formula_14">(5)</ref>. The output for the input in Fig. 
<ref type="figure" target="#fig_5">4</ref> is in Fig. <ref type="figure" target="#fig_7">5</ref>.</p><p>2) Upscale c 10 and c 01 to the size of image I and convert them to grayscale.</p><p>3) Update c 01 and c 10 by subtracting mask S from them.</p><p>Comment: By performing this update, we eliminate false edges. An illustration is in Fig. <ref type="figure" target="#fig_9">6</ref>.</p><p>4) Make shifted copies of c 01 and c 10 .</p><p>Comment: Edges are detected at the places with the highest gradient. Because of our assumption about thick edges in I, we copy c 01 and shift it to the left, and copy c 10 and shift it up. By doing this, we restrict horizontal and vertical edges.</p><p>5) Compose the binary image V from c 01 , c 10 and their shifted copies.</p><p>Comment: After this step, we obtain the binary image V . Its white pixels represent edges, whereas black pixels represent areas without a significant gradient. An illustration is in Fig. <ref type="figure" target="#fig_10">7</ref>.</p><p>6) Apply morphological closing to V .</p><p>Comment: The purpose of this step is to fill in all imperfections of V . In step 3, we subtracted the mask, and that created holes in the detected edge area. By closing, we fix these holes and prolong (connect) the appropriate parts of the image edges. An illustration is in Fig. <ref type="figure" target="#fig_11">8</ref>.</p><p>7) Use the white pixels of V to find the edge area of I and, by histogram analysis, determine its dominant color. Further on, this color is called the edge color.</p><p>8) Based on the edge color, divide I into V g and V c and subtract the mask from both. Turn V g and V c into binary images and apply morphological closing.</p><p>Comment: Image V g represents the edges of I, whereas V c represents the rest. Image V g contains holes because of the mask subtraction; by closing, we fill the holes. An illustration of this step is in Fig. <ref type="figure" target="#fig_12">9</ref>.</p><p>9) Find the intersection of mask S with the edge image V g .</p><p>Comment: The intersection determines the places on the edge area which are damaged. Let us name this intersection S g . An illustration is in Fig. <ref type="figure" target="#fig_13">10</ref>.</p><p>10) Use S g as a mask and V g as a valid pixel set for the edge reconstruction. 
Use S − S g as a mask and V c as a valid pixel set for the reconstruction of the rest.</p><p>Comment: Because we separated the edges from the rest, we can reconstruct these two parts independently. An illustration is in Fig. <ref type="figure" target="#fig_14">11</ref>.</p></div>
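The final two-pass reconstruction of step 10 can be sketched as follows (our own simplified illustration: the validity image V g is assumed to be already produced by steps 1-9, triangular basic functions are assumed, and the zero-denominator guard is a simplification not discussed in the paper):

```python
import numpy as np

def tri(size, m):
    """Triangular h-uniform partition of [0, size-1]."""
    h = (size - 1) / m
    nodes = h * np.arange(m + 1)
    x = np.arange(size)
    return np.maximum(0.0, 1.0 - np.abs(x[None, :] - nodes[:, None]) / h)

def f0_restore(I, valid, m, n):
    """F0 components weighted by 'valid' (the product S*V from the formula
    above), followed by the inverse transform (6)."""
    A, B = tri(I.shape[1], m), tri(I.shape[0], n)
    w = np.einsum('kx,ly->klyx', A, B) * valid[None, None, :, :]
    den = w.sum(axis=(2, 3))
    F = np.einsum('klyx,yx->kl', w, I) / np.where(den > 0, den, 1.0)
    return np.einsum('kl,kx,ly->yx', F, A, B)

def reconstruct(I, S, Vg, m, n):
    """Step 10: edge pixels and the rest are reconstructed independently.
    S == 1 on known pixels (Phi); Vg == 1 on the edge area."""
    Vc = 1 - Vg                              # the non-edge area
    Sg = (1 - S) * Vg                        # damaged pixels on the edge area
    Sc = (1 - S) * Vc                        # damaged pixels off the edge
    edge = f0_restore(I, S * Vg, m, n)       # valid pixels: known edge only
    rest = f0_restore(I, S * Vc, m, n)       # valid pixels: known non-edge only
    out = I.copy()
    out[Sg == 1] = edge[Sg == 1]
    out[Sc == 1] = rest[Sc == 1]
    return out

I = np.full((20, 20), 200.0)                 # light background...
I[:, 9:11] = 30.0                            # ...with a dark vertical edge
S = np.ones((20, 20)); S[8:12, 7:14] = 0     # damage crossing the edge
Vg = np.zeros((20, 20)); Vg[:, 9:11] = 1     # edge area, as if found by steps 1-9
out = reconstruct(I * S, S, Vg, m=4, n=4)
```

The damaged edge pixels are refilled from the known edge color and the damaged background from the background color, instead of the two being blended together as in the isotropic variant.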
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1">Examples and Comparison</head><p>Let us illustrate the proposed inpainting algorithm side by side with the original technique based on the F-transform. The images from Fig. <ref type="figure" target="#fig_3">3</ref> were damaged and reconstructed afterwards. The results are in Fig. <ref type="figure" target="#fig_16">12</ref>.</p><p>Let us magnify the details to demonstrate the difference at a higher resolution. In Fig. <ref type="figure" target="#fig_17">13</ref>, the comparison is given. The original technique blurs the lines, does not follow the edges and mixes colors together. The reason is the isotropic nature of the original formula. Thus, for cartoon images, we propose to use the different approach described in this paper. In Fig. <ref type="figure" target="#fig_19">14</ref>, the novel inpainting technique is illustrated on the set of cartoon images.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Conclusion</head><p>We propose a novel inpainting technique aimed at a specific type of images. The original inpainting technique based on the F-transform was applied to photos and introduced in <ref type="bibr" target="#b11">[12]</ref>.</p><p>The main idea of the novel algorithm is the division of the input image and the independent processing of its parts. In this introductory paper, we suggest dividing the image into two parts: the edges and the rest. The edges are separated using the coefficients of the F 1 -transform. Their damaged (missing) parts are connected together using mathematical morphology. Based on that, the missing parts of the edges are identified and reconstructed using the inpainting technique with updated formulas. The same is applied to the rest of the image. We illustrated our technique on two sets of images and compared it with the original one.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1</head><label>1</label><figDesc>Figure 1: a) Two areas where image I is defined (Φ) and undefined (Ω); b) mask S.</figDesc><graphic coords="1,332.32,182.45,95.20,95.20" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Test set of cartoon images.</figDesc><graphic coords="3,139.49,215.56,95.20,71.93" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Test set of artificial cartoon images.</figDesc><graphic coords="3,139.49,466.12,95.20,95.39" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Input image and mask for algorithm description.</figDesc><graphic coords="4,74.51,74.55,95.21,95.40" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Coefficients of F 1 -transform. Contrast was enhanced for better visibility.</figDesc><graphic coords="4,129.49,420.37,95.20,95.20" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_9"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Updated coefficients c 01 and c 10 . Contrast was enhanced for better visibility.</figDesc><graphic coords="4,322.32,74.55,95.20,95.20" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_10"><head>Figure 7 :</head><label>7</label><figDesc>Figure 7: Image V composed from c 01 , c 10 and their shifted copies.</figDesc><graphic coords="4,377.30,349.26,95.20,95.20" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_11"><head>Figure 8 :</head><label>8</label><figDesc>Figure 8: Morphological closing applied on Fig. 7.</figDesc><graphic coords="4,377.30,605.02,95.20,95.39" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_12"><head>Figure 9 :</head><label>9</label><figDesc>Figure 9: Binary division of image I to the edge area V g and the rest V c followed by a morphological closing.</figDesc><graphic coords="5,84.51,224.61,95.20,95.39" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_13"><head>Figure 10 :</head><label>10</label><figDesc>Figure 10: Mask of damaged part of the edges.</figDesc><graphic coords="5,139.49,455.66,95.20,95.39" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_14"><head></head><label></label><figDesc>Figure 11: a) Reconstruction of the edge; b) detail; c) reconstruction of the rest of the image I.</figDesc><graphic coords="5,387.30,193.86,95.20,95.39" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_16"><head>Figure 12 :</head><label>12</label><figDesc>Figure 12: Application of our algorithm to damaged images from Fig. 3. Images a), d), g) are the damaged ones, images b), e) and h) were reconstructed using the original technique and images c), f) and i) were reconstructed using the proposed one.</figDesc><graphic coords="5,406.34,520.02,57.12,57.23" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_17"><head>Figure 13 :</head><label>13</label><figDesc>Figure 13: Details of Fig. 12.</figDesc><graphic coords="6,141.39,361.09,71.40,71.40" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_19"><head>Figure 14 :</head><label>14</label><figDesc>Figure 14: The cartoon images from testing set in Fig. 2 damaged and reconstructed.</figDesc><graphic coords="6,374.92,506.07,99.97,93.04" type="bitmap" /></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgment</head><p>This work was supported by the project LQ1602 IT4Innovations excellence in science.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Synthesizing natural textures</title>
		<author>
			<persName><forename type="first">M</forename><surname>Ashikhmin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2001 symposium on Interactive 3D graphics</title>
				<meeting>the 2001 symposium on Interactive 3D graphics</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2001">2001</date>
			<biblScope unit="page" from="217" to="226" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">A variational model for filling-in gray level and color images</title>
		<author>
			<persName><forename type="first">C</forename><surname>Ballester</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Caselles</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Verdera</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Bertalmio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sapiro</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICCV 2001. Proceedings. Eighth IEEE International Conference on</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2001">2001. 2001</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="10" to="16" />
		</imprint>
	</monogr>
	<note>Computer Vision</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Navier-stokes, fluid dynamics, and image and video inpainting</title>
		<author>
			<persName><forename type="first">M</forename><surname>Bertalmio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">L</forename><surname>Bertozzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sapiro</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2001 IEEE Computer Society Conference on</title>
				<meeting>the 2001 IEEE Computer Society Conference on</meeting>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2001">2001. 2001. 2001</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page">355</biblScope>
		</imprint>
	</monogr>
	<note>Computer Vision and Pattern Recognition</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Image inpainting</title>
		<author>
			<persName><forename type="first">M</forename><surname>Bertalmio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sapiro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Caselles</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Ballester</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 27th annual conference on Computer graphics and interactive techniques</title>
				<meeting>the 27th annual conference on Computer graphics and interactive techniques</meeting>
		<imprint>
			<publisher>ACM Press/Addison-Wesley Publishing Co</publisher>
			<date type="published" when="2000">2000</date>
			<biblScope unit="page" from="417" to="424" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Nontexture inpainting by curvaturedriven diffusions</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">F</forename><surname>Chan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Shen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Visual Communication and Image Representation</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="436" to="449" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Multiresolution sampling procedure for analysis and synthesis of texture images</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">S</forename><surname>De</surname></persName>
		</author>
		<author>
			<persName><surname>Bonet</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 24th annual conference on Computer graphics and interactive techniques</title>
				<meeting>the 24th annual conference on Computer graphics and interactive techniques</meeting>
		<imprint>
			<publisher>ACM Press/Addison-Wesley Publishing Co</publisher>
			<date type="published" when="1997">1997</date>
			<biblScope unit="page" from="361" to="368" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Image quilting for texture synthesis and transfer</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Efros</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">T</forename><surname>Freeman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 28th annual conference on Computer graphics and interactive techniques</title>
				<meeting>the 28th annual conference on Computer graphics and interactive techniques</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2001">2001</date>
			<biblScope unit="page" from="341" to="346" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Texture synthesis by nonparametric sampling</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Efros</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">K</forename><surname>Leung</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Proceedings of the Seventh IEEE International Conference on</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="1999">1999. 1999</date>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="1033" to="1038" />
		</imprint>
	</monogr>
	<note>Computer Vision</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Interpolation techniques versus Ftransform in application to image reconstruction</title>
		<author>
			<persName><forename type="first">V</forename><surname>Pavel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Irina</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2014">2014. 2014</date>
			<biblScope unit="page" from="533" to="539" />
		</imprint>
	</monogr>
	<note>Fuzzy Systems (FUZZ-IEEE)</note>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Fuzzy transforms: Theory and applications</title>
		<author>
			<persName><forename type="first">I</forename><surname>Perfilieva</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Fuzzy sets and systems</title>
		<imprint>
			<biblScope unit="volume">157</biblScope>
			<biblScope unit="issue">8</biblScope>
			<biblScope unit="page" from="993" to="1023" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Differentiation by the F-transform and application to edge detection</title>
		<author>
			<persName><forename type="first">I</forename><surname>Perfilieva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Hodáková</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Hurtík</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Fuzzy Sets and Systems</title>
				<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Image reconstruction by means of F-transform</title>
		<author>
			<persName><forename type="first">I</forename><surname>Perfilieva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Vlašánek</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Knowledge-Based Systems</title>
		<imprint>
			<biblScope unit="volume">70</biblScope>
			<biblScope unit="page" from="55" to="63" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Fuzzy transform for image reconstruction</title>
		<author>
			<persName><forename type="first">I</forename><surname>Perfilieva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Vlašánek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Wrublová</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Uncertainty Modeling in Knowledge Engineering and Decision Making</title>
				<meeting><address><addrLine>Singapore</addrLine></address></meeting>
		<imprint>
			<publisher>World Scientific</publisher>
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Serra</surname></persName>
		</author>
		<title level="m">Image analysis and mathematical morphology</title>
				<imprint>
			<publisher>Academic press</publisher>
			<date type="published" when="1982">1982</date>
			<biblScope unit="volume">1</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Image reconstruction with usage of the F-transform</title>
		<author>
			<persName><forename type="first">I</forename><surname>Vlašánek</surname></persName>
		</author>
		<author>
			<persName><surname>Perfilieva</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Joint Conference CISIS&apos;12-ICEUTE&apos;12-SOCO&apos;12 Special Sessions</title>
				<meeting><address><addrLine>Berlin</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="507" to="514" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
