<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>Attention-based Convolutional Neural Network for MRI Gibbs-ringing Artifact Suppression</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Lomonosov Moscow State University</institution>
          ,
          <addr-line>Moscow</addr-line>
          ,
          <country>Russia</country>
          <uri xlink:href="https://imaging.cs.msu.ru/ru">https://imaging.cs.msu.ru/ru</uri>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <abstract>
        <p>The Gibbs-ringing artifact is a common artifact in MRI image processing. Since MRI raw data are acquired in the frequency domain, a 2D inverse discrete Fourier transform is applied to visualize the data. The inability to take the inverse Fourier transform of the full spectrum (full k-space) leads to insufficient sampling of the high-frequency data and results in the well-known Gibbs phenomenon. It is worth noting that the truncation of high-frequency information also generates a significant blur, so some techniques from other image restoration problems (for example, the image deblurring task) can be successfully reused. We propose an attention-based convolutional neural network for Gibbs-ringing reduction which extends the recently proposed GAS-CNN (Gibbs-ringing Artifact Suppression Convolutional Neural Network). The proposed method includes a simplified non-linear mapping, amended by an LRNN (Layer Recurrent Neural Network) refinement block with a feature attention module controlling the correlation between the input and output tensors of the refinement unit. The research shows that the proposed post-processing refinement construction considerably simplifies the non-linear mapping.</p>
      </abstract>
      <kwd-group>
        <kwd>Gibbs-ringing artifacts</kwd>
        <kwd>Magnetic resonance imaging</kwd>
        <kwd>Attention CNN</kwd>
        <kwd>Image deringing</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Gibbs-ringing artifact reduction is an image restoration problem that can be solved by
mathematical methods of image processing.</p>
      <p>Gibbs oscillations (the Gibbs phenomenon) often occur near high-frequency image
features, for example, edges. The artifacts can be observed while mapping an image onto a finer
grid, during contrast enhancement, video compression and MRI data visualization. Slight
image distortions can remain invisible, while severe Gibbs artifacts may even hamper
patient diagnosis, if we refer to Gibbs oscillations caused by k-space
(Fourier space) truncation of the MRI frequency domain (see Fig. 1).</p>
      <p>The reported study was funded by RFBR, CNPq and MOST according to the research project
19-57-80014 (BRICS2019-394).</p>
      <p>A simple finite real-valued periodic function can be considered to disclose the mathematical
reasons for the Gibbs phenomenon:
$$\varphi(t) = a \ \text{if}\ t \in [-\tau/2,\, \tau/2], \quad \varphi(t) = 0 \ \text{if}\ t \in [-T/2,\, T/2] \setminus [-\tau/2,\, \tau/2]; \qquad \varphi(t) = \varphi(t + T), \tag{1}$$
where $\varphi(t)$ is a basic model of a contrast edge, $a$ is the amplitude of the edge, $\tau$ is the width of the pulse and $T$ is the
period of the model function.</p>
      <p>Assuming the Fourier series in complex form and $T = 2\pi$, (1) can be rewritten in
the form:
$$\varphi(t) = \sum_{k=-\infty}^{+\infty} d_k e^{i\omega_k t}, \tag{2}$$
where $d_k = \frac{1}{T}\int_{-T/2}^{T/2} \varphi(t)\, e^{-i\omega_k t}\, dt$, $\omega_k = \Omega k$, $\Omega = 2\pi/T$.</p>
      <p>Summing the coefficients gives the real form of the series:
$$\varphi(t) = \frac{a}{2} + \frac{2a}{\pi} \sum_{k=0}^{+\infty} \frac{(-1)^k}{2k+1} \cos\bigl((2k+1)\Omega t\bigr), \tag{3}$$
where $\Omega = 2\pi/T$.</p>
      <p>In practice it is impossible to include all terms in the Fourier series (3), so Gibbs
oscillations occur (see Fig. 2). The amplitude of the Gibbs oscillations is constant for a given
signal and does not depend on the chosen cut-off frequency.</p>
      <p>
        In this paper we propose a new CNN architecture for MRI Gibbs-ringing
suppression. It differs from the recently introduced GAS-CNN [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] model in its simplified architecture
of the non-linear mapping, followed by trainable LRNN [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] post-processing with
an attention block, which controls the correlation between the input and output tensors of the
post-processing unit. The proposed architecture outperforms GAS-CNN on the generated
synthetic testing dataset in terms of PSNR [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>The remainder of this paper is organized as follows. In Section 2 we review some
known methods for MRI Gibbs-ringing suppression. In Section 3 we describe the MRI
dataset generation, give a detailed overview of the proposed architecture and show
the profit of involving our modifications to the architecture. In Section 4 the results and
comparisons are presented. The work is concluded in Section 5.</p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>
        The Gibbs-ringing reduction task has been addressed by many methods so far. For example, the
problem can be tackled as a variational one, and the solution can be sought as a function
which minimizes the stated functional in some functional space ($L_2$ or $L_1$, for example):
$$J(u) = \frac{1}{2} \| u - u_0 \|^2 + \lambda \int_{\Omega} |\nabla u(x)|\, dx \to \min_{u \in U}, \tag{4}$$
where $u_0$ is the input Gibbs-corrupted image, $u$ is the sought Gibbs-free image from the
chosen functional space $U$, $\Omega$ is the image's area and $\lambda$ is the regularization parameter.
The parameter $\lambda$ can depend on the distance from the nearest image edge [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Joint ringing
estimation and suppression can be performed using sparse representations [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>
        Another recently introduced method is based on a search for optimal subpixel shifts [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
The approach is intended to find a unique best shift for each pixel in terms of
minimizing the total variation in some predefined pixel neighbourhood. The authors found the
neighbourhood $K = [1; 3]$ to be sufficient for most Gibbs-ringing cases. The proposed
approach was visually compared by the authors with median filtering and Lanczos filtering, and
it surpassed them.
      </p>
      <p>Deep learning methods have acquired great popularity in computer vision and
image processing. Convolutional neural networks map input images into high-dimensional
feature spaces, implement filtering with a set of convolutions and produce
output images using a final image reconstruction net.</p>
      <p>
        GAS-CNN [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] is an example of a very deep architecture used by its authors to
suppress the Gibbs-ringing artifact in MRI images. The authors proposed it as an extension of the
super-resolution model EDSR [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        The following distinctive features of the model were presented by the authors:
– external U-Net [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] like skip connections;
– decreased model size (a diminished feature space dimension);
– a flat architecture, as Gibbs oscillations are an almost local phenomenon (rejection of
spatial reduction layers, such as max pooling or convolution with stride 2).
GAS-CNN maps the input tensor into a high-dimensional feature space of depth 64 and then
implements non-linear residual filtering with 32 ResBlocks [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. The architecture is
concluded with a simple reconstruction net composed of one projection convolutional
layer.
      </p>
      <p>
        We chose this recently proposed model as a baseline and decided to conduct
research on ways of simplifying the non-linear mapping while maintaining the
generalization ability. Despite making a quite extensive analysis of GAS-CNN, for
example, showing the advantages of utilizing external skip connections and residual
learning and making comparisons with other methods (sinc filtering, bilateral filtering [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ],
NLM [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], GARCNN [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]), the authors of GAS-CNN did not pay much attention to
possible model redundancy. It deserves mentioning that the trend of recent years is to
propose hybrid refinement modules which make it possible to reduce the number of
convolutions in the ensemble [
        <xref ref-type="bibr" rid="ref13 ref14 ref2">2, 13, 14</xref>
        ], as the straightforward excessive stacking of
convolutional layers leads to learning degradation, vanishing gradients and so on.
      </p>
      <p>So, in this article we demonstrate a way to shrink the number of convolutions in
the non-linear mapping by a factor of two and preserve (even improve) the model's generalization
ability, utilizing the proposed attention LRNN refinement module.</p>
    </sec>
    <sec id="sec-3">
      <title>Proposed Architecture</title>
      <p>
        The proposed architecture is shown in Fig. 3. We call it GAS14-ACNN (Gibbs-ringing
Artifact Suppression Attention-based Convolutional Neural Network). It comprises
the following structural blocks: mapping input corrupted samples into a high-dimensional
feature space of depth 64 with the first convolution; performing non-linear
mapping with 14 RCAN blocks [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] (two times fewer than in GAS-CNN); implementing
trainable post-processing with the proposed attention LRNN refinement module; and
reconstructing the output image with the final projection convolutional layer.
      </p>
      <sec id="sec-3-1">
        <title>Dataset generation</title>
        <p>The training, validation and testing synthetic sets were generated from the ground truth MRI
dataset IXI1 using the following pipeline:
– apply the Fourier transform to a ground truth 256×256 image from the IXI dataset;
– crop the frequency spectrum: the central 1/9 part of the frequency domain is kept;
– implement zero-padding, so that the Gibbs-corrupted image fits the shape of the ground
truth image;
– apply the inverse Fourier transform to get the Gibbs-corrupted image.</p>
        <sec id="sec-3-1-1">
          <title>Zero-padding</title>
          <p>The dataset generation process is visualized in Fig. 4. Zero-padding is not a necessary step in Gibbs data generation: Gibbs-ringing can be synthesized just by cropping frequencies. In this work we include zero-padding to
create image pairs $\{I_{gt}, I_{Gibbs}\}_{i=1}^{N}$ of the same spatial size. Zero-padding is often used
before the inverse FFT to project an image onto a finer grid, and frequency cropping is often used to project an
image onto a coarser grid via the inverse Fourier transform.</p>
          <p>
            The IXI dataset contains 581 T1, 578 T2 and 578 PD volumes. First, the intersection of
these volumes was taken, producing 577 volumes which have all three modalities: T1,
T2 and PD. Then the first 400 volumes were utilized to synthesize the training set, the next 100
volumes to create the testing set, and the rest of the data was taken to generate the validation set
(1 http://brain-development.org/ixi-dataset/).
25 slices at both ends were discarded and every tenth slice was taken to produce a pair
$(I_{gt}, I_{Gibbs})_i$ [
            <xref ref-type="bibr" rid="ref1">1</xref>
            ]. So, the training, validation and testing sets consist of 10427, 2016 and
1617 image pairs respectively. T1, T2 and PD have different data ranges, thus max-min
normalization was used to map the input features to a single band:
          </p>
          <p>
$$I_{min} = \min(I_{GT}), \quad I_{max} = \max(I_{GT}), \tag{5}$$
$$I_{Gibbs\,normed} = \frac{I_{Gibbs} - I_{min}}{I_{max} - I_{min}}, \tag{6}$$
$$I_{GT\,normed} = \frac{I_{GT} - I_{min}}{I_{max} - I_{min}}, \tag{7}$$
where $I_{Gibbs}$ is the Gibbs-corrupted image, $I_{GT}$ is the ground truth image,
$I_{Gibbs\,normed}$ is the normed Gibbs-corrupted image and $I_{GT\,normed}$ is the normed ground truth
image.
          </p>
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>Non-Linear Mapping</title>
        <p>
          Images with the Gibbs-ringing artifact also exhibit a significant blur, as Gibbs-corrupted
images are generated by truncating high frequencies. Having noticed that the RCAN structural
module was successfully used in a recently published deep CNN architecture for
image deblurring [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], one of the proposed modifications to GAS-CNN is replacing
ResBlock [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] with RCAN (see Fig. 5) in the non-linear mapping. The key difference of the RCAN
module is the presence of trainable weights for each slice of the convolution output. These
weights are generated by applying a global pooling operation (calculating expectation
values over feature slices) and subsequently fusing the acquired features by a 1×1 ResBlock
with a sigmoid as the closing activation.
        </p>
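        <p>The described gating can be sketched in plain numpy (a minimal illustration; w1 and w2 are hypothetical stand-ins for the trainable 1×1 convolutions of the fusion block):</p>

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(features, w1, w2):
    """Channel attention in the spirit of RCAN: global average pooling gives one
    descriptor per feature slice; two projections (w1, w2, stand-ins for the 1x1
    convolutions) with a ReLU between them and a closing sigmoid produce
    per-channel weights that rescale the slices.
    features: (C, H, W); w1: (C, C // r); w2: (C // r, C)."""
    c = features.shape[0]
    pooled = features.reshape(c, -1).mean(axis=1)        # (C,) global pooling
    gate = sigmoid(np.maximum(pooled @ w1, 0.0) @ w2)    # (C,) weights in (0, 1)
    return features * gate[:, None, None]                # rescale each slice
```
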
        <p>To the best of our knowledge the authors of GAS-CNN did not provide code and
weights, so to make fair comparisons we trained all models presented here ourselves,
utilizing the same training procedure (refer to Section 4 for details) and the same
synthetically generated dataset.</p>
        <p>GAS-CNN and RCAN-GAS-CNN were trained to evaluate RCAN performance.
RCAN-GAS-CNN precisely matches the GAS-CNN architecture with the only difference
that the RCAN module is used in the non-linear mapping instead of an ordinary ResBlock. The
performance growth is shown in Table 1 and in Fig. 6.</p>
        <p>
          The LRNN refinement block was introduced in [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ], and its ideas have been effectively
incorporated into deblurring [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] and depth generation pipelines [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ].
        </p>
        <p>
          We utilized the LRNN approach in solving the Gibbs-ringing problem and, moreover,
extended it by an attention mechanism controlling the correlation between the LRNN input and
output features. Such an attention module aims to force LRNN to be a refinement block.
The authors of [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] used a similar attention unit to improve biomedical image segmentation;
nevertheless, our approach differs from the existing one [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] in the way we employ the
attention mechanism. In the current case, it has the sense of an additional constraint on
the refinement operation, whereas in [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] it is applied to group features based on their
correlation within a single feature tensor.
        </p>
        <p>LRNN has two input tensors: a feature tensor (the non-linear mapping result) to be
recursively processed, and a weights tensor. We acquire the weights by an auxiliary RNN weights
generation net (see Fig. 3), which is trained end-to-end with the whole neural
network. LRNN implements 4 recursive updates: left-to-right, right-to-left, top-to-down
and down-to-top, applying the rule:
$$H_{t+1} := (1 - \omega) \cdot H_{t+1} + \omega \cdot H_t, \tag{8}$$
where $H_t$ is the current processing row, if we refer to the top-to-down or down-to-top
operations, and $\omega$ is the corresponding weight slice. Subsequent concatenation and convolution fuse the recursively processed tensors
and conclude the LRNN operation.</p>
        <p>LRNN can be viewed as an alternative (hybrid) way to enlarge the receptive field and to
accumulate global spatial information within a layer. Despite the locality of Gibbs
oscillations mentioned above, accounting for the overall information within the layer turned out
to be very helpful and remarkably raised the generalization ability of the architecture.</p>
        <p>We trained two extra CNNs to reveal the LRNN advantages:
– RCAN8-GAS-CNN – the model accurately matches the original GAS-CNN, but the
number of blocks in the non-linear mapping is decreased by a factor of 4;
– RCAN8-GAS-CNN+LRNN 2 – the previously stated model, extended by the proposed
LRNN refinement.</p>
        <p>The obvious positive LRNN impact can be observed in Table 2 and in Fig. 7.
RCAN8-GAS-CNN+LRNN 2 has comparable performance with the original 4
times deeper GAS-CNN, whereas removing the LRNN post-processing leads to the
algorithm's degradation.</p>
        <p>The attention module is shown in Fig. 8. It gets three tensors as inputs: $x_1 \in \mathbb{R}^{C \times H \times W}$
– the non-linear mapping output (LRNN input), $x_2 \in \mathbb{R}^{H \times W \times C}$ – the LRNN output, and $x_3 \in \mathbb{R}^{C \times H \times W}$ – a transposed copy of $x_2$. The attention block performs a weighted regrouping of
the $x_3$ features in a manner that uplifts the feature at position $i$ mostly correlated with the $i$th feature
of LRNN's input tensor $x_1$.</p>
        <p>Assume $f_1, \ldots, f_C$ to be the $x_3$ features. Introduce the following latent variables: $A =
\{a_1, \ldots, a_C\}$. The value $a_i \in \{1, \ldots, C\}$ defines the $x_3$ feature mostly correlated with the $i$th
feature of LRNN's input tensor $x_1$. The weighted regrouping is executed via a computed
correlation tensor $\mathcal{C} \in \mathbb{R}^{C \times C}$ (a probability distribution map over the latent variable values).</p>
        <p>It deserves mentioning that there is an analogy with the word alignment task in classical
machine learning (for example, IBM Model 1). The proposed attention LRNN unit performs
like a feature aligner of the LRNN input and output tensors, forcing LRNN to be a refinement
block.</p>
        <p>Finally, we get the overall proposed architecture: RCAN14-GAS-CNN+Attention
LRNN 2 (see Fig. 3). We call it GAS14-ACNN. Table 3 shows the increase in
performance caused by the attention unit.</p>
        <p>The proposed attention-based convolutional neural network for Gibbs-ringing reduction was
implemented in Python 3 with the use of the deep learning framework TensorFlow 1.14. We
provide the implementation of our code at https://github.com/MaksimPenkin/GAS14-ACNN.</p>
        <sec id="sec-3-2-1">
          <title>Training Details</title>
          <p>
            The models were trained with the Adam optimizer [
            <xref ref-type="bibr" rid="ref16">16</xref>
            ] ($\beta_1 = 0.9$, $\beta_2 = 0.999$, $\varepsilon = 10^{-8}$) on
a GPU NVIDIA GeForce RTX 2080 Ti. The learning rate had a polynomial decay:
$$lr(x) = (lr_0 - lr_1) \cdot \left(1 - \frac{x}{M}\right)^p + lr_1, \tag{9}$$
where $x$ is the training step, $lr_0$ is the initial learning rate value, $lr_1$ is the final learning rate value,
$M = N_e N_d / bs$ is the number of training steps, $N_e$ is the number of epochs, $N_d$ is the number of
pairs in the training set, $bs$ is the number of pairs fed to the algorithm on the current training
step (batch size) and $p$ is the polynomial power.
          </p>
          <p>We used the following values of these parameters: $lr_0 = 10^{-4}$, $lr_1 = 0$, $N_e = 1000$,
$bs = 20$, $p = 0.3$.</p>
          <p>We applied the L1 loss function with L2 weights regularization ($\lambda = 10^{-4}$) to prevent the
models' overfitting.</p>
          <p>We utilized augmentation by rotations and flips for patches of shape 48×48
during training. 10 random patches were cropped from each training image before
augmentation. Validation and testing were performed on full-size images.</p>
          <p>All convolution kernels have spatial size 3×3, except for one projection
convolution just before the LRNN refinement unit: it has a kernel of spatial size 1×1, and it projects
the features onto a trainable manifold of lower dimension.</p>
          <p>GAS-CNN, the chosen baseline model, and the proposed GAS14-ACNN can be
compared in Fig. 9 and Fig. 10. It takes approximately 1.03 s and 0.05 s to process one
256×256 image for GAS-CNN on CPU and GPU respectively, and approximately
1.13 s and 0.08 s to process one 256×256 image for our GAS14-ACNN
on the same CPU and GPU.</p>
          <p>We proposed the new attention-based convolutional architecture GAS14-ACNN for
MRI Gibbs-ringing suppression. This architecture is an extension of the recently proposed
GAS-CNN model with a significantly simplified non-linear mapping, followed by an
attention LRNN unit to preserve the generalization ability. The presented attention mechanism acts
as an auxiliary constraint for the LRNN post-processing and as a feature filtering module. The
proposed GAS14-ACNN model outperforms the baseline GAS-CNN on the generated
synthetic testing set.</p>
        </sec>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Zhao</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          , Zhang, H.,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bian</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          , Zhang,
          <string-name>
            <given-names>T.</given-names>
            ,
            <surname>Zou</surname>
          </string-name>
          ,
          <string-name>
            <surname>X.</surname>
          </string-name>
          :
          <article-title>Gibbs-ringing artifact suppression with knowledge transfer from natural images to MR images</article-title>
          .
          <source>Multimedia Tools and Applications</source>
          ,
          <fpage>1</fpage>
          -
          <lpage>23</lpage>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Zhang</surname>
          </string-name>
          , J.,
          <string-name>
            <surname>Pan</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ren</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Song</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bao</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lau</surname>
            ,
            <given-names>R. W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>M. H.</given-names>
          </string-name>
          :
          <article-title>Dynamic scene deblurring using spatially variant recurrent neural networks</article-title>
          .
          <source>In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source>
          , pp.
          <fpage>2521</fpage>
          -
          <lpage>2529</lpage>
          . Salt Lake City,
          <string-name>
            <surname>UT</surname>
          </string-name>
          , USA (
          <year>2018</year>
          ). https://doi.org/10.1109/CVPR.2018.00267
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Al-Najjar</surname>
            ,
            <given-names>Y. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Soong</surname>
            ,
            <given-names>D. C.</given-names>
          </string-name>
          :
          <article-title>Comparison of image quality assessment: PSNR, HVS, SSIM, UIQI</article-title>
          .
          <source>Int. J. Sci. Eng. Res</source>
          <volume>3</volume>
          (
          <issue>8</issue>
          ),
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Sitdikov</surname>
            ,
            <given-names>I. T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Krylov</surname>
            ,
            <given-names>A. S.</given-names>
          </string-name>
          :
          <article-title>Variational Image Deringing Using Varying Regularization Parameter</article-title>
          .
          <source>Pattern Recognition and Image Analysis: Advances in Mathematical Theory and Applications</source>
          <volume>25</volume>
          (
          <issue>1</issue>
          ),
          <fpage>96</fpage>
          -
          <lpage>100</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Umnov</surname>
            ,
            <given-names>A. V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Krylov</surname>
            ,
            <given-names>A. S.</given-names>
          </string-name>
          :
          <article-title>Sparse Approach to Image Ringing Detection and Suppression</article-title>
          .
          <source>Pattern Recognition and Image Analysis: Advances in Mathematical Theory and Applications</source>
          <volume>27</volume>
          (
          <issue>4</issue>
          ),
          <fpage>754</fpage>
          -
          <lpage>762</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Kellner</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dhital</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kiselev</surname>
            ,
            <given-names>V. G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Reisert</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Gibbs-ringing artifact removal based on local subvoxel-shifts</article-title>
          .
          <source>Magnetic resonance in medicine 76(5)</source>
          ,
          <fpage>1574</fpage>
          -
          <lpage>1581</lpage>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Lim</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Son</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nah</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>Mu</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <surname>K.</surname>
          </string-name>
          :
          <article-title>Enhanced deep residual networks for single image super-resolution</article-title>
          .
          <source>In: Proceedings of the IEEE conference on computer vision and pattern recognition workshops</source>
          , pp.
          <fpage>136</fpage>
          -
          <lpage>144</lpage>
          . Honolulu,
          <string-name>
            <surname>HI</surname>
          </string-name>
          , USA (
          <year>2017</year>
          ). https://doi.org/10.1109/CVPRW.2017.151
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Ronneberger</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fischer</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brox</surname>
          </string-name>
          , T.:
          <article-title>U-net: Convolutional networks for biomedical image segmentation</article-title>
          .
          <source>In: International Conference on Medical image computing and computerassisted intervention</source>
          , pp.
          <fpage>234</fpage>
          -
          <lpage>241</lpage>
          . Springer, Cham (
          <year>2015</year>
          ). https://doi.org/10.1007/978-3-319-24574-4_28
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>He</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ren</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sun</surname>
          </string-name>
          , J.:
          <article-title>Deep residual learning for image recognition</article-title>
          .
          <source>In: Proceedings of the IEEE conference on computer vision and pattern recognition</source>
          , pp.
          <fpage>770</fpage>
          -
          <lpage>778</lpage>
          . Las Vegas,
          <string-name>
            <surname>NV</surname>
          </string-name>
          , USA (
          <year>2016</year>
          ). https://doi.org/10.1109/CVPR.2016.90
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gunturk</surname>
            ,
            <given-names>B. K.</given-names>
          </string-name>
          :
          <article-title>Multiresolution bilateral filtering for image denoising</article-title>
          .
          <source>IEEE Transactions on image processing 17</source>
          (
          <issue>12</issue>
          ),
          <fpage>2324</fpage>
          -
          <lpage>2333</lpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11. Manjón,
          <string-name>
            <surname>J. V.</surname>
          </string-name>
          , Coupé,
          <string-name>
            <given-names>P.</given-names>
            ,
            <surname>Buades</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Fonov</surname>
          </string-name>
          ,
          <string-name>
            <surname>V.</surname>
          </string-name>
          , Collins,
          <string-name>
            <given-names>D. L.</given-names>
            ,
            <surname>Robles</surname>
          </string-name>
          ,
          <string-name>
            <surname>M.</surname>
          </string-name>
          :
          <article-title>Non-local MRI upsampling</article-title>
          .
          <source>Medical Image Analysis</source>
          <volume>14</volume>
          (
          <issue>6</issue>
          )
          ,
          <fpage>784</fpage>
          -
          <lpage>792</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Song</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xie</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hu</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Reduction of Gibbs artifacts in magnetic resonance imaging based on Convolutional Neural Network</article-title>
          . In:
          <source>2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI)</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          . Shanghai, China (
          <year>2017</year>
          ). https://doi.org/10.1109/CISP-BMEI.2017.8302197
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Cheng</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Learning depth with convolutional spatial propagation network</article-title>
          .
          <source>arXiv preprint arXiv:1810.02695</source>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Sinha</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dolz</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Multi-scale self-guided attention for medical image segmentation</article-title>
          .
          <source>IEEE Journal of Biomedical and Health Informatics</source>
          , arXiv preprint arXiv:1906.02849
          (
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Park</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chun</surname>
            ,
            <given-names>S. Y.</given-names>
          </string-name>
          :
          <article-title>Down-scaling with learned kernels in multi-scale deep neural networks for non-uniform single image deblurring</article-title>
          .
          <source>arXiv preprint arXiv:1903.10157</source>
          (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Kingma</surname>
            ,
            <given-names>D. P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ba</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Adam: A method for stochastic optimization</article-title>
          .
          <source>arXiv preprint arXiv:1412.6980</source>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>