<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Scene-based Non-uniformity Fixed Pattern Noise Correction Algorithm for Infrared Video Sequences</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Igor Kudinov</string-name>
          <email>i.a.kudinov@yandex.ru</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ivan Kholopov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Ryazan State Radio Engineering University named after V.F. Utkin (RSREU)</institution>
          ,
          <addr-line>Ryazan</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <fpage>135</fpage>
      <lpage>140</lpage>
      <abstract>
        <p>An algorithm for fixed pattern noise correction for infrared sensors, based on the analysis of a video sequence of a static or dynamic scene observed by the camera, is considered. It is shown that, under the assumption of the additive nature of the fixed pattern noise, frame-to-frame accumulation of such noise, by analogy with the radar problem of detecting a signal against correlated clutter, can successfully compensate for it with a video sequence of more than 500 frames. Experiments with the Xenics Bobcat 640 short-wave infrared and Xenics Gobi 384 long-wave infrared cameras demonstrated that, in contrast to the well-known single-frame non-uniformity correction algorithm, the halo artifacts near extended scene objects typical of it are not observed in the resulting image when the fixed pattern noise is estimated from the results of accumulation over a set of frames.</p>
      </abstract>
      <kwd-group>
        <kwd>non-uniformity correction</kwd>
        <kwd>fixed pattern noise</kwd>
        <kwd>recurrent averaging</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>Fixed pattern noise (FPN) is commonly understood as a
set of fixed deviations of the values of the output signals
from various photosensitive elements of the infrared (IR)
photodetector device (PD) at the same intensity of the
radiation incident on them. Visually, the FPN appears on the
IR image in the form of horizontal or vertical stripes
depending on the orientation of the photodetector arrays
(PA) in the PD matrix.</p>
      <p>In a number of practical tasks (multispectral image
fusion [1], computation of no-reference image quality
indexes [2], dominant direction estimation in the
gradient-based technique for image structural analysis [3],
automatic recognition of bar pattern test object positions in
the task of IR camera calibration [4], etc.), the estimation of the
parameters of the FPN and its compensation (non-uniformity
correction, NUC) are important stages of digital IR image
processing.</p>
      <p>II. MATHEMATICAL MODELS OF IR CAMERAS FPN</p>
      <p>
        Despite the fact that FPN of IR cameras in the general
case is analytically described by a nonlinear dependence on
the intensity of the radiation incident on the PD [
        <xref ref-type="bibr" rid="ref6">5, 6</xref>
        ], the
authors of [
        <xref ref-type="bibr" rid="ref11 ref7">7-10</xref>
        ] use a linear model to solve the NUC
problem:
 Iij = kijI0ij + bij, (1)
where I0ij and Iij are respectively the brightness of the pixel
at the intersection of the i-th row and the j-th column in the
absence of FPN and in its presence, and kij and bij are
respectively the multiplicative and additive components of the FPN.
      </p>
      <p>The additive component of the FPN bij is mainly
determined by the non-uniformity of the distribution of the
dark current of the PD; therefore, it depends on the
temperature and the exposure time. The multiplicative
component kij, in turn, reflects the non-uniformity of the
sensitivity of the photodetector elements.</p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref12">11</xref>
        ] model (1) is simplified to a purely multiplicative
model:
 Iij = kijI0ij. (2)
      </p>
      <p>For a matrix-type PD (MPD) composed of vertically
arranged PA, the FPN model (1) can be reduced to the
form [7]:
 Iij = kjI0ij + bj, (3)
where kj and bj are respectively the multiplicative and
additive components of the FPN in the j-th PA of the MPD.</p>
      <p>
        In sources [
        <xref ref-type="bibr" rid="ref13 ref14 ref15 ref16 ref17">12-16</xref>
        ] it was shown that for solving the
NUC problem, model (3) can be further simplified to a
single parameter – the constant displacement bj in the j-th
column:
 Iij = I0ij + bj. (4)
      </p>
      <p>In this case the compensation of the additive FPN
reduces to subtracting its estimate for each j-th column:
 I0ij = Iij – bj, (5)
which, in contrast to processing according to (2) or (3),
excludes the operation of multiplication by the weight coefficient
{1/kij} or {1/kj}, respectively.</p>
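      <p>The additive column model (4) and its compensation (5) can be exercised with a short numerical sketch (a hypothetical NumPy example on synthetic data, not code from the paper):</p>
      <preformat>
```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic clean scene I0 (H x W) and per-column additive offsets bj, model (4).
H, W = 120, 160
I0 = np.tile(np.linspace(100.0, 200.0, W), (H, 1))
b = rng.normal(0.0, 5.0, size=W)       # column FPN offsets bj
I = I0 + b                             # observed frame: Iij = I0ij + bj

# Compensation by (5): subtract a per-column offset estimate.
b_hat = b                              # the true offsets, for illustration only
I_corr = I - b_hat
```
      </preformat>
      <p>When the estimate coincides with the true offsets, the scene is recovered exactly; the algorithms discussed below are concerned with estimating the offsets from the observed frames alone.</p>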
      <p>The problem of NUC involves the estimation of the
parameters of the mathematical models (1) – (4). To develop a
NUC algorithm we accept the hypothesis that the FPN
mathematical model is described by expression (4).</p>
      <p>III. KNOWN METHODS OF FPN EVALUATION AND NUC</p>
    </sec>
    <sec id="sec-2">
      <title>A. Classification of NUC methods</title>
      <p>
        NUC methods are usually divided into two categories
[
        <xref ref-type="bibr" rid="ref17 ref18">16, 17</xref>
        ]: methods based on pre-calibration with a
test object (Calibration-Based NUC, CBNUC) and methods
based directly on analysis of the observed scene
(Scene-Based NUC, SBNUC). The first category includes methods
of single-, double-, and multi-point calibration [
        <xref ref-type="bibr" rid="ref19">18</xref>
        ]. A classic
representative of CBNUC algorithms is the two-point
calibration method (Two Point NUC, TPNUC), which
involves calibrating the camera on two frames with
uniform brightness:
– for mid-wave IR (MWIR, 3–5 μm) and long-wave
IR (LWIR, 8–14 μm) cameras, from two
images of a black body at different temperatures
(at two temperature points, “cold” and “hot”);
– for short-wave IR (SWIR, 0.9–1.7 μm) cameras,
from two images of a scene with uniform
illumination at two different shutter speeds (usually
0.5 and 5 ms).
      </p>
      <p>For models (1) and (3) this allows finding estimates of
the parameters kij and bij from the solution of systems of
pairs of the corresponding linear equations. At the same
time, the use of CBNUC methods for uncooled thermal
cameras does not allow performing effective NUC, due to the
sensitivity of the parameters kij and bij to changes in the
temperature of the camera body and the MPD.</p>
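      <p>The per-pixel solution of the pair of linear equations behind TPNUC can be sketched as follows (a hypothetical NumPy example with synthetic gain and offset maps; the uniform exposure levels L1 and L2 are assumed known):</p>
      <preformat>
```python
import numpy as np

rng = np.random.default_rng(1)
H, W = 64, 64

# Synthetic per-pixel gain and offset maps, model (1): I = k*I0 + b.
k_true = 1.0 + 0.1 * rng.normal(size=(H, W))
b_true = 20.0 * rng.normal(size=(H, W))

# Two uniform calibration exposures L1, L2 ("cold"/"hot" black body,
# or two shutter speeds for a SWIR camera).
L1, L2 = 50.0, 200.0
F1 = k_true * L1 + b_true
F2 = k_true * L2 + b_true

# Per-pixel solution of the pair of linear equations (TPNUC):
k_hat = (F2 - F1) / (L2 - L1)
b_hat = F1 - k_hat * L1

# Correcting an arbitrary observed frame back to scene radiance:
scene = rng.uniform(80.0, 150.0, size=(H, W))
observed = k_true * scene + b_true
corrected = (observed - b_hat) / k_hat
```
      </preformat>
      <p>The sketch recovers the gain and offset maps exactly because the calibration frames are noise-free; in practice the sensitivity of kij and bij to camera temperature, noted above, limits this approach for uncooled cameras.</p>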
      <p>
        The SBNUC family of methods, which operate only with
the statistics of the brightness distribution of the observed
scene and do not require specialized equipment for
calibration, does not have this drawback. Their drawback, in
turn, is NUC artifacts, which appear at the boundaries of
images of extended objects [
        <xref ref-type="bibr" rid="ref15 ref16 ref17 ref18">14-17</xref>
        ].
      </p>
    </sec>
    <sec id="sec-3">
      <title>B. SBNUC methods</title>
      <p>
        Under the hypotheses that the FPN model is
described by (4) and that the MPD consists of columns of PA,
an effective SBNUC algorithm is the spatially
adaptive column FPN correction based on 1D horizontal
differential statistics, which was considered in detail in [
        <xref ref-type="bibr" rid="ref16">15</xref>
        ].
This algorithm contains the following basic steps.
      </p>
      <p>
        1) Row-by-row processing of the initial image with a 1D
smoothing filter, which is a guided filter [
        <xref ref-type="bibr" rid="ref20">19</xref>
        ] with a
regularization parameter ε = 0.42 (high degree of smoothing
and noise suppression). A guided filter has all the
advantages of a bilateral filter without its drawbacks
[
        <xref ref-type="bibr" rid="ref20">19</xref>
        ].
      </p>
      <p>2) Estimation of the horizontal spatial high-frequency
(HF) component of the original image:
 IHF_h = I – ILF,
where I is the original image and ILF is the result of
filtering with a 1D Gaussian low-frequency (LF) filter.</p>
      <p>3) Separation of the HF component IHF_h into the HF
signal component IHF and the additive FPN component b:
 IHF_h = IHF + b.</p>
      <p>
        This separation is based on the calculation of the HDS1D
statistics (1D Horizontal Differential Statistics [
        <xref ref-type="bibr" rid="ref16">15</xref>
        ]) in each
i-th row of the image IHF_h for each j-th column:
      </p>
      <p>
        HDS1D(j) = (1/K1(j)) · Σk exp(–[IHF,j – IHF,j+k]² / (2σr1²)) · ΔIj+k, (6)
where the summation over k runs from –Nh/2 to Nh/2,
K1(j) = Σk exp(–[IHF,j – IHF,j+k]² / (2σr1²))
is the normalization term, ΔIj+k is the computed local
gradient in the horizontal direction, Nh is the size of the
horizontal window that defines the set of neighboring pixels, and
σr1 is the range weight parameter that determines the
weight of the modulus of the brightness difference of the j-th
and the (j + k)-th columns. In [
        <xref ref-type="bibr" rid="ref16">15</xref>
        ] σr1 is selected 10 times
greater than the standard deviation (SD) of the brightness
gradient I along the row in accordance with the
recommendations of [
        <xref ref-type="bibr" rid="ref21">20</xref>
        ].
      </p>
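      <p>Our reading of statistic (6) can be sketched for a single image row as follows (hypothetical NumPy code; the window Nh, the parameter σr1 and the border handling by index clipping are illustrative assumptions):</p>
      <preformat>
```python
import numpy as np

def hds1d(I_hf_row, Nh=8, sigma_r1=10.0):
    """HDS1D statistic for one row of the HF image, following the
    reconstruction of (6): a bilateral-style weighted average of local
    horizontal gradients around each column j."""
    W = I_hf_row.shape[0]
    grad = np.abs(np.diff(I_hf_row, prepend=I_hf_row[0]))   # local gradient
    ks = np.arange(-(Nh // 2), Nh // 2 + 1)
    out = np.zeros(W)
    for j in range(W):
        idx = np.clip(j + ks, 0, W - 1)                     # clip at the borders
        w = np.exp(-((I_hf_row[j] - I_hf_row[idx]) ** 2) / (2.0 * sigma_r1 ** 2))
        out[j] = np.sum(w * grad[idx]) / np.sum(w)          # K1(j) normalization
    return out
```
      </preformat>
      <p>For a flat row the statistic stays at zero, while strong brightness structure yields large values in the neighboring columns, which is what allows (7) to distinguish genuine scene edges from column noise.</p>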
      <p>4) FPN estimation for each j-th column is based on
calculation of statistic:</p>
      <p>
        bj = (1/K2(j)) · Σk exp(–k² / (2σs2²(HDS1D(j) + χ)²)) · IHF,i+k,j, (7)
where the summation over k runs from –Nv/2 to Nv/2, and
K2(j) = Σk exp(–k² / (2σs2²(HDS1D(j) + χ)²))
is the normalization term, and χ is a small positive number to
avoid division by zero when HDS1D(j) = 0. The parameters
χ, σs2 and the vertical size of the sliding window Nv in [
        <xref ref-type="bibr" rid="ref16">15</xref>
        ]
are taken to be 0.5, 0.8H and H, respectively, where H is the
frame height in pixels.
      </p>
      <p>The choice of the Nv window height is made for
 compromise reasons: for small Nv the NUC is better
(especially in images with a uniform background), however,
FPN estimate in this case is sensitive to the HF spatial
component of brightness. On the contrary, for large Nv the
statistics (7) is less sensitive to the features of the observed
scene, but the error in the estimation of the FPN is also
higher. This is illustrated in Fig. 1, which shows that the
presence of extended vertical objects leads to the appearance
of NUC compensation artifacts in the image of a SWIR
video camera – the highlighted (halo) areas above the cell
towers and pipes of the building. Moreover, in areas with a
uniform background, FPN is effectively suppressed.
</p>
      <p>
        The basic idea of the algorithm developed by the authors
is that based on the principles of estimating FPN [
        <xref ref-type="bibr" rid="ref16">15</xref>
        ] due to
the accumulation of a series of frames it is possible to form
an image with an approximately uniform background, which
will increase the efficiency of FPN estimation and NUC.
      </p>
      <p>IV. PROPOSED METHOD OF NUC FOR A SERIES OF FRAMES</p>
      <p>
        The idea of the algorithm is based on the adoption of
hypothesis (4) on the additive character of the FPN and on the
principle of quasi-optimal detection of burst-pulse signals
against the background of correlated clutter in radar
systems: rejection of the LF clutter and accumulation of the HF
signal [
        <xref ref-type="bibr" rid="ref22">21</xref>
        ]. At the same time, we consider the FPN to be the
useful signal to be detected, and the scene image to be
correlated clutter.
      </p>
      <p>The main stages of our SBNUC algorithm for a series of
frames are the following.</p>
      <p>1) The frame Ik from the IR video camera (at the k-th
moment of time) is received.</p>
      <p>2) All rows of the frame Ik are randomly permuted,
forming a frame I*k. As a result of the row permutation,
the pixels of the j-th column of the frame Ik, corresponding
to the j-th PA of the MPD with the vertical direction of
reading of charge packets, still remain in the j-th column of
the frame I*k.</p>
      <p>3) The auxiliary frame Nk is recurrently evaluated:
 Nk = [(n – 1)Nk–1 + I*k]/n, (8)
where n is the number of previously received frames.</p>
      <p>4) The variance of the brightness gradient Dhk along a
row (in the horizontal direction) over the frame Nk is
estimated:
 Dhk = M{(Nij – Ni,j–1)²},
where M{ } denotes the calculation of the brightness mean
over all pixels (i, j). The background component (correlated
clutter) in the horizontal direction is suppressed by analogy
with the principle of operation of the radar single delay line
canceler [
        <xref ref-type="bibr" rid="ref22">21</xref>
        ].</p>
      <p>5) If upon receipt of a new frame k the estimate of the
variance Dhk is greater than its previous maximum value
Dhmax, this means that the FPN-to-background ratio in the
auxiliary frame Nk has grown. Therefore, Nk is written to the
calibration frame K and the value of Dhmax is updated:
 K = Nk, Dhmax = Dhk.</p>
      <p>6) The calibration frame K is divided into two additive
components: the LF part KLF (background) and the HF part b,
which is the estimated FPN:
 K = KLF + b. (9)</p>
      <p>7) NUC is performed according to (5), with the
subsequent linear contrasting of the result.</p>
      <p>Random permutation of rows during the forming of the I*k
frame, with subsequent averaging of such frames, ensures
that the background brightness is equalized over the Nk
frame even if there are areas of different brightness in the
scene (for example, for outdoor shooting conditions such
areas are the sky and the underlying surface), which eliminates
the FPN compensation artifacts typical of SBNUC. With a large
number of averaged frames with randomly rearranged rows,
by virtue of the central limit theorem, it is fair to assume
that the background brightness distribution will tend to
normal. Therefore, the accumulation of the FPN according to (8) is
equivalent to the problem of incoherent accumulation of a
useful signal against the background of Gaussian noise in
radar systems [
        <xref ref-type="bibr" rid="ref22">21</xref>
        ]. The NUC algorithm scheme is shown in
Fig. 2.</p>
      <p>Fig. 2. Flowchart of the proposed NUC algorithm.</p>
      <sec id="sec-3-7">
        <title>V. EXPERIMENTAL RESULTS AND DISCUSSION</title>
        <p>The experiments were carried out with a Xenics Bobcat
640 SWIR camera. To reduce the amount of computation
when dividing the calibration frame K into the LF and HF parts
according to (9), the authors applied the fast low-pass
filtering procedure [
          <xref ref-type="bibr" rid="ref23">22</xref>
          ] with a 32-element aperture BOX
filter.</p>
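        <p>The pipeline of steps 1–7 above can be sketched end-to-end (a hypothetical NumPy implementation on synthetic frames; the function name, the simple `same`-mode box filter and the synthetic ramp scene are our illustrative assumptions, not the authors' code):</p>
        <preformat>
```python
import numpy as np

def scene_based_nuc(frames, box=32, seed=0):
    """Sketch of steps 1-7 of the proposed SBNUC algorithm under the
    additive column-FPN hypothesis (4); names are illustrative."""
    rng = np.random.default_rng(seed)
    N, K, dh_max = None, None, -np.inf
    for n, I in enumerate(frames, start=1):
        I_star = I[rng.permutation(I.shape[0]), :]             # 2) row permutation
        N = I_star if n == 1 else ((n - 1) * N + I_star) / n   # 3) recurrence (8)
        dh = np.mean((N[:, 1:] - N[:, :-1]) ** 2)              # 4) gradient variance Dhk
        if dh > dh_max:                                        # 5) FPN-to-background grew
            dh_max, K = dh, N.copy()
    # 6) split K into LF background and HF part (FPN estimate b), eq. (9);
    #    a horizontal box filter stands in for the fast LF filter.
    kernel = np.ones(box) / box
    K_lf = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, K)
    return K - K_lf                                            # estimated FPN b

# Usage on synthetic frames: vertical ramp scene plus known column offsets.
rng = np.random.default_rng(2)
b_true = rng.normal(0.0, 3.0, size=200)
frames = [np.outer(np.linspace(0.0, 50.0, 150), np.ones(200))
          + b_true + rng.normal(0.0, 0.5, (150, 200)) for _ in range(100)]
b_hat = scene_based_nuc(frames)
corrected = frames[-1] - b_hat                                 # 7) NUC by (5)
```
        </preformat>
        <p>On such synthetic data the horizontal gradient variance of the corrected frame drops well below that of the raw frame, since the HF part of the column offsets is removed while the scene background survives the row permutation and averaging.</p>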
        <p>Figs. 3-5 show selected frames (with a 1.25 s time
interval between them) from the original video sequence
with a duration of 15 s and a frame rate of 50 Hz, the
results of the recursive estimation of the FPN according to
the developed algorithm, and the NUC results, respectively.</p>
        <p>
          From the results of the experiment it follows that after
approximately 500 frames the FPN estimate according to (8)
and (9) asymptotically converges to the true FPN
value. Moreover, the NUC result does not contain the
halo artifacts specific to SBNUC algorithms [
          <xref ref-type="bibr" rid="ref14 ref15 ref16 ref17">13-16</xref>
          ].
        </p>
        <p>
          The authors also conducted an experiment with a LWIR
Xenics Gobi 384 video camera based on an uncooled
microbolometer array, operated in the mode with the built-in FPN
correction turned off and bad pixel correction on. It
compared the results of FPN estimation
obtained with a closed, defocused camera lens according to the
CBNUC method [
          <xref ref-type="bibr" rid="ref24">23</xref>
          ] (Fig. 6-8) and according to the
developed algorithm (Fig. 9).
        </p>
        <p>Despite the visual similarity of the frames with FPN in Fig. 8
and 9, the difference of approximately 1.5 times in their SDs is
explained primarily by the spreading (when the rows are
randomly permuted) of the irregular gain of the camera
matrix, described by model (1), over the entire height of the
frame column (the dark left and right frame edges in Fig. 9).
Therefore, despite forming a subjectively comfortable
image (without pronounced FPN), the considered NUC
algorithm without preliminary flat field correction not only
does not compensate for the gain irregularity of the MPD, but
can even enhance it, which will appear on images with
extended objects of uniform texture and uniform
brightness.</p>
        <p>Fig. 5. NUC results.</p>
        <p>Fig. 6. Raw LWIR frame.</p>
        <p>
          Fig. 9. Our estimation of sensor FPN,
SD = 14.12.
        </p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <given-names>X.</given-names>
            <surname>Xue</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Xiang</surname>
          </string-name>
          and
          <string-name>
            <given-names>H.</given-names>
            <surname>Wang</surname>
          </string-name>
          , “
          <article-title>A parallel fusion method of remote sensing image based on NSCT,” Computer Optics</article-title>
          , vol.
          <volume>43</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>123</fpage>
          -
          <lpage>131</lpage>
          ,
          <year>2019</year>
          . DOI: 10.18287/2412-6179-2019-43-1-123-131.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <given-names>S.</given-names>
            <surname>Pertuz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Puig</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.A.</given-names>
            <surname>Garcia</surname>
          </string-name>
          , “
          <article-title>Analysis of focus measure operators for shape-from-focus,” Pattern Recognit</article-title>
          ., vol.
          <volume>46</volume>
          , no.
          <issue>5</issue>
          , pp.
          <fpage>1415</fpage>
          -
          <lpage>1432</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <given-names>D.G.</given-names>
            <surname>Asatryan</surname>
          </string-name>
          , “
          <article-title>Gradient-based technique for image structural analysis and applications</article-title>
          ,” Computer Optics, vol.
          <volume>43</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>245</fpage>
          -
          <lpage>250</lpage>
          ,
          <year>2019</year>
          . DOI: 10.18287/2412-6179-2019-43-2-245-250.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <given-names>A.V.</given-names>
            <surname>Mingalev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.V.</given-names>
            <surname>Belov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.M.</given-names>
            <surname>Gabdullin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.R.</given-names>
            <surname>Agafonova</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.N.</given-names>
            <surname>Shusharin</surname>
          </string-name>
          , “
          <article-title>Test-object recognition in thermal images</article-title>
          ,”
          <source>Computer Optics</source>
          , vol.
          <volume>43</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>402</fpage>
          -
          <lpage>411</lpage>
          ,
          <year>2019</year>
          . DOI: 10.18287/2412-6179-2019-43-3-402-411.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <given-names>V.N.</given-names>
            <surname>Borovytsky</surname>
          </string-name>
          , “
          <article-title>Residual error after non-uniformity correction,” Semicond</article-title>
          . Physics, quantum electron. &amp; optoelectron., vol.
          <volume>3</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>102</fpage>
          -
          <lpage>105</lpage>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <source>6th Mediterranean Conf. on Embedded Comput. (MECO)</source>
          , Bar, Montenegro, pp.
          <fpage>159</fpage>
          -
          <lpage>162</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          Fig. 8. Raw LWIR frame with flat field correction, SD = 9.25.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <given-names>D.L.</given-names>
            <surname>Perry</surname>
          </string-name>
          and
          <string-name>
            <given-names>E.L.</given-names>
            <surname>Dereniak</surname>
          </string-name>
          , “
          <article-title>Linear theory of nonuniformity correction in infrared staring sensors,” Opt</article-title>
          . Eng., vol.
          <volume>32</volume>
          , pp.
          <fpage>1854</fpage>
          -
          <lpage>1859</lpage>
          ,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <given-names>B.M.</given-names>
            <surname>Ratliff</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.M.</given-names>
            <surname>Hayat</surname>
          </string-name>
          , “
          <article-title>An algebraic algorithm for nonuniformity correction in focal-plane arrays”</article-title>
          ,
          <source>J. Opt. Soc. Am. A.</source>
          , vol.
          <volume>19</volume>
          , no.
          <issue>9</issue>
          , pp.
          <fpage>1737</fpage>
          -
          <lpage>1747</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <given-names>P.</given-names>
            <surname>Narendra</surname>
          </string-name>
          , “
          <article-title>Reference-free nonuniformity compensation for IR imaging arrays</article-title>
          ,
          <source>” Proc. SPIE</source>
          , vol.
          <volume>252</volume>
          , pp.
          <fpage>10</fpage>
          -
          <lpage>17</lpage>
          ,
          <year>1980</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>Sheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Xie</surname>
          </string-name>
          and
          <string-name>
            <given-names>Z.</given-names>
            <surname>Fu</surname>
          </string-name>
          , “
          <article-title>Calibration-based NUC method in real-time based on IRFPA</article-title>
          ,
          <source>” Physics Procedia</source>
          , vol.
          <volume>22</volume>
          , pp.
          <fpage>372</fpage>
          -
          <lpage>380</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Y.S.</given-names>
            <surname>Bekhtin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.S.</given-names>
            <surname>Gurov</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.N.</given-names>
            <surname>Guryeva</surname>
          </string-name>
          , “
          <article-title>Algorithmic supply of IR sensors with FPN using texture homogeneity levels</article-title>
          ,
          <source>” Proc. 5th Mediterranean Conf. on Embedded Comput. (MECO)</source>
          , Budva, Montenegro, pp.
          <fpage>252</fpage>
          -
          <lpage>255</lpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>R.</given-names>
            <surname>Hardie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hayat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Armstrong</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Yasuda</surname>
          </string-name>
          , “
          <article-title>Scene-based nonuniformity correction with video sequences and registration</article-title>
          ,” Appl. Opt., vol.
          <volume>39</volume>
          , no.
          <issue>8</issue>
          , pp.
          <fpage>1241</fpage>
          -
          <lpage>1250</lpage>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>C.</given-names>
            <surname>Zuo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Gu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Sui</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Ren</surname>
          </string-name>
          , “
          <article-title>Improved interframe registration based nonuniformity correction for focal plane arrays,” Infrared Phys</article-title>
          . Technol., vol.
          <volume>55</volume>
          , no.
          <issue>4</issue>
          ., pp.
          <fpage>263</fpage>
          -
          <lpage>269</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.Y.</given-names>
            <surname>Yang</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.-L.</given-names>
            <surname>Tisse</surname>
          </string-name>
          , “
          <article-title>Effective strip noise removal for low-textured infrared images based on 1-D guided filtering</article-title>
          ,
          <source>” IEEE Trans. Circuits Syst. Video Technol.</source>
          , vol.
          <volume>26</volume>
          , pp.
          <fpage>2176</fpage>
          -
          <lpage>2188</lpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yang</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.Y.</given-names>
            <surname>Yang</surname>
          </string-name>
          , “
          <article-title>Spatially adaptive column fixed-pattern noise correction in infrared imaging system using 1D horizontal differential statistics,” IEEE Photonics J</article-title>
          ., vol.
          <volume>9</volume>
          , no.
          <issue>5</issue>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>C.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Sui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Kuang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Gu</surname>
          </string-name>
          and
          <string-name>
            <given-names>Q.</given-names>
            <surname>Chen</surname>
          </string-name>
          , “
          <article-title>FPN estimation based nonuniformity correction for infrared imaging system</article-title>
          ,”
          <source>Infrared Physics and Technol.</source>
          , vol.
          <volume>96</volume>
          , pp.
          <fpage>22</fpage>
          -
          <lpage>29</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>L.</given-names>
            <surname>Huo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Liu</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>He</surname>
          </string-name>
          , “
          <article-title>Staircase-scene-based nonuniformity correction in aerial point target detection systems</article-title>
          ,”
          <source>Appl. Opt.</source>
          , vol.
          <volume>55</volume>
          , pp.
          <fpage>7149</fpage>
          -
          <lpage>7156</lpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>K.</given-names>
            <surname>Liang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Peng</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Zhou</surname>
          </string-name>
          , “
          <article-title>Non-uniformity correction based on focal plane array temperature in uncooled long-wave infrared cameras without a shutter</article-title>
          ,”
          <source>Appl. Opt.</source>
          , vol.
          <volume>56</volume>
          , pp.
          <fpage>884</fpage>
          -
          <lpage>889</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>K.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sun</surname>
          </string-name>
          and
          <string-name>
            <given-names>X.</given-names>
            <surname>Tang</surname>
          </string-name>
          , “
          <article-title>Guided image filtering</article-title>
          ,”
          <source>IEEE Trans. on Pattern Anal. and Machine Intell.</source>
          , vol.
          <volume>35</volume>
          , no.
          <issue>6</issue>
          , pp.
          <fpage>1397</fpage>
          -
          <lpage>1409</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>A.</given-names>
            <surname>Buades</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Coll</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.-M.</given-names>
            <surname>Morel</surname>
          </string-name>
          , “
          <article-title>A non-local algorithm for image denoising</article-title>
          ,”
          <source>IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit.</source>
          , San Diego, USA, vol.
          <volume>2</volume>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>B.R.</given-names>
            <surname>Mahafza</surname>
          </string-name>
          ,
          <source>Radar systems analysis and design using MATLAB</source>
          , NY: Chapman &amp; Hall/CRC,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>A.</given-names>
            <surname>Lukin</surname>
          </string-name>
          , “
          <article-title>Tips &amp; tricks: fast image filtering algorithms</article-title>
          ,”
          <source>17th Int. Conf. on Comput. Graphics “GraphiCon”</source>
          , Moscow, pp.
          <fpage>186</fpage>
          -
          <lpage>189</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>I.I.</given-names>
            <surname>Kremis</surname>
          </string-name>
          , “
          <article-title>Method of compensating for signal irregularity of photosensitive elements of multielement photodetector</article-title>
          ,”
          <source>patent RU 2449491</source>
          , date of patent: 27.04.
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>