<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>GMM with dynamic management of the number of Gaussians based on AIRS</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Nebili Wafa</string-name>
          <email>nebili.wafa@univ-guelma.dz</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Farou Brahim</string-name>
          <email>farou.brahim@univ-guelma.dz</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Seridi Hamid</string-name>
          <email>seridi.hamid@univ-guelma.dz</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Labstic Laboratory, Computer Science Department, 8 Mai 1945 Guelma University</institution>
          ,
          <addr-line>BP 401 Guelma</addr-line>
          ,
          <country country="DZ">Algeria</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Background subtraction is an essential step in the process of monitoring videos. Several works have proposed models to differentiate the background pixels from the foreground pixels. Gaussian mixture models (GMM) are among the most popular models for such a problem. However, they suffer from certain drawbacks related to light variations and complex scenes due to the use of a fixed number of Gaussians. In this paper, we propose an improvement of the GMM based on the use of the bio-inspired algorithm AIRS (Artificial Immune Recognition System) to generate and introduce new Gaussians instead of using a fixed number of Gaussians. Our approach exploits the robustness of the mutation function in the generation phase of the new ARBs to create new Gaussians. These Gaussians are then filtered in the resource competition phase in order to keep only the ones that best represent the background. The system, implemented and tested on the Wallflower database, has proven its effectiveness against other state-of-the-art methods.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Keywords: video surveillance, AIRS, background subtraction.</p>
      <p>
        Various applications of video surveillance, such as the detection and tracking of
moving objects, begin with a background subtraction phase. Background subtraction
is a binary classification operation that gives each pixel of a video sequence a
label [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]: for example, the pixels of moving objects (foreground) take the value
1 and the pixels of static objects are labeled 0. In a real environment,
pixel variations are very fast, which requires a method that is robust and adaptable
to these variations. GMM is one of the most popular methods and
has achieved considerable success in detecting changes in videos. However, this
method fails on problems related to lighting changes and hidden areas.
Several studies showed that the number of Gaussians in a GMM influences
the quality of the results. The contribution of our work is to dynamically manage the
number of Gaussians based on the AIRS algorithm instead of fixing it a priori
by the user. The proposed system starts with a learning phase using the GMM
algorithm and creates several background models for each pixel. These models
are filtered through the resource competition and memory cell development
processes of the AIRS algorithm to select only the best models.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Related work</title>
      <p>Subtracting the background from video sequences captured by fixed or
non-fixed cameras remains a crucial problem due to the diversity of scenes that
represent the background.</p>
      <p>
        In recent years, various approaches, methods and systems have been proposed
and developed to separate dynamic regions from static regions. One of the most
intuitive approaches is to compute the absolute difference either between
two successive frames [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], or between a reference image IR, without any moving
object, and the current image. To determine the objects in motion, a binary
mask is applied according to a predefined threshold on the pixels of the resulting
image.
      </p>
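The frame-differencing approach above can be sketched as follows; the threshold value and the list-based image layout are illustrative choices, not taken from the cited works:

```python
def foreground_mask(frame, reference, threshold=25):
    # Binary classification of each pixel: 1 where the absolute
    # difference with the reference image exceeds the threshold
    # (foreground), 0 elsewhere (background).
    return [[1 if abs(f - r) > threshold else 0
             for f, r in zip(frow, rrow)]
            for frow, rrow in zip(frame, reference)]

reference = [[0] * 4 for _ in range(4)]        # empty reference scene
frame = [row[:] for row in reference]
frame[1][1] = frame[1][2] = frame[2][1] = frame[2][2] = 200  # moving object
mask = foreground_mask(frame, reference)       # four foreground pixels
```

A per-pixel threshold like this is fast but, as the text notes, it cannot cope with gradual illumination changes, which motivates the statistical models discussed next.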
      <p>
        Another way to subtract the background is to describe the history of the last
n pixel values by a Gaussian probability distribution [
        <xref ref-type="bibr" rid="ref39">39</xref>
        ]. However, modeling by
a single Gaussian is sensitive to fast pixel variations. Indeed, a single Gaussian
cannot memorize the old states of the pixel. This requires migration to a more
robust, multi-modal approach. The authors in [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] propose the first model
that describes the variance of the recent values of each pixel by a mixture of
Gaussians. In this model, the Expectation Maximization (EM) algorithm is
used to initialize and estimate the parameters of each Gaussian. In [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], the authors
estimate the probability density function of the recent N values of each pixel by
a kernel density estimator (KDE).
      </p>
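A single-Gaussian model of this kind can be maintained with a simple running update; the learning rate rho and the exact update form are assumptions for illustration, not the equations of the cited work:

```python
def update_single_gaussian(mu, var, x, rho=0.05):
    # Running update of a per-pixel single-Gaussian background model.
    # rho is an assumed learning rate; with only one mode, old pixel
    # states are gradually forgotten, which is the limitation that
    # motivates the multi-modal (GMM) extension.
    mu = (1 - rho) * mu + rho * x
    var = (1 - rho) * var + rho * (x - mu) ** 2
    return mu, var

mu, var = 100.0, 20.0
for x in [101, 99, 102, 100]:      # stable background values
    mu, var = update_single_gaussian(mu, var, x)
```

After a few stable observations the mean stays near the background value and the variance shrinks; a sudden bimodal signal (e.g. swaying trees) would instead pull this single mode back and forth.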
      <p>
        The authors in [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] provide a nonparametric estimation of the background model, using
the concept of visual dictionary words to model the pixels of the background.
Indeed, each pixel of the image is represented by a set of three values (a visual
word) that describes its current state. These values are initially estimated
during the learning phase and are updated regularly over time to build a robust
model.
      </p>
      <p>
        Several works have taken spatial information into consideration. The work in [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ] proposes
subspace learning based on PCA (SL-PCA). The idea in [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ] is to
learn the N background images by PCA. Moving objects are identified
by comparing the input image with the image reconstructed from its projection
in the reduced-dimension space. The authors in [
        <xref ref-type="bibr" rid="ref34">34</xref>
        ] provide a fast scheme
(SLICA) for background subtraction with Independent Component Analysis (ICA).
Another work [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] presents a decomposition of video content by incremental
non-negative matrix factorization (NMF). Other methods [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] [
        <xref ref-type="bibr" rid="ref41">41</xref>
        ]
focused on the selection and combination of good features (color,
texture, contours) to improve the quality of the results.
      </p>
      <p>
        Recently, some research works have introduced fuzzy concepts to develop
more efficient and robust methods for modeling the background [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ] [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]
[
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] [
        <xref ref-type="bibr" rid="ref42">42</xref>
        ].
      </p>
      <p>
        Work done in [
        <xref ref-type="bibr" rid="ref40">40</xref>
        ] showed that the GMM offers a good compromise between
quality and execution time compared to other methods. The first GMM model
was proposed by [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]; however, Stauffer and Grimson [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ] offer a standard GMM
with efficient update equations. Several works and contributions have been proposed
to improve the quality of the GMM. Among these methods are those focused
on improving the model adaptation speed [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ] [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]. Others are interested
in hybrid models such as GMM and K-means [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], GMM and fuzzy logic [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ],
GMM and adaptive background [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], GMM and block matching [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], boosted
Gaussian mixture models [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ], Markov random fields [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ], and GMM with PSO [
        <xref ref-type="bibr" rid="ref38">38</xref>
        ]
to overcome GMM problems. There are also several works that investigated
the type of features [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ] or the acquisition hardware [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ]. In addition
to spatio-temporal methods [
        <xref ref-type="bibr" rid="ref31">31</xref>
        ], some researchers have used local contextual
information around a pixel, such as the region [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ], the block [
        <xref ref-type="bibr" rid="ref36">36</xref>
        ] and the
cluster [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] [
        <xref ref-type="bibr" rid="ref35">35</xref>
        ].
      </p>
      <p>
        There are also many methods that use deep learning for subtracting the
background, such as FgSegNet_S (FPM) [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], Cascade CNN [
        <xref ref-type="bibr" rid="ref37">37</xref>
        ] and DeepBS [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. However,
deep learning methods require a large number of samples and need more time
for training.
      </p>
    </sec>
    <sec id="sec-3">
      <title>Proposition</title>
      <p>The in-depth study made on Gaussian mixtures shows the important role of the
number of Gaussians in describing pixel variations. Following this principle,
we propose a novel mechanism to produce new Gaussians based on the AIRS
algorithm in order to be as faithful as possible to the background model. Indeed,
the idea is to pass from a static model, where the number of Gaussians is fixed
empirically for all pixels, towards a dynamic and adaptive model that follows
the environment and the background complexity.</p>
      <p>First, we create the set of Gaussians (gi) representing the background for the
pixel Pt (at time t) that verify:</p>
      <p>Setbackground = {gi : |Pt - ui| &lt; 2.5 σi} (1)
Each Gaussian gi is represented by: the pixel value Pi, the mean ui, the variance
σi and the weight wi.</p>
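The construction of Setbackground above can be sketched as a filter over candidate Gaussians; the dict-based representation of a Gaussian is an illustrative layout, not the paper's data structure:

```python
def background_set(gaussians, p_t):
    # Keep the Gaussians whose mean is within 2.5 standard deviations
    # of the current pixel value Pt (the matching test of eq. 1).
    # Each Gaussian is a dict with mean "u", deviation "sigma",
    # weight "w" (names are illustrative).
    return [g for g in gaussians if abs(p_t - g["u"]) < 2.5 * g["sigma"]]

gaussians = [
    {"u": 100.0, "sigma": 5.0, "w": 0.6},
    {"u": 180.0, "sigma": 4.0, "w": 0.3},
    {"u": 30.0, "sigma": 3.0, "w": 0.1},
]
matched = background_set(gaussians, p_t=103.0)   # only the first matches
```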
      <p>After creating the background model, we choose the Gaussian (mcmatch) that
has the closest distance to the value of the current pixel:</p>
      <p>mcmatch = min_Gaussian(Setbackground) (2)
mcmatch is mutated in the ARBs generation phase. At the end of this phase,
new Gaussians (clones) are created. The number of clones is calculated by the
following equation:</p>
      <p>Num_Clones = clonalRate × hyperClonalRate × distance(Pt, mcmatch) (3)
Such that:</p>
      <p>Set_Clones = {gclone1, gclone2, ..., gcloneNum_Clones} (4)</p>
      <p>gclonei = Mutation(mcmatch) (5)</p>
      <p>The clonalRate and the hyperClonalRate are two integer values chosen by
the user.</p>
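The cloning step above might look like the following sketch. The multiplicative combination of clonalRate, hyperClonalRate and the distance, and the Gaussian perturbation used as the mutation function, are assumptions: the paper names the AIRS mutation function without specifying it.

```python
import random

def generate_clones(mc_match, p_t, clonal_rate=10, hyper_clonal_rate=2,
                    mutation_rate=0.1):
    # Assumed clone count: clonalRate * hyperClonalRate * distance.
    num_clones = int(clonal_rate * hyper_clonal_rate
                     * abs(p_t - mc_match["u"]))
    clones = []
    for _ in range(num_clones):
        clone = dict(mc_match)
        # Assumed Mutation(): perturb the mean by Gaussian noise scaled
        # by mutationRate and the cell's own deviation.
        clone["u"] += random.gauss(0.0, mutation_rate * mc_match["sigma"])
        clones.append(clone)
    return clones

random.seed(0)
mc_match = {"u": 100.0, "sigma": 5.0, "w": 0.6}
clones = generate_clones(mc_match, p_t=102.0)   # 10 * 2 * 2 = 40 clones
```

With these rates, a pixel far from its best-matching Gaussian spawns more clones, so exploration is concentrated where the current model fits poorly.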
      <p>New clones must be filtered through the Resource Competition process of AIRS,
keeping only the best and correct Gaussians. The filtering operation uses
condition (6) to remove the least representative Gaussians.</p>
      <p>The last step of the AIRS algorithm is to introduce the memory cells mc
from the previous set (Set_Clones). This operation consists of choosing the
most representative Gaussians among the new Gaussians and adding them to
the background model according to the following equation:</p>
      <p>distance(Pt, gclonei) &lt; distance(Pt, mcmatch) (7)
If the previous condition is verified, we compare the mean distance between mcmatch
and gclonei with the affinity threshold AT multiplied by the affinity threshold scalar
ATS:</p>
      <p>meandistance(mcmatch, gclonei) &lt; AT × ATS (8)
With AT: the average distance of all background models generated in the
learning phase.</p>
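The memory-cell introduction step above can be illustrated as follows, assuming scalar distances between pixel values and Gaussian means (a simplification of the paper's distance functions):

```python
def introduce_memory_cells(clones, mc_match, p_t, at, ats=0.2):
    # Keep the clones strictly closer to the pixel than mc_match;
    # if one of them is also within AT * ATS of mc_match, then
    # mc_match is deleted from the memory-cell set MC.
    dist = lambda a, b: abs(a - b)
    best = [c for c in clones
            if dist(p_t, c["u"]) < dist(p_t, mc_match["u"])]
    delete_mc = any(dist(c["u"], mc_match["u"]) < at * ats for c in best)
    return best, delete_mc

clones = [{"u": 101.5}, {"u": 97.0}, {"u": 110.0}]
best, delete_mc = introduce_memory_cells(clones, {"u": 100.0},
                                         p_t=102.0, at=20.0)
```

Here only the clone at 101.5 beats mc_match (distance 0.5 vs 2.0 to the pixel at 102), and since it lies within AT × ATS = 4.0 of mc_match, the old memory cell would be replaced.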
      <p>If equation (8) is satisfied, mcmatch is deleted from the set of memory cells MC.
After these steps, to determine whether the pixel belongs to the background
or the foreground, the Gaussians are ordered according to the value of wk,t/σk,t. The
Gaussians that represent the state of Pt are the first b distributions that satisfy
the following equation:</p>
      <p>b = argmin_b (Σ k=1..b wk,t &gt; B) (9)
Where B determines the minimum portion of the data corresponding to the
background and wk,t is the weight of the k-th distribution. Regarding the learning
phase, we applied the same principle as the classical GMM.
Our approach is implemented and tested on some videos from the Wallflower
database. After several empirical tests, the parameters (the learning rate, the minimum
portion of the data corresponding to the background B, clonalRate, hyperClonalRate,
mutationRate, ATS) were respectively fixed to 0.01, 0.3, 10, 2, 0.1, 0.2.
The obtained results are compared with the most referenced state-of-the-art methods
for background modeling (see Figure 2).</p>
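The background/foreground decision above follows the classical GMM ordering by w/σ; a minimal sketch, with the same illustrative dict layout as before:

```python
def background_distributions(gaussians, B=0.3):
    # Order the Gaussians by w/sigma (most probable, least spread first)
    # and keep the first b distributions whose cumulative weight
    # exceeds B, as in the classical GMM background test.
    ordered = sorted(gaussians, key=lambda g: g["w"] / g["sigma"],
                     reverse=True)
    total, background = 0.0, []
    for g in ordered:
        background.append(g)
        total += g["w"]
        if total > B:
            break
    return background

gaussians = [
    {"u": 180.0, "sigma": 4.0, "w": 0.3},
    {"u": 100.0, "sigma": 5.0, "w": 0.6},
]
bg = background_distributions(gaussians)
```

With B = 0.3 (the value retained in the experiments), the single heaviest, most compact Gaussian already covers the background portion, so the pixel state is tested against it alone.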
      <p>Our system achieved good results on the Foreground Aperture, Camouflage, Bootstrap
and Waving Trees videos; it ranks in first position compared to the other state-of-the-art
methods, although with some false detections. However, our system failed
to detect objects in scenes that have a large change in illumination.</p>
      <p>The obtained results clearly show that our system exceeds other state-of-the-art
methods on videos with small variations in the background. However, our
system is sensitive when the scene contains strong illumination. This is due to the
nature of the method, which uses a pixel-based approach to detect moving objects.</p>
    </sec>
    <sec id="sec-4">
      <title>Conclusion</title>
      <p>In this paper, we have proposed a new approach that reduces the
drawbacks of the GMM for background subtraction. The idea is to introduce new
Gaussians using the Artificial Immune Recognition System. This allows moving from
a static to a dynamic approach that can easily adapt the model to the nature of the
environment. Results obtained on several videos from a public benchmark showed
the effectiveness of this new process on scenes with small variations in the background.
However, our system is sensitive when the scene contains strong illumination. This
is due to the nature of the method, which uses a pixel-based approach to detect
moving objects.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Azab</surname>
            ,
            <given-names>M.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shedeed</surname>
            ,
            <given-names>H.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hussein</surname>
            ,
            <given-names>A.S.:</given-names>
          </string-name>
          <article-title>A new technique for background modeling and subtraction for motion detection in real-time videos</article-title>
          .
          <source>In: Image Processing (ICIP)</source>
          ,
          <year>2010</year>
          17th IEEE International Conference on. pp.
          <fpage>3453-3456</fpage>
          .
          <string-name>
            <surname>IEEE</surname>
          </string-name>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Babaee</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dinh</surname>
            ,
            <given-names>D.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rigoll</surname>
          </string-name>
          , G.:
          <article-title>A deep convolutional neural network for video sequence background subtraction</article-title>
          .
          <source>Pattern Recognition</source>
          <volume>76</volume>
          ,
          <issue>635-649</issue>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Bhaskar</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mihaylova</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Maskell</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Automatic target detection based on background modeling using adaptive cluster density estimation</article-title>
          (
          <year>2007</year>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Bouwmans</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>El Baf</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Modeling of dynamic backgrounds by type-2 fuzzy gaussians mixture models</article-title>
          .
          <source>MASAUM Journal of of Basic and Applied Sciences</source>
          <volume>1</volume>
          (
          <issue>2</issue>
          ),
          <volume>265-276</volume>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Bucak</surname>
            ,
            <given-names>S.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gunsel</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Video content representation by incremental non-negative matrix factorization</article-title>
          .
          <source>In: Image Processing</source>
          ,
          <year>2007</year>
          .
          <article-title>ICIP 2007</article-title>
          . IEEE International Conference on. vol.
          <volume>2</volume>
          , pp.
          <fpage>II-113</fpage>
          . IEEE (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Caseiro</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Henriques</surname>
            ,
            <given-names>J.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Batista</surname>
          </string-name>
          , J.:
          <article-title>Foreground segmentation via background modeling on riemannian manifolds</article-title>
          .
          <source>In: Pattern Recognition (ICPR)</source>
          ,
          <year>2010</year>
          20th International Conference on. pp.
          <fpage>3570-3574</fpage>
          .
          <string-name>
            <surname>IEEE</surname>
          </string-name>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Charoenpong</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Supasuteekul</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nuthong</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Adaptive background modeling from an image sequence by using k-means clustering</article-title>
          . In: Electrical Engineering/Electronics Computer Telecommunications and Information
          <string-name>
            <surname>Technology (ECTI-CON)</surname>
          </string-name>
          , 2010 International Conference on. pp.
          <fpage>880-883</fpage>
          .
          <string-name>
            <surname>IEEE</surname>
          </string-name>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8. Collins, R.T.,
          <string-name>
            <surname>Lipton</surname>
            ,
            <given-names>A.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kanade</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fujiyoshi</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Duggins</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tsin</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tolliver</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Enomoto</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hasegawa</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Burt</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , et al.:
          <article-title>A system for video surveillance and monitoring</article-title>
          .
          <source>VSAM nal report</source>
          pp.
          <volume>1-68</volume>
          (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Doulamis</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kalisperakis</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stentoumis</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Matsatsinis</surname>
          </string-name>
          , N.:
          <article-title>Self adaptive background modeling for identifying persons' falls</article-title>
          .
          <source>In: Semantic Media Adaptation and Personalization (SMAP)</source>
          ,
          <year>2010</year>
          5th International Workshop on. pp.
          <fpage>57-63</fpage>
          .
          <string-name>
            <surname>IEEE</surname>
          </string-name>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>El Baf</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bouwmans</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vachon</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Fuzzy integral for moving object detection</article-title>
          .
          <source>In: Fuzzy Systems</source>
          ,
          <year>2008</year>
          . FUZZ-IEEE
          <year>2008</year>
          .
          <article-title>(IEEE World Congress on Computational Intelligence)</article-title>
          . IEEE International Conference on. pp.
          <fpage>1729-1736</fpage>
          .
          <string-name>
            <surname>IEEE</surname>
          </string-name>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>El Baf</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bouwmans</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vachon</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Type-2 fuzzy mixture of gaussians model: application to background modeling</article-title>
          .
          <source>In: International Symposium on Visual Computing</source>
          . pp.
          <fpage>772-781</fpage>
          . Springer (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>El Baf</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bouwmans</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vachon</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Fuzzy statistical modeling of dynamic backgrounds for moving object detection in infrared videos</article-title>
          .
          <source>In: Computer Vision and Pattern Recognition Workshops</source>
          ,
          <year>2009</year>
          . CVPR Workshops
          <year>2009</year>
          . IEEE Computer Society Conference on. pp.
          <fpage>60-65</fpage>
          .
          <string-name>
            <surname>IEEE</surname>
          </string-name>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Elgammal</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Harwood</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Davis</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Non-parametric model for background subtraction</article-title>
          .
          <source>In: European conference on computer vision</source>
          . pp.
          <fpage>751-767</fpage>
          . Springer (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Fang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xiong</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hu</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>A moving object detection algorithm based on color information</article-title>
          .
          <source>In: Journal of Physics: Conference Series</source>
          . vol.
          <volume>48</volume>
          , p.
          <fpage>384</fpage>
          . IOP Publishing (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Farou</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kouahla</surname>
            ,
            <given-names>M.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Seridi</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Akdag</surname>
          </string-name>
          , H.:
          <article-title>Efficient local monitoring approach for the task of background subtraction</article-title>
          .
          <source>Engineering Applications of Artificial Intelligence</source>
          <volume>64</volume>
          ,
          <issue>1-12</issue>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Friedman</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Russell</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Image segmentation in video sequences: A probabilistic approach</article-title>
          .
          <source>In: Proceedings of the Thirteenth conference on Uncertainty in artificial intelligence</source>
          . pp.
          <fpage>175-181</fpage>
          . Morgan Kaufmann Publishers Inc. (
          <year>1997</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Haritaoglu</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Harwood</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Davis</surname>
            ,
            <given-names>L.S.</given-names>
          </string-name>
          :
          <article-title>W4: Real-time surveillance of people and their activities</article-title>
          .
          <source>IEEE Transactions on Pattern Analysis &amp; Machine Intelligence</source>
          <volume>22</volume>
          (
          <issue>8</issue>
          ),
          <fpage>809</fpage>
          <lpage>830</lpage>
          (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Hayman</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Eklundh</surname>
            ,
            <given-names>J.O.</given-names>
          </string-name>
          :
          <article-title>Statistical background subtraction for a mobile observer</article-title>
          .
          <source>In: Proceedings Ninth IEEE International Conference on Computer Vision (ICCV)</source>
          . p.
          <fpage>67</fpage>
          . IEEE
          (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Jain</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kimia</surname>
            ,
            <given-names>B.B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mundy</surname>
            ,
            <given-names>J.L.</given-names>
          </string-name>
          :
          <article-title>Background modeling based on subpixel edges</article-title>
          .
          <source>In: Image Processing, 2007. ICIP 2007. IEEE International Conference on</source>
          . vol.
          <volume>6</volume>
          , pp.
          <fpage>VI-321</fpage>
          . IEEE (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Jian</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xiao-qing</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sheng-jin</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>You-shou</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          :
          <article-title>Background subtraction based on a combination of texture, color and intensity</article-title>
          .
          <source>In: Signal Processing, 2008. ICSP 2008. 9th International Conference on</source>
          . pp.
          <fpage>1400</fpage>
          <lpage>1405</lpage>
          . IEEE
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>KaewTraKulPong</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bowden</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>An improved adaptive background mixture model for real-time tracking with shadow detection</article-title>
          .
          <source>In: Video-based surveillance systems</source>
          , pp.
          <fpage>135</fpage>
          <lpage>144</lpage>
          . Springer (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Kristensen</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nilsson</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Öwall</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Background segmentation beyond RGB</article-title>
          .
          <source>In: Asian Conference on Computer Vision</source>
          . pp.
          <fpage>602</fpage>
          <lpage>612</lpage>
          . Springer (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Lim</surname>
            ,
            <given-names>L.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Keles</surname>
            ,
            <given-names>H.Y.</given-names>
          </string-name>
          :
          <article-title>Foreground segmentation using convolutional neural networks for multiscale feature encoding</article-title>
          .
          <source>Pattern Recognition Letters</source>
          <volume>112</volume>
          ,
          <fpage>256</fpage>
          <lpage>262</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Martins</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Carvalho</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Corte-Real</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alba-Castro</surname>
            ,
            <given-names>J.L.</given-names>
          </string-name>
          :
          <article-title>BMOG: boosted gaussian mixture model with controlled complexity for background subtraction</article-title>
          .
          <source>Pattern Analysis and Applications</source>
          <volume>21</volume>
          (
          <issue>3</issue>
          ),
          <fpage>641</fpage>
          <lpage>654</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Oliver</surname>
            ,
            <given-names>N.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosario</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pentland</surname>
            ,
            <given-names>A.P.</given-names>
          </string-name>
          :
          <article-title>A bayesian computer vision system for modeling human interactions</article-title>
          .
          <source>IEEE transactions on pattern analysis and machine intelligence</source>
          <volume>22</volume>
          (
          <issue>8</issue>
          ),
          <fpage>831</fpage>
          <lpage>843</lpage>
          (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Pokrajac</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Latecki</surname>
            ,
            <given-names>L.J.</given-names>
          </string-name>
          :
          <article-title>Spatiotemporal blocks-based moving objects identification and tracking</article-title>
          .
          <source>IEEE Visual Surveillance and Performance Evaluation of Tracking and Surveillance (VS-PETS)</source>
          pp.
          <fpage>70</fpage>
          <lpage>77</lpage>
          (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Power</surname>
            ,
            <given-names>P.W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schoonees</surname>
            ,
            <given-names>J.A.</given-names>
          </string-name>
          :
          <article-title>Understanding background mixture models for foreground segmentation</article-title>
          .
          <source>In: Proceedings image and vision computing New Zealand</source>
          . vol.
          <volume>2002</volume>
          (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Schindler</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Smooth foreground-background segmentation for video processing</article-title>
          .
          <source>In: Asian Conference on Computer Vision</source>
          . pp.
          <fpage>581</fpage>
          <lpage>590</lpage>
          . Springer (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Seki</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Okuda</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hashimoto</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hirata</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          :
          <article-title>Object modeling using gaussian mixture model for infrared image and its application to vehicle detection</article-title>
          .
          <source>Journal of Robotics and Mechatronics</source>
          <volume>18</volume>
          (
          <issue>6</issue>
          ),
          <fpage>738</fpage>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <surname>Setiawan</surname>
            ,
            <given-names>N.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Seok-Ju</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jang-Woon</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chil-Woo</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Gaussian mixture model in improved hls color space for human silhouette extraction</article-title>
          .
          <source>In: Advances in Artificial Reality and Tele-Existence</source>
          , pp.
          <fpage>732</fpage>
          <lpage>741</lpage>
          . Springer (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          31.
          <string-name>
            <surname>Shimada</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nonaka</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nagahara</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Taniguchi</surname>
            ,
            <given-names>R.i.</given-names>
          </string-name>
          :
          <article-title>Case-based background modeling: associative background database towards low-cost and high-performance change detection</article-title>
          .
          <source>Machine vision and applications</source>
          <volume>25</volume>
          (
          <issue>5</issue>
          ),
          <fpage>1121</fpage>
          <lpage>1131</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          32.
          <string-name>
            <surname>Sigari</surname>
            ,
            <given-names>M.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mozayani</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pourreza</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Fuzzy running average and fuzzy background subtraction: concepts and application</article-title>
          .
          <source>International Journal of Computer Science and Network Security</source>
          <volume>8</volume>
          (
          <issue>2</issue>
          ),
          <fpage>138</fpage>
          <lpage>143</lpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          33.
          <string-name>
            <surname>Stauffer</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grimson</surname>
            ,
            <given-names>W.E.L.</given-names>
          </string-name>
          :
          <article-title>Adaptive background mixture models for real-time tracking</article-title>
          . In: CVPR. p.
          <fpage>2246</fpage>
          . IEEE
          (
          <year>1999</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          34.
          <string-name>
            <surname>Tsai</surname>
            ,
            <given-names>D.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lai</surname>
            ,
            <given-names>S.C.</given-names>
          </string-name>
          :
          <article-title>Independent component analysis-based background subtraction for indoor surveillance</article-title>
          .
          <source>IEEE Transactions on Image Processing</source>
          <volume>18</volume>
          (
          <issue>1</issue>
          ),
          <fpage>158</fpage>
          <lpage>167</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          35.
          <string-name>
            <surname>Valentine</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Apewokin</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wills</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wills</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>An efficient, chromatic clustering-based background model for embedded vision platforms</article-title>
          .
          <source>Computer Vision and Image Understanding</source>
          <volume>114</volume>
          (
          <issue>11</issue>
          ),
          <fpage>1152</fpage>
          <lpage>1163</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          36.
          <string-name>
            <surname>Varadarajan</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Spatial mixture of gaussians for dynamic background modelling</article-title>
          .
          <source>In: Advanced Video and Signal Based Surveillance (AVSS), 2013 10th IEEE International Conference on</source>
          . pp.
          <fpage>63</fpage>
          <lpage>68</lpage>
          . IEEE
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          37.
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Luo</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jodoin</surname>
            ,
            <given-names>P.M.</given-names>
          </string-name>
          :
          <article-title>Interactive deep learning method for segmenting moving objects</article-title>
          .
          <source>Pattern Recognition Letters</source>
          <volume>96</volume>
          ,
          <fpage>66</fpage>
          <lpage>75</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          38.
          <string-name>
            <surname>White</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shah</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Automatically tuning background subtraction parameters using particle swarm optimization</article-title>
          .
          <source>In: Multimedia and Expo</source>
          , 2007 IEEE International Conference on. pp.
          <fpage>1826</fpage>
          <lpage>1829</lpage>
          . IEEE
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          39.
          <string-name>
            <surname>Wren</surname>
            ,
            <given-names>C.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Azarbayejani</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Darrell</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pentland</surname>
            ,
            <given-names>A.P.</given-names>
          </string-name>
          :
          <article-title>Pfinder: Real-time tracking of the human body</article-title>
          .
          <source>IEEE Transactions on pattern analysis and machine intelligence</source>
          <volume>19</volume>
          (
          <issue>7</issue>
          ),
          <fpage>780</fpage>
          <lpage>785</lpage>
          (
          <year>1997</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          40.
          <string-name>
            <surname>Yu</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Qian</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Object kinematic model: A novel approach of adaptive background mixture models for video segmentation</article-title>
          .
          <source>In: Intelligent Control and Automation (WCICA), 2010 8th World Congress on</source>
          . pp.
          <fpage>6225</fpage>
          <lpage>6228</lpage>
          . IEEE
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          41.
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xu</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Fusing color and texture features for background model</article-title>
          .
          <source>In: Fuzzy Systems and Knowledge Discovery: Third International Conference, FSKD 2006, Xi'an, China, September 24-28, 2006. Proceedings 3</source>
          . pp.
          <fpage>887</fpage>
          <lpage>893</lpage>
          . Springer (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          42.
          <string-name>
            <surname>Zhao</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bouwmans</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>A fuzzy background modeling approach for motion detection in dynamic backgrounds</article-title>
          .
          <source>In: Multimedia and signal processing</source>
          , pp.
          <fpage>177</fpage>
          <lpage>185</lpage>
          . Springer (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>