<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Integrated Analysis and Synthesis of Pedestrian Dynamics: First Results in a Real World Case Study</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
<institution>Sultan D. Khan, Luca Crociani, Giuseppe Vizzari, Complex Systems and Artificial Intelligence Research Center, Università degli Studi di Milano-Bicocca</institution>
          ,
          <addr-line>Milano</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
<p>The paper introduces an agent-based model for the simulation of crowds of pedestrians whose main innovative element is the representation and management of an important type of social interaction among pedestrians: members of groups, in fact, carry out a form of interaction (by means of verbal or non-verbal communication) that allows them to preserve the cohesion of the group even in particular conditions, such as counter flows, the presence of obstacles or narrow passages. The paper formally describes the model and presents its application to a real world scenario in which an analysis of the impact of groups on the overall observed system dynamics was performed. The simulation results are compared to empirical data and show that the introduced model is able to produce quantitatively plausible results in situations characterised by the presence of groups of pedestrians.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>The simulation of pedestrians and crowds is a consolidated
and successful application of research results in the more
general area of computer simulation of complex systems.
Relevant contributions to this area come from disciplines
ranging from physics and applied mathematics to computer
science, often influenced by anthropological, psychological,
sociological studies. The quality of the results provided by
simulation models was sufficient to lead to the design and
development of commercial software packages, offering useful
functionalities to the end user (e.g. CAD integration,
CAD-like functionalities, advanced visualisation and analysis tools)
in addition to a simulation engine1.</p>
<p>The last point is a crucial and critical element of this kind
of research effort: computational models represent a way to
formally and precisely define a computable form of a theory
of pedestrian and crowd dynamics. However, these theories
must be validated employing field data, acquired by means
of experiments and observations of the modelled phenomena,
before the models can actually be used for the sake of prediction.
This paper represents a step in this direction, since it presents
the application of methods from the computer vision field for
performing automated analysis of pedestrian dynamics, which
is mainly aimed at the validation of an agent-based model
for its simulation. The paper is organised as follows. A
description of the state of the art of modelling and analysis of
crowd dynamics is presented in Sec. II. Experimental methods
for the automated analysis are described in Sec. III, while the
real world case study used for their application is described
in Sec. V-A. Then, the agent-based model for the simulation
is presented in Sec. IV. First results of the analysis methods
are described in Sec. V, while a discussion on the possibilities
to exploit these data for the sake of validation of the simulation
results is presented in Sec. VI. Conclusions and future
developments end the paper.
1See http://www.evacmod.net/?q=node/5 for a large list of pedestrian
simulation models and tools.</p>
    </sec>
    <sec id="sec-2">
      <title>II. RELATED WORKS</title>
      <sec id="sec-2-1">
        <title>A. Synthesis</title>
        <p>
Pedestrian models can be roughly classified into three main
categories, which respectively consider pedestrians as particles
subject to forces, as particular states of the cells into which the
environment is subdivided (Cellular Automata (CA) approaches),
or as autonomous agents acting and interacting in an
environment. The most widely adopted particle-based approach is
represented by the social force model [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], which implicitly
employs fundamental proxemic concepts like the tendency of a
pedestrian to stay away from other ones while moving towards
his/her goal. Cellular Automata based approaches have also
been successfully applied in this context: in particular, the
floor-field model [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ], in which the cells are endowed with
a discretised gradient guiding pedestrians towards potential
destinations. Finally, works like [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] essentially extend CA
approaches, separating the pedestrians from the environment
and granting them a behavioural specification that is generally
more complex than what is generally represented in terms of
a simple CA transition rule, but they essentially adopt similar
methodologies. The resulting models are agent-based, since
pedestrians are not merely states of cells. Along this direction
of endowing models with richer behavioural specifications,
relevant innovative studies regard social aspects and the
transfer of emotions in crowds (see, e.g., [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]).
        </p>
        <p>
          A recent survey of the field [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] and a report
commissioned by the Cabinet Office [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] made clear that, even after
the substantial research that has been carried out in this area,
there is still much room for innovation in models: improving
their performance in terms of effectiveness in modelling
pedestrian and crowd phenomena, in terms of expressiveness
of the models (i.e. simplifying the modelling activity or
introducing the possibility of representing phenomena that
were still not considered by existing approaches), and in terms
of efficiency of the simulation tools. Research on models
able to represent and manage phenomena still not considered
or properly managed is thus still lively and important. One
of the aspects of crowds of pedestrians that has only been
recently considered is represented by the implications of the
presence of groups. A small number of recent works represent
a relevant effort towards the modeling of groups, respectively
in particle-based [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] (extending the social force model), in
CA-based [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] (with ad-hoc approaches) and in agent-based
approaches [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] (introducing specific behavioral rules for
managing group oriented behaviors): in all these approaches,
groups are modeled by means of additional contributions to the
overall pedestrian behaviour representing the tendency to stay
close to other group members. However, the above approaches
mostly deal only with small groups in relatively low density
conditions; those dealing with relatively large groups (tens of
pedestrians) were not validated against real data.
        </p>
        <p>B. Automated Analysis</p>
<p>1) Dominant Flows Motion Detection: Crowd flow
segmentation has multiple benefits: 1) it enables clutter-free
visualization of moving groups; 2) it is independent of detection
and tracking; 3) it provides input for pedestrian simulation
models. Automatic analysis of crowds has become a
central focus for researchers in computer vision.
Detecting and tracking pedestrians are traditional ways of crowd
analysis. Most algorithms developed for object detection and
tracking work well in low density crowds, where the number
of people is less than twenty, but in dense crowds, where the
number of people may be in the hundreds or thousands, detection and
tracking of individuals are almost impossible due to multiple
occlusions. Therefore, research has focused on gathering
global motion information at a higher scale. Global analysis of
dense groups of moving people is often based on optical flow
analysis.</p>
        <p>
          A survey of the crowd analysis methods employed in
computer vision is presented in [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. An interdisciplinary
framework for crowd analysis to improve simulation models
of pedestrian flows is also presented in [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ].
        </p>
        <p>
          In [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] Lagrangian coherent structures are detected by
calculating the Finite-Time Lyapunov Exponent (FTLE) field
over the phase space; these coherent structures represent
different crowd motion patterns generated by people moving in
different directions. In [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] SIFT features were instead used to
detect dominant motion flows: flow vectors of SIFT features
are calculated, the motion flow map is divided into
small regions of equal size, and in each region dominant motion
flows are estimated by clustering the flow vectors. Crowd flow
is estimated using multiple visual features in [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ],
where the flow is estimated by counting the persons passing
through a virtual trip wire and accumulating the total number of
foreground pixels. A novel region-growing scheme is adopted
in [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] for crowd flow segmentation, where a translation flow
model approximates the motion of the crowd and region growing
is employed to segment the crowd flow. A min-cut/max-flow
algorithm is used in [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] for crowd flow segmentation.
A histogram-based crowd flow segmentation is reported in [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ],
where the angle matrix of foreground pixels is segmented instead
of the optical flow foreground; the derivative curve of the histogram
is used to segment the flow.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2) People Counting in High Density Crowds</title>
        <p>
          Estimating crowd density and counting people are important factors in
crowd management. An increase in the number of people in a small
area may create problems such as physical injuries and fatalities;
hence, early detection of crowding can help avoid these problems.
Counting the people moving in a crowd can provide
information about a blockage at some point, or even a stampede.
[
          <xref ref-type="bibr" rid="ref19">19</xref>
          ] proposed a Bayesian model-based segmentation to segment
and count people, but this method is not appropriate for high
density crowds. [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ] used blob features of moving objects
to eliminate background and shadow from the image. [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ]
showed a classification accuracy of 95% when crowd density
is classified into four classes using wavelet descriptors.
[
          <xref ref-type="bibr" rid="ref22">22</xref>
          ] used texture descriptors called advanced local binary
pattern descriptors to estimate crowd density. [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ]
proposed a system that calculates the directional movement of
the crowd and counts the people as they cross a virtual line.
[
          <xref ref-type="bibr" rid="ref24">24</xref>
          ] used a specialized infra-red imaging system
to count the people in the crowd. [
          <xref ref-type="bibr" rid="ref25">25</xref>
          ] have discussed in
detail the concept of crowd monitoring using image processing
through visual cameras. [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ] used simple background
subtraction on static images to estimate the crowd density. Some
other researchers [
          <xref ref-type="bibr" rid="ref27">27</xref>
          ], [
          <xref ref-type="bibr" rid="ref28">28</xref>
          ], [
          <xref ref-type="bibr" rid="ref29">29</xref>
          ] have also used the concept of
background removal to estimate the crowd area. To estimate
the crowd density using image processing, many researchers
have used information about texture, edges or some global or
local features [
          <xref ref-type="bibr" rid="ref30">30</xref>
          ], [
          <xref ref-type="bibr" rid="ref31">31</xref>
          ].
        </p>
        <p>III. EXPERIMENTAL METHOD FOR AUTOMATED ANALYSIS</p>
      </sec>
      <sec id="sec-2-3">
        <title>A. Motion Flow Characterisation</title>
        <p>
          In this paper, we use Horn &amp; Schunck [
          <xref ref-type="bibr" rid="ref32">32</xref>
          ] to calculate
the dense optical flow field. The calculated dense flow field
also contains background motion. We remove the
background optical flow vectors by setting a magnitude threshold. The
optical flow vectors that are coherent and part of the same
flow are then clustered. After clustering, some small blobs appear, which
are removed by a blob absorption method.
        </p>
        <p>
          1) Motion Flow Composition: The motion flow field is a set
of independent flow vectors in each frame and each flow vector
is associated with its respective spatial location. The motion
flow field is calculated by using optical flow methods. Given
two images, Ft and Ft+1 as input, we use Horn and Schunck
[
          <xref ref-type="bibr" rid="ref32">32</xref>
          ] to compute dense optical flow. Consider a feature point
i in Ft; its flow vector Zi includes its location Xi = (xi, yi)
and its velocity Vi = (vxi, vyi), i.e. Zi = (Xi, Vi). We
denote by Ri(Zi) the magnitude of a flow vector and by
Θi its angle or direction. The vector Mi = (Zi, Ri, Θi)
summarises all the information associated with a feature i. Then
{M1, M2, ..., Mk} is the motion flow field of all the points
of an image comprising r × c features such that r · c = k,
with r the number of rows and c the number of columns.
        </p>
<p>When computing dense optical flow we calculate the
movement of all the pixels of an image, so it is usually best
to remove the background in order to reduce computational costs
without losing much information. To do so, a magnitude
threshold is set to eliminate features characterised by a low
magnitude, which are considered background noise and are
not taken into account. This technique can also be used when
computing coarse optical flow.</p>
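<p>The magnitude-thresholding step described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the array layout and the threshold value are assumptions.</p>

```python
import numpy as np

def remove_background(flow, mag_threshold=0.5):
    """Zero out flow vectors whose magnitude falls below a threshold.

    flow: array of shape (H, W, 2) holding (vx, vy) per pixel, e.g. as
    produced by a dense optical flow method such as Horn and Schunck.
    Low-magnitude vectors are treated as background noise.
    Returns the filtered flow field and a boolean foreground mask.
    """
    magnitude = np.hypot(flow[..., 0], flow[..., 1])
    foreground_mask = magnitude >= mag_threshold
    filtered = np.where(foreground_mask[..., None], flow, 0.0)
    return filtered, foreground_mask
```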
        <p>
          2) Motion Flow Field Segmentation: The motion flow field
{M1, M2, ..., Mn} is a matrix in which each flow vector
represents motion in a specific direction, as shown in Figure 1. Figure
1 does not show dominant motion patterns, so we cannot infer
any meaningful information about the flow. Therefore, we need a
method that automatically analyses the similarity among the
flow vectors. We compute the similarity among the flow vectors by
applying a similarity measure approach [
          <xref ref-type="bibr" rid="ref33">33</xref>
          ]. Similar vectors are
grouped together to represent a specific motion pattern by using
clustering techniques [
          <xref ref-type="bibr" rid="ref34">34</xref>
          ]. This process of grouping vectors
that represent a specific motion pattern is called segmentation.
After the segmentation process, the motion field is divided into small
segments; flow vectors that are similar to each other are
clustered.
        </p>
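<p>A heavily simplified stand-in for this segmentation step is sketched below: instead of the similarity measure and clustering techniques cited in the text, flow vectors are grouped by quantising their direction into angular sectors, so that vectors pointing in similar directions receive the same segment label. This is purely illustrative.</p>

```python
import numpy as np

def segment_by_direction(flow, mask, n_sectors=8):
    """Assign each foreground flow vector to a motion segment by
    quantising its direction into n_sectors angular sectors.

    flow: (H, W, 2) flow field; mask: (H, W) boolean foreground mask.
    Returns an (H, W) label map; label 0 marks background.
    """
    angles = np.arctan2(flow[..., 1], flow[..., 0])  # in [-pi, pi]
    sectors = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    return np.where(mask, sectors + 1, 0)
```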
        <p>
          These blobs represent small clusters and result from the
following causes. First, if an object moves slowly, the flow vectors inside
and outside the object are not the same and as a
result are classified into two different flows. Second, if two
opposite optical flows intersect, the optical flow at the intersection
point is ambiguous. Usually these small clusters, or blobs,
are not part of the dominant motion flows. We adopt a blob
absorption approach, where these blobs are absorbed either
by a dominant cluster or by the background. Let blob Bi of color
Ci be the blob to be absorbed. We then find the edges of the blob
by using the Canny et al. [
          <xref ref-type="bibr" rid="ref33">33</xref>
          ] edge detector. For any point p(x, y)
on the edge of the blob, we search the 2×2 neighborhood of the edge
point. If any of the neighborhood points has a different color Cj,
representing the dominant motion or the background, then we change
the color of blob Bi to Cj, the color that represents the dominant
flow or background.
        </p>
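<p>The absorption rule can be sketched roughly as follows. This is an illustrative simplification: it scans a 4-neighbourhood of blob pixels rather than the 2×2 edge-point neighbourhood of the text, and it operates directly on the label map without a separate edge-detection pass.</p>

```python
import numpy as np

def absorb_blob(labels, blob_label):
    """Recolour a small blob to the label of an adjacent region.

    labels: (H, W) integer segment map (e.g. from flow segmentation);
    blob_label identifies the small cluster to absorb. The first
    differing neighbouring label found (a dominant flow or the
    background) replaces the blob's label everywhere.
    """
    H, W = labels.shape
    ys, xs = np.nonzero(labels == blob_label)
    for y, x in zip(ys, xs):
        # inspect the 4-neighbourhood of each blob pixel
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W and labels[ny, nx] != blob_label:
                labels[labels == blob_label] = labels[ny, nx]
                return labels
    return labels
```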
      </sec>
      <sec id="sec-2-4">
        <title>B. People Counting in a High Density Situations</title>
<p>In this paper, we propose a framework to count people
in extremely dense crowds where people are moving at
different speeds. Foreground segmentation is performed with various
background subtraction methods, namely approximate
median, frame difference and mixture of Gaussians.
The time complexity of these techniques is compared and the
approximate median technique is selected, as it is fast and accurate.
Blob analysis is then performed to count the people in the crowd, and the
blob area is optimised to obtain the best counting accuracy. We
extract the foreground by using a Gaussian mixture
model and optical flow. After obtaining the foreground objects we
use a blob analysis method and optimise the blob area by
comparing it with ground truth data. Experimental results
show an accuracy of 90%.</p>
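<p>The approximate median technique mentioned above can be sketched as follows; the step size and deviation threshold are illustrative assumptions. The background estimate is nudged towards each new frame by a fixed step, converging to the temporal median of the pixel values.</p>

```python
import numpy as np

def approximate_median_update(background, frame, step=1.0):
    """One update of the approximate median background model: move
    each background pixel one step towards the current frame."""
    return background + step * np.sign(frame.astype(float) - background)

def extract_foreground(background, frame, threshold=25):
    """Mark as foreground the pixels deviating strongly from the
    background model."""
    return np.abs(frame.astype(float) - background) > threshold
```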
        <p>
          1) Motion Segmentation: Motion segmentation is the most
important pre-processing step for detecting moving objects
in a video. Traditionally in video surveillance with a fixed
camera, researchers tend to look for some sort of motion in the
video. Such videos have two parts: the background and the
foreground. The objects in motion are the foreground part of
the video, and the remaining static part is the background. Motion
detection is used to extract the foreground part from the video. Such
extraction is useful for detecting, tracking and
understanding the behavior of an object. Traditionally, a background
subtraction method is used for extracting moving objects from
a video frame, where pixels in the current frame that deviate
significantly from the background are considered part of
moving objects. Such methods are usually prone to
errors due to the unpredictable and changing behavior of pixels.
In addition, they cannot accurately detect fast-moving,
slow-moving or multiple objects, and they
are affected by changes in illumination in the video frame.
Sometimes a change in illumination of the static background will
be detected as part of a moving object. Such errors and noise
must be removed from the foreground objects before applying
blob analysis. In order to extract valid and accurate foreground
objects, we employed both a Gaussian mixture model and Horn
and Schunck optical flow [
          <xref ref-type="bibr" rid="ref32">32</xref>
          ].
        </p>
<p>2) Blob Area Optimisation: Blobs are connected
regions in a binary image. For blob detection, the image is first
converted to a binary image; the next step is finding the
connected components in that image. The
properties of each connected component (object)
are then measured. In this paper, we are interested in measuring
the ‘Area’ of each connected component, i.e. the number
of pixels in the region. Each binary image has many
connected components of variable size, and we are interested in
finding those whose area is greater than
some specific value. The area of a connected component differs
depending on the distance of the camera from the scene: the
smaller the distance between the camera and the crowd, the greater the
number of pixels in a connected component, and hence the larger
the blob size of the object. Hence the first step in people
counting is to decide the optimal area of a connected component.
For this purpose, we have used four initial frames whose
ground truth is available. In an iterative approach, we change
the minimum blob area and count the people. This count
is then compared with the ground truth of the frame (the actual
number of people in the frame). For each frame, the optimal area
is the one for which the people count error is minimum.</p>
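<p>The iterative area-calibration procedure can be sketched as follows; the connectivity (4-neighbourhood) and the candidate range are assumptions made for illustration, not details taken from the paper.</p>

```python
import numpy as np
from collections import deque

def count_blobs(binary, min_area):
    """Count connected components (4-connectivity) whose pixel area
    is at least min_area, via breadth-first flood fill."""
    H, W = binary.shape
    seen = np.zeros((H, W), dtype=bool)
    count = 0
    for y in range(H):
        for x in range(W):
            if binary[y, x] and not seen[y, x]:
                area, q = 0, deque([(y, x)])
                seen[y, x] = True
                while q:
                    cy, cx = q.popleft()
                    area += 1
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < H and 0 <= nx < W \
                                and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if area >= min_area:
                    count += 1
    return count

def optimal_min_area(frames, ground_truth, candidates=range(1, 20)):
    """Pick the minimum-area threshold minimising the total people-count
    error over calibration frames with known ground truth."""
    best, best_err = None, float('inf')
    for a in candidates:
        err = sum(abs(count_blobs(f, a) - gt)
                  for f, gt in zip(frames, ground_truth))
        if err < best_err:
            best, best_err = a, err
    return best
```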
      </sec>
    </sec>
    <sec id="sec-3">
      <title>IV. PEDESTRIAN SIMULATION MODEL</title>
<p>In this section the formalisation of the agent-based
computational model is discussed, focusing on the definition
of its three main elements: environment, update mechanism
and pedestrian behaviour.</p>
      <sec id="sec-3-1">
        <title>A. Environment</title>
        <p>
          The environment is modelled in a discrete way,
representing it as a grid of square cells with a 40 cm side
(according to the average area occupied by a pedestrian [
          <xref ref-type="bibr" rid="ref35">35</xref>
          ]).
Cells have a state indicating whether they are vacant or
occupied by obstacles or pedestrians: State(c) : Cells →
{Free, Obstacle, OnePed_i, TwoPeds_ij}.
        </p>
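<p>The cell state and its transition rules can be sketched as follows; the class and function names are illustrative, not the model's actual data structures.</p>

```python
from enum import Enum
from dataclasses import dataclass

class CellKind(Enum):
    FREE = "Free"
    OBSTACLE = "Obstacle"
    ONE_PED = "OnePed"
    TWO_PEDS = "TwoPeds"   # allowed only in controlled overcrowding

@dataclass
class Cell:
    kind: CellKind = CellKind.FREE
    occupants: tuple = ()   # pedestrian identifiers (at most two)

def occupy(cell, ped_id, allow_overlap=False):
    """Place a pedestrian on a cell, enforcing the state rules: a
    second occupant is admitted only when overlap is explicitly
    allowed (to simulate densities above the grid's normal limit)."""
    if cell.kind is CellKind.OBSTACLE:
        raise ValueError("cannot enter an obstacle cell")
    if cell.kind is CellKind.FREE:
        cell.kind, cell.occupants = CellKind.ONE_PED, (ped_id,)
    elif cell.kind is CellKind.ONE_PED and allow_overlap:
        cell.kind, cell.occupants = CellKind.TWO_PEDS, cell.occupants + (ped_id,)
    else:
        raise ValueError("cell already fully occupied")
    return cell
```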
<p>The last two elements of the definition point out whether the cell
is occupied by one or two pedestrians, respectively, with their
own identifiers: the second case is allowed only in a controlled
way to simulate overcrowded situations, in which the density
is higher than 6.25 m⁻² (i.e. the maximum density reachable
by our discretisation).</p>
        <p>The information related to the scenario2 of the simulation
is represented by means of spatial markers, special sets of
cells that describe relevant elements in the environment. In
particular, three kinds of spatial markers are defined: (i) start
areas, which indicate the generation points of agents in the
scenario. Agent generation can occur in a block, all at once, or
according to a user-defined frequency, along with information
on the type of agent to be generated, its destination and group
membership; (ii) destination areas, which define the possible
targets of the pedestrians in the environment; (iii) obstacles,
which identify all the non-walkable areas, such as walls and zones
where pedestrians cannot enter.
2The scenario represents both the structure of the environment and all the information
required for the realization of a specific simulation, such as crowd
management demands (pedestrian generation profiles, origin-destination matrices) and
spatial constraints.</p>
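<p>The three kinds of spatial markers can be sketched as simple annotated cell sets; the field and function names here are illustrative assumptions, not the model's actual data structures.</p>

```python
from dataclasses import dataclass
from enum import Enum
from typing import FrozenSet, Optional, Tuple

class MarkerKind(Enum):
    START = "start"              # agent generation points
    DESTINATION = "destination"  # possible pedestrian targets
    OBSTACLE = "obstacle"        # non-walkable areas

@dataclass(frozen=True)
class SpatialMarker:
    """A spatial marker: a set of cells annotating the environment."""
    kind: MarkerKind
    cells: FrozenSet[Tuple[int, int]]             # (x, y) grid coordinates
    generation_frequency: Optional[float] = None  # start areas only

def walkable(cell, markers):
    """A cell is walkable if no obstacle marker covers it."""
    return all(cell not in m.cells or m.kind is not MarkerKind.OBSTACLE
               for m in markers)
```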
        <p>
          Space annotation allows the definition of virtual grids of the
environment, as containers of information for agents and their
movement. In our model, we adopt the floor field approach [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ],
that is based on the generation of a set of superimposed grids
(similar to the grid of the environment) starting from the
information derived from spatial markers. Floor field values
are spread on the grid as a gradient and they are used to support
pedestrians in the navigation of the environment, representing
their interactions with static objects (i.e., destination areas and
obstacles) or with other pedestrians. Moreover, floor fields can
be static (created at the beginning and not changed during the
simulation) or dynamic (updated during the simulation). Three
kinds of floor fields are defined in our model: (i) path field,
that indicates for every cell the distance from one destination
area, acting as a potential field that drives pedestrians towards
it (static). One path field for each destination point is generated
in each scenario; (ii) obstacles field, that indicates for every
cell the distance from neighbour obstacles or walls (static).
Only one obstacles field is generated in each simulation
scenario; (iii) density field, that indicates for each cell the
pedestrian density in the surroundings at the current time-step
(dynamic). Like the previous one, the density field is unique
for each scenario.
        </p>
        <p>
          A chessboard metric with a √2 variation over corners [
          <xref ref-type="bibr" rid="ref36">36</xref>
          ] is
used to produce the spreading of the information in the path
and obstacle fields. Moreover, pedestrians cause a modification
to the density field by adding a value v = 1/d² to cells whose
distance d from their current position is below a given
threshold. Agents are able to perceive floor fields values in their
neighbourhood by means of a function Val (f; c) (f represents
the field type and c is the perceived cell). This approach to the
definition of the objective part of the perception model moves
the burden of its management from agents to the environment,
which would need to monitor agents anyway in order to
produce some of the simulation results.
        </p>
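<p>The spreading of a path floor field as a distance gradient can be sketched with a Dijkstra-style expansion over the grid, using unit cost for straight steps and √2 over corners as described above. The function signature is an illustrative assumption.</p>

```python
import heapq
import math

def spread_path_field(grid_w, grid_h, destinations, obstacles):
    """Spread a path floor field from destination cells as a gradient
    of distances, with a sqrt(2) cost for diagonal steps (chessboard
    metric with the corner variation). Obstacle cells stay at
    infinite distance."""
    field = {(x, y): math.inf for x in range(grid_w) for y in range(grid_h)}
    pq = []
    for d in destinations:
        field[d] = 0.0
        heapq.heappush(pq, (0.0, d))
    while pq:
        dist, (x, y) = heapq.heappop(pq)
        if dist > field[(x, y)]:
            continue  # stale queue entry
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == 0 and dy == 0:
                    continue
                n = (x + dx, y + dy)
                if n not in field or n in obstacles:
                    continue
                step = math.sqrt(2) if dx and dy else 1.0
                if dist + step < field[n]:
                    field[n] = dist + step
                    heapq.heappush(pq, (field[n], n))
    return field
```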
      </sec>
      <sec id="sec-3-2">
        <title>B. Pedestrians and Movement</title>
<p>Formally, our agents are defined by the following
triple: Ped = ⟨Id, Group, State⟩, where State =
⟨position, oldDir, Dest⟩, with their own numerical identifier,
their group (if any) and their internal state, which defines the
current position of the agent, the previous movement and the
final destination, associated to the relative path field.</p>
        <p>Before describing agent behavioural specification, it is
necessary to introduce the formal representation of the nature and
structure of the groups they can belong to, since this is an
influential factor for movement decisions.</p>
        <p>1) Social Interactions: To represent different types of
relationships, two kinds of groups have been defined in the model:
a simple group indicates a family or a restricted set of friends,
or any other small assembly of persons in which there is a
strong and simply recognisable cohesion; a structured group
is generally a large one (e.g. team supporters or tourists in
an organised tour), that shows a slight cohesion and a natural
fragmentation into subgroups, sometimes simple.</p>
        <p>
          Between members of a simple group it is possible to identify
an apparent tendency to stay close, in order to guarantee
the possibility to perform interactions by means of verbal
or non-verbal communication [
          <xref ref-type="bibr" rid="ref37">37</xref>
          ]. On the contrary, in large
groups people are mostly linked by the sharing of a common
goal, and the overall group tends to maintain only a weak
compactness, with a following behaviour between members.
In order to model these two typologies, the formal
representation of a group is the following: Group :
⟨Id, [SubGroup1, ..., SubGroupm], [Ped1, ..., Pedn]⟩.
        </p>
<p>In particular, if the group is simple, it will have an empty
set of subgroups; otherwise it will not contain any direct
references to pedestrians, which will instead be stored in
the respective leaves of its tree structure. Differences in the
modelled behavioural mechanisms of simple/structured groups
will be analysed in the following section, with the description
of the utility function.</p>
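<p>The group-as-tree representation can be sketched as follows: a structured group holds subgroups and no direct pedestrian references, while a simple group (a leaf) holds the pedestrian identifiers directly. Class and method names are illustrative.</p>

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Group:
    """Group as a tree: structured groups contain subgroups, simple
    groups (leaves) contain pedestrian identifiers directly."""
    id: int
    subgroups: List["Group"] = field(default_factory=list)
    pedestrians: List[int] = field(default_factory=list)

    def members(self):
        """Collect all pedestrian ids from the leaves of the tree."""
        if not self.subgroups:          # simple group: a leaf
            return list(self.pedestrians)
        out = []
        for sg in self.subgroups:
            out.extend(sg.members())
        return out
```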
<p>2) Agent Behaviour: Agent behaviour in a single
simulation turn is organised into four steps: perception, utility
calculation, action choice and movement. The perception
step provides the agent with all the information needed for
choosing its destination cell. In particular, if an agent does
not belong to a group (from here on called an individual), in
this phase it will only extract values from the floor fields,
while otherwise it will also perceive the positions
of the other group members within a configurable distance,
for the calculation of the cohesion parameter. The choice
of each action is based on a utility value assigned to
every possible movement according to the function
U(c) = [kg·G(c) + kob·Ob(c) + ks·S(c) + kc·C(c) + ki·I(c) + kd·D(c) + kov·Ov(c)] / d.</p>
<p>Function U(c) takes into account the behavioural
components considered relevant for pedestrian movement; each one
is modelled by means of a function that returns values in
the range [−1, +1], if it represents an attractive element (i.e. its
goal), or in the range [−1, 0], if it represents a repulsive one
for the agent. For each function a coefficient has been
introduced for its calibration: these coefficients, being also able
to actually modulate tendencies based on objective information
about the agent's spatial context, complement the objective part
of the perception model, allowing agent heterogeneity. The
purpose of the denominator d is to penalise the
diagonal movements, in which the agents cover a greater
distance (0.4·√2 instead of 0.4 m) and would assume a higher speed
with respect to the non-diagonal ones.</p>
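<p>The utility computation can be sketched as a weighted sum over the behavioural components divided by the denominator d. Component and weight names follow the formula above; the dictionary-based signature is an illustrative assumption.</p>

```python
import math

def utility(cell_scores, weights, diagonal):
    """Compute U(c) as the weighted sum of the behavioural components
    divided by the denominator d, which penalises diagonal moves
    (greater covered distance).

    cell_scores: component values keyed by name (G, Ob, S, C, I, D, Ov),
    each in [-1, 1] for attractive or [-1, 0] for repulsive elements.
    weights: the matching calibration coefficients.
    """
    numerator = sum(weights[k] * cell_scores[k] for k in cell_scores)
    d = math.sqrt(2) if diagonal else 1.0
    return numerator / d
```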
        <p>The first three functions exploit information derived by
local floor fields: G(c) is associated to goal attraction whereas
Ob(c) and S(c) respectively to geometric and social
repulsion. Functions C(c) and I(c) are linear combinations of the
perceived positions of the members of the agent's group (respectively
simple and structured) in an extended neighbourhood; they
compute the level of attractiveness of each neighbour cell,
relating to group cohesion phenomenon. Finally, D(c) adds
a bonus to the utility of the cell next to the agent according
to his/her previous direction (a sort of inertia factor), while
Ov(c) describes the overlapping mechanism, a method used
to allow two pedestrians to temporarily occupy the same cell
at the same step, to manage high-density situations.</p>
        <p>As we previously said, the main difference between simple
and structured groups resides in the cohesion intensity, which
in the simple ones is significantly stronger. Functions C(c)
and I(c) have been defined to correctly model this difference.
Nonetheless, various preliminary tests on benchmark scenarios
showed us that, used alone, function C(c) is not able
to produce realistic simulations. Human behaviour is, in
fact, very complex and can differ even in simple
situations, for example by allowing temporary fragmentation
of simple groups in front of constraints (obstacles or
opposite flows). By acting statically on the calibration weight,
it is not possible to achieve this dynamic behaviour: with a
small cohesion parameter several permanent fragmentations
were reproduced, while with an increase of it we obtained
no group dispersions, but also an excessive and unrealistic
compactness.
        <p>In order to face this issue, another function has been
introduced in the model, to adaptively balance the calibration
weight of the three attractive behavioural elements, depending
on the fragmentation level of simple groups:</p>
        <p>Balance(k) =
(1/3) k + (2/3) k DispBalance, if k = kc;
(1/3) k + (2/3) k (1 − DispBalance), if k = ki or k = kg</p>
        <p>where DispBalance = tanh(Disp(Group)/δ), with Disp(Group) =
Area(Group)/|Group|; ki, kg and kc are the weighted parameters of
U(c), δ is the calibration parameter of this mechanism and
Area(Group) calculates the area of the convex hull defined
using the positions of the group members. Fig. 2 exemplifies
both the group dispersion computation and the effects of
the Balance function on the parameters. The effective utility
computation, therefore, employs the calibration weights resulting
from this computation, which allows achieving a dynamic and
adaptive behaviour of groups: cohesion relaxes if members
are sufficiently close to each other and it intensifies with the
growth of dispersion.</p>
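        <p>A minimal sketch of this adaptive balancing, with the convex hull area computed via the monotone chain algorithm; the tanh form of DispBalance and the value of the calibration parameter δ (here delta) are taken from the reconstruction above and are illustrative, not calibrated values:</p>

```python
import math

def convex_hull_area(points):
    """Area of the convex hull of 2D points (monotone chain + shoelace)."""
    pts = sorted(set(points))
    if len(pts) < 3:
        return 0.0
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    hull = lower[:-1] + upper[:-1]
    # Shoelace formula on the hull vertices
    area = 0.0
    for i in range(len(hull)):
        x1, y1 = hull[i]
        x2, y2 = hull[(i + 1) % len(hull)]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def balance(weights, group_positions, delta=2.0):
    """Adaptively rebalance kc (cohesion) against ki, kg with dispersion.

    weights: dict with keys 'kc', 'ki', 'kg'; delta is the calibration
    parameter of the mechanism (value here is illustrative)."""
    disp = convex_hull_area(group_positions) / len(group_positions)
    disp_balance = math.tanh(disp / delta)  # grows with group dispersion
    out = {}
    for name, k in weights.items():
        if name == 'kc':
            out[name] = k / 3 + (2 * k / 3) * disp_balance
        else:
            out[name] = k / 3 + (2 * k / 3) * (1 - disp_balance)
    return out
```

        <p>For a compact group the cohesion weight relaxes towards kc/3 while ki and kg stay near their full values; as the convex hull grows, the balance inverts and cohesion intensifies.</p>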
        <p>After the utility evaluation for all the cells in the
neighbourhood, the choice of the action is stochastic, with the probability
to move to each cell c given by P(c) = N e^U(c), where N is the
normalization factor. On the basis of P(c), agents move to the
resulting cell according to their set of possible actions,
defined as the list of the eight possible movements in the Moore
neighbourhood, plus the action of keeping the current position (indicated
as X): A = {NW, N, NE, W, X, E, SW, S, SE}.</p>
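        <p>The stochastic choice is a softmax over the nine actions; a minimal sketch (function and variable names are ours, not the paper's):</p>

```python
import math
import random

# Moore neighbourhood actions; X keeps the current position
ACTIONS = ['NW', 'N', 'NE', 'W', 'X', 'E', 'SW', 'S', 'SE']

def choose_action(utilities, rng=random):
    """Sample an action with probability P(c) = N * exp(U(c)).

    utilities: dict mapping action -> U(c); the normalization factor N
    is 1 / sum_c exp(U(c))."""
    weights = {a: math.exp(u) for a, u in utilities.items()}
    total = sum(weights.values())  # this is 1/N
    r = rng.random() * total
    acc = 0.0
    for a in ACTIONS:
        if a in weights:
            acc += weights[a]
            if r <= acc:
                return a
    return 'X'  # numerical fallback, unreachable in practice
```

        <p>A cell with a markedly higher utility dominates the draw but never monopolises it, which is what keeps the movement stochastic.</p>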
      </sec>
      <sec id="sec-3-3">
        <title>C. Time and Update Mechanism</title>
        <p>
          Time is also discrete: the duration of a time step was initially
set to 0.31 s. This choice, considering the side of the cell (40 cm),
generates a linear pedestrian speed of about 1.3 m/s, which is in line
with data from the literature representing observations of crowds in
normal conditions [
          <xref ref-type="bibr" rid="ref35">35</xref>
          ].
        </p>
        <p>
          Regarding the update mechanism, three different strategies
are usually considered in this context [
          <xref ref-type="bibr" rid="ref38">38</xref>
          ]: ordered sequential,
shuffled sequential and parallel update. The first two strategies
are based on a sequential update of the agents, managed according
either to a static list of priorities, reflecting their order of
generation, or to a dynamic one, shuffled at each time step.
In contrast, the parallel update computes the choice of movement
of all the pedestrians at the same time, actuating the choices and
managing collisions in a later stage. In the model we adopted the
parallel update strategy, which is usually considered more realistic
since it accounts for the conflicts that arise from movement in a
shared space [
          <xref ref-type="bibr" rid="ref39">39</xref>
          ], [
          <xref ref-type="bibr" rid="ref40">40</xref>
          ].
        </p>
        <p>With this update strategy, the agents' life-cycle must consider
that, before carrying out the movement execution, potential
conflicts3 must be solved. The overall simulation step therefore
follows a three-step procedure: (i) update of choices and
conflict detection for each agent; (ii) conflict resolution,
that is, the resolution of the detected conflicts between agent
intentions; (iii) agent movement, that is, the update of agent
positions exploiting the previous conflict resolution, and field
update, that is, the computation of the new density field
according to the updated positions of the agents.</p>
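        <p>The three-step procedure can be sketched as follows; the agent interface and the two callbacks are illustrative placeholders for the model-specific parts, not the paper's actual code:</p>

```python
from collections import defaultdict

def simulation_step(agents, resolve_conflict, update_density_field):
    """One parallel-update step: (i) choice and conflict detection,
    (ii) conflict resolution, (iii) movement and field update."""
    # (i) every agent chooses a destination cell simultaneously
    intentions = defaultdict(list)
    for agent in agents:
        intentions[agent.choose_destination()].append(agent)

    # (ii) a conflict is a cell chosen by more than one agent
    movers = []
    for cell, contenders in intentions.items():
        if len(contenders) == 1:
            movers.append((contenders[0], cell))
        else:
            for winner in resolve_conflict(contenders):
                movers.append((winner, cell))

    # (iii) actuate the movements, then recompute the density field
    for agent, cell in movers:
        agent.position = cell
    update_density_field(agents)
```

        <p>Separating detection from actuation is what makes the update parallel: no agent observes another agent's move within the same step.</p>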
        <p>
          The resolution of conflicts employs an approach essentially
based on the one introduced in [
          <xref ref-type="bibr" rid="ref40">40</xref>
          ], based on the notion
of friction. Conflicts can involve two or more pedestrians: in case
more than two pedestrians are involved in a conflict for the same
cell, the first step is to block all but two of them, randomly
chosen, reducing the problem to a simple case. To manage a simple
conflict, a random number ∈ [0, 1] is generated and compared to two
thresholds, frict_l and frict_h, with 0 &lt; frict_l &lt; frict_h ≤ 1: the
outcome is that all agents are blocked when the extracted number is
lower than frict_l, that only one agent (chosen randomly) moves when
the extracted number is between frict_l and frict_h included, or that
two agents move when the number is higher than frict_h (in this case
pedestrian overlapping occurs). For our tests, the values of the
thresholds make it relatively unlikely that a simple conflict is
resolved with one agent moving and the other blocked, and much less
likely that overlapping occurs.
        </p>
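        <p>A minimal sketch of this friction-based resolution; the threshold values below are illustrative defaults, not the calibrated ones used in the tests:</p>

```python
import random

def resolve_conflict(contenders, frict_l=0.6, frict_h=0.98, rng=random):
    """Friction-based resolution of a conflict for a single cell.

    Returns the list of agents allowed to move. More than two
    contenders are first reduced to two, blocking the rest at random."""
    contenders = list(contenders)
    rng.shuffle(contenders)
    pair = contenders[:2]   # all the others stay blocked this step
    if len(pair) < 2:
        return pair         # no actual conflict
    x = rng.random()        # extracted number in [0, 1]
    if x < frict_l:
        return []           # friction: everybody is blocked
    elif x <= frict_h:
        return [pair[0]]    # one randomly chosen agent moves
    else:
        return pair         # both move: temporary overlapping
```

        <p>With these defaults, full blocking is the most frequent outcome and overlapping the rarest, mirroring the qualitative ordering described above.</p>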
        <p>3essentially related to the simultaneous choice of two (or more) pedestrians
to occupy the same cell</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>V. EXPERIMENTAL RESULTS</title>
      <p>
        This section discusses the qualitative analysis of the results
obtained from the experiments. We carried out our experiments on a PC
with a 2.6 GHz Core i5 processor and 4.0 GB of memory, on the video by
Bandini et al. [
        <xref ref-type="bibr" rid="ref41">41</xref>
        ], whose scenario is described in the following.
      </p>
      <sec id="sec-4-1">
        <title>A. The Analysed Scenario</title>
        <p>The survey was performed on the 24th of November 2012,
from about 2:50 pm to 4:10 pm. It consisted of the observation
of the bidirectional pedestrian flows within the Vittorio
Emanuele II gallery (see Fig. 4), a popular
commercial-touristic walkway situated in the Milan city centre (Italy).
The gallery was chosen as a crowded urban scenario, given
the large number of people that pass through it during the
weekend for shopping, entertainment and visits to the
touristic-historical attractions in the centre of Milan.</p>
        <p>The team performing the observation was composed of
four people. Several preliminary inspections were performed
to check the topographical features of the walkway. The
balcony of the gallery, which surrounds the inside volume of
the architecture at about ten metres of height, was chosen
as the location thanks to the possibility (i) to position the equipment
for video footage from a quasi-zenithal point of view and
(ii) to avoid as much as possible influencing the behaviour
of the observed subjects, thanks to the railing of the balcony partly
hiding the observation equipment. The equipment consisted
of two professional full HD video cameras with tripods. The
existing legislation about privacy was consulted and complied
with, in order to address the ethical issues concerning the privacy of
the people recorded within the pedestrian flows.</p>
      </sec>
      <sec id="sec-4-2">
        <title>B. Automated Analysis Experimental Results</title>
        <p>We computed optical flow at a coarser resolution to reduce
computational time, as shown in Figure 1(a); let the output
of optical flow be a binary image Fb. The Gaussian mixture
model was applied to the same sample frame to compute the
foreground; let the output of the Gaussian mixture model be
called Fgm. Fgm and Fb were then combined with a logical AND
to extract the foreground image Ff, as shown in Figure 3(b). After
computing optical flow and generating the foreground image, the
similarity among the flow vectors in Fb was determined by
using a similarity measure. Similar flow vectors were clustered
to represent a specific motion pattern. The blob absorption method
was applied to remove small clusters. The blob analysis method
was applied to the foreground image (Ff) to count the number
of people.</p>
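        <p>The fusion of the two masks can be sketched as follows; the magnitude threshold and the array shapes are illustrative assumptions, and the dense flow field would in practice come from an optical-flow routine rather than be built by hand:</p>

```python
import numpy as np

def combine_masks(flow, fgm, mag_thresh=0.5):
    """Fuse optical-flow and background-subtraction evidence.

    flow: (H, W, 2) array of per-pixel displacement vectors;
    fgm: (H, W) boolean foreground mask from a Gaussian mixture
    background model. The flow field is thresholded on its magnitude
    into the binary image Fb, then ANDed with Fgm to obtain Ff."""
    magnitude = np.linalg.norm(flow, axis=2)
    fb = magnitude > mag_thresh   # binary image from optical flow
    ff = np.logical_and(fb, fgm)  # Ff = Fb AND Fgm
    return fb, ff
```

        <p>The AND suppresses pixels where only one cue fires, e.g. flow noise on the background or a static foreground region, which is why Ff is cleaner than either mask alone.</p>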
        <p>Figure 3(c) and (d) show the clustering results: flow vectors
which satisfy the similarity measure are combined into one
cluster. Each cluster in the figure is color coded, representing
a different motion pattern. Some blobs appear due to the problems
discussed in section III-A, and they are removed by the blob
absorption method. Figure 3(d) shows a more refined version
of the motion patterns. As we can see from Figures 3(c) and
(d), there is a dominant motion towards North, a minor but
still significant motion towards South, little motion towards
West and an almost negligible motion towards East.</p>
        <p>Such little motion produces small clusters, which are
usually absorbed by the blob absorption method, as shown in
Figure 3(d). The results show that a large number of people
move towards North while there is less movement in the other
directions, as further illustrated in Figure 5.</p>
        <p>Figure 5(a) shows the ground truth calculated manually by
using the Ground Truth Annotation (GTA) tool: 62 people
were manually counted. After blob analysis, instead, 52 people
were automatically detected in the whole scene, as shown in
Figure 5(b). Figure 5(c) shows the count of people
moving in the different directions: while studying the dominant
direction of a crowd, the analysis of its speeds is important
to understand the overall crowd dynamics. Figure 5(d) shows the
speed magnitude of all flow vectors. We use color codes to
represent the magnitude of the speeds: the bar scale represents
different speeds, where a dark color (magnitude of 1) represents
high speed while the blue region shows zero magnitude. Figure
5(d) leads us to an important observation: people moving alone
move with high speed, while those moving in groups move
relatively slower. This kind of observation is
important and provides a useful input to pedestrian simulation
models.</p>
        <p>Our proposed algorithm detects the dominant motion patterns
in the scene. Once the motion patterns are detected, we can find
the source and sink of each pattern. Sources refer to locations
where objects appear, and sinks are the locations where objects
disappear. Most scenes contain multiple sources and
sinks: for example, a market place where multiple groups of
pedestrians move in distinct directions, originating multiple
sources and sinks; similarly, we could analyse flows in train
stations or large floors of malls. The analysed video considers
a situation in which, however, the flow of pedestrians is mostly
on the North-South axis (referring to the video orientation). By
analysing the sources and sinks of multiple motion patterns we can
obtain information about the most visited or most attractive
areas in the scene, which can help us in understanding the
behaviour of different pedestrian groups. In a transportation
scenario, we could go as far as producing a so-called origin-destination
matrix, which is an essential input for the creation
of simulation scenarios. A generalisation of the presented
work on dominant flows is therefore the object of current and future work.</p>
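        <p>A toy sketch of how an origin-destination count could be derived from the first and last points of tracked motion patterns; the border classification, input format and sizes are all illustrative assumptions:</p>

```python
from collections import Counter

def border_of(point, width, height):
    """Classify a point by its nearest scene border (N/S/E/W);
    coordinates follow image convention, with y growing downwards."""
    x, y = point
    dists = {'N': y, 'S': height - y, 'W': x, 'E': width - x}
    return min(dists, key=dists.get)

def od_matrix(tracks, width, height):
    """Build an origin-destination count from (first, last) point pairs
    of tracked motion patterns: sources map to origins, sinks to
    destinations."""
    return Counter(
        (border_of(first, width, height), border_of(last, width, height))
        for first, last in tracks
    )
```

        <p>Each counter entry such as ('N', 'S') is one cell of the origin-destination matrix, directly usable to configure start and destination areas of a simulation scenario.</p>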
      </sec>
    </sec>
    <sec id="sec-5">
      <title>VI. DISCUSSION</title>
      <p>The above described results of automated video analysis
represent a first step in the direction of a more comprehensively
integrated overall study of pedestrian and crowd
dynamics. At the moment, they still require some improvement
to be directly applied, but in this section we clarify
some of the most immediate ways to exploit these results to
support modeling and simulation.</p>
      <p>A first employment of the above results is related to the
actual configuration of the simulation scenario: the qualitative
analyses characterising the flow direction segmentation clarify
that in the analysed portion of the environment most pedestrian
movements are along the North-South axis (i.e. some pedestrians
actually stop by a window on one of the borders of the
scenario or actually enter a shop, but their number is very low
compared to the overall pedestrian flow). So, when designing
the simulation environment we can exclude the presence of
points of interest / attraction along the Eastern and Western
borders of the environment: this was not obvious, since the
analysed scenario comprises shops along the borders and since
pedestrian behaviour in other areas of the gallery is quite
different.</p>
      <p>In particular, based on these data, we configured the
environment as a large corridor of size 12.8 m × 13.6 m. At
each end, one start area is placed for agent generation,
respecting the frequency of arrival observed in the videos;
each corridor end also comprises a destination area corresponding
to the start area positioned at the other end.</p>
      <p>
        A second way to exploit the data resulting from automated
video analysis is represented by pedestrian counting and
density estimation: the indication of the average number of
pedestrians present in the simulated portion of the environment
is actually important in configuring the start areas.
In particular, in order to reproduce the counted number of
pedestrians, also characterised according to their direction,
we configured two different frequency profiles for the start
areas, which lead to, respectively, 30 and 50 pedestrians
in the environment on average. Should the automated
analysis be able to discriminate different types of groups (i.e.
individuals, couples, triples, etc.), this characterisation could
further improve the start areas configuration. The count
of pedestrians in areas of the environment, and in particular
portions of the overall analysed scene, could help generating
Cumulative Mean Density maps [
        <xref ref-type="bibr" rid="ref42">42</xref>
        ], a heat-map diagram
that can be used to validate simulation results. Examples of
this type of result on the simulation side (in the Gallery
scenario) are shown in Fig. 6, with an “instantaneous” (first
60 steps of the simulation, corresponding to 25 seconds of
simulated time) calculation of the average perceived4 local
densities in the simulated environment, divided in the two
directions of flow (i.e. North- and South-bound). Even if it is
only an example, this result is already able to provide useful
information about space utilisation: while the flow towards
North remained relatively compact, forming one large
lane, the one in the opposite direction split into
more lanes where small jams arose, easily identifiable
by the density levels.
      </p>
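      <p>The conditional accumulation described in footnote 4 (local densities averaged only over the steps in which a cell is occupied) can be sketched as follows; the grid sizes and the source of the density field are up to the caller:</p>

```python
class CMDMap:
    """Cumulative Mean Density map: for each cell, average the perceived
    local density over the steps in which the cell was occupied by a
    pedestrian. An illustrative sketch, not the paper's implementation."""

    def __init__(self, rows, cols):
        self.sum = [[0.0] * cols for _ in range(rows)]
        self.count = [[0] * cols for _ in range(rows)]

    def record_step(self, occupied_cells, density_field):
        # accumulate only the cells currently occupied by a pedestrian
        for r, c in occupied_cells:
            self.sum[r][c] += density_field[r][c]
            self.count[r][c] += 1

    def mean(self):
        # cells never occupied contribute 0.0 to the heat-map
        return [[s / n if n else 0.0
                 for s, n in zip(srow, nrow)]
                for srow, nrow in zip(self.sum, self.count)]
```

      <p>Running one map per flow direction, as in Fig. 6, amounts to calling record_step with only the North-bound (or South-bound) agents' cells.</p>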
      <p>
        Finally, a third way to employ the data resulting from automated
video analysis is again related to the validation of simulation
results, and it still employs people counting data as a primary
source. In particular, assuming that most of the counted
pedestrians are actually in motion to enter or exit the monitored
area (that could be a portion of the overall scene, like the
northern border), we could estimate the instantaneous flow
of pedestrians and average this value over certain time frames.
The achieved measure can be compared to simulation results
and this is particularly interesting in scenarios in which the
density conditions have significant variations, since it could be
possible to validate several points in the flow-density profile
characterising the so called fundamental diagram [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] that can
be achieved by means of simulation results.
      </p>
      <p>4Values of local densities in each cell, contained in the density grid, are
stored for the average calculation only when a pedestrian is located inside it.</p>
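      <p>The flow estimate can be sketched as a windowed average over per-step crossing counts; the parameter names are illustrative, and dividing by the corridor width yields the specific flow used in fundamental diagrams:</p>

```python
def average_flow(crossings, dt, window, width=None):
    """Average pedestrian flow from per-step border-crossing counts.

    crossings: number of pedestrians entering/leaving the monitored
    area at each step; dt: step duration in seconds; window: number of
    steps per averaging time frame. If width (in metres) is given, the
    specific flow in ped/(m s) is returned instead of ped/s."""
    flows = []
    for i in range(0, len(crossings) - window + 1, window):
        ped_per_s = sum(crossings[i:i + window]) / (window * dt)
        flows.append(ped_per_s / width if width else ped_per_s)
    return flows
```

      <p>Pairing each windowed flow value with the density measured in the same frame gives one point of the flow-density profile to compare against the simulated fundamental diagram.</p>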
      <p>Additional results of computer vision techniques, not
necessarily discussed here or already employed for these analyses,
could be exploited to support a simulation project. For instance,
once a source-sink analysis has been carried out, one could track
a number of pedestrians completing a certain path and average
their travel times to obtain a reference value for evaluating
simulations. However, the applicability and accuracy of tracking
and of many other techniques heavily depend on contextual factors
like lighting conditions, changes in velocities and directions,
but also crowding: a high density of pedestrians very
frequently causes occlusions that can mislead the
tracking algorithm. This work represents a first step in a
more general research effort aimed at fostering a fruitful
interaction between pedestrian simulation and computer vision
research, producing (i) vertical results, namely techniques and
case studies in specific contexts, and (ii) guidelines for the
adoption of the most appropriate technique for a given context
and situation.</p>
    </sec>
    <sec id="sec-6">
      <title>VII. CONCLUSIONS</title>
      <p>This paper has introduced the first results of a research effort
putting together techniques for the automated analysis of pedestrian
and crowd dynamics and approaches for the synthesis of
this kind of phenomena. While in their present form the results
of automated analysis mostly provide qualitative indications to
the modeller, in the future they will also represent quantitative
empirical data for the initialization, calibration and validation
of simulation models. The main contribution of the present
work is represented by a systematic collaboration of automated
analysis and simulation approaches in a specific and challenging
real-world scenario, already producing useful indications
that will soon be improved for an even smoother and
more quantitative integration. The main future directions are aimed,
on one hand, at a more thorough quantification of the
results of the analyses and, on the other, at identifying and
understanding the behaviour of pedestrian groups in the scene
(e.g. source-sink analysis, converging to a point or dispersing,
circling around points of reference).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>D.</given-names>
            <surname>Helbing</surname>
          </string-name>
          and P. Molnár, “
          <article-title>Social force model for pedestrian dynamics</article-title>
          ,
          <source>” Phys. Rev. E</source>
          , vol.
          <volume>51</volume>
          , no.
          <issue>5</issue>
          , pp.
          <fpage>4282</fpage>
          -
          <lpage>4286</lpage>
          , May
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>C.</given-names>
            <surname>Burstedde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Klauck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Schadschneider</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Zittartz</surname>
          </string-name>
          , “
          <article-title>Simulation of pedestrian dynamics using a two-dimensional cellular automaton,” Physica A: Statistical Mechanics and its Applications</article-title>
          , vol.
          <volume>295</volume>
          , no.
          <issue>3 - 4</issue>
          , pp.
          <fpage>507</fpage>
          -
          <lpage>525</lpage>
          ,
          <year>2001</year>
          . [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0378437101001418
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Henein</surname>
          </string-name>
          and
          <string-name>
            <given-names>T.</given-names>
            <surname>White</surname>
          </string-name>
          , “
          <article-title>Agent-based modelling of forces in crowds</article-title>
          ,” in Multi-Agent and Multi-Agent-Based Simulation
          , Joint Workshop MABS 2004, New York, NY, USA, July
          <volume>19</volume>
          ,
          <year>2004</year>
          , Revised Selected Papers, ser. Lecture Notes in Computer Science, P. Davidsson,
          <string-name>
            <given-names>B.</given-names>
            <surname>Logan</surname>
          </string-name>
          ,
          and
          K. Takadama, Eds., vol.
          <volume>3415</volume>
          . Springer-Verlag,
          <year>2005</year>
          , pp.
          <fpage>173</fpage>
          -
          <lpage>184</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>T.</given-names>
            <surname>Bosse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hoogendoorn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. C. A.</given-names>
            <surname>Klein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Treur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. N.</given-names>
            <surname>van der Wal</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>van Wissen</surname>
          </string-name>
          , “
          <article-title>Modelling collective decision making in groups and crowds: Integrating social contagion and interacting emotions, beliefs</article-title>
          and intentions,” Autonomous Agents and Multi-Agent Systems
          , vol.
          <volume>27</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>52</fpage>
          -
          <lpage>84</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Schadschneider</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Klingsch</surname>
          </string-name>
          , H. Klüpfel, T. Kretz,
          <string-name>
            <given-names>C.</given-names>
            <surname>Rogsch</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Seyfried</surname>
          </string-name>
          , “
          <article-title>Evacuation dynamics: Empirical results, modeling and applications,” in Encyclopedia of Complexity and Systems Science</article-title>
          , R. A. Meyers, Ed. Springer,
          <year>2009</year>
          , pp.
          <fpage>3142</fpage>
          -
          <lpage>3176</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>R.</given-names>
            <surname>Challenger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. W.</given-names>
            <surname>Clegg</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Robinson</surname>
          </string-name>
          , “
          <article-title>Understanding crowd behaviours: Supporting evidence</article-title>
          ,” http://www.cabinetoffice.gov. uk/news/understanding-crowd-behaviours, University of Leeds,
          <source>Tech. Rep.</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Moussaïd</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Perozo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Garnier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Helbing</surname>
          </string-name>
          , and G. Theraulaz, “
          <article-title>The walking behaviour of pedestrian social groups and its impact on crowd dynamics,” PLoS ONE</article-title>
          , vol.
          <volume>5</volume>
          , no.
          <issue>4</issue>
          , p.
          <year>e10047</year>
          ,
          <year>04 2010</year>
          . [Online]. Available: http://dx.doi.org/10.1371%2Fjournal.
          <source>pone.0010047</source>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Sarmady</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Haron</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A. Z. H.</given-names>
            <surname>Talib</surname>
          </string-name>
          , “
          <article-title>Modeling groups of pedestrians in least effort crowd movements using cellular automata</article-title>
          ,” in Asia International Conference on Modelling and Simulation,
          <string-name>
            <surname>D. AlDabass</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Triweko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Susanto</surname>
          </string-name>
          ,
          and A
          . Abraham, Eds. IEEE Computer Society,
          <year>2009</year>
          , pp.
          <fpage>520</fpage>
          -
          <lpage>525</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>R. A.</given-names>
            <surname>Rodrigues</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>de Lima Bicho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Paravisi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. R.</given-names>
            <surname>Jung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. P.</given-names>
            <surname>Magalhães</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S. R.</given-names>
            <surname>Musse</surname>
          </string-name>
          , “
          <article-title>An interactive model for steering behaviors of groups of characters</article-title>
          ,
          <source>” Applied Artificial Intelligence</source>
          , vol.
          <volume>24</volume>
          , no.
          <issue>6</issue>
          , pp.
          <fpage>594</fpage>
          -
          <lpage>616</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J.</given-names>
            <surname>Tsai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Fridman</surname>
          </string-name>
          , E. Bowring,
          <string-name>
            <given-names>M.</given-names>
            <surname>Brown</surname>
          </string-name>
          , S. Epstein,
          <string-name>
            <given-names>G. A.</given-names>
            <surname>Kaminka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Marsella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ogden</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Rika</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sheel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. E.</given-names>
            <surname>Taylor</surname>
          </string-name>
          , X.
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Zilka</surname>
            , and
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Tambe</surname>
          </string-name>
          , “
          <article-title>Escapes - evacuation simulation with children, authorities, parents, emotions, and social comparison,”</article-title>
          <source>in Proc. of 10th Int. Conf. on Autonomous Agents and Multiagent Systems - Innovative Applications Track (AAMAS</source>
          <year>2011</year>
          ), Tumer, Yolum, Sonenberg, and Stone, Eds.,
          <year>2011</year>
          , pp.
          <fpage>457</fpage>
          -
          <lpage>464</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>B.</given-names>
            <surname>Zhan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. N.</given-names>
            <surname>Monekosso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Remagnino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Velastin</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.-Q.</given-names>
            <surname>Xu</surname>
          </string-name>
          , “
          <article-title>Crowd analysis: a survey,”</article-title>
          <source>Mach. Vis. Appl.</source>
          , vol.
          <volume>19</volume>
          , no.
          <issue>5-6</issue>
          , pp.
          <fpage>345</fpage>
          -
          <lpage>357</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Butenuth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Burkert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hinz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Hartmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kneidl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Borrmann</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>Sirmaçek</surname>
          </string-name>
          , “
          <article-title>Integrating pedestrian simulation, tracking and event detection for crowd analysis,” in ICCV Workshops</article-title>
          . IEEE,
          <year>2011</year>
          , pp.
          <fpage>150</fpage>
          -
          <lpage>157</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ali</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Shah</surname>
          </string-name>
          , “
          <article-title>A lagrangian particle dynamics approach for crowd flow segmentation and stability analysis,” in CVPR</article-title>
          .
          <source>IEEE Computer Society</source>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>O.</given-names>
            <surname>Ozturk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Yamasaki</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>Aizawa</surname>
          </string-name>
          , “
          <article-title>Detecting dominant motion flows in unstructured/structured crowd scenes,” in ICPR</article-title>
          . IEEE,
          <year>2010</year>
          , pp.
          <fpage>3533</fpage>
          -
          <lpage>3536</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>S.</given-names>
            <surname>Srivastava</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. K.</given-names>
            <surname>Ng</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E. J.</given-names>
            <surname>Delp</surname>
          </string-name>
          , “
          <article-title>Crowd flow estimation using multiple visual features for scenes with changing crowd densities,” in AVSS</article-title>
          .
          <source>IEEE Computer Society</source>
          ,
          <year>2011</year>
          , pp.
          <fpage>60</fpage>
          -
          <lpage>65</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>S.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yu</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.-S.</given-names>
            <surname>Wong</surname>
          </string-name>
          , “
          <article-title>Crowd flow segmentation using a novel region growing scheme,” in PCM, ser</article-title>
          . Lecture Notes in Computer Science,
          <string-name>
            <given-names>P.</given-names>
            <surname>Muneesawang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Kumazawa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Roeksabutr</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Liao</surname>
          </string-name>
          , and
          <string-name>
            <given-names>X.</given-names>
            <surname>Tang</surname>
          </string-name>
          , Eds., vol.
          <volume>5879</volume>
          . Springer,
          <year>2009</year>
          , pp.
          <fpage>898</fpage>
          -
          <lpage>907</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>H.</given-names>
            <surname>Ullah</surname>
          </string-name>
          and
          <string-name>
            <given-names>N.</given-names>
            <surname>Conci</surname>
          </string-name>
          , “
          <article-title>Crowd motion segmentation and anomaly detection via multi-label optimization,”</article-title>
          <source>in ICPR workshop on Pattern Recognition and Crowd Analysis</source>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>C.</given-names>
            <surname>Shiyao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Nianqiang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhen</surname>
          </string-name>
          ,
          <article-title>“Multi-directional crowded objects segmentation based on optical flow histogram,”</article-title>
          <source>in Image and Signal Processing (CISP), 2011 4th International Congress on, vol. 1</source>
          . IEEE,
          <year>2011</year>
          , pp.
          <fpage>552</fpage>
          -
          <lpage>555</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>T.</given-names>
            <surname>Zhao</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Nevatia</surname>
          </string-name>
          , “
          <article-title>Bayesian human segmentation in crowded situations,” in CVPR (2)</article-title>
          .
          <source>IEEE Computer Society</source>
          ,
          <year>2003</year>
          , pp.
          <fpage>459</fpage>
          -
          <lpage>466</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>S.</given-names>
            <surname>Yoshinaga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Shimada</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.-i.</given-names>
            <surname>Taniguchi</surname>
          </string-name>
          , “
          <article-title>Real-time people counting using blob descriptor,” Procedia-Social and Behavioral Sciences</article-title>
          , vol.
          <volume>2</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>143</fpage>
          -
          <lpage>152</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>L.</given-names>
            <surname>Xiaohua</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lansun</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Huanqin</surname>
          </string-name>
          , “
          <article-title>Estimation of crowd density based on wavelet and support vector machine,” Transactions of the Institute of Measurement and Control</article-title>
          , vol.
          <volume>28</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>299</fpage>
          -
          <lpage>308</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>W.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Huang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Liu</surname>
          </string-name>
          , “
          <article-title>Advanced local binary pattern descriptors for crowd estimation,” in PACIIA (2)</article-title>
          .
          <source>IEEE Computer Society</source>
          ,
          <year>2008</year>
          , pp.
          <fpage>958</fpage>
          -
          <lpage>962</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>D.</given-names>
            <surname>Yoshida</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Terada</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Oe</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.-I.</given-names>
            <surname>Yamaguchi</surname>
          </string-name>
          , “
          <article-title>A method of counting the passing people by using the stereo images,” in ICIP (2</article-title>
          ),
          <year>1999</year>
          , pp.
          <fpage>338</fpage>
          -
          <lpage>342</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>K.</given-names>
            <surname>Hashimoto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Morinaka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Yoshiike</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Kawaguchi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Matsueda</surname>
          </string-name>
          , “
          <article-title>People count system using multi-sensing application,”</article-title>
          <source>in Solid State Sensors and Actuators, 1997. TRANSDUCERS '97 Chicago, 1997 International Conference on</source>
          , vol.
          <volume>2</volume>
          . IEEE,
          <year>1997</year>
          , pp.
          <fpage>1291</fpage>
          -
          <lpage>1294</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>A.</given-names>
            <surname>Davies</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. H.</given-names>
            <surname>Yin</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Velastin</surname>
          </string-name>
          , “
          <article-title>Crowd monitoring using image processing,” Electronics Communication Engineering Journal</article-title>
          , vol.
          <volume>7</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>37</fpage>
          -
          <lpage>47</lpage>
          ,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>D.</given-names>
            <surname>Roqueiro</surname>
          </string-name>
          and
          <string-name>
            <given-names>V. A.</given-names>
            <surname>Petrushin</surname>
          </string-name>
          , “
          <article-title>Counting people using video cameras,”</article-title>
          <source>IJPEDS</source>
          , vol.
          <volume>22</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>193</fpage>
          -
          <lpage>209</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>S.</given-names>
            <surname>Velastin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Davies</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vicencio-Silva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Allsop</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Penn</surname>
          </string-name>
          , “
          <article-title>Analysis of crowd movements and densities in built-up environments using image processing,” in Image Processing for Transport Applications, IEE Colloquium on</article-title>
          .
          <source>IET</source>
          ,
          <year>1993</year>
          , pp.
          <fpage>8</fpage>
          -
          <lpage>1</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28] --, “
          <article-title>Automated measurement of crowd density and motion using image processing,”</article-title>
          <source>in Road Traffic Monitoring and Control, 1994, Seventh International Conference on. IET</source>
          ,
          <year>1994</year>
          , pp.
          <fpage>127</fpage>
          -
          <lpage>132</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>A.</given-names>
            <surname>Marana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Velastin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Costa</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Lotufo</surname>
          </string-name>
          , “
          <article-title>Estimation of crowd density using image processing,”</article-title>
          <source>in Image Processing for Security Applications (Digest No.: 1997/074), IEE Colloquium on. IET</source>
          ,
          <year>1997</year>
          , pp.
          <fpage>11</fpage>
          -
          <lpage>1</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>R.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Huang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Q.</given-names>
            <surname>Tian</surname>
          </string-name>
          , “
          <article-title>On pixel count based crowd density estimation for visual surveillance,”</article-title>
          <source>in Cybernetics and Intelligent Systems</source>
          ,
          <source>2004 IEEE Conference on, vol. 1</source>
          . IEEE,
          <year>2004</year>
          , pp.
          <fpage>170</fpage>
          -
          <lpage>173</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>S.-F.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.-X.</given-names>
            <surname>Chao</surname>
          </string-name>
          , “
          <article-title>Estimation of number of people in crowded scenes using perspective transformation,”</article-title>
          <source>IEEE Transactions on Systems, Man, and Cybernetics, Part A</source>
          , vol.
          <volume>31</volume>
          , no.
          <issue>6</issue>
          , pp.
          <fpage>645</fpage>
          -
          <lpage>654</lpage>
          ,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>B. K. P.</given-names>
            <surname>Horn</surname>
          </string-name>
          and
          <string-name>
            <given-names>B. G.</given-names>
            <surname>Schunck</surname>
          </string-name>
          , “
          <article-title>“Determining optical flow”: A retrospective,”</article-title>
          <source>Artif. Intell.</source>
          , vol.
          <volume>59</volume>
          , no.
          <issue>1-2</issue>
          , pp.
          <fpage>81</fpage>
          -
          <lpage>87</lpage>
          ,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>J.</given-names>
            <surname>Canny</surname>
          </string-name>
          , “
          <article-title>A computational approach to edge detection,”</article-title>
          <source>IEEE Trans. Pattern Anal. Mach. Intell.</source>
          , vol.
          <volume>8</volume>
          , no.
          <issue>6</issue>
          , pp.
          <fpage>679</fpage>
          -
          <lpage>698</lpage>
          ,
          <year>1986</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Jain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. N.</given-names>
            <surname>Murty</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P. J.</given-names>
            <surname>Flynn</surname>
          </string-name>
          , “
          <article-title>Data clustering: A review,”</article-title>
          <source>ACM Comput. Surv.</source>
          , vol.
          <volume>31</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>264</fpage>
          -
          <lpage>323</lpage>
          ,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>U.</given-names>
            <surname>Weidmann</surname>
          </string-name>
          , “
          <article-title>Transporttechnik der Fussgänger - Transporttechnische Eigenschaften des Fussgängerverkehrs (Literaturstudie),” Institut für Verkehrsplanung, Transporttechnik, Strassen- und Eisenbahnbau IVT an der ETH Zürich</article-title>
          ,
          <source>Literature Research 90</source>
          ,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>T.</given-names>
            <surname>Kretz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Bönisch</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Vortisch</surname>
          </string-name>
          , “
          <article-title>Comparison of various methods for the calculation of the distance potential field</article-title>
          ,
          <source>” in Pedestrian and Evacuation Dynamics</source>
          <year>2008</year>
          ,
          <string-name>
            <given-names>W. W. F.</given-names>
            <surname>Klingsch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Rogsch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Schadschneider</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Schreckenberg</surname>
          </string-name>
          , Eds. Springer Berlin Heidelberg,
          <year>2010</year>
          , pp.
          <fpage>335</fpage>
          -
          <lpage>346</lpage>
          . [Online]. Available: http://dx.doi.org/10.1007/978-3-642-04504-2_29
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>M.</given-names>
            <surname>Costa</surname>
          </string-name>
          , “
          <article-title>Interpersonal distances in group walking,”</article-title>
          <source>Journal of Nonverbal Behavior</source>
          , vol.
          <volume>34</volume>
          , pp.
          <fpage>15</fpage>
          -
          <lpage>26</lpage>
          ,
          <year>2010</year>
          , 10.1007/s10919-009-0077-y. [Online]. Available: http://dx.doi.org/10.1007/s10919-009-0077-y
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          [38]
          <string-name>
            <given-names>H.</given-names>
            <surname>Klüpfel</surname>
          </string-name>
          , “
          <article-title>A cellular automaton model for crowd movement and egress simulation,”</article-title>
          <source>Ph.D. dissertation</source>
          , University Duisburg-Essen,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          [39]
          <string-name>
            <given-names>A.</given-names>
            <surname>Kirchner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Namazi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Nishinari</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Schadschneider</surname>
          </string-name>
          , “
          <article-title>Role of Conflicts in the Floor Field Cellular Automaton Model for Pedestrian Dynamics,”</article-title>
          <source>in 2nd International Conference on Pedestrian and Evacuation Dynamics</source>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          [40]
          <string-name>
            <given-names>A.</given-names>
            <surname>Kirchner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Nishinari</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Schadschneider</surname>
          </string-name>
          , “
          <article-title>Friction effects and clogging in a cellular automaton model for pedestrian dynamics,”</article-title>
          <source>Phys. Rev. E</source>
          , vol.
          <volume>67</volume>
          , p.
          <fpage>056122</fpage>
          , May
          <year>2003</year>
          . [Online]. Available: http://link.aps.org/doi/10.1103/PhysRevE.67.056122
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          [41]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bandini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gorrini</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Vizzari</surname>
          </string-name>
          , “
          <article-title>Towards an integrated approach to crowd analysis and crowd synthesis: a case study and first results,” CoRR</article-title>
          , vol.
          <source>abs/1303.5029</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          [42]
          <string-name>
            <given-names>C.</given-names>
            <surname>Castle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Waterson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Pellissier</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Bail</surname>
          </string-name>
          , “
          <article-title>A comparison of grid-based and continuous space pedestrian modelling software: Analysis of two uk train stations,” in Pedestrian and Evacuation Dynamics</article-title>
          ,
          <string-name>
            <given-names>R. D.</given-names>
            <surname>Peacock</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. D.</given-names>
            <surname>Kuligowski</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Averill</surname>
          </string-name>
          , Eds. Springer US,
          <year>2011</year>
          , pp.
          <fpage>433</fpage>
          -
          <lpage>446</lpage>
          . [Online]. Available: http://dx.doi.org/10.1007/978-1-4419-9725-8_39
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>