<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Journal of Computing in Civil Engineering</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>A Framework of a Multi-User Voice-Driven BIM-Based Navigation System for Fire Emergency Response</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Hui Zhou</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mun On Wong</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Huaquan Ying</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sanghoon Lee</string-name>
        </contrib>
      </contrib-group>
      <pub-date>
        <year>2017</year>
      </pub-date>
      <volume>30</volume>
      <issue>2</issue>
      <fpage>158</fpage>
      <lpage>167</lpage>
      <abstract>
        <p>Navigation support is of significant importance for fire evacuation and rescue due to the complexity of building indoor structures and the uncertainty of fire emergencies. This paper presents a framework of a multi-user voice-driven building information model (BIM)-based navigation system for fire emergency response. Classes of the navigation system are first defined to be consistent with the open BIM data standard (i.e. the Industry Foundation Classes, IFC). A string-matching method is then developed to generate a navigation query from each voice navigation request, based on the Levenshtein distance and a Burkhard and Keller (BK)-tree of a fire-navigation-associated lexicon. With the semantic information of the location in the navigation query, the spatial geometric information of the location is extracted from the BIM model, and visibility-graph-based route plans for multiple users are generated. To deliver the planned routes to building users in an intuitive and direct manner, patterns of different voice prompts are designed to automatically broadcast the navigation route step by step. Finally, the proposed navigation system will be validated with a virtual reality (VR)-based experiment.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Sensor-based approaches have been widely adopted to support indoor navigation in complex buildings. In such mechanisms, users are required to use mobile devices to scan sensing tags or capture the signals delivered by position sensors. However, this may distract responders’ visual attention from the path ahead (Zhang et al., 2017). Moreover, sensor-based devices may be lacking in existing buildings or easily damaged by fire and high temperatures, and the field of vision needed to find a sensor tag may be obscured by smoke. In addition, such systems ignore the fact that responders in emergencies can take the initiative to send evacuation or rescue queries, together with their location information, using direct and prompt natural-language speech. In regard to voice-based localization, Ivanov (2017) applied natural voice-based queries to retrieve relevant location data and developed a navigation system for visually impaired people. That system was implemented with simulations in a 2D environment, where a 2D escape route was generated as the only navigation assistance. However, such a route representation is not convenient enough for fire responders, especially for trapped occupants who may be unable to maintain rational cognition and judgment in an emergency. To address these issues, voice-driven navigation queries and commands are a promising alternative because they are more natural and intuitive for users in fire emergencies. Moreover, scenarios of collaborative fire response among multiple users have seldom been discussed in traditional fire emergency navigation systems.</p>
      <p>Therefore, this paper aims to propose a framework for a multi-user BIM-based voice-driven
navigation system for fire emergency response. Virtual Reality (VR) technology will be
adopted to validate the proposed systems since it enables people to be immersed in the virtual
environment (Zou et al., 2017). The rest of this paper is organized as follows. Classes of the
proposed navigation system are defined in Section 2. Section 3 elaborates the two modules of
the navigation system. Specifically, Section 3.1 presents the solutions to voice recognition and
navigation query generation for voice navigation requests. Section 3.2 shows the developed
approach of the BIM-based navigation model and the potential ways for voice navigation
command generation among multiple users. Section 4 introduces the planned design of the VR-based experimental validation for the proposed system. Finally, the paper is concluded in Section 5, together with future research work.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Classes of the Proposed Navigation System</title>
      <p>The classes of the proposed navigation system are composed of two aspects: users and facilities
(as presented in Figure 1), for which objects are identified based on the information
requirements for each module of the navigation system. The class of users contains objects of
residential occupants, firefighters, and facility managers. For facilities, classes for building
elements, transport elements, firefighting devices, hazardous materials, and embedded sensors
are defined. The defined classes are identified by the attribute “ID” and have the general attribute “Location” or “Geometry Info”, which refers to 3D coordinate data in the global coordinate system of a building. Meanwhile, specific attributes are required for some classes. For the space and transport element classes, the attribute “Accessibility” captures whether a space or transport element is accessible under emergency situations. The composite material information of each building element (e.g. the fire resistance rating of composite materials) is specified in its attributes. For firefighting devices, the device type and quantity are required. For hazardous materials, information such as the material type, toxicity, and storage volume is defined within the system.</p>
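      <p>The defined classes and attributes can be illustrated as plain data structures. The following Python rendering is a hypothetical sketch: the attribute names follow the class definitions above, while the types and class layout are our own assumptions, not an implemented API.</p>
      <preformat>
```python
# Hypothetical rendering of three of the defined classes; attribute names
# follow the paper, the Python types are assumptions for illustration.
from dataclasses import dataclass
from typing import Tuple

Point3D = Tuple[float, float, float]  # 3D coordinate in the building's global system

@dataclass
class Space:
    space_id: str
    accessibility: bool      # accessible or not under emergency situations
    geometry_info: Point3D   # e.g. the space centroid

@dataclass
class FirefightingDevice:
    device_id: str
    device_type: str         # e.g. fire extinguisher, hydrant, sprinkler system
    location: Point3D
    quantity: int

@dataclass
class HazardousMaterial:
    material_id: str
    material_info: str       # material type and toxicity
    storage_volume: float
    location: Point3D
```
      </preformat>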
      <p>The semantic and topological relationships between the defined classes are consistent with those in the Industry Foundation Classes (IFC) schema, an open-standard BIM data format, so that an IFC BIM model can be used as the data hub of the proposed navigation system. In this paper, IFC4 is used as the base specification. Figure 2 presents the IFC data structure that supports the required information extraction for the defined classes. Most of the defined classes can be mapped to IFC instances, with reference to IfcBuilding, IfcBuildingStorey, IfcSpace, IfcBuildingElement, IfcTransportElement, and IfcDistributionElement. However, the remaining classes, such as the fire extinguisher in the firefighting device class and the hazardous material class, have no corresponding definitions in the IFC schema. In this case, the missing information is either stored in the database or added as an extension of an IFC entity.</p>
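      <p>This mapping-with-fallback rule can be summarized in a small lookup table. The sketch below is illustrative only; the class names and the “database” marker are our own labels, not IFC terms.</p>
      <preformat>
```python
# Illustrative mapping from the defined classes to IFC entities (Figure 2).
# Classes without an IFC counterpart fall back to the external database,
# as proposed above; the "database" marker is our own label.
IFC_MAPPING = {
    "Building": "IfcBuilding",
    "BuildingStorey": "IfcBuildingStorey",
    "Space": "IfcSpace",
    "BuildingElement": "IfcBuildingElement",
    "TransportElement": "IfcTransportElement",
    "Sensor": "IfcDistributionElement",
}

def ifc_source(class_name):
    """Return the IFC entity for a class, or 'database' when IFC has no definition."""
    return IFC_MAPPING.get(class_name, "database")
```
      </preformat>
      <p>For example, the space class resolves to IfcSpace, while the firefighting device and hazardous material classes resolve to the database.</p>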
      <p>Figure 1. Classes of the proposed navigation system: the user class (residential occupant, firefighter, facility manager) and the facility classes (building, building storey, space, building element, opening element, transport element, vertical passageway, firefighting device, hazardous material, and sensor), each with its identifying, accessibility, location, and geometry attributes.</p>
      <p>Figure 2. IFC data structure supporting the information extraction for the defined classes, covering IfcBuilding, IfcBuildingStorey, IfcSpace, IfcBuildingElement, IfcTransportElement, IfcDoor, IfcWindow, IfcSensor, and IfcDistributionElement, connected by objectified relationships such as IfcRelAggregates.</p>
      <p>The presented framework consists of two modules, as shown in Figure 3. The first module aims to recognize users’ voice inputs and generate corresponding navigation queries. The second module is designed to retrieve the information required by each query from the BIM-based data hub and generate navigation models for different users. Details of each module are elaborated in the following sub-sections.</p>
    </sec>
    <sec id="sec-2a">
      <title>3.1 Module 1: Voice Recognition and Navigation Query Generation</title>
    </sec>
    <sec id="sec-3">
      <title>3.1.1 Voice Recognition</title>
      <p>
        Once a fire emergency occurs, users can communicate with the navigation system in natural English speech through mobile devices that can record and play audio. The Google speech-to-text (STT) API (https://cloud.google.com/speech-to-text/) is then adopted to recognize and convert the audio streams received from users into sequences of words/sentences automatically. Backed by its cloud infrastructure, the Google STT API provides synchronous voice recognition with high accuracy
        <xref ref-type="bibr" rid="ref2">(Harsur and Chitra, 2017; Këpuska and Bohouta, 2017)</xref>
        .
The converted words/sentences are saved as .txt files and further used for navigation-associated information extraction in Section 3.1.2. In this paper, it is assumed that effective network connectivity is available at all times to ensure real-time human-to-machine interaction.
      </p>
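      <p>The recognition itself is delegated to the Google STT service; the post-processing that turns the recognized phrases into the .txt transcript used in Section 3.1.2 can be sketched as follows. The function names and the list-of-phrases input format are our own assumptions for illustration.</p>
      <preformat>
```python
# Sketch of the transcript post-processing step: the STT service returns a
# list of recognized phrases, which are joined and saved as a .txt file.
# Function names and the input format are assumptions for illustration.
def assemble_transcript(phrases):
    """Join recognized phrases into one normalized transcript string."""
    return " ".join(p.strip() for p in phrases if p.strip())

def save_transcript(phrases, path):
    """Write the assembled transcript to a .txt file and return it."""
    text = assemble_transcript(phrases)
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)
    return text
```
      </preformat>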
    </sec>
    <sec id="sec-4">
      <title>3.1.2 Navigation Query Generation</title>
      <p>To generate navigation queries from the recognized speech, a fire emergency navigation-oriented lexicon is first constructed as a dataset for textual information processing. The
predetermined vocabulary includes four categories: users, locations, request details, and
interaction dialogs. Each category is composed of one to several sub-categories, which are
further labelled under each category with a prefix of the category index, as shown in Table 1.
Particularly, the location vocabulary represents the semantics of building components and is
labelled sequentially according to the building components’ relationships, which helps to
enable more accurate indoor localization by organizing the location vocabulary from coarse to
fine (e.g. from a building storey level to a space level).</p>
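      <p>The coarse-to-fine labelling can be illustrated with a toy lexicon. The category indices and label strings below are invented for illustration; the actual vocabulary and labels are those of Table 1.</p>
      <preformat>
```python
# Toy navigation-oriented lexicon with category-prefixed labels; the real
# vocabulary and label scheme are defined in Table 1 of the paper.
LEXICON = {
    # category 1: users
    "occupant": "1.1", "firefighter": "1.2",
    # category 2: locations, labelled from coarse (storey) to fine (space)
    "floor": "2.1", "corridor": "2.2", "room": "2.3", "door": "2.4",
    # category 3: request details
    "from": "3.1", "to": "3.2",
    # category 4: interaction dialogs
    "confirm": "4.1",
}

def locations_coarse_to_fine(lexicon):
    """Return the location words ordered by label, i.e. from coarse to fine."""
    locs = [(label, word) for word, label in lexicon.items() if label.startswith("2.")]
    return [word for label, word in sorted(locs)]
```
      </preformat>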
      <p>
        Based on the constructed navigation-oriented lexicon, converted sentences are then input for
detecting and extracting navigation-associated keywords using the string-matching approach
based on “Levenshtein distance” (Levenshtein, 1966; Salehinejad et al., 2017). To reduce the
search scope for each target word, the predefined navigation-associated vocabulary is organized in the structure of a BK-tree
        <xref ref-type="bibr" rid="ref1">(Burkhard and Keller, 1973)</xref>
        . Concepts of the Levenshtein distance
and BK-tree are briefly introduced as follows.
For two strings s1 and s2 of lengths |s1| and |s2|, respectively, D(i, j) denotes the distance between the first i characters of s1 and the first j characters of s2, such that
      </p>
      <p>D(i, j) = D(i-1, j-1), if s1[i] = s2[j], (1)</p>
      <p>D(i, j) = 1 + min(D(i-1, j), D(i, j-1), D(i-1, j-1)), otherwise, (2)</p>
      <p>with boundary conditions D(i, 0) = i and D(0, j) = j, where the index 0 denotes an empty string. D(|s1|, |s2|) is then the minimum number of single-character operations (i.e. insertions, deletions, or substitutions) needed to transform s1 into s2.</p>
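      <p>The recurrence above can be sketched directly as a dynamic program (a minimal Python illustration):</p>
      <preformat>
```python
# Levenshtein distance via the recurrence in Eqs. (1)-(2): D(i, 0) = i,
# D(0, j) = j, D(i, j) reuses D(i-1, j-1) on a character match and
# otherwise adds 1 for the cheapest of deletion, insertion, substitution.
def levenshtein(s1, s2):
    rows, cols = len(s1) + 1, len(s2) + 1
    d = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        d[i][0] = i                      # delete the first i characters of s1
    for j in range(cols):
        d[0][j] = j                      # insert the first j characters of s2
    for i in range(1, rows):
        for j in range(1, cols):
            if s1[i - 1] == s2[j - 1]:
                d[i][j] = d[i - 1][j - 1]
            else:
                d[i][j] = 1 + min(d[i - 1][j],      # deletion
                                  d[i][j - 1],      # insertion
                                  d[i - 1][j - 1])  # substitution
    return d[rows - 1][cols - 1]
```
      </preformat>
      <p>For example, the distance between “floor” and “door” is 2 (one substitution and one deletion).</p>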
      <p>An arbitrary element a is selected as the root of the BK-tree. d(x, y) is defined as a distance function over each pair of elements of a set of objects. The m-th sub-tree branch is recursively built from all elements b such that d(a, b) = m. Figure 4(b) shows the constructed BK-tree of a set of navigation-associated words with the word “floor” as the root. The number on each edge indicates the Levenshtein distance between the two strings. The root word can be chosen in accordance with the frequency of words used for indoor navigation.
A two-step approach is developed for navigation query generation based on the principles of the Levenshtein distance and the BK-tree. As shown in Figure 4(a), the first step extracts navigation-associated information from the converted sentences. In detail, the sentences are first tokenized with word tags representing their part of speech (POS) (e.g. noun, verb, preposition). Subsequently, noun phrases are extracted by predefined patterns (e.g. noun phrase: determiner + adjective + noun). The second step executes string matching over the BK-tree word dictionary for the nouns inside the extracted noun phrases. For a given query word, the distance d from the current root is first calculated. Then the words located in the d-th sub-tree branch form the candidate list, and the root of that sub-tree becomes the current root. Progressively, the target word is detected when the distance from the query word to the current root equals 0. For fuzzy string matching, a tolerance t is introduced, which means similar strings within distance d ± t of the query string are searched. As words in the BK-tree dictionary have been predefined with labels, the detected nouns can be organized accordingly.
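</p>
      <p>This search procedure can be sketched as a small BK-tree with tolerance-band pruning. The node layout and the word set below are our own choices for illustration, not the system’s actual dictionary.</p>
      <preformat>
```python
# Minimal BK-tree over the Levenshtein distance: each child edge is keyed
# by its distance to the parent, and a query prunes every branch outside
# the band [d - t, d + t]. The node layout is our own choice.
def levenshtein(s1, s2):
    prev = list(range(len(s2) + 1))
    for i, c1 in enumerate(s1, 1):
        cur = [i]
        for j, c2 in enumerate(s2, 1):
            cost = 0 if c1 == c2 else 1
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + cost))
        prev = cur
    return prev[-1]

class BKTree:
    def __init__(self, words):
        it = iter(words)
        self.root = (next(it), {})           # (word, children keyed by distance)
        for w in it:
            self.add(w)

    def add(self, word):
        node = self.root
        while True:
            d = levenshtein(word, node[0])
            if d == 0:
                return                       # word already present
            children = node[1]
            if d not in children:
                children[d] = (word, {})
                return
            node = children[d]

    def search(self, query, tolerance):
        """Return (distance, word) pairs within the given tolerance of query."""
        matches, stack = [], [self.root]
        while stack:
            word, children = stack.pop()
            d = levenshtein(query, word)
            if tolerance >= d:
                matches.append((d, word))
            for k, child in children.items():
                if tolerance >= abs(k - d):  # triangle-inequality pruning
                    stack.append(child)
        return sorted(matches)
```
      </preformat>
      <p>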
Word vocabulary of request detail is used to distinguish the current location from the navigation
destination in each navigation request. For example, “from” can be connected with an origin,
while “to” can be added before a destination. One example of the navigation query generation
is illustrated in Figure 4(c).</p>
    </sec>
    <sec id="sec-4a">
      <title>3.2 Module 2: Navigation Model and Voice Navigation Command Generation</title>
    </sec>
    <sec id="sec-5">
      <title>3.2.1 BIM-based Navigation Model</title>
      <p>
        Based on the semantic information of the origin and the destination in each navigation query,
corresponding 3D spatial information and facilities’ location information are extracted from the
BIM model or the developed database. Figure 5(a) presents the IFC data structure for extraction
of the required data. IFC processing tools such as IFC Engine, IfcOpenShell, and xBIM Toolkit
are available for IFC data extraction. For the network construction of the route map, the grid-based matrix method and the visibility graph (VG) method have been studied
        <xref ref-type="bibr" rid="ref4">(Cheng et al.,
2018)</xref>
. However, for the grid-based matrix method, the discretized spaces represented in an N × N grid matrix increase the number of nodes on the planned route, which makes it complex or even impossible to navigate users via voice prompts, as they may not be familiar with the building structure. The VG method is thus adopted in this paper, as it can provide room-level details of buildings and thereby avoid confusion or interpretation difficulties in voice navigation (Li et al., 2014). Dijkstra’s algorithm is used to dynamically plan the shortest route from the origin to the destination based on the graph. Figure 5(b) shows the workflow of the navigation model generation.
      </p>
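      <p>Route planning over the visibility graph can be sketched with a standard Dijkstra implementation. The graph below is a hypothetical example; in the actual system, the nodes and edge weights come from the BIM-derived visibility graph.</p>
      <preformat>
```python
# Dijkstra's algorithm over a visibility graph: nodes are doors, space
# centroids, and obstacle vertices; edge weights are walking distances.
import heapq

def shortest_route(graph, origin, destination):
    """graph: {node: [(neighbor, distance), ...]}; returns (length, path)."""
    dist = {origin: 0.0}
    prev = {}
    heap = [(0.0, origin)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == destination:
            path = [node]
            while node in prev:          # walk the predecessor chain back
                node = prev[node]
                path.append(node)
            return d, path[::-1]
        for nb, w in graph.get(node, []):
            nd = d + w
            if nb not in dist or dist[nb] > nd:
                dist[nb] = nd
                prev[nb] = node
                heapq.heappush(heap, (nd, nb))
    return float("inf"), []              # destination unreachable
```
      </preformat>
      <p>The returned node list is what Module 2 later translates into step-by-step voice prompts.</p>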
      <p>For navigation in high-rise buildings, stairs are represented by a walking line from one floor to another. For obstacles, it is assumed that users can recognize general furniture and avoid it autonomously. Therefore, only conspicuous objects such as large cupboards and reception corners need to be considered specifically. These obstacles can be simply represented as footprint polygons. Dangers in emergencies, such as heat convection, toxicity, and smoke coverage, are treated as obstacles as well. To distinguish the risk level of approaching an obstacle, a buffer is built around it, such as the dashed box in Figure 6(b). If a buffer is created, the corresponding node in the visibility graph is changed accordingly. Each space is checked for whether it is occupied by obstacles. If there are no obstacles, a visibility edge between the door and the space centroid is created; this corresponds to a navigation prompt for entering the space through a door, and no further guidance is needed for walking into an “empty” space. If
obstacles exist, the connections between the vertices of obstacles or buffers and the doors are
created. The accessibility of the graph nodes is changed in accordance with the dynamic
information in fire emergencies, such as people distribution, fire and smoke spread trends, etc.
Figure 6 presents an example of a BIM-based navigation for fire response in a sample building
based on the proposed method.</p>
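      <p>The buffer-and-edge rule described above can be sketched for axis-aligned obstacle footprints. The rectangle representation and margin handling below are our own simplifications for illustration.</p>
      <preformat>
```python
# Sketch of obstacle buffering in the visibility graph (Figure 6(b)):
# footprint polygons are inflated by a risk buffer, and a space either
# contributes a door-to-centroid edge (empty) or door-to-buffer-vertex
# edges (occupied). The data layout is our own assumption.
def buffer_rectangle(rect, margin):
    """Inflate an axis-aligned (xmin, ymin, xmax, ymax) footprint by margin."""
    xmin, ymin, xmax, ymax = rect
    return (xmin - margin, ymin - margin, xmax + margin, ymax + margin)

def rect_vertices(rect):
    xmin, ymin, xmax, ymax = rect
    return [(xmin, ymin), (xmin, ymax), (xmax, ymin), (xmax, ymax)]

def space_edges(door, centroid, obstacles, margin):
    """Visibility edges for one space: door-to-centroid if it is empty,
    otherwise door-to-buffer-vertex edges around each obstacle."""
    if not obstacles:
        return [(door, centroid)]
    edges = []
    for rect in obstacles:
        for v in rect_vertices(buffer_rectangle(rect, margin)):
            edges.append((door, v))
    return edges
```
      </preformat>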
    </sec>
    <sec id="sec-6">
      <title>3.2.2 Voice Navigation Command Generation</title>
      <p>Patterns of voice prompts for navigation to the route nodes are designed based on the types of
graph nodes and their connections in the visibility graph (i.e. door-to-space, door-to-door, and
door-to-obstacle). As shown in Figure 7, the voice prompt for navigation to a door could be “GO THROUGH THE n-th DOOR ON THE LEFT” or “GO THROUGH THE FRONT DOOR”. The pattern is
automatically determined for each route node, as the uniform word vocabulary for the semantic
information of the location has been defined with unique labels in Module 1. The patterns for
voice navigation to other locations, such as spaces, obstacles, and stairs, are also defined in a similar manner. Based on the designed navigation patterns, the optimal route from the origin to
the destination is automatically translated into textual navigation commands with the support
from the developed vocabulary in Module 1. Then, the text is input into a voice synthesizer tool
called “eSpeak” to generate voice navigation commands.</p>
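      <p>The pattern selection can be sketched as simple template filling. The templates below paraphrase the examples in Figure 7, and the helper names are invented; the system’s actual wording is defined by its prompt patterns.</p>
      <preformat>
```python
# Sketch of pattern-based voice prompt generation: each route node type
# (door, stair, ...) selects a template filled with the node's labelled
# semantic information. Templates and names are illustrative only.
ORDINALS = {1: "1ST", 2: "2ND", 3: "3RD"}

def ordinal(n):
    return ORDINALS.get(n, str(n) + "TH")

def door_prompt(position, side=None):
    """position: ordinal index of the door; side: 'LEFT', 'RIGHT',
    or None for a front door."""
    if side is None:
        return "GO THROUGH THE FRONT DOOR"
    return "GO THROUGH THE " + ordinal(position) + " DOOR ON THE " + side

def stair_prompt(direction, floors):
    return "TAKE THE STAIRS " + direction + " " + str(floors) + " FLOOR(S)"
```
      </preformat>
      <p>The generated text could then be passed to a synthesizer such as eSpeak, e.g. via its command-line tool.</p>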
      <p>As the navigation model is developed to enable multi-user navigation services, a communication mechanism between firefighters, facility managers, and residential occupants needs to be predefined. Potential interactions include rescue requests from trapped people, hazardous material relocation, and property protection. Figure 8 explains the potential interactions between multiple users with two navigation scenarios. In the first scenario, a trapped occupant sends a help request with his/her location to the navigation system, which then delivers the message to firefighters and plans the shortest rescue route. In the second scenario, facility managers and residential occupants inform the navigation system of the locations of potential hazards and important properties in the building. This helps to update the building information in the navigation system and to plan shorter routes for users. Within these
interactions, the navigation system requests users to confirm the query for the current location
and ultimate destination, and it sends a message delivery confirmation to users. For such cases,
sentences of general dialogs (e.g. “Your message is sent to firefighters”) are defined and sent at
the appropriate time.</p>
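      <p>The two scenarios can be sketched as simple message handlers over a shared state. The message wording and data layout below are our own illustration of the interaction, not the system’s actual protocol.</p>
      <preformat>
```python
# Sketch of the two interaction scenarios of Figure 8: a rescue request is
# forwarded to firefighters with a confirmation dialog back to the sender,
# and a hazard report updates the shared building information.
def handle_rescue_request(occupant_id, location, state):
    """Queue a rescue task and return the messages to deliver."""
    state.setdefault("rescue_queue", []).append((occupant_id, location))
    return {
        "to_firefighters": "Rescue requested at " + location + " by " + occupant_id,
        "to_occupant": "Your message is sent to firefighters.",
    }

def handle_hazard_report(reporter_id, location, hazard, state):
    """Record a hazard location so that routes can be re-planned around it."""
    state.setdefault("hazards", {})[location] = hazard
    return {"to_reporter": "Hazard at " + location + " recorded."}
```
      </preformat>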
    </sec>
    <sec id="sec-7">
      <title>4. VR-based Experimental Validation</title>
      <p>Based on Modules 1 and 2, a multi-user BIM-based voice-oriented prototype for fire emergency
navigation will be developed. Unity, a high-performance game development platform, will then
be adopted to build a VR-based environment where people can be engaged in the fire
emergency scenarios and interact with the navigation system as building users, as shown in
Figure 9. To present full-scale immersive fire emergency scenarios, the VR-based environment
will further be implemented in a CAVE system, as presented in Figure 9 (a).
The scenarios will be designed based on a preliminary study of the potential causes of several disastrous fire emergency cases in Hong Kong. Two groups of participants, namely an experimental group and a control group, will be invited to complete the designed tasks in a comparative experiment, with and without the assistance of the proposed navigation system, respectively. To
realize voice navigation for multiple users, a multi-player High-Level API (HLAPI) as a Unity
plug-in will be adopted for building the network of users’ interaction. A Unity package named
Tridify (https://www.tridify.com/) will also be used to ensure uniform coordinates between the
VR environment and BIM models. Thus, the navigation system tested in the VR environment
will further obtain the accurate position of participants in buildings through the tracking sensor.
To evaluate the performance of the proposed navigation system, assessment criteria will be
built in terms of key module functions. Specifically, for navigation query generation, it can be
assessed by whether the navigation system can recognize the semantic information of the users’
locations and the navigation destinations. With regard to the BIM-based navigation model
generation for multiple users, the proposed system will be tested to check whether it can
recognize and localize the fire, users and navigation destinations based on the semantic
information of locations in the navigation queries. For voice navigation command generation,
the distance and direction guide between two adjacent nodes on the generated navigation route
will be examined. Meanwhile, navigation efficiency and clarity are essential to the safe
evacuation and rescue under fire emergencies. Figure 9 (b) illustrates the hierarchy of the
evaluation criteria of the proposed navigation system. The corresponding measurement for each
sub-criterion is also specified in Figure 9 (b). For instance, the time from the origin to the
destination consumed by the experiment participants will be recorded to evaluate the navigation
efficiency of the system.</p>
    </sec>
    <sec id="sec-8">
      <title>5. Conclusions and Future Work</title>
      <p>This paper proposes a framework of a multi-user voice-driven BIM-based navigation system
for fire emergency response. This framework is expected to extend traditional BIM-based
navigation approaches with intuitive voice-based localization and navigation prompt delivery.
Specifically, a string-matching method is introduced to generate the navigation query from each
voice navigation request based on the Levenshtein distance and BK-tree of a fire navigation
associated lexicon. Approaches for dynamic route planning and intuitive navigation command
generation for multiple users are proposed with the multi-user interaction definition and
patterns of voice prompts. Therefore, the multi-user navigation system will be not only a
navigation tool for planning the shortest route, but also an information sharing platform for
involved participants to collaborate with each other. However, the approaches and algorithms involved in each module need to be implemented and tested with more fire emergency scenarios in real-world buildings. In addition, to avoid invalid voice input, the proposed navigation system
hypothesizes that building users will only input navigation requests. For more general cases,
sentiment analysis of the recognized voice information should be required. Moreover, deep
learning methods for navigation query generation and voice navigation command generation can be integrated with the proposed system to improve the efficiency of navigation services.</p>
    </sec>
    <sec id="sec-9">
      <title>6. Acknowledgments</title>
      <p>The work described in this paper was supported by a grant from Graduate Collaborative
Research Awards funded by Universitas 21.</p>
    </sec>
    <sec id="sec-9a">
      <title>References</title>
      <p>Gökdemir, N. (2011). Identification and representation of information items required for vulnerability assessment and multi-hazard emergency response operations. Middle East Technical University.</p>
      <p>Harsur, A. and Chitra, M. (2017). Voice based navigation system for blind people using ultrasonic sensor.
IJRITCC, 3, pp.4117-4122.</p>
      <p>Ivanov, R. (2017). An approach for developing indoor navigation systems for visually impaired people using Building Information Modeling. Journal of Ambient Intelligence and Smart Environments, 9(4), pp.449-467.</p>
      <p>Këpuska, V. and Bohouta, G. (2017). Comparing speech recognition systems (Microsoft API, Google API and CMU Sphinx). International Journal of Engineering Research and Applications, 7, pp.20-24.</p>
      <p>Lertlakkhanakul, J., Li, Y., Choi, J. and Bu, S. (2009). GongPath: Development of BIM based indoor pedestrian navigation system. NCM 2009 - 5th International Joint Conference on INC, IMS, and IDC, pp.382-388.</p>
      <p>Levenshtein, V.I. (1966). Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady, 10(8), pp.707-710.</p>
      <p>Li, N., Yang, Z., Ghahramani, A., Becerik-Gerber, B. and Soibelman, L. (2014). Situational awareness for
supporting building fire emergency response: Information needs, information sources, and implementation
requirements. Fire safety journal, 63, pp.17-28.</p>
      <p>Lin, Y.H., Liu, Y.S., Gao, G., Han, X.G., Lai, C.Y. and Gu, M. (2013). The IFC-based path planning for 3D indoor
spaces. Advanced Engineering Informatics, 27(2), pp.189-205.</p>
      <p>Purser, D.A. and Bensilum, M. (2001). Quantification of behaviour for engineering design standards and escape
time calculations. Safety science, 38(2), pp.157-182.</p>
      <p>Rueppel, U. and Stuebbe, K.M. (2008). BIM-based indoor-emergency-navigation-system for complex buildings.
Tsinghua science and technology, 13(S1), pp.362-367.</p>
      <p>Salehinejad, H., Barfett, J., Aarabi, P., Valaee, S., Colak, E., Gray, B. and Dowdell, T. (2017), October. A
convolutional neural network for search term detection. In 2017 IEEE 28th Annual International Symposium on
Personal, Indoor, and Mobile Radio Communications (PIMRC) (pp. 1-6). IEEE.</p>
      <p>Tashakkori, H., Rajabifard, A. and Kalantari, M. (2015). A new 3D indoor/outdoor spatial model for indoor
emergency response facilitation. Building and Environment, 89, pp.170-182.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <surname>Burkhard</surname>
            ,
            <given-names>W.A.</given-names>
          </string-name>
          and Keller, R.M. (
          <year>1973</year>
          ).
          <article-title>Some approaches to best-match file searching</article-title>
          .
          <source>Communications of the ACM</source>
          ,
          <volume>16</volume>
          (
          <issue>4</issue>
          ), pp.
          <fpage>230</fpage>
          -
          <lpage>236</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <surname>Cheng</surname>
          </string-name>
          , M.Y.,
          <string-name>
            <surname>Chiu</surname>
            ,
            <given-names>K.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hsieh</surname>
            ,
            <given-names>Y.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>I.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chou</surname>
            ,
            <given-names>J.S.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>Y.W.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>BIM integrated smart monitoring technique for building fire prevention and disaster relief</article-title>
          .
          <source>Automation in Construction, 84</source>
          , pp.
          <fpage>14</fpage>
          -
          <lpage>30</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          Google Cloud
          <source>Speech-to-Text</source>
          . Accessed 30 March
          <year>2019</year>
          , &lt;https://cloud.google.com/speech-to-text/&gt;.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <surname>Cheng</surname>
            ,
            <given-names>J.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tan</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Song</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mei</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gan</surname>
            ,
            <given-names>V.J.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>Developing an evacuation evaluation model for offshore oil and gas platforms using BIM and agent-based model</article-title>
          .
          <source>Automation in Construction, 89</source>
          , pp.
          <fpage>214</fpage>
          -
          <lpage>224</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>