<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Multimodal Immersive Learning with Artificial Intelligence for Robot and Running Application Cases</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Fernando P. Cardenas-Hernandez</string-name>
          <email>cardenas@dipf.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gianluca Romano</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hendrik Drachsler</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Information Center for Education</institution>
          ,
          <addr-line>DIPF</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Leibniz Institute for Research and Information in Education</institution>
          ,
          <addr-line>60323 Frankfurt am Main</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In research, different MMLA applications have been presented that each provide a solution for a particular psychomotor learning task, e.g. CPR or table tennis. A common limitation of all these applications is that they are domain specific. Against this background, we present the MILKI-PSY project, whose main goal is to provide a one-for-all system across different domains. Since different psychomotor learning tasks across domains inherently have certain aspects in common, such a one-for-all system should be possible. Additionally, we present ideas for MMLA data collection through different sensors and its subsequent storage, annotation, preparation, and exploitation. The proposed ideas refer to two learning tasks: running in the field of sports and collaborative assembly in the field of human-robot interaction. Further, we suggest that the system must give the user the freedom to decide which sensor data to use and which feedback to receive. Ultimately, we opt for a scalable solution that can be provided to a larger audience.</p>
      </abstract>
      <kwd-group>
        <kwd>Sensors</kwd>
        <kwd>Multimodal Interaction</kwd>
        <kwd>Learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        MMLA stands for Multimodal Learning Analytics, which refers to the use of multiple
sensory inputs at the same time to collect, analyze, and evaluate data related to the
users’ learning process, in order to understand and enhance learning environments in
the field of educational technology. Taking this into account, our system must be able
to acquire input data within an educational environment, which requires the
incorporation of sensors. The sensors involved in this system must offer enough
information to build a reliable infrastructure that provides guidance during the
teaching and learning of psychomotor skills. By using multimodal information, we
expect to obtain an efficient, non-redundant, and highly significant understanding of
the incoming sensory data. However, the use of multiple and different types of sensors
raises the problem of finding an efficient way to synchronize the data and to make it
compatible for further analysis and representation within the system. Ultimately, the
main objective is to allow users to freely choose the psychomotor skill they want to
learn in a self-taught way, through the identification and analysis of the
characteristics these psychomotor activities have in common. To carry out this
objective, the MMLA Pipeline approach [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] is taken as a starting point because it offers a
well-structured methodology that eases the research of multimodal experiments by
designing accurate setups for the improvement of learning activities. It is also worth
taking into account previous related work in this area, such as the cardiopulmonary
resuscitation (CPR) tutor [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], the table tennis tutor [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and the presentation trainer [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] that considered the MMLA Pipeline in their design.
      </p>
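      <p>As a minimal illustration of the synchronization problem mentioned above, the
following Python sketch aligns timestamped streams from different sensors onto one
shared clock by linear interpolation; the stream names and the target rate are our
own illustrative assumptions:</p>
      <preformat>
import numpy as np

def align_streams(streams, rate_hz=50.0):
    """Resample every (timestamps, values) stream onto one shared clock.

    streams: dict mapping a sensor name to a pair of NumPy arrays
             (timestamps in seconds, 1-D sample values).
    Returns the common time axis and a dict of interpolated values.
    """
    start = max(ts[0] for ts, _ in streams.values())   # latest common start
    end = min(ts[-1] for ts, _ in streams.values())    # earliest common end
    clock = np.arange(start, end, 1.0 / rate_hz)       # shared time axis
    aligned = {name: np.interp(clock, ts, vals)
               for name, (ts, vals) in streams.items()}
    return clock, aligned
      </preformat>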
    </sec>
    <sec id="sec-2">
      <title>Data Collection</title>
      <p>The use of sensors is vital for the development of multimodal systems because they
enable data collection. The sensors used in the development of the two application cases
are grouped below based on their outputs, together with a description of their main role
or application within the system.</p>
      <p>
        For the development of a multimodal system for the robot application case, sensors
must be highly flexible and efficient in reacting to the unpredictable human actions that
occur in human-robot collaboration tasks [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. To test the users’ level of acceptance and comfort, we recommend using both
physical and contactless sensors for human-robot interaction. The sensors generate the
inputs to the system and must be placed on the robot in order to equip it with the
ability to collaborate and interact with humans in a human-like way. Given that the main
goal is learning, the robot’s action sequence may be repeated or skipped depending on
the user’s learning performance or feedback.
      </p>
      <p>Vision sensors: These include cameras whose images can be processed to extract
valuable data, both to control the robot so that it avoids collisions with the user or
with objects, and to communicate with users by interpreting their emotions.</p>
      <p>Audition sensors: The robot can be equipped with one or multiple microphones in
charge of acquiring the verbal commands dictated by the user.</p>
      <p>Touch sensors: They enable physical interaction between humans and robots by
measuring the forces involved in their communication.</p>
      <p>For the running application design, the sensors must fulfill the following
requirements: they must be waterproof, because sweat can damage poorly insulated
sensors; they must not significantly influence the users’ normal movement, otherwise
the collected learning data may not be reliable; they must have a low data delivery
latency; finally, sensors worn by the learner must be easy to wear and portable.</p>
      <p>Vision sensors: Cameras capture video that provides information about the
learner’s performance during training and records the environment where the learning
process takes place.</p>
      <p>Audition sensors: Microphones can register verbal commands and sounds
associated with the current physical condition of the runners, for instance, coughing,
gasping and exhalation. These sounds may indicate the level of fatigue and discomfort
of the user.</p>
      <p>Motion sensors: These sensors allow recording the person’s movements by placing
them on different parts of the body, e.g., head, chest, waist, wrists, arms, thighs, ankles
and toes. Accelerometers and gyroscopes belong to this group.</p>
      <p>
        Physiological sensors: By measuring body temperature, pulse rate, respiration
rate and blood pressure, often described as the human body’s most basic functions [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], we can gain precise information about the runners’ current physical condition.
This type of sensor is not used in the collaborative robot case, as that case does not
involve a strenuous activity that could alter the participants’ physical condition.
      </p>
    </sec>
    <sec id="sec-3">
      <title>Data Storage</title>
      <p>
        This step of the MMLA Pipeline corresponds to the organization of the diverse
incoming multimodal data, which normally requires high storage capacity due to its
large volume. Managing and exchanging the stored information are also part of this
step. Ideally, a data storage system must offer easy access and minimal latency for
read and write accesses, as well as the ability to keep up with growth and the
flexibility and efficiency to handle a wide range of formats coming from different
sources [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>With these features in mind, hard drives and removable storage devices are
discarded because they store data locally, making data sharing among users a
time-consuming task, and their storage capacities tend not to be sufficient for systems
receiving large amounts of data. Moreover, in case of a hardware malfunction the
stored data is susceptible to loss. Network storage and online storage can deal with
the drawbacks of local storage devices.</p>
      <p>Network storage: It stores data so that it is accessible to a group of devices on
the network; it also manages copies of the data across the network as a backup.
Network storage technology is normally classified into Storage Area Networks and
Network Attached Storage.</p>
      <p>Online storage (also called cloud storage): It permits users to delegate the storage
of their data, its management, maintenance and security to online data storage services
offered on the internet.</p>
      <p>To avoid investing financial and human resources in maintaining the storage
infrastructure of a direct network storage solution, it is strongly recommended to use
cloud storage. As there are plenty of cloud storage providers, it is important to choose
the right one(s) based on their accessibility, costs and support.</p>
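      <p>As a hedged sketch of what session uploads to such a provider could look like, the
following Python snippet uses the boto3 client against an S3-compatible store; the bucket
and key names are placeholders, not project decisions:</p>
      <preformat>
import boto3  # AWS SDK for Python; other S3-compatible stores work similarly

def upload_session(local_path, bucket="milki-psy-sessions", key=None):
    """Upload one recorded session file to S3-compatible cloud storage.

    Bucket and key names are illustrative placeholders.
    """
    s3 = boto3.client("s3")
    key = key or local_path.rsplit("/", 1)[-1]   # default: keep the file name
    s3.upload_file(local_path, bucket, key)      # handles multipart for big files
    return f"s3://{bucket}/{key}"
      </preformat>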
    </sec>
    <sec id="sec-4">
      <title>Data Annotation</title>
      <p>
        Data has to be annotated to draw meaningful insights out of it, e.g. with Machine
Learning (ML) techniques. In MMLA applications, different sensors are used to
communicate with the learner, decoding and encoding messages between the physical
and digital world [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Annotation can be done automatically, manually or semi-automatically.
      </p>
      <p>
        For MMLA applications, the work of [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] proposes the Visual Inspection Tool (VIT), which allows reading data gathered
from different sources and annotating it. The VIT uses Meaningful Learning Task (MLT) [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] session files to store annotated data for different sensors.
      </p>
      <p>
        The authors of [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] state that the VIT supports MMLA researchers in (i) triangulating
multimodal data with video recordings, (ii) segmenting the multimodal data into
time intervals and adding annotations to them, and (iii) downloading the annotated
dataset and using it for multimodal data analysis.
      </p>
      <p>For this project we plan to use the VIT to annotate multimodal data, because we
expect to collect multimodal data from cameras, depth sensors and other devices. The
annotated data files are uploaded directly to cloud storage. For further usage, the
data can be downloaded to train ML models.</p>
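      <p>Since the exact layout of the exported session files is not fixed here, the
following Python sketch only assumes a simple JSON export with per-interval annotations;
field names such as "intervals", "sensor_values" and "label" are hypothetical:</p>
      <preformat>
import json
import numpy as np

def load_annotated_intervals(path):
    """Turn an annotated session export into (features, labels) pairs.

    Assumes a simple JSON layout with a list of annotated intervals, each
    carrying a label and the sensor values recorded inside it; the actual
    MLT session format may differ.
    """
    with open(path) as f:
        session = json.load(f)
    X = [np.asarray(iv["sensor_values"]).mean(axis=0)  # one feature row per interval
         for iv in session["intervals"]]
    y = [iv["label"] for iv in session["intervals"]]
    return np.vstack(X), np.asarray(y)
      </preformat>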
    </sec>
    <sec id="sec-5">
      <title>Data Processing</title>
      <p>This stage deals with extracting the most relevant and representative features
from the raw data in order to clean it and reduce the amount of data to be processed,
transformed or integrated, so that redundant information is avoided, which also
reduces the overall processing time.</p>
      <p>Signal processing plays an important role in this step. For example, incoming raw
sensor signals may need to be filtered to separate them from unwanted signals coming
from external sources. The filtering can be done via hardware (electrical circuits) or
software (computational algorithms). Data cleaning is another preparation technique,
involving processes like outlier removal or data normalization. Regarding data
transformation, audio signals are frequently evaluated in the frequency domain because
it offers a good way to obtain more meaningful information. Computer vision (CV) and
machine learning (ML) algorithms can address the need to extract features in order to
classify, merge or integrate data. For instance, principal component analysis (PCA) is
a feature extraction algorithm that reduces high-dimensional data to improve its
interpretation without losing relevant information.</p>
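      <p>As a minimal sketch of these two preparation steps, the following Python snippet
low-pass filters raw sensor channels in software and then reduces them with PCA; the
cut-off frequency and component count are illustrative values, not tuned choices:</p>
      <preformat>
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import PCA

def prepare(raw, fs=100.0, cutoff_hz=10.0, n_components=3):
    """Low-pass filter raw sensor channels, then reduce them with PCA.

    raw: array of shape (n_samples, n_channels); fs: sampling rate in Hz.
    """
    b, a = butter(4, cutoff_hz / (fs / 2.0))     # 4th-order low-pass filter
    filtered = filtfilt(b, a, raw, axis=0)       # zero-phase software filtering
    centered = filtered - filtered.mean(axis=0)  # simple normalization step
    return PCA(n_components=n_components).fit_transform(centered)
      </preformat>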
    </sec>
    <sec id="sec-6">
      <title>Data Exploitation</title>
      <p>
        Data is meaningless without further analysis or exploitation. Thus, methods have to be
applied to gain meaningful insights into the data for the learners and their experiences.
For the MMLA Pipeline, the authors of [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] state three different exploitation approaches: predictions,
patterns, and historical reports. It is hard to tell from the start which prediction
techniques are better than others. The unique properties of the learning tasks need to
be considered to see which techniques can be applied.
      </p>
      <p>For example, forward running shows a periodic pattern created by the repeated
movement of the legs. How do velocity or acceleration affect this pattern? Consequently,
sensors are needed that acquire this data correctly.</p>
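      <p>One hedged way to quantify such a periodic pattern is spectral analysis of the
vertical acceleration; the Python sketch below estimates the step cadence from the
strongest spectral peak, assuming the cadence falls roughly between 1 and 4 Hz:</p>
      <preformat>
import numpy as np

def estimate_cadence(accel_z, fs=100.0):
    """Estimate steps per minute from a 1-D vertical acceleration signal.

    fs is the sampling rate in Hz. Running cadence normally falls between
    roughly 1 and 4 Hz, so we search for the strongest peak in that band.
    """
    sig = accel_z - np.mean(accel_z)             # remove the gravity offset
    spectrum = np.abs(np.fft.rfft(sig))          # magnitude spectrum
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    lo, hi = np.searchsorted(freqs, (1.0, 4.0))  # indices of the 1-4 Hz band
    peak_hz = freqs[lo:hi][np.argmax(spectrum[lo:hi])]
    return 60.0 * peak_hz                        # steps per minute
      </preformat>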
      <p>For the robot learning task, robots do not only have to understand humans when
working together, but also have to apply a sequence of actions in response. Essentially,
this is communication. A learner’s performance might be captured by the amount of
corrective feedback given by the robot over time. A robot could have the following
modules: a speech-to-text/text-to-speech module to communicate, and a detection
module that, e.g., uses neural networks to detect wrong behavior. Consequently, there
are many tasks the robot has to fulfill for proper communication, so multiple models
and sensors are required.</p>
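      <p>The following Python sketch outlines this module composition on an architectural
level only; the class and method names are our assumptions, and the actual recognition
and detection logic is stubbed out:</p>
      <preformat>
class SpeechModule:
    """Illustrative speech interface; a real system would wrap existing
    speech-to-text and text-to-speech engines."""
    def listen(self):
        raise NotImplementedError  # return the recognized user utterance
    def say(self, text):
        raise NotImplementedError  # synthesize spoken feedback

class DetectionModule:
    """Illustrative error detector, e.g. a neural network classifier."""
    def is_wrong(self, observation):
        raise NotImplementedError  # True if the step was performed wrongly

class CollaborationLoop:
    """Ties the modules together: observe, detect, give corrective feedback."""
    def __init__(self, speech, detector):
        self.speech = speech
        self.detector = detector
        self.corrections = 0  # proxy for learner performance over time

    def step(self, observation):
        if self.detector.is_wrong(observation):
            self.corrections += 1
            self.speech.say("Please repeat the last assembly step.")
      </preformat>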
    </sec>
    <sec id="sec-7">
      <title>One-for-all System</title>
      <p>Perhaps one of the most challenging goals in our proposal for the MILKI-PSY project
is the “one-for-all system across different domains” feature, because every learning task
has its own distinctiveness. Nevertheless, tasks may also share common attributes. For
instance, hip rotation is involved in many different psychomotor skills, such as dancing
and martial arts. In the collaborative robot and running cases, the orientation of the
natural head position can be used as a common aspect.</p>
      <p>Hence, breaking down the domain differences relies on finding the similarities
needed to design or implement psychomotor learning tasks. A concise description
and exemplification of the tasks, the automation of the appropriate activities, the
definition of the real-time feedback, and the classification and analysis of the tasks
based on expertise levels can help to achieve the abstraction necessary to deal with
diverse domains.</p>
      <p>The collaborative robot and running cases will serve as the starting point for
extracting information to create an abstract and common framework. This framework will
be gradually tested on two other similar domains (for example, running with a ball and
painting) in order to obtain their common framework and to enrich the level of
abstraction of the previous one. Subsequently, two further, different domains will be
used to extract their common framework, which is merged into the previous one in order
to update it and gain more abstraction. This iterative process is repeated until the
original framework is robust enough for most learning tasks.</p>
      <p>The quality of the annotations in all cases can be ensured by involving two or more
human annotators and computing an inter-rater reliability score. Similarly, the quality
of the processing can be assessed by comparing each newly trained model to baselines,
which are either previously trained models or standard existing ones.</p>
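      <p>As a small sketch, such an inter-rater reliability score can be computed with
Cohen’s kappa as implemented in scikit-learn; the interval labels below are hypothetical:</p>
      <preformat>
from sklearn.metrics import cohen_kappa_score

# Labels two hypothetical annotators assigned to the same ten intervals.
rater_a = ["good", "good", "bad", "good", "bad", "good", "good", "bad", "good", "good"]
rater_b = ["good", "bad", "bad", "good", "bad", "good", "good", "bad", "bad", "good"]

kappa = cohen_kappa_score(rater_a, rater_b)  # 1.0 would be perfect agreement
print(f"Cohen's kappa: {kappa:.2f}")
      </preformat>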
      <p>As the system is expected to keep growing in data volume and complexity, its
scalability will require cloud computing strategies such as auto-scaling.</p>
      <p>Figure 1 shows the proposed common MMLA Pipeline steps used for the research
of a single domain or psychomotor skill, and Figure 2 displays the suggested iterative
steps to reach a common framework for different learning tasks.</p>
      <p>The workshop gives us the opportunity to exchange ideas with other participants in
order to find solutions to the most challenging tasks and to strengthen the
interdisciplinary collaboration. Besides, we expect to answer some particular questions
such as:</p>
      <p>EQ1: What sensors can be used, and how many of each type?
EQ2: How many sensors do users tolerate (acceptance of the user)?
EQ3: What sensors are suited to evaluate the learning process correctly?
EQ4: How can compatibility between different sensor manufacturers and software
producers be supported on a technological level? For instance, users may just want
to plug in their cameras without caring about the involved software.</p>
      <p>For EQ1, we plan to let the workshop participants think about possible sensors for
our scenarios. This will help us gather ideas on how to use different sensors that we
had not considered before. For EQ2, we plan to attach “fake” sensors to the participants
up to the point where they no longer feel comfortable and experience the attached
sensors as invasive and disruptive. For EQ3, we present the audience with one of our
scenarios and think together about what feedback a user would wish for. Finally, for
EQ4, we present an idea for integrating sensors into a system related to different
kinds of feedback.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>Di</given-names>
            <surname>Mitri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            ,
            <surname>Schneider</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            ,
            <surname>Klemke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            ,
            <surname>Specht</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Drachsler</surname>
          </string-name>
          , H.:
          <article-title>Multimodal Pipeline: A generic approach for handling multimodal data for supporting learning</article-title>
          .
          <source>In: First workshop on AI-based Multimodal Analytics for Understanding Human Learning in Real-world Educational Contexts, China</source>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>Di</given-names>
            <surname>Mitri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            ,
            <surname>Schneider</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            ,
            <surname>Trebing</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            ,
            <surname>Sopka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            ,
            <surname>Specht</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            , and
            <surname>Drachsler</surname>
          </string-name>
          , H.:
          <article-title>Real-Time Multimodal Feedback with the CPR Tutor</article-title>
          . In: Artificial Intelligence in Education; Bittencourt,
          <string-name>
            <given-names>I.I.</given-names>
            ,
            <surname>Cukurova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Muldner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            ,
            <surname>Luckin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            ,
            <surname>Millán</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
          </string-name>
          (eds.): AIED
          <year>2020</year>
          ,
          <source>LNCS (LNAI)</source>
          , vol.
          <volume>12163</volume>
          , pp.
          <fpage>141</fpage>
          -
          <lpage>152</lpage>
          . Springer, Switzerland (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Mat Sanusi</surname>
            <given-names>K. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Di Mitri</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Limbu</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Klemke</surname>
          </string-name>
          , R.:
          <article-title>Table Tennis Tutor: Forehand Strokes Classification Based on Multimodal Data and Neural Networks</article-title>
          .
          <source>Sensors</source>
          ,
          <volume>21</volume>
          (
          <issue>9</issue>
          ):
          <fpage>3121</fpage>
          , Switzerland (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Schneider</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Börner</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosmalen</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Specht</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Presentation Trainer, your Public Speaking Multimodal Coach</article-title>
          .
          <source>In: Proceedings of the 2015 ACM on International Conference on Multimodal Interaction</source>
          , vol.
          <volume>17</volume>
          , pp.
          <fpage>539</fpage>
          -
          <lpage>546</lpage>
          , USA (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Cherubini</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Navarro-Alarcon</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Sensor-Based Control for Collaborative Robots: Fundamentals, Challenges, and Opportunities</article-title>
          . Frontiers in Neurorobotics, vol.
          <volume>14</volume>
          , p.
          <fpage>113</fpage>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6. Johns Hopkins Medicine Homepage, https://www.hopkinsmedicine.org/health/conditions-and-diseases/vital-signs-body-temperature-pulse-rate-respiration-rate-blood-pressure, last accessed
          <year>2021</year>
          /07/12.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Strohbach</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Daubert</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ravkin</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Lischka</surname>
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Big data storage</article-title>
          . In:
          <article-title>New Horizons for a Data-Driven Economy; Cavanillas</article-title>
          <string-name>
            <given-names>J.M.</given-names>
            ,
            <surname>Curry</surname>
          </string-name>
          <string-name>
            <given-names>E.</given-names>
            ,
            <surname>Wahlster</surname>
          </string-name>
          , W. (eds.), pp.
          <fpage>119</fpage>
          -
          <lpage>141</lpage>
          , 1st edn., Springer, Switzerland (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>Di</given-names>
            <surname>Mitri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            ,
            <surname>Schneider</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            ,
            <surname>Specht</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            , and
            <surname>Drachsler</surname>
          </string-name>
          , H.:
          <article-title>From signals to knowledge: A conceptual model for multimodal learning analytics</article-title>
          ,
          <source>Journal of Computer Assisted Learning</source>
          <volume>34</volume>
          (
          <issue>4</issue>
          ),
          <fpage>338</fpage>
          -
          <lpage>349</lpage>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>Di</given-names>
            <surname>Mitri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            ,
            <surname>Schneider</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            ,
            <surname>Klemke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            ,
            <surname>Specht</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            ,
            <surname>Drachsler</surname>
          </string-name>
          , H.:
          <article-title>Read Between the Lines: An Annotation Tool for Multimodal Data for Learning</article-title>
          .
          <source>In: Proceedings of the 9th International Conference on Learning Analytics &amp; Knowledge - LAK19</source>
          , pp.
          <fpage>51</fpage>
          -
          <lpage>60</lpage>
          .
          <article-title>Association for Computing Machinery</article-title>
          , USA (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Schneider</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>Di</given-names>
            <surname>Mitri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            ,
            <surname>Limbu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            , and
            <surname>Drachsler</surname>
          </string-name>
          ,
          <string-name>
            <surname>H.</surname>
          </string-name>
          ,
          <article-title>Multimodal Learning Hub: A Tool for Capturing Customizable Multimodal Learning Experiences</article-title>
          . In: Lifelong Technology-Enhanced Learning,
          <string-name>
            <surname>Pammer-Schindler</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pérez-Sanagustín</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Drachsler</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Elferink</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Scheffel</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , (eds.), pp.
          <fpage>45</fpage>
          -
          <lpage>58</lpage>
          , Springer, Switzerland, (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>