<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>How to Capitalise on Mobility, Proximity and Motion Analytics to Support Formal and Informal Education?</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Roberto Martinez-Maldonado</string-name>
          <email>Roberto.Martinez-Maldonado@uts.edu.au</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vanessa Echeverria</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Kalina Yacef</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Augusto Dias Pereira Dos Santos</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mykola Pechenizkiy</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Eindhoven University of Technology</institution>
          ,
          <country country="NL">The Netherlands</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>The University of Sydney</institution>
          ,
          <country country="AU">Australia</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>University of Technology Sydney</institution>
          ,
          <country country="AU">Australia</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Learning Analytics and similar data-intensive approaches aimed at understanding and/or supporting learning have mostly focused on the analysis of students' data automatically captured by personal computers or, more recently, mobile devices. Thus, most student behavioural data are limited to the interactions between students and particular learning applications. However, learning can also occur beyond these interface interactions, for instance while students interact face-to-face with other students or their teachers. Alternatively, some learning tasks may require students to interact with non-digital physical tools, to use the physical space, or to learn in different ways that cannot be mediated by traditional user interfaces (e.g. motor and/or audio learning). The key questions here are: why are we neglecting these kinds of learning activities? How can we provide automated support or feedback to students during these activities? Can we find useful patterns of activity in these physical settings as we have been doing with computer-mediated settings? This position paper is aimed at motivating discussion through a series of questions that can justify the importance of designing technological innovations for physical learning settings where mobility, proximity and motion are tracked, just as digital interactions have been so far.</p>
      </abstract>
      <kwd-group>
        <kwd>physical spaces</kwd>
        <kwd>wearables</kwd>
        <kwd>indoor localisation</kwd>
        <kwd>sensors</kwd>
        <kwd>mobility</kwd>
        <kwd>motor learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Data-intensive approaches aimed at understanding and supporting learning, such as
Learning Analytics, Educational Data Mining, Intelligent Tutoring Systems and
Artificial Intelligence in Education, have mostly been focused on the analysis of students’
interactions with particular learning systems and applications
        <xref ref-type="bibr" rid="ref18 ref8">(Khalil &amp; Ebner, 2016;
Roll &amp; Wylie, 2016)</xref>
        . The student behavioural data that are commonly logged and
analysed mostly correspond to the interactions captured by personal computers or,
more recently, mobile devices. Although mobile and emerging pervasive technologies
have extended capabilities to sense some aspects of the usage context, most student
data used to model students’ behaviours/strategies or to provide automated feedback
are still limited to the interactions between students and learning applications.
However, learning goes beyond students’ interactions with digital user interfaces. Learning
may for example occur while students interact face-to-face with other students or with
their teachers. Alternatively, some learning tasks may require students to interact with
an ecology of non-digital physical tools, to use the physical space indoors and/or
outdoors; or to learn in different ways that cannot be mediated by traditional user
interfaces (e.g. motor and/or audio-visual learning)
        <xref ref-type="bibr" rid="ref19">(Santos, 2016)</xref>
        . Multimodal learning
analytics (MMLA) initiatives have been the most robust approach for considering the
complexity of learning tasks
        <xref ref-type="bibr" rid="ref1 ref22">(Blikstein, 2013)</xref>
        . Multimodal approaches have focused
on methods to integrate data corresponding to alternative dimensions of student
activity besides clickstreams and keystrokes. For example, multimodal learning
analytics have included approaches for automatically analysing speech, handwriting, sketch,
gesture, affective states and neurophysiological signals. However, although there have
been numerous advances in this area, most of the MMLA studies have been
conducted under controlled laboratory conditions
        <xref ref-type="bibr" rid="ref2">(Blikstein &amp; Worsley, 2016)</xref>
        . There is still
much work needed to find ways in which these multimodal approaches can solve
challenges in more realistic, mainstream learning scenarios.
      </p>
      <p>This paper raises the question of how learning analytics can be created for physical
learning spaces and learning tasks that include physical activities. This includes the
characteristics of the infrastructure needed and the new features and dimensions of
student data that need to be created. The key overarching questions motivating this
position paper are: How can we envisage the provision of automated support or feedback
to students for tasks where physicality has an important place? How can we sense
student usage of the physical spaces and objects? How can we sense students’
mobility in the learning space? Can we find patterns of learners’ interactions in these
physical settings as we have been doing in computer-mediated settings? If so, what
particular techniques are appropriate for analysing and making sense of the data? Are there
any particular ethical implications or risks in exploring data from physical settings
that were not present with computer-mediated learning systems? The paper is aimed
at motivating our discussion through a series of questions that justify the importance
of designing physical learning analytics innovations. These questions emerged from
recent literature in learning analytics, technology-enhanced learning and
human-computer interaction, more broadly. We focus our position particularly on
understanding preliminary avenues of research where mobility, proximity and
motion analytics can help us answer questions about, or support, learning in both
formal and informal educational contexts where the physicality of the space, the task or
the learning may be paramount.</p>
    </sec>
    <sec id="sec-3">
      <title>Why are Mobility, Proximity and Motion Analytics Important?</title>
      <p>In this section we discuss a number of learning tasks, modalities and/or educational
activities where physicality of interactions or learning processes can be supported by
learning analytics approaches.
</p>
      <sec id="sec-3-1">
        <title>F-formations in Face-to-face Collaboration.</title>
        <p>
          Learning from others and with others involves physicality to a great extent. When
collaborating face-to-face, people do not only communicate verbally but also through
gestures, postures, presence and other non-verbal cues
          <xref ref-type="bibr" rid="ref21">(Walther et al., 2005)</xref>
          . In
addition to these non-verbal communication modes, people also may use the space or
multiple artefacts and objects in the collaborative setting. Kendon (1990) established that
a key spatial aspect in face-to-face collaboration is the physical arrangement
that group members assume around devices or among themselves. These socially and
physically situated arrangements are known as f-formations. F-formations are
concerned with the proximity and body orientation that collaborators adopt during
collaborative sessions, which can be indicative of how people position themselves as a
group and within it. A recent example of this aspect studied from a learning analytics
perspective was presented by Thompson et al. (2016) who used a computer vision
technique based on video recordings to track collaborators working in a Design Studio.
This study suggests that the mobility trajectories of people in the learning space can
reflect higher order patterns of collaboration. For example, the most engaged
collaborators may show more complex mobility patterns for tasks that require the interaction
of collaborators with multiple devices. By contrast, for tasks that require initial
planning and discussion, mobility patterns can highlight groups that skip this phase and go
straight to hands-on work. Similarly, the first author and colleagues are investigating
mobility data of training nurses around medical beds during simulation labs
          <xref ref-type="bibr" rid="ref12">(Martinez-Maldonado et al., 2017)</xref>
          . In this case, the students are tracked using a depth
sensor. The mobility data were wrangled to generate heatmaps of activity around the
patient’s bed. By analysing the heatmaps as time series, some initial visually
assessed patterns emerged that can be associated
with distinct types of epistemic approaches to the task. Raca et al. (2014) also
explored how motion data obtained with computer vision algorithms can provide
insights about students’ actions (and those of their neighbours) during a lecture.
Some questions that may be followed up in this area include:
 What are the kinds of tasks and learning scenarios where various
f-formations naturally emerge among collaborators?
 How can we measure or evaluate the impact (if it exists) of f-formations on
group performance, learning and collaboration?
 How can we capture and integrate other behavioural data while group
members collaborate face-to-face?
 How can we link and synchronise mobility data about collaborators with
other activity data that is already being captured (e.g. from the online
learning system, social networks, etc.)?
        </p>
        <p>How can we incorporate contextual information (e.g. aspects of the
learning/cognitive process, epistemic approaches, behavioural cues) to location
data to enhance the sense making process?
</p>
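<p>As a concrete illustration of the heatmap analysis described above, the sketch below bins (x, y) position samples, such as those produced by a depth sensor tracking a nurse around a patient's bed, into a simple occupancy grid. The function name, coordinates and grid size are all hypothetical; this is a minimal sketch, not the instrumentation used in the cited studies.</p>

```python
import numpy as np

def mobility_heatmap(positions, extent, bins=20):
    """Bin (x, y) position samples into a 2D occupancy grid.

    positions: iterable of (x, y) tuples in metres (hypothetical tracker output).
    extent: ((x_min, x_max), (y_min, y_max)) bounds of the tracked area.
    Returns a bins x bins array of sample counts per cell.
    """
    xs, ys = zip(*positions)
    heat, _, _ = np.histogram2d(xs, ys, bins=bins, range=extent)
    return heat

# Hypothetical positions of one nurse around a simulated patient bed (metres).
positions = [(0.5, 1.0), (0.6, 1.1), (0.6, 1.0), (2.5, 0.2), (0.5, 1.0)]
heat = mobility_heatmap(positions, extent=((0.0, 3.0), (0.0, 2.0)), bins=3)
```

<p>Computing one such grid per time window yields the kind of time series of activity heatmaps that the nurse-tracking study analysed.</p>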
      </sec>
      <sec id="sec-3-2">
        <title>Micromobility in co-present device ecologies.</title>
        <p>A second aspect that can be tracked in co-present collaboration corresponds to the
concept of Micromobility. This describes how people orient and tilt objects or devices
towards one another to share information or jointly reflect based on specific data.
Being able to track, analyse and visualise behavioural data linked to this concept can be
critical for face-to-face learning or reflection scenarios (the latter, where a group of
students/educators need to make sense of their own data for example). An example of
this approach was presented by Marquardt et al. (2012) who used Kinect sensors and
accelerometers to capture information about both f-formations and micromobility.
Although these authors provided collaborators with non-learning-related, quite controlled
tasks, they found very distinctive patterns among groups, particularly in the different
ways collaborators interact with objects and share information. This demonstrates that
even small data points captured by the digital devices in use, such as tilting a screen to
allow others to look at the same information, may be indicative of key moments in
collaboration. This is an area that does not seem to have been explored in learning
contexts yet. Some questions that may be followed up in this area include:
 Is it possible to distinguish explicit student actions and intentions from
implicit micro-actions and micro-interaction data captured from the devices
(e.g. accelerometer data and angle of the device)?
 What data processing techniques would be needed to merge and pre-process
these data?
 What algorithms and approaches would be needed to classify micro-mobility
actions effectively?
 What are the ethical and technical implications of pervasively tracking these
micro data?
</p>
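<p>To make the idea concrete, the following sketch derives device tilt from raw accelerometer data and flags abrupt pitch changes as candidate micromobility moments. The threshold value and the assumption that the device is quasi-static (so the accelerometer mostly measures gravity) are illustrative choices, not part of Marquardt et al.'s method.</p>

```python
import math

def pitch_deg(ax, ay, az):
    """Pitch angle (degrees) of a device from its accelerometer vector.

    Assumes the device is roughly static, so the accelerometer mainly
    measures gravity; ax, ay, az are in any consistent unit (e.g. m/s^2).
    """
    return math.degrees(math.atan2(-ax, math.hypot(ay, az)))

def tilt_events(samples, threshold_deg=25.0):
    """Flag sample indices where pitch changes by more than threshold_deg
    relative to the previous sample -- a crude proxy for moments when a
    collaborator tilts a screen towards a partner.
    """
    pitches = [pitch_deg(*s) for s in samples]
    return [i for i in range(1, len(pitches))
            if abs(pitches[i] - pitches[i - 1]) > threshold_deg]
```

<p>A classifier over such events, combined with proximity data, could then attempt to separate deliberate information-sharing gestures from incidental device movement.</p>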
      </sec>
      <sec id="sec-3-3">
        <title>Social interaction, peer communication and networking.</title>
        <p>
          Pentland and colleagues
          <xref ref-type="bibr" rid="ref4 ref9">(Eagle &amp; Pentland, 2006; Kim et al., 2008)</xref>
          pioneered the
exploration of data mining techniques to look for patterns within social
networks in physical environments. To track face-to-face interactions at a wider scale
(e.g. within an organisation, at a conference or in public events), they developed the
sociometric badges. These sensors can track basic aspects of social interaction such as
whether two people were talking to each other, levels of voice, and movement. We
are exploring the feasibility of understanding the social networks formed by students
when learning to dance. We aim to use mobile technologies and indoor
localisation technologies to understand how students interact with other students with
different levels of dancing expertise, and how these interactions shape their own
learning paths according to their intrinsic motivations. We envisage that these kinds
of social interaction data can be exploited through social network analysis for
generating understanding in learning environments where collaboration happens not only in
small groups, but also through small and heterogeneous interactions within the
community. Additionally, it may be possible to learn from the more mature area within
learning analytics that has explored patterns within digital social networks. Some
questions that may be followed up in this area include:
 Which learning scenarios would benefit from mapping the physical world
social networks that students interact in?
 What alternative technological solutions could be used to capture social,
physical interaction data in a sustainable manner?
 Can social network analysis techniques be applied to physical social network
analysis?
 What are the ethical issues of tracking activity from students’
physical social networks?
        </p>
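<p>As a sketch of how such physical social networks could be derived, the code below builds a weighted interaction graph from synchronised indoor-localisation traces and computes a simple weighted degree per student. The co-location radius, the synchronised-sampling assumption and all names are hypothetical simplifications of what a real localisation system would provide.</p>

```python
import math
from collections import defaultdict

def proximity_edges(tracks, radius=1.5):
    """Count co-location events between pairs of students.

    tracks: {student_id: [(x, y), ...]} positions sampled at the same
    timestamps (a simplifying assumption). Two students 'interact' at a
    timestep when they are within `radius` metres of each other.
    """
    edges = defaultdict(int)
    ids = sorted(tracks)
    steps = min(len(v) for v in tracks.values())
    for t in range(steps):
        for i, a in enumerate(ids):
            for b in ids[i + 1:]:
                if math.dist(tracks[a][t], tracks[b][t]) <= radius:
                    edges[(a, b)] += 1
    return dict(edges)

def degree(edges):
    """Weighted degree per student -- a simple centrality measure."""
    d = defaultdict(int)
    for (a, b), w in edges.items():
        d[a] += w
        d[b] += w
    return dict(d)
```

<p>The resulting edge weights could feed standard social network analysis tooling, mirroring what has been done with digital social networks.</p>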
      </sec>
      <sec id="sec-3-4">
        <title>Teacher analytics in the classroom.</title>
        <p>
          To a large extent, classrooms still play a critical role for building lifelong skills for
21st Century learners
          <xref ref-type="bibr" rid="ref14">(O'Flaherty &amp; Phillips, 2015)</xref>
          . Besides the diversity in
architectural formats, the classroom still basically allows educators to interact with students
and provide feedback in situ. The physicality of the classroom is an aspect of
education that has been largely overlooked by most learning analytics initiatives at all
educational levels. The analysis of the mobility of the teacher or the students in the classroom
may provide new insights about things that occur in the classroom such as the
provision of feedback, the communication among students and with the teacher, or the
identification of inactive students. One example of the potential of this type of
analytics was suggested by Martinez-Maldonado et al. (2015) who demonstrated the
usefulness of manually tracking the teacher’s mobility in the classroom in order to
understand the impact of the feedback that the teacher provided to the students working in
small teams. Other approaches have focused on analysing teacher’s actions using
video analysis and other computer vision approaches
          <xref ref-type="bibr" rid="ref13 ref5">(Echeverría et al., 2014)</xref>
          . More
recently, Prieto et al. (2016) presented a more elaborate approach to collect teaching
analytics automatically using accelerometer, EEG, audio, video and eye tracker
data to create what the authors call ‘orchestration graphs’. These can potentially be
effective indicators of the kinds of learning and teaching processes that occur in
face-to-face classrooms. Some questions that may be followed up in this area include:
 In which learning scenarios would it be important to know the actions
performed by the teacher (besides small group collaboration classrooms)?
 What are the implications of teaching analytics for learning design or for
measuring instructional performance?
 What are the ethical implications of using these data for evaluation (of the
teacher)?
 What technological innovations would need to be implemented in regular
classrooms to perform teaching analytics at scale?
        </p>
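<p>A minimal sketch of how teacher mobility traces could be turned into per-team attention estimates follows. The fixed team locations, the sampling rate and the proximity radius are illustrative assumptions, not a description of the systems used in the studies cited above.</p>

```python
import math

def dwell_per_team(teacher_path, team_positions, radius=2.0, dt=1.0):
    """Estimate how long the teacher spends near each team.

    teacher_path: [(x, y), ...] teacher positions sampled every `dt` seconds
    (hypothetical output of an indoor localisation system).
    team_positions: {team_name: (x, y)} fixed table locations.
    A sample counts towards the nearest team only if within `radius` metres.
    """
    dwell = {team: 0.0 for team in team_positions}
    for p in teacher_path:
        team, d = min(((t, math.dist(p, q)) for t, q in team_positions.items()),
                      key=lambda item: item[1])
        if d <= radius:
            dwell[team] += dt
    return dwell
```

<p>Aligning these dwell times with each team's activity logs is one way to study the impact of in-situ feedback, in the spirit of the manual tracking by Martinez-Maldonado et al. (2015).</p>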
      </sec>
      <sec id="sec-3-5">
        <title>Motor learning.</title>
        <p>
          The acquisition of psychomotor or kinaesthetic skills is crucial for many kinds of
tasks associated with both formal and informal learning
          <xref ref-type="bibr" rid="ref6">(Harrow, 1972)</xref>
          . Examples
include learning to play a musical instrument, learning a sign language, dancing,
improving handwriting, drawing, training surgical or clinical interventions, improving
the technique in sports, practicing martial arts, etc. Santos (2016) has recently
highlighted both the importance of supporting these types of widely diverse and important
educational tasks and also the potential that data and analytics can offer to leverage
motor learning. This is becoming feasible because of the widespread emergence of
pervasive sensors (e.g. wearable devices); more advanced and less expensive
computer vision devices (e.g. depth/infrared cameras); and more reliable computer vision
algorithms. From a multimodal learning analytics perspective, motor learning has
started to be addressed through action and gesture analysis
          <xref ref-type="bibr" rid="ref2">(Blikstein &amp; Worsley,
2016)</xref>
          . Representative examples of this approach include the recognition of human
activity using computer vision
          <xref ref-type="bibr" rid="ref23">(e.g. [Yilmaz and Shah, 2005])</xref>
          or identifying gestures
that differentiate experts from novices
          <xref ref-type="bibr" rid="ref1 ref22">(e.g. [Worsley and Blikstein, 2013])</xref>
          . Key
questions in this area that remain unanswered include:
 What motor learning or hybrid learning tasks could be supported using
mobility, proximity or motion analytics?
 What particular pedagogical/epistemic stance would be required?
 Is motor learning a whole different domain of learning that should be
supported differently by emerging learning analytics, or is it just another
dimension of human activity that can be tackled through multimodal approaches?
 What kinds of analytics may be useful for informal education scenarios that
involve the development of motor skills?
        </p>
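<p>One widely used technique for comparing a learner's movement against an expert's regardless of execution speed is dynamic time warping (DTW). The sketch below is a textbook DTW implementation over one-dimensional signals (e.g. a joint angle over time); it illustrates the kind of analysis that could support motor learning, and is not a method taken from the cited work.</p>

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D motion signals,
    e.g. a learner's and an expert's joint angle over time. Smaller
    values mean the learner's movement follows the expert's more
    closely, regardless of tempo differences.
    """
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch learner signal
                                 cost[i][j - 1],      # stretch expert signal
                                 cost[i - 1][j - 1])  # aligned step
    return cost[n][m]
```

<p>Because DTW tolerates tempo differences, a slow but correctly shaped execution of a gesture scores the same as a fast one, which suits feedback on technique rather than speed.</p>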
      </sec>
      <sec id="sec-3-6">
        <title>Learning in and from physical spaces.</title>
        <p>
The areas discussed above do not comprehensively cover all the
possible learning tasks that can be supported by using mobility, proximity and motion
analytics. Other examples include learning tasks that require fieldwork and that are
more commonly being supported by mobile
          <xref ref-type="bibr" rid="ref3">(e.g. [Carvalho and Freeman, 2016])</xref>
          or
augmented reality (e.g. [Muñoz-Cristóbal et al., 2014]) technologies. In these
scenarios, students can be encouraged to explore the physical space, which can be in the
school, in natural areas or in the city, to complete tasks. These may require
the student not only to access information or content online but also to make sense of it and
associate it with the physical context where they are. Students might even
access information through embodied interaction modes (e.g. performing tasks or
gaining access to information depending on their physical location or proximity). Data
obtained from localisation and usage logs, and the application of learning analytics
techniques, could unveil patterns of the processes that students follow or generate
while learning in the physical space. Some questions that may be followed up in this
area include:
 What formal and informal educational tasks invite or require students to
explore and interact with the physical space where the learning activity
unfolds?
 What kind of data, besides indoor/outdoor localisation, can be captured in
physical spaces?
 What kind of sensemaking can be performed on location data?
 What kind of analytics innovations could improve learning in physical
spaces?
        </p>
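<p>The embodied, location-dependent access to content mentioned above could be prototyped along these lines. The local (x, y) coordinate frame, the radius and the task names are all hypothetical simplifications; an outdoor deployment would use GPS coordinates and a haversine distance instead.</p>

```python
import math

def nearby_tasks(student_pos, points_of_interest, radius=10.0):
    """Return tasks unlocked by the student's physical location.

    points_of_interest: {task_name: (x, y)} coordinates in metres within
    a local reference frame (e.g. a map of the school yard).
    """
    return sorted(task for task, pos in points_of_interest.items()
                  if math.dist(student_pos, pos) <= radius)
```

<p>Logging which tasks were unlocked, when, and in what order would itself produce the localisation traces from which patterns of spatial learning processes could be mined.</p>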
        <p>What are the ethical implications and risks of exploiting these location data
for learning analytics?
</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Conclusions</title>
      <p>
        This position paper aims at starting a discussion about the current approaches and the
future potential of learning analytics for supporting learning across physical spaces.
The learning analytics field and related fields have paid much attention to cognitive or
intellectual domains. There has also been a strong interest in supporting the affective
domain
        <xref ref-type="bibr" rid="ref17">(Rogaten et al., 2016)</xref>
        . It is now time to start supporting psychomotor skills
and/or the physicality aspects of a traditional intellectual domain, which are crucial
for the full development of a lifelong learner. The lack of interest in this domain may
be explained by regular pedagogies and curricula, which may not explicitly
include it in learning tasks. This is the reason why we also need to look at (the
so-called) informal learning activities, which have an important role in
complementing the more ‘thinking-oriented’ formal education. Nonetheless, the paper highlights
some examples of learning analytics innovations that are tackling this domain. The
questions posed for each area aim to trigger discussion and motivate formal studies to
support psychomotor learning through Mobility, Proximity and Motion Analytics.
Current and future work by the authors aims to illustrate the feasibility and potential
of performing this kind of analytics through three case studies in three different
contexts, including: i) health simulation labs; ii) a dance education studio; and iii) regular
small-group collaboration classrooms.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Blikstein</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (
          <year>2013</year>
          ).
          <article-title>Multimodal learning analytics</article-title>
          .
          <source>In Proceedings of the Third International Conference on Learning Analytics and Knowledge</source>
          , (pp.
          <fpage>102</fpage>
          -
          <lpage>106</lpage>
          ). Leuven, Belgium: ACM.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Blikstein</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Worsley</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>Multimodal Learning Analytics and Education Data Mining: using computational technologies to measure complex learning tasks</article-title>
          .
          <source>Journal of Learning Analytics</source>
          ,
          <volume>3</volume>
          (
          <issue>2</issue>
          ),
          <fpage>220</fpage>
          -
          <lpage>238</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Carvalho</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Freeman</surname>
            ,
            <given-names>C. G.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>CmyView: Walking together apart</article-title>
          .
          <source>Paper presented at the Proceedings of the 10th International Conference on Networked Learning</source>
          <year>2016</year>
          , (pp.
          <fpage>313</fpage>
          -
          <lpage>321</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Eagle</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Pentland</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2006</year>
          ).
          <article-title>Reality mining: sensing complex social systems</article-title>
          .
          <source>Personal and Ubiquitous Computing</source>
          ,
          <volume>10</volume>
          (
          <issue>4</issue>
          ),
          <fpage>255</fpage>
          -
          <lpage>268</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Echeverría</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Avendaño</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chiluiza</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vásquez</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Ochoa</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          (
          <year>2014</year>
          ).
          <article-title>Presentation skills estimation based on video and kinect data analysis</article-title>
          .
          <source>Paper presented at the Proceedings of the 2014 ACM workshop on Multimodal Learning Analytics Workshop and Grand Challenge</source>
          , (pp.
          <fpage>53</fpage>
          -
          <lpage>60</lpage>
          ). ACM.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Harrow</surname>
            ,
            <given-names>A. J.</given-names>
          </string-name>
          (
          <year>1972</year>
          ).
          <article-title>A taxonomy of the psychomotor domain: A guide for developing behavioral objectives: Addison-Wesley Longman Ltd</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Kendon</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>1990</year>
          ).
          <article-title>Spatial organization in social encounters: The F-formation system</article-title>
          .
          <source>Conducting interaction: Patterns of behavior in focused encounters</source>
          ,
          <fpage>209</fpage>
          -
          <lpage>238</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Khalil</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Ebner</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>What is Learning Analytics about? A Survey of Different Methods Used in 2013-2015</article-title>
          . arXiv preprint arXiv:
          <volume>1606</volume>
          .
          <fpage>02878</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Kim</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chang</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Holland</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Pentland</surname>
            ,
            <given-names>A. S.</given-names>
          </string-name>
          (
          <year>2008</year>
          ).
          <article-title>Meeting mediator: enhancing group collaboration using sociometric feedback</article-title>
          .
          <source>In Proceedings of the International Conference on Computer Supported Cooperative Work</source>
          <year>2008</year>
          (CSCW
          <year>2008</year>
          ), (pp.
          <fpage>457</fpage>
          -
          <lpage>466</lpage>
          ). San Diego, CA, USA. ACM.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Marquardt</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hinckley</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Greenberg</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (
          <year>2012</year>
          ).
          <article-title>Cross-device interaction via micromobility and f-formations</article-title>
          .
          <source>In Proceedings of the 25th ACM Symposium on User Interface Software and Technology</source>
          , (pp.
          <fpage>13</fpage>
          -
          <lpage>22</lpage>
          ). Cambridge, Massachusetts, USA: ACM.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Martinez-Maldonado</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Clayphan</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yacef</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Kay</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>MTFeedback: providing notifications to enhance teacher awareness of small group work in the classroom</article-title>
          .
          <source>IEEE Transactions on Learning Technologies</source>
          ,
          <volume>8</volume>
          (
          <issue>2</issue>
          ),
          <fpage>187</fpage>
          -
          <lpage>200</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Martinez-Maldonado</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Power</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hayes</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Abdipranoto</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vo</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Axisa</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Buckingham-Shum</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>Analytics Meet Patient Manikins: Challenges in an Authentic Small-Group Healthcare Simulation Classroom</article-title>
          .
          <source>In Proceedings of the International Conference on Learning Analytics and Knowledge (LAK 2017)</source>
          , Vancouver, Canada.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Muñoz-Cristóbal</surname>
            ,
            <given-names>J. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Prieto</surname>
            ,
            <given-names>L. P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Asensio-Pérez</surname>
            ,
            <given-names>J. I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Martínez-Monés</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jorrín-Abellán</surname>
            ,
            <given-names>I. M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Dimitriadis</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          (
          <year>2014</year>
          ).
          <article-title>Deploying learning designs across physical and web spaces: Making pervasive learning affordable for teachers</article-title>
          .
          <source>Pervasive and Mobile Computing</source>
          ,
          <volume>14</volume>
          (Special Issue on Pervasive Education),
          <fpage>31</fpage>
          -
          <lpage>46</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>O'Flaherty</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Phillips</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>The use of flipped classrooms in higher education: A scoping review</article-title>
          .
          <source>The Internet and Higher Education</source>
          ,
          <volume>25</volume>
          ,
          <fpage>85</fpage>
          -
          <lpage>95</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Prieto</surname>
            ,
            <given-names>L. P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sharma</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dillenbourg</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Rodríguez-Triana</surname>
            ,
            <given-names>M. J.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>Teaching analytics: towards automatic extraction of orchestration graphs using wearable sensors</article-title>
          .
          <source>In Proceedings of the Sixth International Conference on Learning Analytics &amp; Knowledge</source>
          , (pp.
          <fpage>148</fpage>
          -
          <lpage>157</lpage>
          ). ACM.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Raca</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tormey</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Dillenbourg</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (
          <year>2014</year>
          ).
          <article-title>Sleepers' lag - study on motion and attention</article-title>
          .
          <source>In Proceedings of the Fourth International Conference on Learning Analytics and Knowledge</source>
          , (pp.
          <fpage>36</fpage>
          -
          <lpage>43</lpage>
          ). ACM.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Rogaten</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rienties</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Whitelock</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cross</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Littlejohn</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>A multi-level longitudinal analysis of 80,000 online learners: Affective-Behaviour-Cognition models of learning gains.</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Roll</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Wylie</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>Evolution and Revolution in Artificial Intelligence in Education</article-title>
          .
          <source>International Journal of Artificial Intelligence in Education</source>
          ,
          <volume>26</volume>
          (
          <issue>2</issue>
          ),
          <fpage>582</fpage>
          -
          <lpage>599</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Santos</surname>
            ,
            <given-names>O. C.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>Training the Body: The Potential of AIED to Support Personalized Motor Skills Learning</article-title>
          .
          <source>International Journal of Artificial Intelligence in Education</source>
          ,
          <volume>26</volume>
          (
          <issue>2</issue>
          ),
          <fpage>730</fpage>
          -
          <lpage>755</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Thompson</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Howard</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Ma</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2016</year>
          ).
          <article-title>Mining video data: tracking learners for orchestration and design</article-title>
          .
          <source>In Proceedings of the Australian Society for Computers in Learning in Tertiary Education Conference (ASCILITE 2016)</source>
          , Adelaide, Australia.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Walther</surname>
            ,
            <given-names>J. B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Loh</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Granka</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          (
          <year>2005</year>
          ).
          <article-title>Let Me Count the Ways: The Interchange of Verbal and Nonverbal Cues in Computer-Mediated and Face-to-Face Affinity</article-title>
          .
          <source>Journal of Language and Social Psychology</source>
          ,
          <volume>24</volume>
          (
          <issue>1</issue>
          ),
          <fpage>36</fpage>
          -
          <lpage>65</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Worsley</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Blikstein</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (
          <year>2013</year>
          ).
          <article-title>Towards the development of multimodal action based assessment</article-title>
          .
          <source>In Proceedings of the Third International Conference on Learning Analytics and Knowledge</source>
          , (pp.
          <fpage>94</fpage>
          -
          <lpage>101</lpage>
          ). ACM.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Yilmaz</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Shah</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2005</year>
          ).
          <article-title>Actions sketch: A novel action representation</article-title>
          .
          <source>In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005)</source>
          , (pp.
          <fpage>984</fpage>
          -
          <lpage>989</lpage>
          ). IEEE.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>