     Monitoring and Maintaining Student Online Classroom Participation
       Using Cobots, Edge Intelligence, Virtual Reality, and Artificial
                              Ethnographies
Ana Djuric, Meina Zhu, Weisong Shi, Thomas Palazzolo, Robert G. Reynolds

Wayne State University, Detroit MI 48202, USA



Abstract

In this project, Virtual World technology and Edge Intelligence are used to produce a shared social landscape for a society of learners. The idea is to create a Virtual World in which learners can participate and interact, one that is parallel to the learning environment or classroom. This can be viewed as an online multi-user environment, such as "Second Life," where online learners can interact and construct their own spaces. Their ability to work in that space is governed by input from their robot mentor (Human Robot Learning Unit). Skills in the Classroom Virtual World are granted as a result of a student's behavior in the learning environment. The Virtual World can persist after the learning session has concluded, so it provides an incentive for learners to do well in the learning session and thereby acquire points that translate into skills in the corresponding Virtual World. That Virtual World can be shared by several learning sessions or classes to provide a more comprehensive learning environment. An online ethnography of the interactions of learners and instructors can be produced, as suggested by McCarthy and Wright [3].

Keywords: Cobots, Human-Robot Learning Units, Edge Computing, Artificial Ethnographies, Virtual World, Learner Focus.

Motivation and Vision

A Cobot is a robot intended for direct human interaction within a shared space, unlike traditional industrial robots, whose actions are isolated from their human counterparts [1]. Cobots were invented in 1994 by J. Edward Colgate and Michael Peshkin [8]. Cobots can be used in a variety of situations, including providing informational services in public spaces [5]; this is the context in which we view them here. The International Federation of Robotics [4] has identified four different categories of Cobots [7]:

1. Coexistence: The human and the robot work alongside each other with a partition but have no shared workspace.
2. Sequential collaboration: The human and the robot are both active within a shared workspace, but their actions are sequential and they do not work at the same time.
3. Cooperation: The human and the robot work on the same task at the same time, and both are in motion.
4. Responsive collaboration: The robot responds in real time to the actions of its human counterpart.

It is this last category that is of concern here. This project is concerned with the development of a human-robot team (Human Robot Learning Unit) that is able to participate in a society of online learners. The motivation is that one way to maintain a learner's attention is to have a "paraprofessional" monitor their activity online. However, it is difficult for a single human to closely monitor a large group of learners, especially since individuals have different learning styles and learning rates. In addition, a learner can simply turn off their audio and video and fly under the radar. The classic case is the student who thought they had switched off their audio and video, so the observer was able to watch them playing video games in the background for the entire session.

In this project, Virtual World technology and Artificial Intelligence are used to produce a shared social landscape for the society of learners. The idea is to create a Virtual World Classroom in which learners can participate and interact, one that is an extension of the learning environment or classroom. This can be viewed as an online multi-user environment such as "Second Life" where online learners can interact and construct their own spaces. Their ability to work in that space is governed by input from their robot mentor. Skills in the Virtual World are granted as a result of a student's behavior in the learning environment. The Virtual World can persist after the learning session has concluded, so it provides an incentive for learners to do well in the learning session so that they can acquire points that translate into skills in the corresponding Virtual World. That Virtual World can be shared by several learning sessions or classes to provide a more comprehensive learning environment.
___________________________________
In T. Kido, K. Takadama (Eds.), Proceedings of the AAAI 2022 Spring Symposium
“How Fair is Fair? Achieving Wellbeing AI”, Stanford University, Palo Alto, California,
USA, March 21–23, 2022. Copyright © 2022 for this paper by its authors. Use permitted
under Creative Commons License Attribution 4.0 International (CC BY 4.0).



The experiences can be combined to produce an online ethnography similar to that generated by McCarthy and Wright for the online game "Second Life" [3]. They proposed a four-part framework through which to interpret the users' subjective experiences (one possible record structure is sketched after the list):

1. The impact of the experience on the senses: its concrete and visceral impact.
2. The emotional and affective impact of the experience.
3. The composition of the sequence of actions that comprise an event.
4. The spatial and temporal context of the experience.
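For concreteness, the four-part framework can be read as the schema for one record in the online ethnography. The Python sketch below shows one way such a record might be structured; the class and field names are hypothetical, since the paper does not prescribe an implementation.

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List

    @dataclass
    class ExperienceRecord:
        """One ethnography entry, following McCarthy and Wright's
        four-part framework [3]."""
        learner_id: str
        sensual: str        # 1. concrete, visceral impact on the senses
        emotional: str      # 2. emotional and affective impact
        actions: List[str]  # 3. composition: the sequence of actions in the event
        place: str          # 4. spatial context of the experience
        time: datetime = field(default_factory=datetime.now)  # 4. temporal context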

Although there are many dimensions to the learning activity that could be studied, the system described here addresses the most fundamental aspect of learning: how a learner maintains focus in their environment. Other qualities can be added down the road. The challenges that online learners face in terms of focus are discussed in the next section.


Challenges to the Focus of Online Learners

The loss of online students' attention to learning is a common and severe problem. Due to the COVID-19 pandemic, more than 200 million students, 12.5% of total enrolled students worldwide, were affected by university and school closures as of December 2020 [1]. It is clear that the pandemic has accelerated the shift of courses from a traditional face-to-face format to an online one [2]. Online learning offers students more choices and flexibility in necessary coursework, which requires increased skills to plan, monitor, and manage learning [4][5]. However, online education is challenging for both students and teachers. The loss of focus of attention and engagement in online learning is one of the primary challenges of online education [6]. Given that attention comes prior to cognitive learning, staying focused and engaged is vital to cognitive learning activities [7]. Losing focus affects lectures, labs, tests, quizzes, group activities, and projects in online education (see Figure 1).

Fig. 1. Losing focus affects online learning activities

The factors that can lead to losing focus of attention can be categorized into the external state (see Figure 2) and the internal state (see Figure 3). The students' external state reflects the impact that their learning environment has on their cognition. The internal state includes engagement, tiredness, overload, loneliness, and lack of communication with classmates and instructors [8] (see Figure 3).

Fig. 2. External state

Fig. 3. Internal state




Robotic technologies have played a significant role in education. Research has indicated that online pedagogical agents can promote effective instruction [9][10]. For example, robots have taken diverse roles in education, such as addressing absenteeism [11], enhancing motivation [12], supporting students' emotions [13], triggering productive conversation in language education [14], promoting collaboration [15][16], fostering computational thinking [17][18], and enhancing creative thinking and problem-solving skills [19]. However, the majority of these agents have been virtual agents or physical robots used for classroom teaching. Little research has focused on the use of physical robots as participants in an online student's learning environment, where each student has a robot mentor that helps monitor the student's progress and provides feedback to the instructor. The instructor can then use that information at a meta-level to make strategic decisions about class trajectories.
The vision of this project is to exploit the synergistic potential of the robot-student team: humans can perform certain tasks better than robots, and vice versa. The goal is to exploit the complementary nature of their relationship in order to produce a true marriage of minds. This Human-Robot Learning Unit (HRLU) is the fundamental building block upon which to scaffold a new framework for online learning. In the next section, the basic structure of the HRLU is discussed, along with the information that can be passed to the Supervisor. The Supervisor will then use that information to update the Virtual World based on learners' performances and to update their ethnography. The updated ethnography will be the basis for adjusting the HRLU components for the next learning session.

HRLU Methodology
Robots are used in many schools as teaching and learning tools to be manipulated and operated by students. For online teaching, the robot assistants will instead be located in online learners' homes. Because of that, we compared different teaching robots based on their suitability for such an application; the factors compared include functionality, price, weight, software, and hardware. Based on this comparison, this research uses a robot like Misty (https://www.mistyrobotics.com/) to facilitate students' self-regulation in the online learning environment (see Figure 4).

Fig. 4. Misty robot (https://www.mistyrobotics.com/)

The robot's contribution to the HRLU is as follows:

1. First, robots can provide pre-scheduled learning activities during the entire semester in order to support students' time management.
2. Second, robots can monitor the students' learning behavior through eye tracking and through monitoring facial expressions and gestures during synchronous classroom and related meeting sessions (see the sketch after this list). Based upon learned patterns in the students' behavioral data, the robot can track students' learning progress and provide interventions to facilitate students' cognition and meta-cognition.
3. Third, robots can facilitate formative assessment and provide immediate feedback to students in online learning.
4. Fourth, robots can communicate not only with students but also with the Supervisor. The Supervisor Unit (IRLU) will facilitate communication between the HRLUs and with the Virtual World.
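As an illustration of the monitoring role in item 2, the sketch below uses OpenCV's stock face detector as a stand-in for the robot's eye-tracking and expression pipeline. This is a minimal sketch under assumed interfaces: Misty's actual SDK would replace the generic webcam capture, and the engagement heuristic (a face being visible in the frame) is purely illustrative.

    import cv2  # pip install opencv-python

    # Stock Haar cascade as a placeholder for a real gaze/expression model.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def engaged(frame) -> bool:
        """Illustrative heuristic: the learner counts as attending
        if a face is visible in the robot's camera frame."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        return len(faces) > 0

    def attendance_ratio(camera_index: int = 0, max_frames: int = 300) -> float:
        """Fraction of sampled frames in which the learner is visibly present."""
        cap = cv2.VideoCapture(camera_index)
        present = sampled = 0
        while sampled < max_frames:
            ok, frame = cap.read()
            if not ok:
                break
            present += engaged(frame)
            sampled += 1
        cap.release()
        return present / max(sampled, 1)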
The HRLUs communicate with other HRLUs in the Virtual Classroom (see Figure 5). The communication will be arranged such that each student communicates with a personalized robot, all robots communicate over the network, and the instructor (IRLU) communicates with all robots in the network. It is possible that the instructor will have their own intelligent agent learning unit.

In order to control the online HRLU classroom, the instructor(s) (IRLU) will provide scripts for interactions (e.g., questions) prepared using their previous teaching experience (see Figure 5). The control flow is as follows:

1. Input: scripted interactions that are designed to elicit information about the students' internal state.
2. Output: collecting answers from the students.
3. Output: analysis of the students' answers.




4. The Supervisor (IRLU) updates the Virtual World parameters based upon the student-robot interactions. The Virtual World is referred to as the Virtual Classroom Matrix (VCM) in Figure 5, as a reference to the "Matrix" in the corresponding films.
5. Data analytics over the updated VCM are performed by the IRLU to adjust the state of the Virtual World.
6. The Supervisor (IRLU) updates the Ethnography Classroom Matrix (ECM) of the Virtual World using the adjusted Virtual World parameters from step 5.
7. The ECM and VCM parameters are expressed in a graphical update using a GUI. This GUI will be used for generating a virtual classroom map using Machine Learning techniques such as Evolutionary and Deep Learning. The interface acts as an indicator of students' focus of attention, and the instructor(s) will use this display to improve students' self-regulation skills, motivation, and learning outcomes.
8. The error between expectations and outcomes is calculated in order to produce new scripts for the HRLUs, and the cycle repeats (a sketch of this loop follows below).
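One iteration of this control flow can be rendered as a supervisor loop. The following Python sketch is schematic: the HRLU class, the scoring rule, and the dashboard call are hypothetical stand-ins for illustration, not part of any published system API.

    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class HRLU:
        """Hypothetical robot-side endpoint for one learner."""
        student_id: str
        def deliver(self, script: List[str]) -> None:  # present scripted interactions
            self.pending = script
        def collect(self) -> List[str]:                # gather the learner's answers
            return ["answer"] * len(self.pending)      # canned stand-in data

    def irlu_cycle(hrlus: List[HRLU],
                   scripts: Dict[str, List[str]],
                   vcm: Dict[str, float],
                   ecm: List[Dict[str, float]]) -> Dict[str, List[str]]:
        """One pass of the IRLU control flow, steps 1-8 above."""
        analysis: Dict[str, float] = {}
        for h in hrlus:
            h.deliver(scripts[h.student_id])                     # 1. input: scripts
            answers = h.collect()                                # 2. output: answers
            analysis[h.student_id] = (                           # 3. output: analysis
                len(answers) / max(len(scripts[h.student_id]), 1))
        vcm.update(analysis)                                     # 4. update the VCM
        mean_focus = sum(vcm.values()) / max(len(vcm), 1)        # 5. analytics on the VCM
        ecm.append(dict(vcm))                                    # 6. update the ECM
        print(f"class mean focus: {mean_focus:.2f}")             # 7. stand-in GUI update
        return {sid: scripts[sid] + ["probe"] * (analysis[sid] < 1.0)
                for sid in analysis}                             # 8. error-driven new scripts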
This two-tiered framework is ideally suited to an Edge Computing environment. How that framework can be used to support the workflow above is the subject of the next section.
Fig. 5. Graphical representation of the dynamic virtual classroom matrix (VCM)
Using Edge Intelligence to Support the HRLU and IRLU Cycle
Capturing students' real-time learning status is vital to effective online learning. Sensor technology can objectively gather students' learning behaviors. Prior research and educators (Daniel & Kamioka, 2017; Hwang et al., 2011; Krithika & GG, 2016; Su et al., 2014; Sharma et al., 2019) have utilized sensor technology to capture students' behavior, including eye movement, facial expressions, and body movement. Through students' learning behavior, we can detect and indicate to what extent students stay focused in online learning scenarios. Prior research has primarily focused on traditional face-to-face education settings or has captured video data only. Among intelligent agent-based approaches, prior studies have tended to train one single model and deploy it for all users without considering personalized factors. In the early detection phase of our system, we include two factors that are usually neglected by the community: one is environmental noise, a passive factor that can affect concentration; the other is personalized behavior, as different students will demonstrate different distraction behaviors and expressions. To this end, we propose a cloud-edge collaborative system to provide personalized detection based on multi-dimensional data. We jointly combine video and audio data for Focus Index (FI) detection. Our proposed system encapsulates detection objects in module units and provides APIs for third-party integration. Beyond that, we propose leveraging edge intelligence for personalized model training and serving.

Edge computing (Shi et al., 2016) has become a highly popular computing paradigm with the development of the Internet of Things and other devices located at the edge of the network. Statistics show that these devices will generate 60% of all data in the future, reaching petabyte-level data volumes. One typical data generation scenario is the HRLU, where cameras are heavily used to help detect the distraction degree of a student. Each camera generates a considerable volume of video data every day (at the gigabyte level). In cloud computing, all of the video data has to be sent to the cloud for processing, which puts considerable pressure on the bandwidth and workload of the data center. Edge computing can offload data from the cloud to processing units near the data source, or even offload tasks to the camera itself.

There are two main factors that inspire us to leverage edge computing in the HRLU: (a) Large data volume. Uploading all of the generated data to the cloud is impractical and is also a waste of bandwidth, transmission resources, and cloud storage resources. Edge computing can help to pre-process and filter the valuable data before sending it to the cloud for centralized control, or offload the whole task (one simple filtering strategy is sketched below). (b) Reliable performance. The distraction of students is expected to be detected in a timely fashion. If the detection relies on cloud processing, its performance will be affected by many uncertainties: the network connection and data center status, to name a few. Especially when online learning already takes considerable bandwidth, edge computing is more reliable for guaranteeing near-real-time processing with capable hardware.
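As a concrete reading of factor (a), an edge node can forward only the frames that matter. The sketch below is a minimal illustration under assumed parameters (the frame-difference threshold is arbitrary), not the system's actual pipeline: it drops near-duplicate frames at the edge so that only significant changes are uploaded or processed further.

    import cv2
    import numpy as np

    def edge_filter(camera_index: int = 0, threshold: float = 12.0):
        """Yield only frames that differ substantially from the last kept
        frame, so downstream stages see a fraction of the raw stream."""
        cap = cv2.VideoCapture(camera_index)
        last = None
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Mean absolute pixel difference as a cheap change detector.
            if last is None or np.mean(cv2.absdiff(gray, last)) > threshold:
                last = gray
                yield frame  # candidate for upload or local inference
        cap.release()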




Artificial Intelligence (AI) has developed greatly in this decade, thanks in part to hardware development. The convolutional neural network (CNN) propelled the development of computer vision (Krizhevsky et al., 2017), and the Transformer network propelled the development of natural language processing (Vaswani et al., 2017). Spoken language processing is also gaining momentum with deep neural networks (Amodei et al., 2016). AI-related services usually rely on computation resources in the cloud. Recently, with the development of lightweight AI models and edge-oriented hardware and software, edge devices and platforms have gained the capability to execute AI algorithms, i.e., Edge Intelligence (Zhang et al., 2019).

Edge intelligence not only inherits the advantages of edge computing, offloading processing from the cloud; it also brings intelligence to the edge devices and demonstrates huge potential to serve the real world. In the HRLU, we propose an edge intelligence system for the robot, which is designed to detect the student's state and intervene when necessary during online learning. Considering the functionality of the robot, which is equipped with a microphone array, a 4K camera, and HiFi speakers, it is capable of capturing input data in different dimensions and deploying different types of AI models to make decisions jointly.

The following section describes how the focus-detection HRLU prototype can be expressed in terms of the edge computation environment.
HRLU System Design on the Edge

To quantify the distraction degree of the students, we developed a Focus Index (FI) that represents the focus degree of a student on a scale from 0 to 100. This score is translated into points that the learners can use in order to participate in the Virtual World Classroom. The points can be exchanged for tools and objects that allow them to interact with others in the Virtual Classroom.
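A minimal sketch of this scoring step follows, assuming (hypothetically) that a video model and an audio model each emit a focus probability in [0, 1]; the fusion weights and the point exchange rate are illustrative design parameters, not values fixed by the paper.

    def focus_index(video_score: float, audio_score: float,
                    w_video: float = 0.7, w_audio: float = 0.3) -> float:
        """Fuse per-modality focus probabilities into an FI in [0, 100]."""
        fi = 100.0 * (w_video * video_score + w_audio * audio_score)
        return max(0.0, min(100.0, fi))

    def points_earned(fi: float, rate: float = 0.1) -> int:
        """Translate an FI score into Virtual Classroom points."""
        return int(fi * rate)

    # e.g. focus_index(0.9, 0.6) == 81.0, worth 8 points at the default rate.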
Fig. 6. Cloud-edge collaborative early detection system

The early detection system is presented in Figure 6. It is a cloud-edge collaborative system for FI prediction based on personalized multi-dimensional data. To provide reliable and solid detection for valid intervention, the cloud is responsible for training a general detection model with a large amount of labeled data. The cloud collects the video and audio data in order to obtain the students' focus information and predicts FI scores with the trained detection model. Considering the scale of the dataset, the intelligent model generated by the cloud will be expensive for edge nodes to compute and store. To fit the developed intelligent model to resource-constrained edge nodes, model efficiency methods will be applied (Han et al., 2015a). For example, model pruning (Han et al., 2015b), quantization (Gong et al., 2014), knowledge distillation (Hinton et al., 2015), and neural architecture search (Cai et al., 2018) can all contribute to effective compression of the model (one such step is sketched below). The resulting efficient FI evaluation model is then deployed on each robot through transfer learning. With its built-in camera and microphone array, each robot can capture video and audio as the input of the efficient model in order to compute the FI for its student. Every so often, the model's performance in FI detection can be assessed, and the collected data can be used to update the model in the cloud.
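To make the compression step concrete, the fragment below applies one of the listed techniques, post-training quantization, to a toy model in PyTorch. The small MLP is a placeholder for the actual FI detector, which would be a far larger video/audio network; only the quantization call itself reflects the technique named above.

    import torch
    import torch.nn as nn

    # Toy stand-in for the cloud-trained FI model.
    cloud_model = nn.Sequential(
        nn.Linear(128, 64), nn.ReLU(),
        nn.Linear(64, 1), nn.Sigmoid())

    # Post-training dynamic quantization: linear-layer weights are stored
    # as 8-bit integers, shrinking the model for a resource-constrained robot.
    edge_model = torch.quantization.quantize_dynamic(
        cloud_model, {nn.Linear}, dtype=torch.qint8)

    # The quantized model predicts a focus probability, scaled to an FI.
    with torch.no_grad():
        fi = 100.0 * edge_model(torch.randn(1, 128)).item()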


Conclusion

In this paper, Virtual World technology and Artificial Intelligence are employed to produce a shared social landscape for a society of learners. The idea is to create a Virtual Classroom World in which learners can participate and interact, one that is parallel to the learning environment or classroom. This can be viewed as an online multi-user environment such as "Second Life" where online learners can interact and construct their own spaces. Their ability to work in that space is governed by input from their robot mentor. Skills in the Virtual World are granted as a result of a student's behavior in the learning environment. The Virtual World can persist after the learning session has concluded, so it provides an incentive for learners to do well in the learning session so that they can acquire points that translate into skills in the corresponding Virtual World. That Virtual World can be shared by several learning sessions or classes to provide a more comprehensive learning environment. This shared experience can be documented as an online ethnography of the Virtual Classroom.

References

1. UNESCO, "COVID-19 Educational Disruption and Response," 22 December 2020. [Online]. Available: https://en.unesco.org/themes/education-emergencies/coronavirus-school-closures.



2. L. Gardner, "COVID-19 Has Forced Higher Ed to Pivot to Online Learning. Here Are 7 Takeaways So Far," The Chronicle of Higher Education, 2020.
3. J. McCarthy and P. Wright, "Technology as Experience," Interactions, vol. 11, no. 5, pp. 42-43, 2004.
4. M. Ally, "Foundations of Educational Theory for Online Learning," in Theory and Practice of Online Learning, Athabasca, Athabasca University, 2004.
5. J. C. Y. Sun and R. Rueda, "Situational Interest, Computer Self-Efficacy and Self-Regulation: Their Impact on Student Engagement in Distance Education," British Journal of Educational Technology, pp. 191-204, 2021.
6. J. Y. Wu, "The Indirect Relationship of Media Multitasking Self-Efficacy on Learning Performance Within the Personal Learning Environment: Implications from the Mechanisms of Perceived Attention Problems and Self-Regulation Strategies," Computers & Education, pp. 56-72, 2017.
7. S. E. Petersen and M. I. Posner, "The Attention System of the Human Brain: 20 Years After," Annual Review of Neuroscience, pp. 73-89, 2012.
8. Y. C. Kuo, A. E. Walker, K. E. Schroder and B. R. Belland, "Interaction, Internet Self-Efficacy, and Self-Regulated Learning as Predictors of Student Satisfaction in Online Education Courses," The Internet and Higher Education, pp. 35-50, 2014.
9. B. Heller and M. Procter, "Embodied and Embedded Intelligence: Actor Agents on Virtual Stages," Intelligent and Adaptive Learning Systems: Technology Enhanced Support for Students and Teachers, pp. 280-292, 2012.
10. A. Pereira, C. Martinho, I. Leite and A. Paiva, "iCat, the Chess Player: The Influence of Embodiment in the Enjoyment of a Game," Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems, pp. 1253-1256, 2008.
11. M. A. Mac Iver and D. J. Mac Iver, ""STEMming" the Swell of Absenteeism in Urban Middle Grade Schools: Impacts of a Summer Robotics Program," Society for Research on Educational Effectiveness, 2014.
12. A. Gomoll, C. E. Hmelo-Silver, S. Sabanovic and M. Francisco, "Dragons, Ladybugs, and Softballs: Girls' STEM Engagement with Human-Centered Robotics," Journal of Science Education and Technology, pp. 899-914, 2016.
13. M. Dennis, J. Masthoff and C. Mellish, "Adapting Progress Feedback and Emotional Support to Learner Personality," International Journal of Artificial Intelligence in Education, pp. 877-931, 2016.
14. S. Tegos, S. Demetriadis and T. Tsiatsos, "A Configurable Conversational Agent to Trigger Students' Productive Dialogue: A Pilot Study in the CALL Domain," International Journal of Artificial Intelligence in Education, pp. 62-91, 2014.
15. W. Y. Hwang and S. Y. Wu, "A Case Study of Collaboration with Multi-Robots and Its Effect on Children's Interaction," Interactive Learning Environments, pp. 429-443, 2014.
16. M. Menekse, R. Higashi, C. D. Schunn and E. Baehr, "The Role of Robotics Teams' Collaboration Quality on Team Performance in a Robotics Tournament," Journal of Engineering Education, pp. 564-584, 2017.
17. G. Chen, J. Shen, L. Barth-Cohen, S. Jiang, X. Huang and M. Eltoukhy, "Assessing Elementary Students' Computational Thinking in Everyday Reasoning and Robotics Programming," Computers & Education, pp. 162-175, 2017.
18. J. Leonard, A. Buss, R. Gamboa, M. Mitchell, O. S. Fashola, T. Hubert and S. Almughyirah, "Using Robotics and Game Design to Enhance Children's Self-Efficacy, STEM Attitudes, and Computational Thinking Skills," Journal of Science Education and Technology, pp. 860-876, 2016.
19. E. Z. F. Liu, C. H. Lin, P. Y. Liou, H. C. Feng and H. T. Hou, "An Analysis of Teacher-Student Interaction Patterns in a Robotics Course for Kindergarten Children: A Pilot Study," Turkish Online Journal of Educational Technology-TOJET, pp. 9-18, 2013.
20. W. Shi, J. Cao, Q. Zhang, Y. Li and L. Xu, "Edge Computing: Vision and Challenges," IEEE Internet of Things Journal, pp. 637-646, 2016.
21. A. Krizhevsky, I. Sutskever and G. E. Hinton, "ImageNet Classification with Deep Convolutional Neural Networks," Communications of the ACM, pp. 84-90, 2017.
22. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez and I. Polosukhin, "Attention Is All You Need," Advances in Neural Information Processing Systems, pp. 5998-6008, 2017.
23. D. Amodei, S. Ananthanarayanan, R. Anubhai, J. Bai, E. Battenberg, C. Case and Z. Zhu, "Deep Speech 2: End-to-End Speech Recognition in English and Mandarin," International Conference on Machine Learning, pp. 173-182, 2016.
24. X. Zhang, Y. Wang, S. Lu, L. Liu and W. Shi, "OpenEI: An Open Framework for Edge Intelligence," 2019 IEEE 39th International Conference on Distributed Computing Systems, pp. 1840-1851, 2019.
25. X. Zhang, M. Qiao, L. Liu, Y. Xu and W. Shi, "Collaborative Cloud-Edge Computation for Personalized Driving Behavior Modeling," Proceedings of the 4th ACM/IEEE Symposium on Edge Computing, pp. 209-221, 2019.
26. S. Han, J. Pool, J. Tran and W. Dally, "Learning Both Weights and Connections for Efficient Neural Networks," Advances in Neural Information Processing Systems, pp. 1135-1143, 2015.
27. S. Han, H. Mao and W. J. Dally, "Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding," 2015. [Online].
28. Y. Gong, L. Liu, M. Yang and L. Bourdev, "Compressing Deep Convolutional Networks Using Vector Quantization," 2014. [Online].
29. G. Hinton, O. Vinyals and J. Dean, "Distilling the Knowledge in a Neural Network," 2015. [Online].
30. H. Cai, L. Zhu and S. Han, "ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware," 2018. [Online].



