                  Companion Proceedings 10th International Conference on Learning Analytics & Knowledge (LAK20)




    The challenge of interaction assignment on large digital tabletop
                     displays for learning analytics

                         Matthias Ehlenz, Vlatko Lukarov, Ulrik Schroeder
                  Learning Technologies Research Group, RWTH Aachen University
                      [ehlenz,lukarov,schroeder]@informatik.rwth-aachen.de

          ABSTRACT: One system, four learners, eight hands. A typical situation in collaborative
          learning with digital tabletops, and a beautiful testbed for learning analytics, if not for the
          fact that a central question remains open: Who did what? The first part of any xAPI statement
          is the actor, which from the system's perspective can be any one of the current learners.
          This contribution describes the practical experience of employing learning analytics strategies
          in this non-traditional scenario. Collaborative learning on digital tabletops always involves
          multiple users interacting simultaneously with the system. Identifying which user is
          responsible for which interaction is challenging. Various approaches have been taken and are
          briefly described in this contribution.

          Keywords: interaction assignment, multi-touch, digital tabletops, collaborative learning,
          machine learning



1         LEARNING ANALYTICS IN FACE-TO-FACE COLLABORATION

Science has come a long way in many research areas, and modern technologies take advantage of
those achievements by combining their benefits and insights to create new solutions or improve
known ones. Some of these areas are learner modelling [1], intelligent tutoring systems [2] and
learning analytics. Results surface, for example, in adaptive feedback technologies that give each
user the most valuable feedback for her personality as well as her current situation.

Traditionally, learning analytics aims at analyzing user behavior in learning processes in order to
understand and improve those processes. A lot of steps have to be taken and a lot of questions have
to be asked. A good way to identify those questions is to adhere to the Learning Analytics Reference
Model [3] and ask what data to gather, why to do it at all, how to analyze it, and who the
stakeholders interested in the outcome are. A question usually not asked is "Who is the learner?",
this time not from a psychological perspective but from a very practical point of view.

Multitouch tabletop displays have become bigger and more affordable in recent years and have
consequently found their way into the educational system. Large display real estate allows multiple
learners to interact with a single system at the same time, thus bringing face-to-face collaboration
back into technology-enhanced group learning processes. Multiple learning games and applications
have been designed for research purposes at our institution, the most prominent being a game to
rehearse and practice regular expressions by dragging matching word cards into the players'
regular expression zones.

The details of the game, its didactical approach and first findings can be found in [4]; this
contribution focuses on the technical challenges and approaches of interaction assignment in that game.
Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).



1.1       The Problem of Assignment

Initially, the problem of interaction assignment was not expected at all. The learning game, in our
experimental setup played on an 84” tabletop display, provided each player with a regular
expression of identical structure but different characters in the player's personal area directly in
front of her. The common area in the middle featured "word cards" with words either matching one
of those expressions or not. The idea was that each player drags matching cards into her area and
the group is awarded points for each matching word; to achieve higher scores, everyone needs to get
involved. The collaboration was intended to be verbal, with learners explaining to each other the
structure of the current level's regular expression.

In fact, this behavior is present and observed in nearly every session. What had been unaccounted
for is behavior of a different kind. In the first implementation, the learning analytics component
attributed a drag-and-drop interaction to the player standing at the position of the regular
expression's drop area. Practice showed two behavioral patterns that were not expected and
therefore not covered by this naïve assumption. Pattern A is referred to in the following as "sorting
behavior". Some learners tend to sort more than others, but it has been observed in all subjects so
far to a varying degree: an interaction does not necessarily end in a target zone. The motives differ.
Some pull word cards closer, either to read them or to claim temporary possession; some pre-sort
into "might fit" and "definitely doesn't match"; some pre-sort for the whole group. The last motive is
quite close to pattern B: players push cards which they think might fit into another player's drop
zone close to that player, leaving the final decision to them. Not all players show this inhibition to
"intrude" into their colleagues' personal space, which makes up pattern B, consecutively called
"cross interaction": learners take a word card and drop it into one of the other three target areas
not in front of themselves.




                                             Figure 1: The Game “RegEx”



Both patterns are challenging from a learning analytics perspective. The touchscreen technology
cannot differentiate between different fingers on its surface and thus cannot tell which player
interacted with which element. Pattern A resulted in simply unassigned interactions, pattern B even
in falsely assigned interactions; both are dangerous for a complete learning analytics attempt.
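To make the problem concrete, the following minimal sketch shows an xAPI-style statement for a
drag-and-drop event as a Python dictionary. The activity identifiers and player placeholder are
illustrative assumptions rather than the game's actual vocabulary, and the actor field is precisely
the part the system cannot fill in reliably.

    # Minimal sketch of an xAPI-style statement for a drag-and-drop interaction.
    # Identifiers (activity ID, player placeholder) are illustrative assumptions,
    # not taken from the actual RegEx game implementation.
    statement = {
        "actor": {
            # The open question: the touchscreen only reports contact points,
            # so any of the four learners could be the actor.
            "objectType": "Agent",
            "account": {"homePage": "https://example.org/players", "name": "player-?"},
        },
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/interacted",
            "display": {"en-US": "interacted"},
        },
        "object": {
            "id": "https://example.org/regex-game/word-card/42",
            "definition": {"name": {"en-US": "word card dragged to a drop zone"}},
        },
    }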

Over the following years, several attempts have been made to tackle this challenge; they are
described below:

1.1.1 Active Tangibles
The first approach came, in a way, naturally, since the learning game was developed in the context
of a publicly funded research project, TABULA, which focused on tangible learning with graspable
elements. In this project, active tangibles were developed: small handheld devices which can
interact with the system by providing a recognizable pattern on their bottom side as well as by
establishing a Bluetooth connection to the application. Apart from offering new interaction
mechanisms and feedback channels, such devices can be uniquely identified by the system and
thereby identify the user.
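As an illustration of why tangibles make assignment straightforward, here is a minimal Python
sketch; the tangible identifiers and the pairing table are hypothetical placeholders and not taken
from the TABULA implementation. Once each tangible is paired with a learner, every interaction
performed with it carries a stable actor.

    # Sketch: resolving the acting learner from a tangible's identifier.
    # The tangible IDs and the pairing table are hypothetical placeholders.
    tangible_to_learner = {
        "tangible-01": "learner-1",
        "tangible-02": "learner-2",
    }

    def resolve_actor(tangible_id: str) -> str:
        """Return the learner paired with this tangible, or 'unknown' if unpaired."""
        return tangible_to_learner.get(tangible_id, "unknown")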

The idea in general holds up: tangibles can help to identify users and provide reliable data for
learning analytics. Nonetheless, there are arguments against further use of this approach. First, this
technology is still prototypical; just because we can use it does not mean the problem itself is solved,
it is like a crutch that is not available to everyone. Second, and more seriously, the project showed
that tangibles change user behavior and therefore do not lead to valid results on the effect of
multitouch collaborative learning and cooperative behavior in general. For example, some
participants complained that they felt impaired because they had been told to use only the tangible,
leaving one hand unused.

1.1.2 Motion Tracking
Often suggested, motion tracking was examined in a preliminary test regarding the suitability of
devices like Microsoft's Kinect, which was available for PC for a short time frame. While the idea of
skeleton tracking is appealing, it was not possible to find any camera angle suitable for this
application. The front view is obstructed by the physically opposing players and could not be altered
without changing the experimental setup and thereby the users' behavior itself. The top view had
significant problems both with skeletal tracking (due to the highly unusual perspective) and with an
obscured view of the hands.







After these first trials the approach did not look promising at all, and it was decided to drop it
completely and invest resources elsewhere.

1.1.3 Manual Labeling
Manual labelling is the last resort for every researcher. A test run was conducted in which the test
manager was instructed to take notes of observed behavior matching patterns A or B. This proved to
be impractical due to the rather high frequency of such events and the inability of a single person to
follow the actions of four people simultaneously. Consequently, the experimental setup was changed
and wide-angle cameras were installed above the tabletop display.

The video stream proved to be helpful, but manual labelling remained time-consuming and
sometimes difficult: finding a single interaction among the usually around 300 events of a five-minute
session and in the corresponding video material is hard and sometimes, in the case of multiple
simultaneous interactions by different players, close to impossible.

1.1.4 Tool-supported post-processing
The difficulties in the manual labelling process led to the development of a dedicated tool for
interaction assignment which streamlined the process considerably. The video file is loaded into the
tool, the dataset of the session matching the file name is fetched from the server, and video and
event stream are synchronized by the click of a single button. After this, the application workflow is
as follows: first, all "invalid" data is filtered out (by our definition, events with a duration below
200 ms, as those are mostly artifacts and noise); then all unassigned events are shown and manually
assigned to one of the four players. Finally, all events for each player are played back and either
confirmed or corrected by the post-processing user.
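The filtering step can be illustrated by a short Python sketch; the TouchEvent fields are assumed
names, not the tool's actual data model, and only the 200 ms threshold is taken from the description
above.

    # Sketch of the filtering step; field names are illustrative assumptions.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TouchEvent:
        start_ms: int
        end_ms: int
        drop_zone: Optional[int]      # 0..3 if the drag ended in a player's drop zone, else None
        player: Optional[int] = None  # assigned player label, None = still unassigned

    MIN_DURATION_MS = 200  # events below this duration are treated as artifacts and noise

    def filter_valid(events: list[TouchEvent]) -> list[TouchEvent]:
        """Keep only events lasting at least 200 ms; shorter ones are mostly noise."""
        return [e for e in events if (e.end_ms - e.start_ms) >= MIN_DURATION_MS]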




                                         Figure 2: Tool-supported labelling






Events are shown in a bigger frame as a video stream starting at the event's timestamp and can be
slowed down and replayed at the press of a button. A small frame on the upper right contains a small
rendering of the interaction's movement, and below it the event's meta-information is provided.

Finally, the results can be saved back to the server and exported as a .csv file. Pre-labelling is done
solely on drop-zone information. This tool improved the post-processing significantly: labelling a
five-minute session of about 300 events dropped from several hours to below 30 minutes for an
experienced researcher.
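What drop-zone-based pre-labelling amounts to can be sketched as follows, reusing the assumed
TouchEvent fields from the sketch above: an event that ends inside a player's drop zone is
tentatively attributed to that player, everything else stays unassigned for the manual review step.

    def prelabel_by_drop_zone(events: list[TouchEvent]) -> list[TouchEvent]:
        """Tentatively attribute each unassigned event to the owner of the zone it ended in."""
        for e in events:
            if e.player is None and e.drop_zone is not None:
                e.player = e.drop_zone  # zone index doubles as player index in this sketch
        return events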

1.1.5 Machine-learning approach
Various ideas have been developed to further improve the pre-labelling or even to automate the
assignment of interactions to learners. The most promising has been the evaluation of current
machine learning algorithms on the data.

Fundamental to this is the idea that several features and characteristics of an interaction depend on
the interacting person's position at such a large tabletop, so that predicting that position from the
speed, acceleration, angle and curvature of the interaction seemed feasible.
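Such features can be derived from the raw touch trace. The following rough Python sketch assumes
a trace given as (x, y, t) samples and approximates speed, acceleration, starting angle and a
curvature proxy; it illustrates the idea and is not the thesis' exact feature set.

    import math

    def trace_features(trace: list[tuple[float, float, float]]) -> dict:
        """Compute simple per-interaction features from (x, y, t) touch samples."""
        speeds, angles, dts = [], [], []
        for (x0, y0, t0), (x1, y1, t1) in zip(trace, trace[1:]):
            dt = max(t1 - t0, 1e-6)  # guard against zero time deltas
            dts.append(dt)
            speeds.append(math.hypot(x1 - x0, y1 - y0) / dt)
            angles.append(math.atan2(y1 - y0, x1 - x0))
        accels = [(s1 - s0) / dt for s0, s1, dt in zip(speeds, speeds[1:], dts[1:])]
        turns = [abs(a1 - a0) for a0, a1 in zip(angles, angles[1:])]  # curvature proxy
        return {
            "mean_speed": sum(speeds) / len(speeds) if speeds else 0.0,
            "mean_accel": sum(accels) / len(accels) if accels else 0.0,
            "start_angle": angles[0] if angles else 0.0,
            "total_turn": sum(turns),
        }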

The idea and the feature calculation show similarities to handwriting recognition, so a bachelor thesis1
evaluated the use of the recommended Support Vector Machines, Random Forests, AdaBoost, CNNs
and RNNs on a labeled training set of about 4000 interactions.

                                         Table 1: Comparison of Algorithms

        Algorithm                                   Avg. Accuracy (%)
        SVM                                         70.44
        RandomForest (RandomForestClassifier)       81.7
        RandomForest (RandomForestClassifier)       83.3
        AdaBoost                                    77.16
        CNN                                         78.7
        RNN                                         80.54



The training data set proved to be far too small for most of these algorithms, but first results suggest
that Random Forests and Recurrent Neural Networks look most promising, with accuracies above
80%. While this is still far from automated assignment with a sufficient degree of certainty, it brings
a further improvement to the post-processing of the interaction data by pre-labeling the data before
the manual check. The algorithms will be re-evaluated as the labeled dataset grows.
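For the pre-labelling use case, a minimal scikit-learn sketch of how such a classifier could be
evaluated is shown below. The feature matrix X and the player labels y (positions 0 to 3) are assumed
to come from the manually labelled sessions; this is not the evaluation code of the referenced thesis.

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def evaluate_random_forest(X, y):
        """Cross-validate a random forest on interaction features X and player labels y (0..3)."""
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        scores = cross_val_score(clf, X, y, cv=5)
        return scores.mean()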



1
    http://publications.rwth-aachen.de/record/764115





2         FINDINGS AND CONCLUSION

Correct attribution remains our biggest challenge in collaborative learning analytics. We strive to
gather more data and to intensify our machine learning efforts, but we are also looking in different
directions, starting with the use of eye-tracking glasses this semester.

REFERENCES

[1] Bull, S., Kickmeier-Rust, M., Vatrapu, R. K., Johnson, M. D., Hammermueller, K., Byrne, W., ... & Meissl-
Egghart, G. (2013, September). Learning, learning analytics, activity visualisation and open learner model:
Confusing?. In European Conference on Technology Enhanced Learning (pp. 532-535). Springer, Berlin,
Heidelberg.

[2] Kulik, J. A., & Fletcher, J. D. (2016). Effectiveness of intelligent tutoring systems: a meta-analytic
review. Review of educational research, 86(1), 42-78.

[3] Chatti, M. A., Dyckhoff, A. L., Schroeder, U., & Thüs, H. (2013). A reference model for learning
analytics. International Journal of Technology Enhanced Learning, 4(5-6), 318-331.

[4] Ehlenz, M., Leonhardt, T., Cherek, C., Wilkowska, W., & Schroeder, U. (2018, November). The lone wolf
dies, the pack survives? Analyzing a Computer Science Learning Application on a Multitouch-Tabletop.
In Proceedings of the 18th Koli Calling International Conference on Computing Education Research (pp. 1-8).



