      Exploring the Impact of a Tabletop-Generated Group Work
             Feedback on Students’ Collaborative Skills
           Marisol Wong-Villacrés, Escuela Superior Politécnica del Litoral, lvillacr@espol.edu.ec
          Roger Granda, Information Technology Center CTI-ESPOL, roger.granda@cti.espol.edu.ec
         Margarita Ortiz, Information Technology Center CTI-ESPOL, margarita.ortiz@cti.espol.edu.ec
              Katherine Chiluiza, Escuela Superior Politécnica del Litoral, kchilui@espol.edu.ec

         Abstract: This study explores the impact of tabletop-generated feedback on students'
         collaborative skills over time. Twenty-one Computer Science students participated in a three-
         week experiment. A two-group design was used to assess three dimensions of collaboration:
         contributions, communication and respect. While the experimental group was asked to solve a
         database design problem using a tabletop system and received human and automatic feedback
         afterwards, the control group was asked to use a paper-based approach for the same task and
         received human feedback only. Results showed no significant difference between the groups,
         neither in their levels of group work skills nor in their self-perception of those skills.
         Nonetheless, the experimental group improved on the communication dimension over the course
         of the experience. Likewise, both conditions showed a significant improvement in students'
         self-perception of their group work skills. In addition, a positive moderate correlation
         between the automatic and human assessment of students' contributions to group work was
         found. This confirms opportunities to further explore tabletop-based feedback for group
         work activities.
         Keywords: learning analytics, tabletop systems, collaborative skills, reflection, self-perception

Introduction
Software design often requires Computer Science practitioners to successfully engage in face-to-face
collaboration with peers, clients and stakeholders (Dekel & Herbsleb, 2007). Aware of this professional
requirement, Computer Science programs regularly promote in-class studio-based group activities (Lee,
Kotonya, Whittle, & Bull, 2015). Nonetheless, engaging in collaborative work does not necessarily lead to the
development of group work skills (Dillenbourg, 1999); learning to collaborate often requires self-reflection
prompted by on-time feedback (O’Donnell, 2006). Obtaining such feedback, however, is not always a
straightforward task; time and attention constraints prevent educators from fully acknowledging individuals'
performance and needs (Zhang, Zhao, Zhou, & Nunamaker Jr., 2004). Within this context, exploring
mechanisms that help students self-reflect on their collaborative skills becomes a relevant goal to pursue.
          Previous work on multi-touch tabletops has shown this novel technology has a strong potential to
strengthen students’ group work skills by promoting communication (Buisine, Besacier, Aoussat, & Vernier,
2012), awareness of others (Falcão & Price, 2011) and equity of participation (Wallace, Scott, & MacGregor,
2013). Moreover, the ability of tabletops to seamlessly garner traces of students’ interactions opens the
possibility to timely deliver the feedback students require to engage in self-reflection (Al-Qaraghuli, Zaman,
Olivier, Kharrufa, & Ahmad, 2011; Clayphan, Martinez-Maldonado, & Kay, 2013; Martinez-Maldonado,
Dimitriadis, Martinez-Monés, Kay, & Yacef, 2013). In spite of the promising potential of tabletop-mediated
classrooms, a deep understanding of their strengths and limitations requires studies carried out both over
longer periods of time and within real classroom settings where students perform tasks directly related to their
interests (Xambó et al., 2013). Although some research has focused on the usage of tabletops within realistic
conditions (Kharrufa et al., 2013; Martinez-Maldonado, Clayphan, & Kay, 2015), most of these studies have
explored how tabletop-captured data can enhance teachers' class management activities. The effect that
visualizations of tabletop-captured data can have on students' self-reflection process, however, remains largely
unexplored.
          Additionally, several initiatives in the field of learning analytics have explored the analysis of students’
collaboration data in distributed settings (Anaya, Luque, & Peinado, 2015; Charleer, Klerkx, Santos Odriozola,
& Duval, 2013; Wise, Zhao, & Hausknecht, 2013). Most of this work has focused on using the analysis to
engage students in reflection about their learning process (Anaya et al., 2015) and to help students and
educators gain awareness of group activities (Charleer et al., 2013). However, to our knowledge, no research in
the field of learning analytics has explored how automatically captured data can impact students' reflections on
their collaborative skills.
          In this study we examined how frequent exposure to an automatic tabletop-generated feedback can
impact students’ collaborative skills over time by facilitating students’ self-reflection process. More specifically,
we investigated the following three aspects: the impact of frequent, on-time, mixed (automatic and human-based)
feedback on tabletop-supported group work on students' collaborative skills; the impact of that same feedback
on students' self-perceptions of their collaborative skills; and the level of similitude between a
tabletop-generated assessment of students' contributions to group work and a human-based assessment. In order
to explore these questions, we compared
the results obtained from groups using a tabletop application to design software versus groups using a paper-
based approach for the same task.
          Our findings show that students who were exposed to frequent mixed feedback did not exhibit
different levels of group work skills from those who received human-based feedback only. Similarly, students'
self-perception of their levels of group work skills did not differ between the two conditions. Nonetheless,
students' self-perception of their levels of group work skills improved significantly over the whole experience
in both groups. Moreover, students exposed to frequent mixed feedback showed an improvement in their ability
to communicate with other team members. This indicates that tabletop-generated on-time feedback has potential to
enhance the development of students' communication skills in collaborative tasks. Interestingly, a positive
moderate correlation between the automatic and human assessment of students' contributions to group work was
found. This confirms opportunities to further explore tabletop-based assessment for group work activities.
          This paper is structured as follows: first, a related work section is presented and the proposed multi-
touch tabletop application is described. Then, the research context, experiments and corresponding results are
detailed. Finally, a discussion along with reflections on further research is presented.

Related Work
The emerging field of learning analytics is concerned with understanding and improving learning through the
measurement, collection, analysis and reporting of data about learners and their contexts (Clow, 2012). One
relevant challenge of research in the area is how to capture and offer effective visualizations of meaningful
traces of learning. Work addressing this challenge usually focuses on using interviews and usability
questionnaires to evaluate the potential of the proposed visualization (Anaya et al., 2015; Charleer et al., 2013;
Clayphan et al., 2013; Martinez-Maldonado et al., 2013). A different challenge for learning analytics is how to
place these visualizations in the context of learning, so that teachers and/or students can make reflective
decisions based on the analytics. Existing work on this challenge has taken two different paths: one path draws
from educational theories and suggests approaches for enhancing the effectiveness of learning analytics projects
(Clow, 2012; Harfield, 2014); the other path explores what learning analytics can do for participants in realistic
environments over the course of time (Martinez-Maldonado et al., 2015; McNely, Gestwicki, Hill, Parli-Horne,
& Johnson, 2012). This paper focuses on the latter path: it seeks to explore how having students regularly
engage with their own data and goals can impact their activities and behaviors.
          Exploring students’ collaborative actions is a problem area of interest within the field of learning
analytics. Previous work on collaboration analytics has mainly focused on distributed learning settings,
generating automatic context-aware recommendations for students to improve their collaboration process
(Anaya et al., 2015), and proposing personal dashboards and visualizations to support students’ awareness of
achievements and progress (Charleer et al., 2013). In general, learning analytics of students interacting in
distributed settings often ignores that students can interact face-to-face or via other media (e.g., emails)
(Charleer et al., 2013; McNely et al., 2012). Although our research pursues goals similar to those of previous
explorations on collaborative analytics of distributed interactions, we focus specifically on studying learning
analytics in the context of co-located collaboration.
          Previous research on tabletops indicates this technology has the potential to enhance face-to-face
collaborative learning; tabletops can encourage higher-level thinking and motivate effective learning (Kharrufa,
Leat, & Olivier, 2010), elicit a more productive collaboration (Schneider et al., 2012), and support equitable
participation in learning situations (Wallace et al., 2013). Furthermore, tabletops’ ability to capture traces of
students’ interactions creates opportunities for studying co-located learning analytics. Relevant initiatives in the
area have exploited tabletop-captured data for purposes such as: understanding the impact of users’ territoriality
around the tabletop (Tang, Pahud, Carpendale, & Buxton, 2010), capturing and analyzing collaborative
multimodal data to distinguish the level of collaboration of student groups (Martinez-Maldonado et al., 2013),
and helping educators manage their classrooms (Martinez-Maldonado et al., 2015). Little research on face-to-
face learning analytics has directly identified students as target users; Clayphan et al. (2013) studied the
potential of tabletop-generated feedback to engage students in reflection on their individual and group
performance. However, their research did not focus on understanding the impact of feedback over time. Furthermore,
very few studies have explored tabletop applications for realistic usage scenarios, with tasks that are meaningful
for both students and educators (Martinez-Maldonado et al., 2015). In contrast, the present research examines
the over-time impact of face-to-face learning analytics on students, and proposes a within-the-classroom
approach where participants are studied while engaging in a task of their interest (software design).

System Description
The system used for this study was a projectable multi-touch tabletop system developed to support the
collaborative design of normalized-logical database models (Granda, Echeverria, Chiluiza, & Wong-Villacres,
2015). The hardware components include: 1) an Optitrack motion tracking system, 2) a pico projector that
displays the system's image, 3) up to four 3D-printed pens with infrared markers at the top, and 4) tablets.
The software components are: 1) a motion tracking server, 2) a user-interface component and 3) a web
application component. Figure 1 shows an overview of the implemented solution.




                    Figure 1. Students using the tabletop system and sample of student feedback.
         Students interact with the system using pens and tablets. The motion tracking server continuously
uses the Optitrack infrared cameras to identify the markers on users' pens. Each pen has a unique
configuration of three infrared markers. The position of the pen tip is calculated and delivered to the
user-interface component via the TUIO multi-touch protocol. The user interface draws traces based on the touch
points received from the tracking server. Additionally, this component recognizes the shape of pen traces drawn
on the canvas: if a trace with the shape of a rectangle is recognized, the trace is replaced with an Entity
within the database design; if a line between Entities is drawn, a Relationship replaces the trace instead.
Text input is enabled by a web component running on the tablets.
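         As an illustration, the recognition step can be sketched as follows (in Python). This is a minimal
sketch under stated assumptions: the class names, thresholds and the closure-based rectangle test are
illustrative, not the system's actual implementation, which is not detailed here.

    import math
    from dataclasses import dataclass

    @dataclass
    class Entity:
        name: str
        x: float          # centre of the recognized rectangle
        y: float

    @dataclass
    class Relationship:
        a: Entity
        b: Entity

    def _dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def nearest_entity(point, entities, radius=40.0):
        # Entity whose centre lies within `radius` of the point, if any.
        best = min(entities, key=lambda e: _dist(point, (e.x, e.y)), default=None)
        if best is not None and _dist(point, (best.x, best.y)) <= radius:
            return best
        return None

    def classify_trace(points, entities):
        # A trace that closes on itself is taken as a rectangle -> Entity;
        # an open stroke whose endpoints touch two distinct entities -> Relationship.
        if len(points) < 2:
            return None
        if _dist(points[0], points[-1]) < 20.0:
            cx = sum(p[0] for p in points) / len(points)
            cy = sum(p[1] for p in points) / len(points)
            return Entity("NewEntity", cx, cy)
        a = nearest_entity(points[0], entities)
        b = nearest_entity(points[-1], entities)
        if a and b and a is not b:
            return Relationship(a, b)
        return None

A production recognizer would additionally check corner count and aspect ratio before accepting a trace as a
rectangle; the closure test above is the simplest stand-in for that step.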
         Information about each student's activity on the tabletop (creation, edition and deletion of entities
and relationships) is automatically captured. After a design session, the system sends an automatic performance
report to each student's e-mail. The report describes the student's contribution to the task, displaying the
following information: the number and percentage of entity- and relationship-related actions performed by the
student (create, modify, delete), and the number and percentage of actions performed by the rest of the group.
A pie-chart representation was chosen given the exploratory nature of this study. Figure 1 presents relevant
sections of a typical system report.
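         A minimal sketch of how the per-student figures in the report could be derived from the captured log
(in Python). The tuple-based log format and field names are assumptions; the system only needs per-student
counts of create/modify/delete actions on entities and relationships.

    from collections import Counter

    def report_for(student, log):
        # log: iterable of (student_id, object_type, action) tuples,
        # e.g. ("s1", "entity", "create").
        own = Counter(action for sid, _, action in log if sid == student)
        rest = Counter(action for sid, _, action in log if sid != student)
        total = sum(own.values()) + sum(rest.values())
        share = 100.0 * sum(own.values()) / total if total else 0.0
        return {"own_actions": dict(own),
                "group_actions": dict(rest),
                "own_share_pct": round(share, 1)}

    log = [("s1", "entity", "create"), ("s2", "entity", "modify"),
           ("s1", "relationship", "create"), ("s3", "entity", "delete")]
    print(report_for("s1", log))   # s1 performed 50.0% of all actions

The resulting shares are what the pie chart in the e-mailed report visualizes.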

Methodology
Based on our review of previous work, we formulated three research questions: RQ1, do students repeatedly
exposed to an automatic and human-based feedback of their group work performance exhibit a significant
improvement on their collaborative skills compared to students who only received a human-based feedback?
RQ2, do students repeatedly exposed to an automatic and human-based feedback of their group work
performance perceive a greater change in their collaborative skills compared to students who only received a
human-based feedback? RQ3, are there similarities between a tabletop-generated assessment of individuals’
contributions to group work and a human-based assessment?
          This study was conducted during the summer of 2015 at an Ecuadorian public university. It involved
the participation of 21 undergraduate students enrolled in a Database System course of a Computer Science (CS)
program (20 male and 1 female). An adapted version of the Readiness for Interprofessional Learning Scale
(RIPLS) (Parsell & Bligh, 1999), including only the items related to teamwork and collaboration, was
used to form homogeneous groups. As a result, seven groups of three students were formed.
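         The exact grouping procedure is not detailed here; the following is a minimal sketch of one common
way to form score-balanced groups of three, sorting students by their RIPLS teamwork score and dealing them
round-robin (the round-robin step is an assumption, not the study's documented method):

    def balanced_groups(scores, group_size=3):
        # scores: dict mapping student_id -> RIPLS teamwork/collaboration score.
        ranked = sorted(scores, key=scores.get, reverse=True)
        n_groups = len(ranked) // group_size
        groups = [[] for _ in range(n_groups)]
        for i, student in enumerate(ranked):
            groups[i % n_groups].append(student)
        return groups

    # 21 students -> 7 groups of 3 with similar score mixes
    print(balanced_groups({f"s{i}": 20 + i for i in range(21)}))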
          For this study, a two-group design was chosen. Students were randomly assigned to groups considering
the results obtained in RIPLS. Three groups were assigned to the control condition and four to the experimental
condition. The experiment consisted of three sessions. Sessions 1 and 2 took place on the same day, and session
3 a week later. In each session, groups were assigned a database design problem; while the control group performed
the activity using paper and markers and received human-based feedback only, the experimental group used the
previously described tabletop application, receiving both human-based and automated feedback. These activities
were carried out after the midterm evaluation so that students could practice Database Design topics already
reviewed during the first part of the term. The instructor did not interact with the students during the tasks;
he only provided formative feedback on the end result of each exercise.
          During each session, a trained observer assessed each student's group work skills. This provided us
with the information needed to track changes in individuals' performance over time. In order to gauge
collaboration, we derived the following dimensions both from previous work in the area (Buisine et al.,
2012; Meier, Spada, & Rummel, 2007) and from the university’s expectations of group work skills:
contributions (the student's useful verbal and physical contributions to the team's goal), communication (the
verbal expressions and physical gestures the student used to convey his/her opinion to other team members) and
respect (the student's verbal and physical demonstrations of respect towards others' opinions and actions).
Observers tallied the number of actions for each dimension. Observers' tallies were later transformed to a 0 to
2 scale: 0 if the performance of the student on a dimension did not meet the expectations, 1
if the expectation was fulfilled partially and 2 if it was completely fulfilled. Even though a wider scale could
better support fine-grained ratings, the 0 to 2 scale was chosen to facilitate the assessment for the observer; due
to the duration of each session, more complex methods with more cognitive load could have a negative impact
on the observer's assessment ability.
          Immediately after each session, students were asked to use the same dimensions to assess their peers’
group work skills as well as their own, using the 0 to 2 scale. Additionally, the tabletop system sent the
automatically generated report previously described to students in the experimental group. Within three days after
each session, all students received a summary report comparing their self-assessment with both their observer's
and group members' assessments (Figure 2), and guided questions to prompt reflective writing on their group
work abilities. The questions attempted to engage students in describing the activities carried out during the
task, the obstacles found, their perception on the received feedback, and the actions students planned to take in
order to improve their collaborative skills for the next group activity. For the final reflection, guided
questions focused on prompting students to reconsider their initial self-assessments, as well as on gauging
students' perception of the tabletop's usefulness, measured on a scale from 1 (not useful) to 5 (very useful).




                               Figure 2. Information displayed in students’ report.

Results
The results were analyzed by comparing the assessment data gathered from sessions 1 to 3 on the three
previously established group work dimensions: contributions, communication and respect. Descriptive results
from the students' evaluations show a positive effect on the contributions and communication dimensions: in
Session 1, Contributions and Communication had a median of 1 and Respect a median of 2; in Session 2, all
dimensions had a median of 1; in Session 3, Contributions and Communication had a median of 2 and Respect a
median of 1.
          Regarding RQ1 (do students repeatedly exposed to automatic and human-based feedback on their group
work performance exhibit a significant improvement in their collaborative skills compared to students who only
received human-based feedback?), a Mann-Whitney U test was employed. The results showed no significant
differences in any group work dimension (Contributions U=56.0, W=101.0, p>0.05; Communication U=56.5,
W=101.5, p>0.05; Respect U=29.5, W=74.5, p>0.05). Additionally, tests for intra-group differences were
performed for all dimensions. A positive effect was observed in the Communication dimension of the
experimental group between session 1 (median=1) and session 3 (median=2) (Z=49.5, p<0.011), whereas the
Respect dimension of the control group exhibited a negative effect (Z=0.0, p<0.020) between session 1
(median=2) and session 3 (median=1).
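          For reference, the between-group and within-group comparisons reported above can be reproduced with
SciPy as sketched below; the rating arrays are illustrative placeholders, not the study's data.

    from scipy.stats import mannwhitneyu, wilcoxon

    # Hypothetical 0-2 communication ratings (12 experimental, 9 control students).
    exp_s3 = [2, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 1]
    ctl_s3 = [1, 2, 1, 1, 2, 1, 1, 2, 1]

    # Between-group comparison (RQ1): experimental vs. control at session 3.
    u, p = mannwhitneyu(exp_s3, ctl_s3, alternative="two-sided")

    # Within-group change: the same students at sessions 1 and 3 are paired,
    # hence a Wilcoxon signed-rank test (zero differences are discarded).
    exp_s1 = [1, 1, 1, 2, 1, 1, 1, 2, 1, 1, 1, 1]
    w, p_intra = wilcoxon(exp_s1, exp_s3)
    print(f"U={u:.1f} p={p:.3f}; signed-rank W={w:.1f} p={p_intra:.3f}")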
          Regarding RQ2 (do students repeatedly exposed to automatic and human-based feedback on their group
work performance perceive a greater change in their collaborative skills compared to students who only received
human-based feedback?), no significant differences were found in any of the dimensions between the conditions
using the Mann-Whitney U test (Contributions U=28.5, W=73.5, p>0.05; Communication U=34.0, W=79.0, p>0.05;
Respect U=34.0, W=70.0, p>0.05). Additionally, tests for intra-group differences were performed for all
dimensions. A positive effect was observed in the Contributions dimension of both the experimental and control
groups between session 1 and session 3 (experimental group: session 1 median=1, session 3 median=2, Z=28.0,
p<0.008; control group: session 1 median=1, session 3 median=1, Z=10.0, p<0.046).
          Regarding RQ3 (are there similarities between a tabletop-generated assessment of individuals'
contributions to group work and a human-based assessment?), a Kendall tau correlation test was performed for
each session. In session 1, no significant correlation was found (τ=0.254, p=0.368); in session 2, a moderate
but non-significant correlation was observed (τ=0.4, p=0.213); finally, in session 3, a moderate significant
correlation was found (τ=0.613, p=0.030). The correlations thus also show an increasing trend over time.
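          The RQ3 check corresponds to a Kendall tau correlation between the tabletop's automatically computed
contribution shares and the observer's contribution ratings, as sketched below with SciPy (the paired values
are illustrative placeholders):

    from scipy.stats import kendalltau

    auto_share = [34.0, 12.5, 53.5, 40.0, 28.0, 32.0]  # % of group actions (tabletop)
    observer   = [1, 0, 2, 2, 1, 1]                    # observer's 0-2 contribution score

    tau, p = kendalltau(auto_share, observer)
    print(f"tau={tau:.3f}, p={p:.3f}")

Kendall's tau is a reasonable choice here because the observer's measure is ordinal and the per-session sample
is small.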
          Furthermore, feedback about students' perception of the tabletop's usefulness was gathered. The
results were mixed (median=3), indicating that the solution was perceived as moderately useful. Qualitative
feedback was also collected. Some comments about the solution were positive, for example: "...The solution
seems interesting to me because it uses a new way to interact with technology...". Others were more skeptical:
"...I do not see why [we are] using this technology...".

Discussion and Further Work
This study examined the effect of over-time exposure to automatic tabletop-generated feedback on students'
collaborative skills. Results indicate that groups that received mixed feedback did not differ in their group
work abilities from those that received human-based feedback only. Similarly, students' self-perception of
their group work abilities did not differ between the two conditions. Nonetheless, students' self-perception
of their collaborative skills improved significantly over time in both the tabletop and the paper-
based conditions. Moreover, communication skills during group work activities for the tabletop condition
showed an improvement over time. These results are in line with the findings of Buisine et al. (2012), who
underlined that tabletops led to more communicative gestures and more distributed verbal contributions than a
paper-based approach.
           Additionally, the results showed that, over time, students who did not receive any exposure to the
tabletop feedback decreased their level of respect to their peers. Previous studies have concluded that pen-based
interactions on a tabletop enhance group members’ awareness of others (Jamil, O’Hara, Perry, Karnik, &
Subramanian, 2011); and that the presence of colored indicators to distinguish ownership of creation in tabletop
systems triggers social comparison and awareness (Buisine et al., 2012). Overall, social awareness could
promote respectful interactions amongst group members; the lack of features that enhance awareness of others
could explain the decrease in the respect dimension of the paper-based group. Furthermore, receiving
continuous tabletop-generated feedback comparing an individual's group work performance to the rest of the
group's can augment individuals' awareness of their peers. Therefore, another possible reason for the decreased
levels of respect among students in the paper-based condition is the lack of tabletop-generated feedback. It is
also important to note that this research's results pertaining to the level of similitude between an automatic
assessment of individuals' contributions to group work and a human-based assessment show a positive moderate
correlation between both assessments. Moreover, qualitative feedback showed that this tool is promising, given
the usefulness reported by students. This confirms opportunities to further explore tabletop-based assessment
for group work activities.
           Nonetheless, it is relevant to consider the following confounding variables that could have affected
the results of the experiment: 1) the novelty effect of using a tabletop could have changed students' behavior during
the first session in terms of mutual respect and communication; 2) usability issues hindered students' abilities to
seamlessly execute the tasks they intended, causing them to experience communication breakdowns; 3) the
design of the automatic feedback heavily based on pie charts could have been ineffective to encourage students'
understanding of the data; 4) the possible bias of using only one observer on a group of three to four students
could have had a strong impact on the assessments. Future work in this area should consider a different design
for the automatic feedback, as well as recording the sessions so that at least two observers can evaluate each
group. Finally, more research is needed to determine the precise effect of on-time feedback from tabletop
systems on students' collaborative skills.
References
Al-Qaraghuli, A., Zaman, H. B., Olivier, P., Kharrufa, A., & Ahmad, A. (2011). Analysing tabletop based
      computer supported collaborative learning data through visualization (pp. 329–340).
Anaya, A. R., Luque, M., & Peinado, M. (2015). A visual recommender tool in a collaborative learning
      experience. Expert Systems with Applications, 45(C), 248–259. http://doi.org/10.1016/j.eswa.2015.01.071
Buisine, S., Besacier, G., Aoussat, A., & Vernier, F. (2012). How Do Interactive Tabletop Systems Influence
      Collaboration? Comput. Hum. Behav., 28(1), 49–59. http://doi.org/10.1016/j.chb.2011.08.010
Charleer, S., Klerkx, J., Santos Odriozola, J. L., & Duval, E. (2013). Improving awareness and reflection
      through collaborative, interactive visualizations of badges. In ARTEL13: Proceedings of the 3rd Workshop
      on Awareness and Reflection in Technology-Enhanced Learning (Vol. 1103, pp. 69–81). CEUR-WS.
Clayphan, A., Martinez-Maldonado, R., & Kay, J. (2013). Designing OLMs for Reflection about Group
      Brainstorming at Interactive Tabletops. In Workshop on Intelligent Support for Learning in Groups
      (ISLG) - International Conference on Artificial Intelligence in Education (AIED 2013) (Vol. 1009).
Clow, D. (2012). The learning analytics cycle: closing the loop effectively. In Proceedings of the 2nd
      international conference on learning analytics and knowledge (pp. 134–138).
Dekel, U., & Herbsleb, J. D. (2007). Notation and Representation in Collaborative Object-oriented Design: An
      Observational Study. SIGPLAN Not., 42(10), 261–280. http://doi.org/10.1145/1297105.1297047
Dillenbourg, P. (1999). What do you mean by collaborative learning? Collaborative-Learning: Cognitive and
      Computational Approaches., 1–19.
Falcão, T. P., & Price, S. (2011). Interfering and resolving: How tabletop interaction facilitates co-construction
      of argumentative knowledge. International Journal of Computer-Supported Collaborative Learning.
Granda, R. X., Echeverria, V., Chiluiza, K., & Wong-Villacres, M. (2015). Supporting the Assessment of
      Collaborative Design Activities in Multi-tabletop Classrooms. In 2015 Asia-Pacific Conference on
      Computer Aided System Engineering (pp. 270–275). IEEE. http://doi.org/10.1109/APCASE.2015.54
Harfield, T. D. (2014). Teaching the Unteachable: On the Compatibility of Learning Analytics and Humane
      Education. In Proceedings of the Fourth International Conference on Learning Analytics And Knowledge
      (pp. 241–245). New York, NY, USA: ACM. http://doi.org/10.1145/2567574.2567607
Jamil, I., O’Hara, K., Perry, M., Karnik, A., & Subramanian, S. (2011). The effects of interaction techniques on
      talk patterns in collaborative peer learning around interactive tables. In Proceedings of the SIGCHI
      Conference on Human Factors in Computing Systems (pp. 3043–3052).
Kharrufa, A., Balaam, M., Heslop, P., Leat, D., Dolan, P., & Olivier, P. (2013). Tables in the wild. In
      Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI ’13 (p. 1021).
      New York, New York, USA: ACM Press. http://doi.org/10.1145/2470654.2466130
Kharrufa, A., Leat, D., & Olivier, P. (2010). Digital Mysteries: Designing for Learning at the Tabletop. In ACM
      International Conference on Interactive Tabletops and Surfaces (pp. 197–206). New York, NY, USA:
      ACM. http://doi.org/10.1145/1936652.1936689
Lee, J., Kotonya, G., Whittle, J., & Bull, C. (2015). Software Design Studio: A Practical Example. In 2015
      IEEE/ACM 37th IEEE International Conference on Software Engineering (Vol. 2, pp. 389–397). IEEE.
      http://doi.org/10.1109/ICSE.2015.171
Martinez-Maldonado, R., Clayphan, A., & Kay, J. (2015). Deploying and Visualising Teacher’s Scripts of Small
      Group Activities in a Multi-surface Classroom Ecology: a Study in-the-wild. Computer Supported
      Cooperative Work (CSCW), 24(2-3), 177–221. http://doi.org/10.1007/s10606-015-9217-6
Martinez-Maldonado, R., Dimitriadis, Y., Martinez-Monés, A., Kay, J., & Yacef, K. (2013). Capturing and
      analyzing verbal and physical collaborative learning interactions at an enriched interactive tabletop.
      International Journal of Computer-Supported Collaborative Learning, 8(4), 455–485.
      http://doi.org/10.1007/s11412-013-9184-1
McNely, B. J., Gestwicki, P., Hill, J. H., Parli-Horne, P., & Johnson, E. (2012). Learning analytics for
      collaborative writing. In Proceedings of the 2nd International Conference on Learning Analytics and
      Knowledge - LAK ’12 (p. 222). New York, New York, USA: ACM Press.
      http://doi.org/10.1145/2330601.2330654
Meier, A., Spada, H., & Rummel, N. (2007). A rating scheme for assessing the quality of computer-supported
      collaboration processes. International Journal of Computer-Supported Collaborative Learning, 2(1).
O’Donnell, A. M. (2006). The Role of Peers and Group Learning. In P. A. Alexander & P. H. Winne (Eds.),
      Handbook of educational psychology (2nd ed., pp. 781–802). Lawrence Erlbaum Associates.
Parsell, G., & Bligh, J. (1999). The development of a questionnaire to assess the readiness of health care
      students for interprofessional learning (RIPLS). Medical Education, 33(2), 95–100.
Schneider, B., Strait, M., Muller, L., Elfenbein, S., Shaer, O., & Shen, C. (2012). Phylo-Genie: Engaging
      Students in Collaborative “Tree-thinking” Through Tabletop Techniques. In Proceedings of the SIGCHI
      Conference on Human Factors in Computing Systems (pp. 3071–3080). New York, NY, USA: ACM.
      http://doi.org/10.1145/2207676.2208720
Tang, A., Pahud, M., Carpendale, S., & Buxton, B. (2010). VisTACO: Visualizing Tabletop Collaboration. In
     ACM International Conference on Interactive Tabletops and Surfaces (pp. 29–38). New York, NY, USA:
     ACM. http://doi.org/10.1145/1936652.1936659
Wallace, J. R., Scott, S. D., & MacGregor, C. G. (2013). Collaborative sensemaking on a digital tabletop and
     personal tablets: prioritization, comparisons, and tableaux. In Proceedings of the SIGCHI Conference on
     Human Factors in Computing Systems (pp. 3345–3354).
Xambó, A., Hornecker, E., Marshall, P., Jorda, S., Dobbyn, C., & Laney, R. (2013). Let’s jam the reactable:
     Peer learning during musical improvisation with a tabletop tangible interface. ACM Transactions on
     Computer-Human Interaction (TOCHI), 20(6), 36.
Zhang, D., Zhao, J. L., Zhou, L., & Nunamaker Jr., J. F. (2004). Can e-Learning Replace Classroom Learning?
     Commun. ACM, 47(5), 75–79. http://doi.org/10.1145/986213.986216
Acknowledgments
The authors would like to thank SENESCYT for its support in the development of this study, and ESPOL's
educators and students who participated in the experiment.