=Paper=
{{Paper
|id=Vol-3042/short_paper_3
|storemode=property
|title=Participatory Design of Feedback Mechanism in a Physics Blended-Learning Environment
|pdfUrl=https://ceur-ws.org/Vol-3042/short_paper_3.pdf
|volume=Vol-3042
|authors=Elad Yacobson,Armando M.Toda,Alexandra I.Cristea,Giora Alexandron
}}
==Participatory Design of Feedback Mechanism in a Physics Blended-Learning Environment==
Elad Yacobson¹, Armando M. Toda², Alexandra I. Cristea² and Giora Alexandron¹
¹ Weizmann Institute of Science, 234 Herzl St., Rehovot, Israel
² Durham University, Stockton Road, Durham, UK
Abstract
This paper describes a series of participatory-design experiments conducted with physics teachers who
use a blended learning environment, with the goal of developing mechanisms for collecting teachers’
feedback and opinions on the resources that they have used, which can assist peer teachers in search
& discovery of appropriate learning materials. The main challenge that this study addresses is how to
incentivise teachers to provide such feedback, as originally the response rate for feedback requests was
very low. The novel approach that it proposes to address this challenge is by incorporating the idea of
gamification into this crowd-sourcing challenge. First, we present a successful two-phase experiment
conducted during the previous school year, in which two gamification elements were implemented in
the learning environment, resulting in a 2.6-fold increase in the response rate to feedback requests.
Second, we discuss additional changes that we plan to implement in the feedback mechanism during the
next school year. These are inspired by incentive frameworks, adjusted to the domain and context
through interviews, questionnaires, and group discussions conducted with teachers who use the platform.
Keywords
Participatory Design, Social Recommendations, Blended Learning, Crowd-Sourcing, Gamification
1. Introduction
Blended learning environments usually provide educational users with large repositories of
learning resources (LRs) [1]. Teachers can use these repositories to find the learning materials that
best suit their students’ needs and their own preferences, and administer them
to their students [2, 3]. However, the vast number of LRs that blended learning environments
typically include makes it difficult for teachers to find appropriate learning materials [1].
Interviews we conducted with physics teachers regarding the process of searching and choosing
LRs in a blended learning environment emphasized the importance that teachers ascribe to
social recommendations, meaning that teachers highly value the opinion of their peers, and a
recommendation about a LR given by a fellow teacher can substantially shorten and simplify
the process of finding appropriate learning materials [4]. However, although social
recommendations are conducive to the search for LRs, when teachers are asked
ECTEL 2021: AI for Blended Learning: Empowering Teachers in Real Classrooms, September 20, 2021, Bozen-Bolzano,
Italy
Email: elad.yacobson@weizmann.ac.il (E. Yacobson); armando.toda@usp.br (A. M. Toda);
alexandra.i.cristea@durham.ac.uk (A. I. Cristea); giora.alexandron@weizmann.ac.il (G. Alexandron)
ORCID: 0000-0002-9816-4235 (E. Yacobson); 0000-0003-2681-8698 (A. M. Toda); 0000-0002-1454-8822 (A. I. Cristea);
0000-0003-2676-6912 (G. Alexandron)
© 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073, http://ceur-ws.org
to provide feedback about LRs that they have used, they are often reluctant to do so [5, 6].
Therefore, our goal in this research is to explore the factors that influence teachers’ motivation
to participate in teacher-sourcing activities, such as recommending LRs to other teachers, and to
design elements and mechanisms that could be implemented into blended learning environments
with the purpose of enhancing teacher participation in crowd-sourcing and collective wisdom
processes. We believe this goal can be facilitated by gamification, the use of game-like
elements outside of a game, to stimulate social interactions and recommendations [7]. To
the best of our knowledge, this line of research is the first to study gamification as a means of
incentivising teachers to participate in crowdsourcing activities, with a practical contribution to an
online learning system that is in use nationwide.
The rest of the paper is organized as follows: first, we present the learning environment in
which we are conducting our research; next we present an experiment conducted during the
previous school year, in which two gamification elements were implemented into the learning
environment with the goal of encouraging teachers to provide feedback regarding LRs they
used; and finally we discuss our plans for additional experiments we intend to perform next
year.
2. Learning Environment - PeTeL
PeTeL (Personalised Teaching and Learning) is a blended-learning environment that is developed
at the Department of Science Teaching at the Weizmann Institute of Science. It is implemented
on top of a Moodle learning management system (LMS), and consists of three main components:
a shared repository of open educational resources (OER), an LMS that offers teachers learning
analytics tools, and social network features. The main purpose of PeTeL is to assist STEM
teachers in providing personalised instruction to their students. PeTeL is divided into separate
modules for each subject matter: Biology, Chemistry and Physics. To assist teachers in searching
and choosing learning materials that best suit their students’ needs, PeTeL provides common
search filters, such as subject matter, level of difficulty, duration, technical requirements (e.g.
projector or mobile devices), nature of the activity (e.g. diagnostic questionnaire, interactive
task, home assignment, exercise etc.), and in addition, social-based search and discovery features.
For example, teachers can follow other teachers within a social network style collaborative
environment (referred to as the ‘peer network’), receive recommendations from them, copy
their teaching sequences, and more. Teachers can also search and rank materials based on
reviews provided by their peers. After using an activity in class, teachers are presented
with a ‘pop-up’ window requesting feedback on the resource they used.
Teachers can fill in the pop-up survey, postpone filling out the form to a later date,
or dismiss it. This feedback mechanism was first activated in PeTeL during the 2019 – 2020
school year. However, teachers’ cooperation was relatively low, and their response rate to the
feedback requests during this first year was below 3%. Since teachers identified the
reviews as highly influential on their decisions about which activities to use, and since the
reviews also provide the basis for an automatic ranking algorithm that is currently under design,
we marked increasing the response rate as a major challenge to be addressed.
3. Gamification Experiment
In the first experiment, we decided to explore the idea of using gamification as a means to enhance
teachers’ motivation to provide feedback. In recent studies, gamification has proven to be an
effective incentive mechanism for crowdsourcing methods [8, 9]. Our research questions were:
(a) What gamification elements do teachers believe will encourage them to provide feedback?,
and (b) Does implementing these elements actually enhance teachers’ willingness to provide
feedback?
3.1. Methodology
This experiment took place during the 2020-2021 school year, and consisted of two stages: in
the first stage, we tried to understand which gamification elements are most likely to influence
teachers’ behavior. To do so, we presented 17 physics teachers with five mock-ups
of gamification elements based on a recent gamification taxonomy
designed specifically for learning environments [10], and asked them to rank these elements
according to how much they believed each element would contribute to teachers’ willingness
to give feedback. In addition, we gave teachers a few open-ended questions and held a group
discussion in which they were asked to express their opinion about the feedback mechanism
in PeTeL and raise ideas on how to improve it. Our conclusions from this first stage of the
experiment were: (a) Teachers need to have clear goals and to know their status with respect
to these goals. (b) Teachers need to have a feeling of ‘impact’, i.e. they need to feel that their
feedback is meaningful, that it is taken seriously, and that it is contributing to the rest of the teacher
community and to the learning environment. (c) Social recognition matters, meaning that teachers
not only want to feel that they are contributing, but also to be recognized by their peers for their
effort and contribution.
In light of these findings, in the second stage of the experiment, we implemented two
gamification elements into PeTeL and measured the impact on the amount of feedback we
received from teachers.
The first element was a progress bar: we defined a goal of five reviews per teacher per year,
and each time a teacher reviewed an activity he or she had used in class, the progress bar would
advance toward that goal (see Figure 1). This element addressed the
finding from the previous stage regarding teachers’ need to have clear goals and to know their
status with respect to them.
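The progress-bar mechanic can be sketched as follows. This is a hypothetical illustration: PeTeL is built on Moodle, and all names and rendering details below are our own, not taken from the paper; only the goal of five reviews per teacher per year comes from the text.

```python
# Minimal sketch of the progress-bar mechanic described above.
# All identifiers are hypothetical; PeTeL's actual implementation
# (a Moodle-based feature) is not described in the paper.

ANNUAL_GOAL = 5  # target of five reviews per teacher per school year

def progress_bar(reviews_given: int, goal: int = ANNUAL_GOAL, width: int = 20) -> str:
    """Render a teacher's progress toward the yearly review goal."""
    done = min(reviews_given, goal)          # cap at the goal
    filled = round(width * done / goal)      # filled cells of the bar
    bar = "#" * filled + "-" * (width - filled)
    return f"[{bar}] {done}/{goal} reviews"

print(progress_bar(2))  # a teacher who has submitted two reviews so far
```

A teacher with two reviews would see a bar filled to 2/5; once the fifth review is submitted, the bar stays full regardless of further reviews.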
The second element that was implemented into PeTeL was a recommendation bulletin
board, appearing in the main page of the learning environment, and presenting teachers’
reviews about LRs they used in their classrooms (see Figure 2). Each time a teacher reviewed a
LR he/she used in class, the bulletin board would be updated, presenting the new feedback at
the top of the bulletin board. Each new entry contained both the name of
the recommending teacher and the title of the recommended LR. Hovering
over a recommendation displayed a tooltip with its details.
All recommendations in the bulletin board were ‘linkable’,
meaning that teachers could click on a recommendation and be taken to the LR repository so
they could download the recommended LR to their own course or class. This element addressed
both the teachers’ need to have a feeling of impact, that their feedback is actually helpful to the
entire teacher community, and their need for social recognition.
Figure 1: Progress Bar
Figure 2: Recommendations Bulletin Board
3.2. Results
After implementing these two elements, we monitored teachers’ feedback and compared
it to the amount of feedback received during the previous school year (before the implementation).
The results showed a 2.65-fold increase in the amount of feedback we received from the teachers,
and an equal increase in the response rate (i.e., the percentage of feedback requests that were
actually answered by the teachers). For detailed information regarding this experiment, see [6].
4. Planned Experiments
4.1. Improvements in the Design of the Feedback Mechanism
Throughout the 2020-2021 year, we interviewed individual teachers, held group discussions
with teachers attending a teacher-training program, and conducted several surveys, with the
goal of better understanding teachers’ attitudes and preferences towards the feedback and
recommendation mechanism, especially regarding the new gamification elements introduced in
the system. Our intention was to build on the findings we obtained, in order to further improve
the design of the feedback mechanism in PeTeL, hopefully bringing an even larger increase in
the number of feedback items we will receive next year. A few findings emerged during this
process:
Importance of the recommendation above all other aspects of the review: we found that
teachers are most interested in whether or not other teachers recommend the LR.
Questions that appear in the existing feedback form, such as whether the teacher used the LR
in class or as homework, or at what point the teacher used the activity, were
identified as secondary.
Progress bar does not influence teachers’ motivation: both in the interviews and in the
surveys, a majority of the teachers indicated that they did not even notice the progress bar. The
few that did see it, said it had no impact on their willingness to fill in reviews. Therefore, in the
spirit of simplicity (see Occam’s razor), we decided to remove the progress-bar element from
the new feedback design. We note that even though the participatory design process marked
this issue – having an individual goal defined in terms of expected number of responses, and
an indication of the progress towards it – as important, empirical evidence shows its effect is
minor. While this may be an indication of the (low) importance of this feature, it may also be
that its design and implementation were unsuccessful in achieving this goal.
Shorter feedback: teachers repeatedly emphasized that the length of the feedback form (i.e.,
the large number of questions it contains) deters them from filling it in.
Following from these findings, in the new design of the feedback survey the teachers will
be presented with only two questions: the first is whether they recommend this LR to other
teachers; the second is an open-ended question, asking teachers for insights or suggestions
concerning the use of the LR (see Figure 3).
Another change to the design of the feedback mechanism stems from the influence of default
choices on user behavior. According to status-quo bias theory
[11], when making decisions people are usually more inclined to stick with the existing situation
than to make an active choice to change it. Therefore, the choice of default options
in the design of a system can have a crucial influence on users’ behavior, as was shown, for
instance in [12]. In the existing feedback mechanism in PeTeL, teachers are first presented with
a pop-up window asking whether they would be willing to fill in the feedback form, and only if
they click the ‘write your response’ button are they taken to the form. The default choice is
therefore not to give feedback, as teachers must actively opt in.
In the new design, the two questions of the feedback form are presented immediately in the
pop-up window, making filling in the form the default choice; the teacher has to actively click
the ‘cancel’ button to avoid leaving a review.
Figure 3: New Pop-Up Window Design
4.2. Planned Experiments and Conclusions
In the coming school year, we are planning to continue our experiments, in order to examine
both the effect of the new design on teachers’ behavior, and the effect of the aforementioned
gamification elements on a larger number of teachers. To achieve these goals, we intend to
implement the new feedback mechanism design we described previously into all three modules
in PeTeL: Physics, Biology and Chemistry.
In the physics module, we will compare the amount of feedback we will collect from teachers
in the coming year (2021 - 2022) to the amount of feedback we received during the current year
(2020 - 2021), in order to see whether the new design has an influence on teachers’ motivation.
In the chemistry and biology modules there was no feedback mechanism available during the
previous school year for teachers to review and recommend the learning resources they used. In the
coming year, the new feedback mechanism will also be implemented in these two modules, and
we will run an A/B experiment: 50% of the teachers will see the recommendation
bulletin board (experimental group), and 50% will not (control group). This way, we will be
able to see whether there is a (statistically significant) difference in the amount of feedback we
receive from each group, and thus understand whether the bulletin board affects teachers’ motivation.
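One standard way to test for a statistically significant difference between the two groups is a two-proportion z-test on the answered-request rates. The sketch below uses only the Python standard library; the counts are invented, since the paper reports no data for this planned experiment.

```python
# Two-proportion z-test for the planned A/B comparison: does the
# fraction of answered feedback requests differ between the bulletin-
# board group and the control group?  Counts are hypothetical.
from statistics import NormalDist

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int):
    """Return (z, two-sided p-value) for H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)          # pooled proportion under H0
    se = (p_pool * (1 - p_pool) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# hypothetical: 90/1000 requests answered with the bulletin board,
# 60/1000 answered without it
z, p = two_proportion_z_test(90, 1000, 60, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these made-up counts the difference would be significant at the 5% level; with real data, a chi-square test of independence would give an equivalent result for this 2×2 design.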
5. Acknowledgements
This research is supported by a Making Connections Grant funded by Weizmann UK. The work
of GA is also supported by the Willner Family Leadership Institute for the Weizmann Institute
of Science.
References
[1] T. T. Dien, N. Thai-Nghe, et al., An approach for semantic-based searching in learning
resources, in: 2020 12th International Conference on Knowledge and Systems Engineering
(KSE), IEEE, 2020, pp. 183–188.
[2] S. Downes, Models for sustainable open educational resources, Interdisciplinary Journal
of E-Learning and Learning Objects 3 (2007) 29–44.
[3] D. Porcello, S. Hsi, Crowdsourcing and curating online education resources, Science 341
(2013) 240–241.
[4] K. I. Clements, J. M. Pawlowski, User-oriented quality for oer: Understanding teachers’
views on re-use, quality, and trust, Journal of Computer Assisted Learning 28 (2012) 4–14.
[5] N. T. Heffernan, K. S. Ostrow, K. Kelly, D. Selent, E. G. Van Inwegen, X. Xiong, J. J. Williams,
The future of adaptive learning: Does the crowd hold the key?, International Journal of
Artificial Intelligence in Education 26 (2016) 615–644.
[6] E. Yacobson, A. Toda, A. I. Cristea, G. Alexandron, Encouraging teacher-sourcing of social
recommendations through participatory gamification design, in: A. I. Cristea, C. Troussas
(Eds.), Intelligent Tutoring Systems, Springer International Publishing, Cham, 2021, pp.
418–429.
[7] S. Deterding, M. Sicart, L. Nacke, K. O’Hara, D. Dixon, From game design elements
to gamefulness: Defining “gamification”, in: Proceedings of the 2011 Annual Conference
Extended Abstracts on Human Factors in Computing Systems (CHI EA ’11), 2011, p. 2425.
doi:10.1145/1979742.1979575.
[8] Y. Feng, H. J. Ye, Y. Yu, C. Yang, T. Cui, Gamification artifacts and crowdsourcing partic-
ipation: Examining the mediating role of intrinsic motivations, Computers in Human
Behavior 81 (2018) 124–136.
[9] B. Morschheuser, J. Hamari, J. Koivisto, Gamification in crowdsourcing: a review, in:
2016 49th Hawaii International Conference on System Sciences (HICSS), IEEE, 2016, pp.
4375–4384.
[10] A. M. Toda, W. Oliveira, A. C. Klock, P. T. Palomino, M. Pimenta, I. Gasparini, L. Shi,
I. Bittencourt, S. Isotani, A. I. Cristea, A taxonomy of game elements for gamification in
educational contexts: Proposal and evaluation, in: 2019 IEEE 19th International Conference
on Advanced Learning Technologies (ICALT), volume 2161, IEEE, 2019, pp. 84–88.
[11] W. Samuelson, R. Zeckhauser, Status quo bias in decision making, Journal of risk and
uncertainty 1 (1988) 7–59.
[12] C. Ansher, D. Ariely, A. Nagler, M. Rudd, J. Schwartz, A. Shah, Better medicine by default,
Medical Decision Making 34 (2014) 147–158.