=Paper=
{{Paper
|id=Vol-3042/paper_5
|storemode=property
|title=Towards Continuity of Personalisation in a Large Blended Course
|pdfUrl=https://ceur-ws.org/Vol-3042/paper_5.pdf
|volume=Vol-3042
|authors=Sergey Sosnovsky,Almed Hamzah
}}
==Towards Continuity of Personalisation in a Large Blended Course==
Sergey Sosnovsky¹, Almed Hamzah¹,²
¹ Utrecht University, The Netherlands
² Universitas Islam Indonesia, Yogyakarta, Indonesia
Abstract
Effective teaching and learning in a large blended course can be challenging, especially when the course population is diverse. Existing adaptive systems mostly focus on supporting students' self-regulated work at home. There also exist a few systems that help instructors make the classroom more interactive. This paper presents Quizitor - an online platform capable of delivering both at-home and in-class assessment. We believe that combining these two streams of data can help achieve more accurate student modelling and, potentially, more effective adaptive support in blended settings. The pilot evaluation of Quizitor demonstrates that a model aggregating data from student activity conducted at home and in class predicts students' grades better than models separately trained on either of these two types of activity.
Keywords
self-assessment, blended learning, student modelling, personalisation, voting tool
1. Introduction
Giving a lecture in a large course often lacks interactivity. This is detrimental to the quality
of teaching/learning from two standpoints. First, the level of student engagement is directly
related to the amount of interaction a learning activity involves [1]. In its absence, a student
remains a passive receiver of information, an object (not a subject) of the learning process. She
lacks opportunities to monitor her knowledge and self-reflect, has no control over her own
learning, and cannot achieve deeper understanding of the material. From another perspective,
lack of interactivity results in a shortage of information about learning that occurs (or does
not) in a classroom, which brings about less effective instruction. For a teacher, it becomes
hard to estimate how much individual students understand, which concepts require additional
focus, and where remedial actions are needed. Furthermore, a teacher often remains unaware of
individual learning difficulties even after the lecture and cannot address them, hence students
are rarely provided with effective tools to catch up on their own.
This problem is magnified when the course population is diverse in terms of relevant background. This is often the case in introductory programming courses. A large portion of students taking them have limited or no programming experience, while other students might have
already developed software on their own. Such a difference in students’ initial knowledge of the
core concepts makes it very challenging for the instructor to properly tailor the course material to all categories of students.
Over the years, a range of technologies has been developed to increase the level of student
awareness and engagement during lectures. The least disruptive to the overall flow of the
lecture and the most flexible in terms of the subject and context of instruction are the personal
response systems [2] or their “close relatives” – voting systems [3]. They allow a teacher to
organize real-time in-lecture assessment, immediately display summative results and engage
students in a brief remedial discussion. Unfortunately, the data collected from such assessments
are not utilized outside of lecture halls to help trace students’ progress and guide them to helpful
learning material.
Adaptive intelligent learning environments have been successfully used to support
students’ individual work outside of classrooms [4]. For example, intelligent tutoring systems
[5] and adaptive educational hypermedia systems [6] have proven their effectiveness in various
subjects and learning contexts. More recently, learning analytics technologies have gained wide
adoption with a goal to assist both students [7] and teachers [8]. Unfortunately, these systems
and technologies mostly focus on supporting independent learning and self-assessment at home
or in the lab and are less applicable during lectures, when students have been just exposed to
new knowledge and may experience learning difficulties with new material for the first time.
A combination of in-lecture and at-home assessment coupled with adaptive support has
a potential to significantly improve learning experiences in a large university course. In-
lecture assessment and at-home self-assessment have different purposes, but they both provide
valuable information about student progress and opportunities for targeted interactions. The in-
lecture assessment keeps students engaged and can serve as initial input on student conceptual
understanding. The at-home self-assessment helps the student to practice acquired skills at an individual pace and receive adaptive guidance. Combining these two streams of data and two
modes of adaptive learning support in a single system could directly benefit students by enabling
their reflection on the current progress and building a stronger link between knowledge and
skills thus facilitating deeper understanding of the subject. For a teacher, such a system can
provide information on individual difficulties and overall performance which should help inform
and improve the teaching practices used in a course.
This paper presents Quizitor - a system that supports two modes of assessment. It can be
used by a teacher during a lecture for a pop-up synchronised assessment of the entire class, and
by a student at home for individual self-paced assessment. Quizitor tracks students’ attempts
across both these modes and can integrate these data for a more holistic adaptive support of
blended learning. We have piloted Quizitor in an undergraduate programming course. An
initial analysis of the collected data shows that a model integrating student activity from both
at-home and in-class assessment can predict students’ performance better than models trained
on individual streams of activity.
2. Related Work
In many respects, the blended learning paradigm has emerged as an ad-hoc response to the
proliferation of online learning environments and the transition of many educational activities
from classroom instruction to individual self-regulated learning. There was no consensus on the
methodology for blended learning for the first decade after the term was introduced, and even
definitions of blended learning sounded rather vague, simply mentioning that "blended learning"
assumes a combination of face-to-face and online instruction [9]. How these components should be combined, how lessons should be orchestrated, and how support should be administered was not defined.
In the mid-2010s, a number of models of blended learning were proposed [10], a handbook on blended learning was published [11], and several literature reviews were written [12, 13]. However, when it comes to technology-enhanced blended learning, researchers and educators have focused primarily on supporting the online learning component, largely disregarding
the classroom. This is understandable, as in most models of blended learning, the online
component assumes individual, self-regulated work; which means, students may struggle with
planning their learning, engaging in learning activities, reflecting on potential mistakes, etc. In
fact, effective regulation of independent studying becomes the biggest challenge for students
in blended learning [14]. In this regard, a multitude of systems have been designed to help
students in blended learning environments, focusing on specific educational approaches, such
as gamification [15] or integrated learning experience [16].
However, somewhat counter-intuitively, neither of these methods for blended learning support assumes a true "blend of learning". In a blended learning environment, face-to-face and individual learning activities have different outcomes [17]. Combining the activity data to gain an integrated outcome can benefit both students and teachers. According to [14], both students and teachers face several challenges when it comes to blended learning. Students have difficulty with self-regulated learning and with learning new tools. At the same time, teachers view the face-to-face and online components of blended learning as two separate activities and, as a consequence, find it more difficult to manage two learning activities rather than one. There have not been many attempts in the literature to propose working solutions for blending the support of both learning components. Most of them were limited to describing frameworks and architectures [18, 19]. This paper tries to make a more practical step in this direction
by describing and evaluating an assessment tool that can be used both in class and at home.
3. Quizitor: Combining In-class and At-home Assessment
This section discusses the design and implementation of Quizitor, an online assessment tool
combining in-class and at-home assessment. As of now, Quizitor has been used only in a
Web Technology course; however, it is a domain-independent tool that can be used to deliver
online questions of several types. Quizitor’s interface has been designed using a responsive
Web methodology, hence it can be used with a variety of screen sizes from desktops to mobile
phones.
Figure 1: Teacher's view of an in-class ordering question.

The assessment questions in Quizitor are combined into quizzes, which themselves are organised into topics following a course structure. In our current Web Technology setup, 6 topics
cover 122 at-home questions combined into 14 quizzes and 60 in-class questions combined into
6 quizzes. Currently, four types of question are available in Quizitor, namely: multiple choice
questions (MCQ), short answer questions (SAQ), ordering questions (ORD), and multiple answer
questions (MAQ). Questions can include graphics and code fragments.
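To make the content organisation more concrete, the following Python sketch shows one way the topic/quiz/question hierarchy and the four question types could be represented. All class and field names here are our own illustrative assumptions, not Quizitor's actual data model.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class QuestionType(Enum):
    """The four question types currently supported by Quizitor."""
    MCQ = "multiple choice"
    SAQ = "short answer"
    ORD = "ordering"
    MAQ = "multiple answer"


@dataclass
class Question:
    # Illustrative only; the real system may store answers, media, etc. differently.
    text: str                      # may include graphics and code fragments
    qtype: QuestionType
    in_class: bool                 # True for in-class quizzes, False for at-home


@dataclass
class Quiz:
    title: str
    questions: List[Question] = field(default_factory=list)


@dataclass
class Topic:
    name: str                      # topics follow the course structure (6 in the pilot)
    quizzes: List[Quiz] = field(default_factory=list)
```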
Quizitor supports two assessment modes. The in-class mode is a synchronous assessment in which students take a quiz in the class together with the teacher. The teacher starts the quiz for all students at the same time, can see how many students have submitted answers for the current question, and decides when to stop accepting answers and display the results. The aims of the in-class assessment are to take a short break from the lecture routine, help students recall the learning material that has just been taught, help them reflect on their understanding of the material, and give the teacher information on how well students understand it. These quizzes usually do not exceed 15 minutes. Figure 1 shows the teacher interface of an in-class question; the student interface looks largely the same, but lacks the indicators and controls at the bottom of the page.
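The paper does not describe how this synchronous mode is implemented; the following is a purely hypothetical Python sketch of the teacher-controlled flow described above (the teacher opens a question, monitors submissions, closes it, and reveals the results).

```python
from collections import defaultdict


class InClassQuizSession:
    """Hypothetical sketch of the synchronous in-class mode; not Quizitor's code."""

    def __init__(self, questions):
        self.questions = questions
        self.answers = defaultdict(dict)   # question index -> {student_id: answer}
        self.current = None
        self.accepting = False

    def start_question(self, index):
        # Teacher action: open a question for the whole class at the same time.
        self.current = index
        self.accepting = True

    def submit(self, student_id, answer):
        # Student action: answers are only counted while the question is open.
        if self.accepting:
            self.answers[self.current][student_id] = answer

    def submissions_so_far(self):
        # Shown to the teacher while the question is running.
        return len(self.answers[self.current])

    def stop_and_show_results(self):
        # Teacher action: stop accepting answers and reveal the results.
        self.accepting = False
        return self.answers[self.current]
```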
The at-home mode supports asynchronous self-assessment. Its aims are to help students practice, reflect, identify knowledge gaps, and prepare for exams. In contrast with the in-class mode, the at-home questions can be more complex, as students are not under time pressure when answering them. Students start and stop the quizzes themselves; they determine the time, the place, and the quiz to take, and can make as many attempts as they want for each question. Students can navigate through a quiz by clicking on the question numbers at the bottom of the page. Quizitor helps students track which questions they have already tried and which they have answered correctly. Figure 2 displays an example of an at-home question.
Figure 2: At-home quiz page.
4. The Experiment
In this pilot study, we have tried to investigate the advantages of blending two streams of
assessment data coming from the two modes of a blended course. Both data streams have been
produced by students using Quizitor: one - in the at-home self-assessment mode and the other
- in the in-class assessment mode. Our hypothesis is that a model of student mastery taking
into account both these streams of data would be able to predict student course performance
better than the models taking into account only individual streams of data. In order to test
this, we have computed two models of students’ mastery using the Elo Rating System (ERS)
approach. One model was based on their at-home assessment results and another - based on
their in-class results. After that, we have conducted two simple linear regression analyses,
where the obtained students’ mastery scores were used as the predictors of their midterm results.
Finally, we have combined both models and conducted a multiple regression analysis to
show that the integrated model can help predict students’ grades even better.
4.1. Data collection
The data were collected in the undergraduate course on Web technology taught at Utrecht University from February until March 2021. The use of Quizitor started at the third lecture
and continued for six lectures until the midterm. The topics included basics of HTML, CSS,
Javascript and Internet protocols.
This course is offered every year, but this was the first time the Quizitor system was used as a
learning tool in this course. Students of first, second and third year from computer science,
information science and artificial intelligence programs take this course. The overall number of
students was 198. To participate in the study, students had to sign a consent form. 168 students
completed the form. Students who used the tool actively enough (attempted 75% of at-home
questions) were given a small extra credit (1% of the course grade). This reduced the number of
subjects to 124. Finally, we did not include in the analysis the activity from students who did
not pass the midterm exam. The resulting number of subjects in the study was 61.
Table 1 presents the summary of basic statistics characterising students’ activity with Quizitor.
Table 1
Summary of students’ activity with Quizitor
| | At-home: M (SD) | At-home: % | In-class: M (SD) | In-class: % |
|---|---|---|---|---|
| Questions attempted | 114.23 (21.22) | 93.6 | 33.44 (14.38) | 55.73 |
| Quizzes attempted | 12.26 (2.04) | 87.58 | 4.29 (1.71) | 71.58 |
| Number of attempts | 235.9 (88.73) | - | - | - |
| Number of attempts per question | 2.06 (1.6) | - | - | - |
4.2. Estimating students’ mastery
To estimate students' mastery based on their activity with Quizitor, we applied ERS, which is a relatively simple yet accurate method for modelling ability. It has recently been gaining popularity in the educational data mining and student modelling community [20]. It can dynamically assess students' ability in a certain field based on the results of their continuous assessment. While assessing student ability, ERS also keeps adjusting the difficulty of the questions that students answer. Essentially, ERS constantly balances the "strength" (= ability) of a student against the "strength" (= difficulty) of a question. Each encounter (a student answering a question) updates these strengths in two steps [21]. First, ERS calculates the probability of the expected result:
$$P(\mathrm{correct}_{si} = 1) = \frac{1}{1 + e^{-(\theta_s - d_i)}}$$
Second, it updates the ratings of the student and the question based on the probability of the
expected result.
$$\theta_s := \theta_s + K \cdot (\mathrm{correct}_{si} - P(\mathrm{correct}_{si} = 1))$$
$$d_i := d_i + K \cdot (P(\mathrm{correct}_{si} = 1) - \mathrm{correct}_{si})$$
The initial values for $\theta_s$ and $d_i$ are 0, and $K$ has been set to 0.4. Based on these formulae, a student's rating decreases if a question is answered incorrectly and increases if the attempt is correct.
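For illustration, here is a minimal Python sketch of this update rule, written by us directly from the formulae above (not Quizitor's own code).

```python
import math


def elo_update(theta_s, d_i, correct, k=0.4):
    """One Elo Rating System update after a student answers a question.

    theta_s: current student ability estimate
    d_i:     current question difficulty estimate
    correct: 1 if the answer was correct, 0 otherwise
    k:       sensitivity constant (0.4 in this study)
    """
    # Step 1: probability of a correct answer given the current ratings.
    p_correct = 1.0 / (1.0 + math.exp(-(theta_s - d_i)))
    # Step 2: shift both ratings towards the observed outcome.
    theta_s += k * (correct - p_correct)
    d_i += k * (p_correct - correct)
    return theta_s, d_i


# Example: a student (rating 0) answers a question (difficulty 0) correctly.
theta, d = elo_update(0.0, 0.0, correct=1)   # theta ≈ 0.2, d ≈ -0.2
```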
4.3. Student models
In this study, two student models have been built: the in-class (IC) and the at-home (AH). The
IC model is trained based on students’ in-class assessment. The AH model represents students’
mastery as a result of their at-home self-assessment. In order to compute more accurate student Elo scores, we first estimated the Elo scores of all questions, i.e., their levels of difficulty.
Figure 3: At-home student rating distribution.
Figure 4: In-class student rating distribution.
First, we split all students into two groups of 80% and 20%. The question difficulty is estimated by calculating Elo ratings for the questions based on the answers of 80% of the students. The obtained question model is then used to estimate the Elo scores of the remaining 20% of the students. Next, another group of 20% of the students is selected and the process restarts. After five iterations, the mastery of all students has been modelled. We have repeated this process separately to compute the IC and AH models. Figure 3 and Figure 4 show the distributions of students' ratings for AH and IC respectively.
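A rough Python sketch of this fold-based procedure, as we understand it from the description above (our own reconstruction, not the authors' code; data layout and variable names are assumptions).

```python
import math
import random


def estimate_student_ratings(answers, students, n_folds=5, k=0.4):
    """Sketch of the fold-based Elo rating procedure (illustrative only).

    `answers` is a chronologically ordered list of
    (student_id, question_id, correct) tuples for one assessment mode.
    """

    def update(theta, d, correct):
        # One Elo step: expected probability, then symmetric rating shift.
        p = 1.0 / (1.0 + math.exp(-(theta - d)))
        return theta + k * (correct - p), d + k * (p - correct)

    students = list(students)
    random.shuffle(students)
    folds = [set(students[i::n_folds]) for i in range(n_folds)]   # five 20% groups
    student_rating = {}

    for held_out in folds:
        # Pass 1: estimate question difficulties on the remaining 80% of students.
        ability, difficulty = {}, {}
        for s, q, correct in answers:
            if s not in held_out:
                ability[s], difficulty[q] = update(
                    ability.get(s, 0.0), difficulty.get(q, 0.0), correct)

        # Pass 2: rate the held-out 20% against the (now fixed) difficulties.
        for s, q, correct in answers:
            if s in held_out:
                student_rating[s], _ = update(
                    student_rating.get(s, 0.0), difficulty.get(q, 0.0), correct)

    return student_rating
```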
5. Results
A simple linear regression model has been used to predict students' midterm grade based on their performance with Quizitor (reflected as Elo scores). As there are two basic models, the simple regression has been computed twice: for the AH and for the IC mode. After that, a combined multiple regression model has been used to verify the main hypothesis. A significant regression equation has been found for all three models, and there are positive relationships between the students' Elo scores and their midterm grades. Table 2 presents the summary of the regression models. It is easy to see that the main hypothesis is confirmed: a larger portion of the variability in the predicted variable is explained by the joint model.
Table 2
Summary of the coefficients of determination for each model
| Model | R² | adjusted R² | p-value |
|---|---|---|---|
| In-class | 0.117 | 0.102 | 0.007 |
| At-home | 0.114 | 0.099 | 0.008 |
| In-class and At-home | 0.210 | 0.182 | 0.001 |
5.1. Model in-class
Student ability computed based on the in-class activity with Quizitor has been found to be a statistically significant predictor of the midterm grade, F(1,59) = 7.784, p = .007, accounting for 11.7% of the variability in the midterm grade with adjusted R² = .102. The correlation between in-class activity and the midterm grade is statistically significant, r(59) = .341, p = .007. The regression equation for predicting the midterm grade based on students' IC Elo score is y = 7.527 + 0.848x (in-class Elo rating). The 95% confidence interval for the slope is [0.240, 1.455]. Therefore, for each unit of increase of a student's IC Elo score, the midterm grade is expected to increase by a value between 0.24 and 1.45.
5.2. Model at-home
Student ability computed based on the AH activity with Quizitor has been found to be a statistically significant predictor of the midterm grade, F(1,59) = 7.581, p = .008, accounting for 11.4% of the variability in the midterm grade with adjusted R² = .099. The correlation between the AH Elo score and the midterm grade is statistically significant, r(59) = .337, p = .008. The regression equation for predicting the midterm grade from students' AH Elo score is y = 6.909 + 0.480x (at-home Elo rating). The 95% confidence interval for the slope is [0.131, 0.829]. Therefore, for each unit of increase of the AH Elo score, the midterm grade is expected to increase by a value between 0.13 and 0.83.
5.3. Model in-class and at-home
The multiple linear regression model combining students' IC and AH Elo scores as predictor variables is also significant, F(2,58) = 7.687, p = .001, accounting for 21% of the variability in the midterm grade. What is also important, the adjusted R² = .182 is much higher than the adjusted R² values of the two simple models, which means we are not overfitting. The regression equation for predicting the midterm grade from the IC and AH Elo scores is y = 7.359 + 0.436x₁ + 0.772x₂.
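To make the comparison of the three models concrete, the sketch below shows how such simple and multiple regressions can be fitted and compared in Python with statsmodels. The data here are synthetic placeholders, not the study data, so the printed values will not reproduce Table 2.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic stand-ins for the study variables (61 students, as in the pilot).
n = 61
ic_elo = rng.normal(0, 1, n)                      # in-class Elo ratings
ah_elo = rng.normal(0, 1, n)                      # at-home Elo ratings
midterm = 7 + 0.8 * ic_elo + 0.5 * ah_elo + rng.normal(0, 1, n)


def fit(y, *predictors):
    # Ordinary least squares with an intercept term.
    X = sm.add_constant(np.column_stack(predictors))
    return sm.OLS(y, X).fit()


m_ic = fit(midterm, ic_elo)                       # simple regression, in-class only
m_ah = fit(midterm, ah_elo)                       # simple regression, at-home only
m_both = fit(midterm, ic_elo, ah_elo)             # multiple regression, both predictors

for name, m in [("IC", m_ic), ("AH", m_ah), ("IC+AH", m_both)]:
    print(name, round(m.rsquared, 3), round(m.rsquared_adj, 3), round(m.f_pvalue, 4))
```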
6. Discussion and Conclusion
In this paper, we have presented Quizitor - an assessment tool that can deliver both in-class
and at-home quizzes. Quizitor has been built as the first step in an attempt to organise truly
blended adaptive support in a blended course. While Quizitor at the moment does not have any
adaptive capabilities, its pilot evaluation has demonstrated that a combination of data coming from both the face-to-face and online components of a blended course can help achieve a more accurate estimation of student ability than models limited to only one of these components.
Three models predicting students' grades have been analysed and compared. Based on the values presented in Table 2, the R² of the in-class model is very close to the R² of the at-home model and is around 0.11. This value increases when the two components of the students' mastery are combined in a multiple regression model predicting the midterm grade (R² = 0.21). The adjusted R² of the combined model is also much higher compared to those of the individual models, indicating the absence of overfitting. This means that the two modes of students' work with Quizitor provide mutually enriching sources of data. An effective "blend" of these data can inform an adaptive tool truly supporting blended learning.
There are several directions for future research. First, based on the results, there is evidence that the two streams of data coming from in-class and at-home activity both contribute to predicting students' grades. We plan to experiment with different approaches to student modelling in which the in-class and at-home activity are merged into an integrated representation of student ability. Second, we plan to add adaptive functionality to Quizitor that will support students in working with the question material based on their current levels of knowledge. Such adaptive support can happen not only during students' at-home activity, but also during their in-class question answering, in the form of personalised feedback.
Acknowledgments
Almed Hamzah received funding from Universitas Islam Indonesia.
References
[1] P. L. Machemer, P. Crawford, Student perceptions of active learning in a large cross-
disciplinary classroom, Active learning in higher education 8 (2007) 9–30.
[2] S. A. Gauci, A. M. Dantas, D. A. Williams, R. E. Kemm, Promoting student-centered active
learning in lectures with a personal response system, Advances in physiology education
33 (2009) 60–71.
[3] S. W. Draper, M. I. Brown, Increasing interactivity in lectures using an electronic voting
system, Journal of computer assisted learning 20 (2004) 81–94.
[4] E. Herder, S. Sosnovsky, V. Dimitrova, Adaptive intelligent learning environments, in:
Technology Enhanced Learning, Springer, 2017, pp. 109–114.
[5] K. R. Koedinger, J. R. Anderson, W. H. Hadley, M. A. Mark, et al., Intelligent tutoring goes
to school in the big city, International Journal of Artificial Intelligence in Education 8
(1997) 30–43.
[6] P. Brusilovsky, S. Sosnovsky, M. Yudelson, Addictive links: The motivational value of
adaptive link annotation, New Review of Hypermedia and Multimedia 15 (2009) 97–118.
[7] K. Kitto, M. Lupton, K. Davis, Z. Waters, Designing for student-facing learning analytics,
Australasian Journal of Educational Technology 33 (2017).
[8] V. Echeverria, R. Martinez-Maldonado, S. B. Shum, K. Chiluiza, R. Granda, C. Conati,
Exploratory versus explanatory visual learning analytics: driving teachers’ attention
through educational data storytelling, Journal of Learning Analytics 5 (2018) 72–97.
[9] J. Reay, Blended learning-a fusion for the future, Knowledge Management Review 4 (2001)
6.
[10] C. R. Graham, Blended learning models, in: Encyclopedia of Information Science and
Technology, Second Edition, IGI Global, 2009, pp. 375–382.
[11] C. J. Bonk, C. R. Graham, The handbook of blended learning: Global perspectives, local
designs, John Wiley & Sons, 2012.
[12] J. Arbaugh, A. Desai, B. Rau, B. S. Sridhar, A review of research on online and blended
learning in the management disciplines: 1994–2009, Organization Management Journal 7
(2010) 39–55.
[13] B. Güzer, H. Caner, The past, present and future of blended learning: an in depth analysis
of literature, Procedia-social and behavioral sciences 116 (2014) 4596–4603.
[14] R. A. Rasheed, A. Kamsin, N. A. Abdullah, Challenges in the online component of blended
learning: A systematic review, Computers & Education 144 (2020) 103701.
[15] C. Cheong, F. Cheong, J. Filippou, Quick quiz: A gamified approach for enhancing learning,
Proceedings - Pacific Asia Conference on Information Systems, PACIS 2013 (2013).
[16] P. Brusilovsky, S. Sosnovsky, D. H. Lee, M. Yudelson, V. Zadorozhny, X. Zhou, An open
integrated exploratorium for database courses, ACM SIGCSE Bulletin 40 (2008) 22–26.
[17] G. Siemens, D. Gašević, S. Dawson, Preparing for the digital university: A review of the
history and current state of distance, blended, and online learning (2015).
[18] L. Howard, Z. Remenyi, G. Pap, Adaptive blended learning environments, in: International
Conference on Engineering Education, 2006, pp. 23–28.
[19] K. Gynther, Design framework for an adaptive MOOC enhanced by blended learning:
Supplementary training and personalized learning for teacher professional development.,
Electronic Journal of e-Learning 14 (2016) 15–30.
[20] M. Yudelson, Elo, I love you won't you tell me your K, in: European Conference on
Technology Enhanced Learning, Springer, 2019, pp. 213–223.
[21] R. Pelánek, Applications of the Elo rating system in adaptive educational systems, Computers & Education 98 (2016) 169–179.