                                         Proceedings of EMOOCs 2017:
    Work in Progress Papers of the Experience and Research Tracks and Position Papers of the Policy Track



     Predicting Student Participation in Peer Reviews in
                          MOOCs

Erkan Er, Miguel Luis Bote-Lorenzo, Eduardo Gómez-Sánchez, Yannis Dimitriadis,
                           Juan Ignacio Asensio-Pérez

              GSIC/EMIC, Universidad de Valladolid, Valladolid, Spain.
    erkan@gsic.uva.es, {migbot|edugom|yannis|juaase}@tel.uva.es



        Abstract. Assessing and providing feedback to thousands of student artefacts in
        MOOCs is an infeasible task for instructors. Peer review, a well-known pedagogical
        approach that offers various learning gains, has been commonly used to address this
        practical challenge. However, low student participation is a potential barrier to the
        success of peer reviews. The present study proposes an approach to predict student
        participation in peer reviews in a MOOC context, which can be utilized to achieve an
        effective peer-review activity. We attempt to predict the number of different peer
        works that students will review for each of four assignments based on their past
        activities in the course. Results show that students' preceding activities were
        predictive of their participation in peer reviews starting from the first assignment,
        and that the prediction accuracy improved considerably with the inclusion of past
        peer-review activities.

        Keywords: MOOC, Peer review, Engagement prediction, Regression


1       Introduction

Massive open online courses (MOOCs) enable millions to take university-level courses
at no cost. However, this massiveness comes with several practical challenges. One known
challenge is the assessment of thousands of student artefacts (submitted to open-ended
assignments) [1]. One approach to address this challenge has been the use of peer review
(or peer assessment). Peer review is an active learning process in which a student's work
is examined and rated by another, equal-status student [2]. Besides its utility in reducing
the workload of instructors, which is considered a main benefit in the MOOC context, peer
review offers learning gains both for the students who perform the review and for those
whose work is reviewed. These benefits include, but are not limited to, the development of
higher-order thinking, problem-solving, communication, and teamwork skills [2, 3]. However,
conducting an effective peer review is itself a challenge at large scale. One barrier to its
successful implementation is low student participation. Considering the lack of instructor
mediation and the large diversity of MOOC participants (e.g., native language, culture,
etc.), it is likely that many students will not be naturally motivated to review a peer's
work [4]. Lack of participation in peer review may result in situations in which the
submissions of striving students remain ungraded, leading to a decrease in their motivation
to continue the course. Nevertheless, in contrast to the numerous studies concerned with
resolving the validity issues of peer reviews [5, 6], few works have investigated student
participation in peer review at large scale [7]. Thus, there is a need for further research
to contribute to the solution of this problem.
    The present study proposes an approach to predict student participation in reviewing
peers' work in a MOOC context, and in this paper we share the preliminary findings of this
in-progress research. In particular, we attempt to predict the number of different peer
works that students will review for a specific assignment based on their past activities
in the course. An accurate estimation of the number of peer works a student will review
can help instructors take timely actions to achieve a successful peer-review process [8].
For example, the peer-review task might be rather challenging for some students depending
on their abilities [1], and these students may need more time to complete their reviews.
Therefore, instead of a firm deadline for peer reviews, an adaptive schedule based on the
predicted participation levels can be used to promote participation in peer reviews. In
addition, this estimation might be utilized in designing other effective collaborative
learning activities. For example, using the information regarding the levels of
participation, student groups can be formed in a way that maximizes the likelihood that
each peer work will be reviewed by another group member. As student participation in peer
reviews can also be considered an engagement indicator, other approaches that are used to
foster engagement can be applied [9].
    In the following section, we describe the course data at hand and the features
generated for the prediction task. Next, we present the experimental study, describing the
details of the method and the results regarding the performance of each prediction model
employed. We conclude by presenting follow-up research ideas.


2         Predicting Participation in Peer Reviews

2.1       Course Data

The course data for this study was retrieved from a public dataset published by Canvas
Network 1. No contextual information was available (e.g., whether the peer review was
mandatory or not), but we attempted to make some inferences about the course design based
on the available log data, since such contextual information may help us better explain
the prediction results. The course had 3620 enrolments and contained four main assignments
(each worth 25 points) for which students needed to upload a specific artefact. These
assignments were reviewed by peers, and they were scheduled starting from the second week
of the course with a one-week interval between them.
   The course data contains fine-grained information regarding students' content visits as
well as their various activities in discussions, assignments, and quizzes (e.g., create,
view, or subscribe to a discussion topic, submit or view an assignment, etc.).


1
    https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/XB2TLU
    The id of the course is 770000832960949.




   Moreover, we identified the number of peer submissions reviewed by each student at each
assignment, which is the outcome (or dependent) variable in this study. Given that most
students reviewed three different peer works at each assignment (see Figure 1), it is
likely that the course instructors asked students to perform at least three reviews.
Descriptive statistics regarding the outcome variable are given in the figure below.




Fig. 1. Histograms of the number of peer works reviewed per assignment (1st: µ=2.62,
SD=1.42; 2nd: µ=2.56, SD=1.24; 3rd: µ=2.41, SD=1.67; 4th: µ=2.46, SD=1.35).
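
To make the derivation of this outcome variable concrete, the following Python sketch
illustrates how the number of distinct peer works reviewed per student and assignment
could be computed. It is a minimal sketch, not the authors' code: the table and column
names (user_id, assignment_id, reviewed_submission_id) are hypothetical stand-ins, since
the actual schema of the Canvas Network export is not described in this paper.

    import pandas as pd

    # Tiny illustrative stand-in for the real peer-review log: one row per
    # peer submission reviewed (hypothetical column names).
    reviews = pd.DataFrame({
        "user_id":                [1, 1, 1, 2, 2],
        "assignment_id":          ["a1", "a1", "a1", "a1", "a2"],
        "reviewed_submission_id": [10, 11, 12, 10, 20],
    })

    # Outcome (dependent) variable: number of distinct peer works reviewed by
    # each student at each assignment.
    outcome = (reviews
               .groupby(["user_id", "assignment_id"])["reviewed_submission_id"]
               .nunique()
               .rename("n_peer_works_reviewed")
               .reset_index())

    # Per-assignment mean and standard deviation (on the real data, these are
    # the values shown in Fig. 1).
    print(outcome.groupby("assignment_id")["n_peer_works_reviewed"].agg(["mean", "std"]))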

2.2       Feature Generation

In this subsection, we briefly discuss the rationale for the features generated to be used
in predicting student participation in peer reviews. Active MOOC learners are likely to
perform well as a result of their consistent participation in most activities of the
course, including the peer reviews [1]. Such active students probably achieve a good
understanding of the course content as a result of their engagement (e.g., viewing course
content pages, participating in discussions, completing quizzes) [10], and therefore they
are more likely to feel confident reviewing a peer's work. Accordingly, in the present
study, we hypothesize that students' preceding engagement in the course is associated with
their subsequent participation in peer-review activities. For this purpose, we built a set
of predictors (or features) based on various student activities in the course (e.g.,
discussions, assignments, and quizzes) and used them to predict students' participation in
peer-review activities. Based on an overview of the data at hand and previous research
[11], a set of features (see Table 1) was generated to characterize student engagement in
the course. These features considered only student activities during the last 6 days before
the deadline of the corresponding assignment (since there was a one-week interval between
assignments).
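
As an illustration, the following Python sketch shows how the count- and day-based features
of Table 1 could be computed for one student, one activity type, and one assignment window.
It is a minimal sketch under stated assumptions, not the authors' code: the DataFrame
`window` and its `day_index` column are hypothetical stand-ins for the actual log data, and
only the weighting scheme (1/6, ..., 1 and its reverse) is taken from Table 1.

    import numpy as np
    import pandas as pd

    WINDOW_DAYS = 6
    # Later requests weighted more heavily (1/6, 1/5, ..., 1), and the reverse
    # (1, 1/2, ..., 1/6) for the "earlier" variants, as listed in Table 1.
    LATER_WEIGHTS = np.array([1 / (WINDOW_DAYS - d) for d in range(WINDOW_DAYS)])
    EARLIER_WEIGHTS = np.array([1 / (d + 1) for d in range(WINDOW_DAYS)])

    def activity_features(window: pd.DataFrame) -> dict:
        """Count- and day-based features for one student, one activity type,
        and one 6-day window before an assignment deadline."""
        # Requests per day of the window; day_index 0 = earliest, 5 = last day.
        per_day = (window["day_index"]
                   .value_counts()
                   .reindex(range(WINDOW_DAYS), fill_value=0))
        active = (per_day > 0).astype(int)   # 1 if at least one request that day
        return {
            "count": int(per_day.sum()),
            "avg_p_day": per_day.sum() / WINDOW_DAYS,
            "count_li": float((per_day * LATER_WEIGHTS).sum()),
            "count_ei": float((per_day * EARLIER_WEIGHTS).sum()),
            "days": int(active.sum()),
            "days_li": float((active * LATER_WEIGHTS).sum()),
            "days_ei": float((active * EARLIER_WEIGHTS).sum()),
        }

    # Example: requests on days 0, 0, 3, and 5 of the window.
    window = pd.DataFrame({"day_index": [0, 0, 3, 5]})
    print(activity_features(window))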


3         Experimental Study

3.1       Method

Considering the large set of features, we preferred to use regularized regression methods,
which penalize weak predictors, and in some cases eliminate them, to improve model
performance. Three regularized regression methods were chosen: the least absolute shrinkage
and selection operator (LASSO), elastic net, and ridge regression; the first two
additionally incorporate an internal feature-selection mechanism [12]. These three methods
were applied to predict the number of different peer works that were reviewed by students
at each assignment. To evaluate model performance, mean absolute error (MAE) scores were
used [13]. Since the sample size was small, 10-fold cross-validation was used.
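
The following Python sketch illustrates this evaluation setup. It uses scikit-learn as one
possible toolkit, not necessarily the one used by the authors; the feature matrix X and
outcome y are random placeholders for the Table 1 features and the number of peer works
reviewed at one assignment, and the regularization parameters are arbitrary illustrative
values rather than the tuned values of the study.

    import numpy as np
    from sklearn.linear_model import Lasso, ElasticNet, Ridge
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    # Placeholder data: in practice X holds the Table 1 features for one
    # assignment window and y the number of peer works reviewed.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 40))
    y = rng.integers(0, 4, size=500).astype(float)

    models = {
        "lasso": Lasso(alpha=0.1),
        "elastic net": ElasticNet(alpha=0.1, l1_ratio=0.5),
        "ridge": Ridge(alpha=1.0),
    }

    for name, model in models.items():
        # Standardize features so the penalty treats all predictors comparably.
        pipeline = make_pipeline(StandardScaler(), model)
        # 10-fold cross-validated mean absolute error (sign flipped because
        # scikit-learn reports the negated score).
        mae = -cross_val_score(pipeline, X, y, cv=10,
                               scoring="neg_mean_absolute_error")
        print(f"{name}: MAE = {mae.mean():.3f} (+/- {mae.std():.3f})")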

 Table 1. Features generated based on students’ overall engagement in the course
 {a}_{b}_count                    Total number of requests made.
 {a}_{b}_avg_p_day                Average requests per day.
 {a}_{b}_count_li                 Total number of requests made when later requests were given a
                                  higher weight {1/6, 1/5, 1/4, 1/3, 1/2, and 1}.
 {a}_{b}_count_ei                 Total number of requests made when earlier requests were given
                                  a higher weight {1, 1/2, 1/3, 1/4, 1/5, and 1/6}.
 {a}_{b}_days                     Total number of days with at least one request made.
 {a}_{b}_days_li                  Number of days with at least one request when later requests were
                                  given a higher weight {1/6, 1/5, 1/4, 1/3, 1/2, and 1}.
 {a}_{b}_days_ei                  Number of days with at least one request when earlier requests
                                  were given a higher weight {1, 1/2, 1/3, 1/4, 1/5, and 1/6}.
 {a}_{b}_{n}x_times               Runs of n consecutive days with at least one request (1