=Paper=
{{Paper
|id=Vol-1669/WS1_5_103_Paper
|storemode=property
|title=Improving Peer Assessment by using Learning Analytics
|pdfUrl=https://ceur-ws.org/Vol-1669/WS1_5_103_Paper.pdf
|volume=Vol-1669
|authors=Usman Wahid,Mohamed Amine Chatti,Ulrik Schroeder
|dblpUrl=https://dblp.org/rec/conf/delfi/WahidCS16
}}
==Improving Peer Assessment by using Learning Analytics==
Usman Wahid, Mohamed Amine Chatti, Ulrik Schroeder (RWTH Aachen, Center for Innovative Learning Technologies (CiL), Ahornstraße 55, 52074 Aachen, Germany, nachname@cil.rwth-aachen.de)

Abstract: In recent years, peer assessment has established itself as one of the most promising assessment methods in open learning environments, typically in massive open online courses (MOOCs). However, researchers are still working on ways to address a number of challenges in peer assessment. In this paper, we propose solutions from the field of learning analytics that have the potential to add significant value to the peer assessment process.

Keywords: Peer Assessment, Peer Grading, Peer Feedback, Open Learning Environments, Scalable Assessment, Learning Analytics.

1 Introduction

In recent years, massive open online courses (MOOCs) have changed the ideology and direction of technology-enhanced learning (TEL). A number of challenges have been identified for MOOCs, including (but not limited to) the teacher's role, completion rates, and, most importantly, assessment. Peer assessment offers a scalable and cost-effective approach in MOOCs: fellow learners are asked to evaluate student assignments and provide feedback to their peers, which encourages them to take part in the assessment process [WCS16]. Despite its benefits, peer assessment faces issues such as the validity and quality of feedback, which require innovative solutions to make it an effective assessment method. In this study, we highlight the challenges of peer assessment and propose possible solutions to these challenges from the emerging field of learning analytics.

2 Challenges of Peer Assessment

Peer assessment not only lessens the teacher's workload; it also brings many potential benefits to student learning, including a sense of ownership and autonomy, increased motivation for learning, and high-level cognitive and discursive reasoning [WCS16]. Despite these potential benefits, peer assessment faces a number of challenges, among them transparency, credibility, accuracy, reliability, validity, diversity, scalability, and efficiency. To address the concern of accuracy in MOOC peer assessment, a number of approaches used by different peer assessment tools at various stages are discussed in [Su14]. They include connectivist MOOCs, reviewer calibration (Calibrated Peer Review, CPR), Bayesian post hoc statistical correction methods, and the creation of a credibility index [Su14]. Despite the importance of the work done in this area, the peer assessment challenges outlined above are far from being effectively addressed. In this paper, we argue that the emerging field of learning analytics can provide innovative ways to overcome these challenges.

3 Peer Assessment and Learning Analytics

In this section, we investigate how learning analytics methods can be leveraged to improve the peer assessment process. A recent systematic comparison of peer assessment tools provides a number of dimensions, including review loop, rubrics, validation, reviewer calibration, reverse reviews, and scalability, by which the effectiveness of any peer assessment system can be judged [WCS16]. These dimensions mark general research areas that could benefit from the use of certain learning analytics techniques.

3.1 Rubrics

Grading rubrics are an easy and effective way of defining transparent criteria by which learners can rate the work of their peers [Yo15].

Feedback Classification: Reviewers' answers to rubric questions typically fall into the short-answer category. Applying classification techniques to these short answers could help speed up the process of identifying better reviews and reviewers based on different features, such as feedback content length, the reviewer's credibility, content type (e.g. text, image), and feedback creation duration; a minimal sketch of such a classifier appears after Section 3.2.

Analyzing Rubric Answers: Another variation could be to use text analysis, word count, and string matching techniques to make sure that the reviewer provides meaningful answers to the rubric questions, instead of just writing "nice" or "good work" (see the second sketch after Section 3.2).

3.2 Validation

The validation dimension in peer assessment is a broader category that correlates with the validity, reliability, accuracy, and credibility challenges of peer assessment.

Combining Peer and Automated Assessment: Validation could profit from the use of automated assessment based on classification and natural language processing techniques. However, relying solely on automated assessment does not ensure a valid assessment. There is thus a need to follow a human-in-the-loop approach, with peers in the loop, by blending automated assessment with peer assessment.

Accuracy Indicators: Accuracy indicators refer to the weights assigned to peer raters according to their relative degree of accuracy. The higher the accuracy indicator, the more weight is given to that rater's rating of a peer submission [Su14]. Predictive analytics methods can be applied to estimate the accuracy indicators of peer raters based on, for example, knowledge in the subject area, received ratings, and feedback history (the third sketch below illustrates such a weighting).
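To make the feedback classification idea from Section 3.1 concrete, the following is a minimal sketch using scikit-learn. The tiny labelled sample, the two classes (substantive vs. low-effort), and the text-only features are illustrative assumptions, not part of the paper; a real system would train on a larger corpus of rated reviews and add metadata features such as feedback length or creation duration.

```python
# Sketch: classify short rubric answers as substantive (1) or low-effort (0).
# The training examples and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "The argument in section 2 lacks evidence; add a citation for the claim.",
    "nice",
    "Good structure, but the conclusion repeats the introduction verbatim.",
    "good work",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reviews, labels)

print(model.predict(["nice work"]))                            # likely 0: low-effort
print(model.predict(["The conclusion lacks evidence for the claim."]))  # likely 1
```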
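The word-count and string-matching check for rubric answers can be even simpler. This sketch flags answers that are too short or that match a list of generic phrases; the word threshold and the phrase list are assumptions chosen for illustration.

```python
# Sketch: reject rubric answers that carry no meaningful content,
# e.g. bare "nice" or "good work", via word count and string matching.
GENERIC_PHRASES = {"nice", "good", "good work", "well done", "ok", "fine"}
MIN_WORDS = 5  # illustrative threshold

def is_meaningful(answer: str) -> bool:
    words = answer.lower().strip().rstrip(".!").split()
    if len(words) < MIN_WORDS:
        return False
    return " ".join(words) not in GENERIC_PHRASES

print(is_meaningful("good work"))  # False: generic phrase, too short
print(is_meaningful("The proof skips the induction step; justify n = k + 1."))  # True
```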
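Finally, for the accuracy indicators described in Section 3.2, a minimal sketch of weighted aggregation in the spirit of [Su14] is shown below. How the accuracy scores themselves are obtained, for instance predicted from rating history as suggested above, is left open here, and the numbers are invented for illustration.

```python
# Sketch: aggregate peer grades, giving raters with higher accuracy
# indicators more influence on the final grade [Su14].
def weighted_grade(ratings: dict[str, float], accuracy: dict[str, float]) -> float:
    """Weighted mean of peer ratings, weights taken from accuracy indicators."""
    total_weight = sum(accuracy[rater] for rater in ratings)
    return sum(grade * accuracy[rater] for rater, grade in ratings.items()) / total_weight

# Hypothetical data: bob has a history of inaccurate ratings.
ratings = {"alice": 85.0, "bob": 60.0, "carol": 80.0}
accuracy = {"alice": 0.9, "bob": 0.3, "carol": 0.8}

# Result sits closer to alice's and carol's grades than to bob's.
print(round(weighted_grade(ratings, accuracy), 1))
```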
3.3 Scalability

The number of learners in open learning environments tends to be greater than in traditional learning scenarios. It is thus essential for the assessment and feedback methods in these environments to be scalable.

Visual Assessment: A peer assessment system could make use of clustering and visualization techniques to scale up the feedback process. It could either cluster similar submissions together or cluster them based on the reviews provided by peers and the answers to the rubric questions. Once the clustering is done, visualization techniques (e.g. dashboards or word clouds) could be applied to show the clusters of submissions to the instructor, who could then decide whether to grade all related submissions at once or take a closer look at individual ones; a minimal clustering sketch follows this section. Another variation could be to show each submission to the teacher with a visual aid built from features extracted from the submission itself as well as from the peers' reviews. Important aspects of the submission could be highlighted, for example using word clouds, to help speed up teacher feedback: the teacher could rely on the visual aids to make a quick decision instead of going through all the submissions in detail.
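As a sketch of the clustering step, the following groups textually similar submissions with scikit-learn. The submissions and the number of clusters are invented for illustration; a real system would also feed in peer reviews and rubric answers, and would render the resulting groups on a dashboard rather than printing them.

```python
# Sketch: cluster similar submissions so the instructor can review them
# group-wise instead of one by one.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

submissions = [
    "Sorting with quicksort, average case O(n log n)",
    "Quicksort implementation and complexity analysis",
    "Dijkstra's shortest path with a priority queue",
    "Shortest paths via Dijkstra and a binary heap",
]

X = TfidfVectorizer().fit_transform(submissions)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Group submissions per cluster for a dashboard-style overview.
for c in sorted(set(clusters)):
    print(f"Cluster {c}:")
    for text, label in zip(submissions, clusters):
        if label == c:
            print("  -", text)
```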
4 Conclusion

Peer assessment is a cost-effective and powerful assessment method that can be used in open learning environments such as MOOCs to provide effective assessment and feedback to a large number of learners. In this paper, we presented a number of open challenges in peer assessment, including transparency, credibility, accuracy, reliability, validity, diversity, scalability, and efficiency. Further, we discussed how learning analytics can help to overcome these challenges by applying classification, text mining, prediction, and visualization techniques at various stages of the peer assessment process to ensure its effectiveness.

References

[Su14] Suen, Hoi: Peer assessment for massive open online courses (MOOCs). The International Review of Research in Open and Distributed Learning, 15(3), 2014.

[WCS16] Wahid, Usman; Chatti, Mohamed Amine; Schroeder, Ulrik: A Systematic Analysis of Peer Assessment in the MOOC Era and Future Perspectives. In: Proceedings of the Eighth International Conference on Mobile, Hybrid, and On-line Learning (eLmL 2016). IARIA XPS Press, 2016.

[Yo15] Yousef, Ahmed Mohamed Fahmy; Wahid, Usman; Chatti, Mohamed Amine; Schroeder, Ulrik; Wosnitza, Marold: The Effect of Peer Assessment Rubrics on Learners' Satisfaction and Performance within a Blended MOOC Environment. In: Proceedings of the CSEDU 2015 Conference, volume 2, pp. 148-159, 2015.