Workshop on Computer-Supported Peer Review in Education

CO-CHAIRS
Edward F. Gehringer (efg@ncsu.edu), Ferry Pramudianto (fferry@ncsu.edu), and Yang Song (ysong8@ncsu.edu)
Department of Computer Science, North Carolina State University, Raleigh, NC 27695-8206
+1 919-515-2066, +1 919-513-0815

PROGRAM COMMITTEE
Luca de Alfaro, University of California Santa Cruz, USA
Dmytro Babik, James Madison University, USA
Eric Ford, Johns Hopkins University, USA
Ilya Goldin, 2U.com, USA
Bill Hart-Davidson, Michigan State University, USA
Zhewei Hu, North Carolina State University, USA
Steve Joordens, University of Toronto, Canada
Jennifer Kidd, Old Dominion University, USA
Da Young Lee, North Carolina State University, USA
Jay Loftus, Western University, Canada
Andrew Luxton-Reilly, University of Auckland, New Zealand
Pedro José Muñoz Merino, Universidad Carlos III, Spain
Julia Morris, Old Dominion University, USA
Joe Moxley, University of South Florida, USA
Katja Niemann, XING AG, Germany
Melissa Patchan, Georgia State University, USA
Lakshmi Ramachandran, Pearson, USA
Arlene Russell, University of California, Los Angeles, USA
Chris Schunn, University of Pittsburgh, USA
Marco Temperini, Sapienza University of Rome, Italy
David Tinapple, Arizona State University, USA
Yanqing Wang, Harbin Institute of Technology, China
Anita Woods, Western University, Canada
Ravi Yadav, North Carolina State University, USA

ORGANIZERS' INTRODUCTION
Computer-supported peer review is becoming an increasingly important component of the IT repertoire available to instructors around the world. All of the major learning management systems include a peer-assessment module, and there are dozens, if not hundreds, of standalone tools for peer assessment, notably Calibrated Peer Review™. Research on peer assessment is growing by leaps and bounds: a Google Scholar search for "peer assessment" returns over 300 hits in the last year alone. However, the academic community behind these tools is spread across the curriculum. Education, psychology, and computer science probably have the largest representation, but English and the lab sciences have important contingents because of their need to teach expository and technical writing to large numbers of students. Systems have also been developed by researchers in domains as diverse as business and arts and media.

Despite the breadth of interest, academic gatherings devoted to peer assessment of students by students have been rare. The first that we are aware of is the first CSPRED workshop, co-located with ITS 2010 in Pittsburgh. Since then, there have been two PRASAE (Peer Review, Peer Assessment, and Self Assessment in Education) workshops at learning-technologies conferences in Europe, the first associated with ICWL 2014 and the second with ICSLE 2015. The third workshop in that series will take place at ICWL 2016 in Rome.

Our intent was to bring together researchers from many of these fields. We received twelve submissions, of which eight (67%) were chosen for presentation at the workshop. The other authors with relevant and useful work were invited to condense their papers and produce posters for display at a workshop session. The workshop also features a keynote by a veteran user of several different peer-assessment systems, and a panel on the timely issue of peer assessment and student privacy.
We think that the workshop comprises a good cross-section of work in this area, and we hope that the papers published here will be helpful to researchers and practitioners alike. Thank you for your interest in our work, and we hope to hear from you and establish fruitful collaborations!