Learnersourcing Modular and Dynamic Multiple Choice
Questions
Haesoo Kim 1, Inhwa Song 1, Juho Kim 1
1
    Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea

                 Abstract
                 Multiple choice questions (MCQs) are widely used for evaluating learning outcomes. In
                 particular, student-generated questions have shown promise in promoting active learning and
                 higher-order thinking. However, the process of generating quality MCQs can be challenging
                 and time-consuming for students. Additionally, the existing crowdsourcing approaches for
                 MCQ generation lack scalability and quality control. To address these issues, we introduce a
                 system concept called Kuiz that implements a modularized and dynamic method for generating
                 MCQs, allowing students to contribute at various levels and personalize their learning
                 experience. The questions are modularized into question stems, answer sets, and distractor sets,
                 enabling students to refine and improve them collaboratively. By dynamically altering question
                 stems and answer sets, we can vary the quality and difficulty of the MCQs, providing
                 personalized learning opportunities. Through Kuiz, we aim to reduce students' burden in
                 question generation tasks, increase engagement, and create scalable learning materials. By
                 combining learnersourcing with dynamic question generation, Kuiz offers a framework for
                 creating engaging and personalized learning experiences.

                 Keywords
                 Learnersourcing, Student generated questions, Dynamic question generation

1. Introduction
Multiple choice questions (MCQs) are a commonly used resource in learning and are known to be an
effective means of evaluation and assessment for various learning goals [3]. In particular,
student-generated questions have been noted as an effective way to promote active learning, as they
encourage higher-order thinking in students [9]. Systems such as PeerWise have evaluated the effect of
using student-generated multiple choice questions (SGMCQs) in the classroom, suggesting that they can
increase engagement as well as learning gains across various subjects [2]. Interventions based on
motivational theories [9] can encourage students to pursue these optional activities, for example
through just-in-time prompts triggered at the decision points where students choose whether to engage
in such extra activities.
    However, the issue remains that generating questions is a challenging task for most students.
Question generation often requires high-level thinking and a solid understanding of the subject, which
can be a discouraging factor for students [5]. There is also the issue of quality control, inherent in
many crowdsourcing tasks. The average quality of SGMCQs can be high when students are provided with
proper scaffolding activities [1], but there is still room for improvement, especially for large-scale,
open question repositories outside the classroom. Khashaba has also noted that users of SGMCQ systems
preferred answering questions to creating them, due to the greater perceived learning efficacy of the
former [6]. Here, we also recognize the need to effectively utilize the generated question set in a way
that is scalable and beneficial to students.
    In light of this, we propose a method of learnersourcing multiple choice questions such that the
questions are modularized and dynamic. We also introduce Kuiz, a system concept that utilizes the

The first annual workshop on Learnersourcing: Student-generated Content @ Scale, June 01, 2022, NYC, NY
EMAIL: haesookim@kaist.ac.kr (H. Kim); igreen0485@kaist.ac.kr (I. Song); juhokim@kaist.ac.kr (J. Kim)
              © 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
              CEUR Workshop Proceedings (CEUR-WS.org)
aforementioned method. Here, the questions are modularized in that each question can be subdivided
into a question stem and options, both of which are subject to refinement through learnersourcing.
This has the effect of allowing modular participation from students, reducing their burden and
cognitive load. Furthermore, the questions are dynamic in that a given question stem and answer set
can be utilized to create multiple versions of varying quality and difficulty.
    Through this approach, we aim to reduce students' burden in question generation tasks by allowing
them to contribute at various levels and in various forms, ultimately facilitating engagement. We also
provide increased flexibility and variability in the question creation process, allowing for more
personalized and effective methods of self-testing for learning. Finally, by combining these two ideas,
we propose a framework in which learnersourcing tasks directly contribute to creating scalable learning
materials.

2. Dynamic Generation of MCQs
To dynamically construct a multiple choice question, we first divide it into smaller units. Each
question contains three components: (1) the question stem, (2) the answer set, and (3) the distractor
set. The structure is illustrated in Figure 1.

Figure 1: Structure of a dynamically generated MCQ. (A) shows the construction of the question stem
and (B) shows how options are determined from the distractor and answer sets.
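
To make this modular structure concrete, the following minimal Python sketch shows one possible
representation of a question; the class and field names are illustrative assumptions of ours, not an
implementation described in this paper.

from dataclasses import dataclass, field

@dataclass
class Option:
    """A single answer or distractor, paired with an explanation of why it is or is not correct."""
    text: str
    explanation: str

@dataclass
class ModularMCQ:
    """A modular MCQ: a question stem plus separate answer and distractor sets."""
    stem: str                        # e.g. "Which of the following is an instance of ...?"
    answers: list[Option] = field(default_factory=list)
    distractors: list[Option] = field(default_factory=list)

A single ModularMCQ instance could then hold one stem together with several crowdsourced answers and
distractors, and concrete MCQs would be assembled from these parts on demand, as described below.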

2.1.    Question Stem
Each MCQ is built on a question stem, the question part of the MCQ. Question stems often follow a set
of conventional formats that are representative of the answer type [4]. By varying the question stem,
the question-maker can also assess the same piece of knowledge through multiple different approaches.
    We propose that, given a concept that the question is trying to verify, there can be multiple
versions of the question stem. For example, if there is more than one element in the answer set, the
question stem may be modified into a multiple-answer question. Similarly, given an MCQ that asks for
the correct option, the answer and distractor sets could be swapped so that the question stem asks for
the incorrect option (e.g. "Which of the following is NOT an instance of...").
    This method of altering the question can be used to modify the quality and difficulty of a given
MCQ. For example, multiple-answer MCQs are known to be more challenging than typical MCQs and to have
greater pedagogical value, due to the expanded solution space and the reduced effectiveness of random
guessing [7]. Following such methods, questions could be personalized to the student to further
increase learning gains.
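
As an illustration, the sketch below assembles one variant of a question from a stem and its answer
and distractor sets; the negated flag models the "NOT" variant described above. The function name and
parameters are our own assumptions rather than Kuiz's actual interface.

import random

def build_variant(stem, answers, distractors, negated=False, n_options=4, rng=random):
    """Assemble one MCQ variant from a stem, an answer set, and a distractor set.

    If negated is True, answer and distractor roles are swapped so that the question asks for the
    incorrect option; the stem text itself would also be rephrased accordingly
    (e.g. "Which of the following is NOT an instance of ...").
    """
    keyed, fillers = (distractors, answers) if negated else (answers, distractors)
    correct = rng.choice(keyed)                                     # the keyed (to-be-selected) option
    others = rng.sample(fillers, k=min(n_options - 1, len(fillers)))
    options = [correct] + others
    rng.shuffle(options)                                            # avoid positional cues
    return stem, options, correct

Because the same stem and option sets can yield many such variants, with or without negation and with
different distractor subsets, repeated practice is less likely to reduce to memorizing a fixed question.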

2.2.    Answer and Distractor Sets
The answer set and distractor set refer to the collections of possible answers and non-answers to a
given question stem, respectively. Each answer or distractor should be paired with an explanation of
why it is or is not a correct answer to the given question stem. Since the quality of multiple choice
questions relies heavily on the quality of the options [8], we aim to provide a scalable way to
generate and evaluate options through learnersourcing.
     Each answer or distractor can be evaluated based on metrics such as the selection ratio: how many
times it has been chosen relative to how many times it has appeared. For distractors in particular, the
system can use this data to determine ‘effective distractors’. If a distractor is chosen many times in
lieu of the actual answer, this may suggest that the option is an effective distractor, swaying
students away from the true answer. Conversely, if a certain answer is not chosen often, it might be a
‘harder’ answer that is more difficult to guess. This approach can be used in tandem with the variable
question stems, since questions with effective distractors will be more difficult than questions whose
distractors are ‘obviously wrong’.
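
A minimal sketch of how such metrics could be computed from interaction logs is given below; the data
layout and the 0.25 cut-off are illustrative assumptions only, not values proposed in this paper.

def selection_ratio(times_chosen, times_shown):
    """Fraction of appearances in which an option was selected."""
    return times_chosen / times_shown if times_shown else 0.0

def effective_distractors(option_stats, threshold=0.25):
    """Return the distractors chosen often enough to count as 'effective'.

    option_stats maps distractor text -> (times_chosen, times_shown).
    """
    return [text for text, (chosen, shown) in option_stats.items()
            if selection_ratio(chosen, shown) >= threshold]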

3. Learnersourcing System Design: Kuiz
Kuiz is a system that utilizes the dynamic SGMCQ concept to promote efficient learnersourcing at a
larger scale. There are two main stages to the system: Question Creation and Self Testing.

3.1.    Question Creation and Refinement
In the question creation stage, students focus on creating questions, augmenting questions that others
have made through quantitative and qualitative feedback, and adding their own options. This process
will build the question stems as well as the answer and distractor sets to be used in future phases.
    We further modularize the question creation process by eliminating the need to create full
questions in the initial phase. Students may first create a question based on the given set of possible
question stem types (denoted in Figure 1 (A)). Without options, this functions as a simple short-answer
question. Students are then encouraged to submit their own answers or distractors, along with their
level of confidence. Here, there are two desired effects. First, the open-endedness of the question
format reduces the impact of guessing, encouraging students to think more deeply about the concept.
Second, even if a student submits a wrong answer, that answer can still contribute to the system as a
distractor. Thus, the system can encourage students to attempt an answer even if they are not very
confident in their knowledge.
    As the options accumulate, they can be grouped by similarity and evaluated by other students to
ensure correctness. Even if a student submitted a wrong answer in the first stage, the feedback process
can rectify this mistake and transfer the option to the distractor set instead. Finally, the system
constructs answer and distractor sets based on the collected options and can begin generating MCQs.
Students can continuously contribute to the generated questions by leaving feedback on the question
stem or options, as well as by creating new options.
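
One possible way to implement the grouping and correctness-evaluation steps is sketched below; the
string-similarity measure and the vote thresholds are placeholders chosen for illustration (a real
system might use semantic similarity and more careful aggregation).

from difflib import SequenceMatcher

def group_similar_options(option_texts, cutoff=0.8):
    """Greedily cluster near-duplicate student submissions by string similarity."""
    groups = []
    for text in option_texts:
        for group in groups:
            if SequenceMatcher(None, text.lower(), group[0].lower()).ratio() >= cutoff:
                group.append(text)
                break
        else:
            groups.append([text])           # no close match found: start a new group
    return groups

def classify_by_votes(group_votes, min_votes=3):
    """Split option groups into answers and distractors based on peer 'correct' votes.

    group_votes maps a representative option text -> (correct_votes, total_votes).
    """
    answers, distractors = [], []
    for text, (correct, total) in group_votes.items():
        if total < min_votes:
            continue                        # not enough feedback yet to decide
        (answers if correct / total >= 0.5 else distractors).append(text)
    return answers, distractors
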
3.2.    Self Testing
In the testing stage, the collected set of questions can be used to dynamically generate ‘test exams’.
Through these, students can evaluate their level of understanding of a subject. Here, dynamic MCQs can
be utilized to create non-identical variations of the same question. Thus, students will be less
affected by the learning effect of solving the same question repeatedly. This approach improves the
scalability of the testing process and allows students to achieve greater learning gains. Moreover, by
using multiple versions of the question stem, the system can adjust the difficulty of the same
underlying question to the level of each student.
    Finally, the testing stage can provide data such as answer rates, user ratings of questions, and so
forth. This can be further used to evaluate metrics such as question quality, question difficulty, and
even the quality of individual options, allowing further development of the question set.
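
For instance, the collected response data could be turned into simple difficulty estimates that drive
the assembly of personalized practice tests. The heuristic below is only a sketch of that idea; the
data layout and the selection rule are our own assumptions.

def question_difficulty(correct_responses, total_responses):
    """Estimate difficulty as the proportion of incorrect responses (0 = easy, 1 = hard)."""
    if total_responses == 0:
        return 0.5                          # no data yet: assume medium difficulty
    return 1.0 - correct_responses / total_responses

def select_practice_questions(question_stats, target_difficulty, n=10):
    """Pick the n questions whose estimated difficulty is closest to the target.

    question_stats maps question id -> (correct_responses, total_responses).
    """
    ranked = sorted(
        question_stats.items(),
        key=lambda kv: abs(question_difficulty(*kv[1]) - target_difficulty),
    )
    return [qid for qid, _ in ranked[:n]]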

4. Conclusion
In conclusion, our research introduces Kuiz, a learnersourcing system that addresses the challenges of
generating high-quality MCQs in education. By modularizing the question components and
incorporating dynamic variations, Kuiz allows students to contribute and refine questions
collaboratively, reducing their burden and promoting engagement. The system offers personalized
learning opportunities by modifying question stems and answer sets, catering to individual students'
needs and enhancing learning effectiveness. Moreover, Kuiz integrates feedback mechanisms and
evaluation metrics, ensuring quality control and improvement of the question set. Overall, Kuiz
represents a promising system that empowers students, improves question quality, and facilitates the
creation of scalable and engaging learning materials.

5. References
[1] Simon P Bates, Ross K Galloway, Jonathan Riise, and Danny Homer. 2014. Assessing the quality
    of a student-generated question repository. Physical Review Special Topics-Physics Education
    Research 10, 2 (2014), 020105.
[2] Paul Denny, John Hamer, Andrew Luxton-Reilly, and Helen Purchase. 2008. PeerWise: students
    sharing their multiple choice questions. In Proceedings of the fourth international workshop on
    computing education research. 51–58.
[3] Mercedes Douglas, Juliette Wilson, and Sean Ennis. 2012. Multiple-choice question tests: a
    convenient, flexible and effective learning tool? A case study. Innovations in Education and
    Teaching International 49, 2 (2012), 111–121.
[4] Thomas M Haladyna, Steven M Downing, and Michael C Rodriguez. 2002. A review of multiple-
    choice item-writing guidelines for classroom assessment. Applied Measurement in Education 15,
    3 (2002), 309–333.
[5] Vincent Hoogerheide, Justine Staal, Lydia Schaap, and Tamara van Gog. 2019. Effects of study
    intention and generating multiple choice questions on expository text retention. Learning and
    Instruction 60 (2019), 191–198.
[6] Ahmed Sayed Khashaba. 2020. Evaluation of the Effectiveness of Online Peer-Based Formative
    Assessments (PeerWise) to Enhance Student Learning in Physiology: A Systematic Review Using
    PRISMA Guidelines. International Journal of Research in Education and Science 6, 4 (2020), 613–
    628.
[7] Andrew Petersen, Michelle Craig, and Paul Denny. 2016. Employing multiple-answer multiple
    choice questions. In Proceedings of the 2016 ACM Conference on Innovation and Technology in
    Computer Science Education. 252–253.
[8] Michael C Rodriguez. 2005. Three options are optimal for multiple-choice items: A meta-analysis
    of 80 years of research. Educational Measurement: Issues and Practice 24, 2 (2005), 3–13.
[9] Anjali Singh, Christopher Brooks, Yiwen Lin, and Warren Li. 2021. What’s In It for the Learners?
    Evidence from a Randomized Field Experiment on Learnersourcing Questions in a MOOC. In
    Proceedings of the Eighth ACM Conference on Learning @ Scale. 221–233.