<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Anukarna: A Software Engineering Simulation Game for Teaching Practical Decision Making in Peer Code Review</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ritika Atal</string-name>
          <email>ritika13103@iiitd.ac.in</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
<string-name>Ashish Sureka</string-name>
          <email>ashish.sureka@in.abb.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Indraprastha Institute of Information Technology</institution>
          ,
          <addr-line>Delhi (IIIT-D)</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2015</year>
      </pub-date>
      <fpage>63</fpage>
      <lpage>70</lpage>
      <abstract>
<p>Application of educational and interactive simulation games to teach important concepts is an area that has attracted the attention of several Software Engineering researchers and educators. Previous research and studies on the usage of simulation games in the classroom to train students have demonstrated positive learning outcomes. Peer code review is a recommended best practice during software development which consists of systematically examining the source code of peers before releasing the software to Quality Assurance (QA). Practitioners and researchers have proposed several best practices on various aspects of peer code review, such as the size of the code to be reviewed (in terms of lines of code), the inspection rate and the usage of checklists. We describe a single-player educational simulation game to train students in the best practices of peer code review. We define learning objectives, create a scenario in which the player plays the role of a project manager, and design a scoring system (as a function of time, budget, quality and technical debt). We design a result screen showing the trace of events and the reasoning (learning through success and failure as well as discovery) behind the points awarded to the player. We survey the players by conducting a quiz before and after game play and demonstrate the effectiveness of our approach.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Keywords</title>
      <p>Peer Code Review, Simulation Game, Software Engineering Education and Training, Teaching Critical Decision Making</p>
    </sec>
    <sec id="sec-2">
      <title>I. RESEARCH MOTIVATION AND AIM</title>
      <p>
        Software Engineering (SE), being a practice-oriented and
applied field, is taught primarily (at the university level) using
instructor-driven classroom lectures as well as team-based
projects requiring hands-on skills. Compared to classroom
lectures, team-based hands-on projects require more active
participation and experiential learning. SE educators have
taught certain concepts using simulation games and have shown
positive student learning outcomes. Some of the advantages of
teaching with simulation games are the incorporation of real-world
dynamics such as critical decision making under multiple and
conflicting goals, encountering unexpected and unanticipated
events, allowing exploration of alternatives (discovery learning)
and allowing learning through doing and failure [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ][
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Peer code review consists
of reviewing and critiquing team members' source code in
order to detect defects and improve the quality of the software
[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ][
        <xref ref-type="bibr" rid="ref4">4</xref>
        ][
        <xref ref-type="bibr" rid="ref5">5</xref>
        ][
        <xref ref-type="bibr" rid="ref6">6</xref>
        ][
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Software code review is practiced in several
open-source and closed-source software project settings. There
are several best practices on various aspects of peer code
review such as the code review size, coverage and rate. Code
reviewer expertise, reviewer checklist and usage of tools (such
as mailing lists, Gerrit or Rietveld) also play an important role
in influencing the impact of code review on software quality
[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ][
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. The work presented in this paper is motivated by the
need to teach the importance and best practices of peer code
review to students using a simulation game. The research aims
of the work presented in this paper are the following:
      </p>
      <p>1) To develop a web-based interactive educational SE simulation game or environment for teaching the benefits and best practices of the peer code review process.</p>
      <p>2) To investigate a learning framework and model based on discovery learning, learning from failure, evidence and reasoning for teaching concepts on the practice of peer code review.</p>
      <p>3) To evaluate the proposed learning framework and tool by conducting experiments and collecting feedback from users.</p>
    </sec>
    <sec id="sec-3">
      <title>II. RELATED WORK &amp; RESEARCH CONTRIBUTIONS</title>
      <p>In this section, we discuss closely related work and state
our novel research contributions in the context of that work.
We conduct a literature survey of papers published on the topic
of teaching Software Engineering concepts using simulation
games. Table I shows the result of our literature review. Table
I lists 9 papers in reverse chronological order and reveals
that teaching Software Engineering concepts using simulation
games is an area that has attracted several researchers' attention
from the year 2000 until 2013. We characterize the 9 papers
by tool name, year, simulation topic, University
and interface. We infer that teaching software engineering
processes and project management are the two most popular
target areas for simulation games. The game interfaces vary
from simple command-line and menu-driven models to animated
and 3D interfaces. Researchers have also experimented with
board games in addition to computer-based games. In the context
of closely related work, the study presented in this paper makes
the following novel contributions:</p>
      <p>1) While there has been work in the area of teaching Software Engineering processes and project management skills, our work is the first in the area of building and investigating simulation games for teaching best practices for peer code review.</p>
      <p>2) We propose a simulation game to teach peer code review practices based on learning by failure, success and discovery. We demonstrate the effectiveness of our approach and discuss the strengths and limitations of our tool based on user experiments and the feedback collected.</p>
    </sec>
    <sec id="sec-4">
      <title>III. GAME ARCHITECTURE AND DESIGN</title>
      <sec id="sec-4-1">
        <title>A. Learning Objectives</title>
        <p>In this game we define 12 learning objectives covering
multiple aspects of peer code review. These learning objectives
are captured in our pre-game questions (http://bit.ly/1ddyitO)
and post-game questions (http://bit.ly/1GEdBBX). Table II shows
6 of the 12 learning objectives (due to space constraints), the
corresponding decisions and the questions that we ask the player
in the game. Table II shows the structure that we follow to
design the game questions. Each question is designed keeping in
mind the learning objectives of the game. Based on the learning
objectives we come up with a situation or a decision that best
tests that objective. An evaluation question is then formed around
this decision, which challenges the player's knowledge. We
therefore present different situations to the player where they
have to make decisions such as to whom to assign the task of the
review process (Figure 1), what inspection rate to choose (Figure
2), when to start the review, and what steps are required to foster
a good code review culture in the team. A decision can map to one
or more learning objectives. Similarly, two or more evaluation
questions may map to one decision.</p>
      </sec>
      <sec id="sec-4-2">
        <title>B. Unexpected Events</title>
        <p>We introduce unexpected events and unforeseen
circumstances (such as internal conflicts between team members,
attrition, and changes in deadlines or customer demands) into the
game to make it more realistic. Unexpected events cannot be
anticipated by the player, and our goal is to examine the response
and the decision-making ability of the player under unexpected
circumstances. Figure 3 shows a screenshot of the simulation
game in which the player is presented with an unexpected
situation. As shown in Figure 3, the project manager is
confronted with a situation wherein a developer quits the team
or organization just one month before the release date.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>1http://bit.ly/1ddyitO</title>
      <p>2http://bit.ly/1GEdBBX
situation. As shown in Figure 3, the project manager is
encountered with a situation wherein a developer quits the team
or organization just one month before the release date.
C. Scoring System</p>
      <p>1) Technical Debt: We apply the Technical Debt concept or
metaphor (http://martinfowler.com/bliki/TechnicalDebt.html) as
one of the elements of our scoring system. We calibrate our
scoring system such that incorrect decisions (quick and dirty
solutions) lead to accumulation of technical debt. Figure 4 shows
the technical debt score of various players (data collected during
our experimental evaluation of the simulation game) as the game
progresses from start to finish. Figure 4 reveals various
behaviours: we observe cases in which technical debt builds up
due to lack of knowledge and incorrect decisions, and on the
other hand we notice cases in which prudent decisions lead to
controlled technical debt. Each decision a player makes has a
certain weight or point value associated with it, varying from 1
to 5. We take the maximum value of this range, i.e. 5, as the
standard TD (Technical Debt) point against which all calculations
are made. For every decision taken by the player, we calculate
the deviation of the player's current decision point from the
standard TD point and find the average TD point value obtained so
far. We then calculate the equivalent percentage value of this TD
point relative to the standard TD point, which is the overall TD
incurred by the player so far in the game (refer to Equations 1,
2, 3 and 4).</p>
      <p>deviationValue = standardTD − decisionWeight (1)
sumTD = sumTD + deviationValue (2)
TDavg = sumTD / decisionCounter (3)
TD = (TDavg / standardTD) × 100 (4)</p>
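      <p>As a minimal sketch (our illustration, not the authors' implementation; names follow the equation symbols), the technical-debt meter of Equations 1 to 4 can be updated after every decision as follows:</p>
      <preformat>
# Sketch of the technical-debt meter (Equations 1-4).
# STANDARD_TD is the maximum decision weight; weights range from 1 to 5.
STANDARD_TD = 5

class TechnicalDebtMeter:
    def __init__(self):
        self.sum_td = 0.0
        self.decision_counter = 0

    def record_decision(self, decision_weight):
        """Return the overall TD percentage after one decision (weight in 1..5)."""
        deviation_value = STANDARD_TD - decision_weight   # Equation 1
        self.sum_td += deviation_value                    # Equation 2
        self.decision_counter += 1
        td_avg = self.sum_td / self.decision_counter      # Equation 3
        return (td_avg / STANDARD_TD) * 100               # Equation 4

meter = TechnicalDebtMeter()
print(meter.record_decision(2))  # quick-and-dirty decision: 60.0% debt
print(meter.record_decision(5))  # prudent decision pulls the average down: 30.0%
      </preformat>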
      <p>2) Time: Time is the most valuable resource in any project,
and time management is another key aspect of managing a project.
The player is therefore required to properly plan the project time
and associated decisions to meet the project deadline. Every
decision that a player takes in the game has a positive or
negative impact. The magnitude of the impact varies depending on
the choice or decision made by the player (as shown in Figure
5).</p>
      <p>[Table II (excerpt), learning objectives: the earlier a defect is found, the better, since the longer a defect remains in an artifact, the more embedded it becomes and the more it costs to fix; authors should annotate source code before the review begins; use of checklists substantially improves results for both authors and reviewers.]</p>
      <p>A series of decisions taken by a player either prevents them
from meeting the deadline or allows them to complete the project
successfully. All of the decisions have a path time associated
with them. This path time gets deducted from the remaining time
as the player proceeds further in the game (Equations 5 and
6).</p>
      <p>remainingTime = remainingTime − pathTime (5)
timeScore (days) = remainingTime (6)</p>
      <p>3) Budget: At the beginning of the game, the player is
allotted a budget of Rs 2 million to complete the project. The
scoring system that we have built tracks the player's utilisation
of this budget. If at any point the player has consumed the
entire budget, they can go no further in the game. Figure 6 shows
different player behaviours in terms of budget utilisation. The
budget remaining at the end of the game is the combined effect of
the cost consumed to perform reviews, developer recruitment,
buying tools, team incentives, etc. Refer to Equations 7 and 8 to
see how the path cost of a player's decision is used to obtain
the cost score.</p>
      <p>remainingCost = remainingCost − pathCost (7)
costScore (Rs) = remainingCost (8)</p>
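      <p>A minimal sketch (ours, not the authors' code; the 180-day deadline is an assumed illustrative value, while the Rs 2 million budget is stated in the game) of how the time and budget meters of Equations 5 to 8 can be maintained:</p>
      <preformat>
# Sketch of the time and budget meters (Equations 5-8).
remaining_time = 180          # assumed project deadline in days (illustrative)
remaining_cost = 2_000_000    # starting budget of Rs 2 million, as in the game

def apply_decision(path_time, path_cost):
    """Deduct the path time and path cost attached to the chosen decision."""
    global remaining_time, remaining_cost
    remaining_time -= path_time    # Equation 5
    remaining_cost -= path_cost    # Equation 7
    if remaining_cost > 0:
        # Equations 6 and 8: the scores are simply the remaining values.
        return remaining_time, remaining_cost
    raise RuntimeError("Budget exhausted: the player can go no further.")

time_score, cost_score = apply_decision(path_time=10, path_cost=150_000)
      </preformat>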
      <p>4) Quality: We incorporate the concept of software quality in
our scoring system to keep a check on the code review consistency
maintained by the player. The scoring system captures the quality
standards from the beginning of the project. We use defect
density (defects/kLOC) to measure the software quality maintained
during the game. Code bases with a defect density of 1.0 (or 1
defect for every 1,000 lines of code) are considered good quality
software [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. Figure 7 illustrates the varying quality standards for
different players as they progress through the game, starting
from an initial defect density of 145 defects/kLOC (mentioned at
the beginning of the game). Each path that the player takes has a
defect percentage (defect%) associated with it.</p>
      <p>[Table II (excerpt), evaluation questions: whether to forward the first 10% of developed code modules for review or send them directly for testing; which coding guidelines to issue so that novice developers learn standard code development practices and speed up the code review process; which inspection rate is most optimal, given that the review rate should minimize defect density while keeping the reviewer productive; what code size is preferable to review at a time; and what steps ensure that defects found during code reviews are fixed before the code is given the all-clear sign.]</p>
      <p>Depending on the player's decision, this defect% either
increases or decreases the software quality standards. Equations
9, 10 and 11 capture the project quality (projectQlty)
calculation.</p>
      <p>decisionQlty = defect% × projectQlty (9)
projectQlty = projectQlty ± decisionQlty (10)
qualityScore (defects/kLOC) = projectQlty (11)</p>
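      <p>A minimal sketch (ours; the interpretation of defect% as a fraction of the current project quality, and the sign of the update, follow the description above) of the quality meter in Equations 9 to 11:</p>
      <preformat>
# Sketch of the quality meter (Equations 9-11).
project_qlty = 145.0   # initial defect density in defects/kLOC, as in the game

def apply_quality_decision(defect_pct, improves_quality):
    """Apply the defect% attached to the chosen path to the quality meter."""
    global project_qlty
    decision_qlty = (defect_pct / 100.0) * project_qlty   # Equation 9
    if improves_quality:
        project_qlty -= decision_qlty   # Equation 10: defect density drops
    else:
        project_qlty += decision_qlty   # ... or rises for a poor decision
    return project_qlty                 # Equation 11: qualityScore (defects/kLOC)

apply_quality_decision(defect_pct=20, improves_quality=True)  # 145 -> 116 defects/kLOC
      </preformat>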
      <sec id="sec-6-1">
        <title>D. Game Tree</title>
        <p>
          Each game consists of a problem space, an initial state and
a single goal state (or a set of goal states). A problem space is
a mathematical abstraction in the form of a tree (refer to Figure
8) where the root represents the starting game state, nodes
represent states of the game (decisions in our case), edges
represent moves and leaves represent final states (marked as red
circles) [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. Figure 8 represents the game tree for our game. It has a
branching factor of 4 with a solution depth of 9. The time
complexity for its traversal is O(4^9).
        </p>
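        <p>As a quick sanity check of the stated complexity (a sketch, not part of the game itself):</p>
        <preformat>
# With branching factor b = 4 and solution depth d = 9, an exhaustive
# traversal of the game tree visits on the order of b**d leaf paths.
b, d = 4, 9
print(b ** d)   # 262144 distinct decision paths, i.e. O(4^9)
        </preformat>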
      </sec>
      <sec id="sec-6-2">
        <title>E. Final Scoring</title>
        <p>At the end of the game, each player gets a score reflecting
their overall performance. This score takes into account factors
such as the path taken, the remaining time, the budget consumed,
the project quality index and the technical debt accumulated at
the end of the game. The following steps depict the procedure to
compute the performance of each player during the execution of
the game, based on the decisions made by the player.</p>
        <p>1) Each decision a player takes has a weight (Wd) associated
with it, which varies from +1 to +5 (represented in Figure 8). As
the player proceeds in the game, the weight associated with each
decision keeps accumulating and is stored in Psum (refer to
Equation 12). The final value of Psum is then used to determine
whether a player followed a poor, optimal, good or excellent path
during the game.</p>
        <p>Psum = Σ Wd (out of 50) (12)</p>
        <p>2) Next we scale and obtain the equivalent values for cost
(Cf), time (Tf) and quality (Qf) remaining at the end of the
game. The project attributes sum (Pattr_sum) holds the average of
the equivalent values of these three project attributes (Equation
13) [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ].
        </p>
        <p>Pattr_sum = (Cf + Tf + Qf) / 3 (out of 50) (13)</p>
        <p>3) The score sum (Sf′) is obtained by adding the values
calculated in steps 1 and 2 (Equation 14).</p>
        <p>Sf′ (out of 100) = Psum + Pattr_sum (14)</p>
        <p>4) We then make use of TDavg (refer to Equation 3) to
calculate the final score (Sf) in Equation 15.</p>
        <p>Sf = Sf′ × (1 − TDavg / standardTD) (15)</p>
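        <p>Putting Equations 12 to 15 together, a minimal sketch of the final-score computation (our illustration; the inputs are hypothetical, and Cf, Tf and Qf are assumed to be already scaled out of 50):</p>
        <preformat>
# Sketch of the final score (Equations 12-15).
STANDARD_TD = 5

def final_score(decision_weights, cf, tf, qf):
    """decision_weights: the weights Wd (1..5) of the decisions on the path taken;
    cf, tf, qf: equivalent cost, time and quality values, each out of 50."""
    p_sum = sum(decision_weights)                  # Equation 12 (out of 50)
    p_attr_sum = (cf + tf + qf) / 3                # Equation 13 (out of 50)
    sf_prime = p_sum + p_attr_sum                  # Equation 14 (out of 100)
    td_avg = sum(STANDARD_TD - w for w in decision_weights) / len(decision_weights)
    return sf_prime * (1 - td_avg / STANDARD_TD)   # Equation 15

# Ten decisions of weight 4, with 40/50 on each project attribute:
print(final_score([4] * 10, cf=40, tf=40, qf=40))  # 64.0
        </preformat>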
      </sec>
      <sec id="sec-6-3">
        <title>F. Feedback and Analysis</title>
        <p>To meet the game's objective, it is necessary to ensure that
the player is learning throughout the course of the game. The
player is given feedback at each step of the game so they can
observe the effect of their decisions on the project. Every
decision of the player has a direct impact on the four parameters
of the scoring system, i.e. time, cost, quality and technical
debt, which is made visible by the presence of four dynamic
meters in the game screens (demonstrated in Figures 1, 2 and 3).
The values of the score meters change after each decision and
make the player aware of the consequences of their decision on
the available resources. In the end, the final values in the
score meters obtained by the player are compared with the ideal
values (refer to Figure 9). This helps students compare the
consumption of resources during their project and the quality
standards that they managed to maintain. Along with step-wise
feedback, we also provide a detailed analysis of the player's
performance at the end of the game. We show players the decisions
taken by them during the game and provide feedback in the form of
remarks. The remarks that a player gets help them discover the
correctness of their decisions. For example, if a player chooses
to recruit a full-time reviewer (refer to the screenshot in
Figure 1), they are reminded that there is a more experienced and
expert co-developer present in the team who can carry out this
task. If they select a very fast inspection rate, such as 900 or
more LOC/hour (Figure 2), they are told that at such a high
inspection rate the reviewer is effectively not looking at the
code at all. This in-depth analysis helps the player trace the
events and reasoning (learning through success and failure as
well as discovery) behind the points awarded to them (Figure 10
shows the remarks provided to the player).</p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>IV. EXPERIMENTAL ANALYSIS AND PERFORMANCE EVALUATION</title>
      <p>We conduct experiments to evaluate our proposed approach by
asking 17 students (10 undergraduate or Bachelors in Computer
Science and 7 graduate or Masters in Computer Science students)
to play the game. Tables III and IV present the list of questions
and their responses in the pre-game questionnaire
(http://bit.ly/1ddyitO).</p>
      <p>TABLE III. QUESTIONNAIRE RESULTS BEFORE THE GAME PLAY (flattened excerpt; correct answers marked [x]):
Q1. Peer code review is a staple of the software industry. Why? Reduces the number of delivered bugs [x]; Eliminates the need to perform testing; Keeps code maintainable [x]; Makes new hires productive quickly and safely [x].
Q2. Who amongst the following is most suitable for the job of peer code review? Any co-developer from the team; A full-time reviewer from outside the team, having full expertise in code review; Team's senior co-developer with required expertise and experience [x]; Developers should self-select the peer reviewer for their own code.
Q3. What is the most optimal inspection rate to carry out an effective code review? Reviewing 100-300 LOC per hour; Reviewing 300-600 LOC per hour [x]; Reviewing 600-900 LOC per hour; Reviewing 900 or more LOC per hour.
Q4. What is the role of a checklist in peer code review? It keeps track of whether reviews are consistently performed throughout your team; Omissions in code are the hardest defects to find and checklists combat this problem [x]; They perform mapping of user requirements to code; Reminds authors and reviewers about common errors and issues which need to be handled [x].
Q5. Which of the following is the preferable review code size? 100-200 LOC; …</p>
      <p>Fig. 11. Survey score values (in the pre-game and post-game surveys) of the students who took part in the game evaluation.</p>
      <p>The pre-game questionnaire consists of questions having one
correct answer (2, 3, 5, 7, 8, 9, 10 and 11) and questions having
more than one correct answer (1, 4 and 6). Table III displays the
questions, answer choices, correct answer(s) and the responses of
the 17 students. Table III reveals that 30% of the respondents
selected the correct choice for the most optimal inspection rate
for peer code review and only 18% selected the correct answer for
the optimal code review size. We asked such questions to test the
knowledge of the students and to investigate improvement after
playing our game. As shown in Table III, our pre-game
questionnaire consists of questions on various aspects of peer
code review.</p>
      <p>Table IV displays a set of inter-related questions, answer
choices (common to all questions) and the responses of the 17
students. We asked students these questions to test their
knowledge of existing review processes. The questions compare the
efficiency of review processes across different categories such
as the cost of using the process, the time taken to carry out the
review, review effectiveness and resource consumption. It can be
observed that 47.05% of students think that a tool-assisted
review process is more efficient in terms of cost, and only
29.42% are aware that manual review processes actually have a
lower investment cost. The general belief that tool-assisted
reviews are better than manual review processes in every aspect
is evident from Table IV, despite the fact that both review
processes find just as many bugs.</p>
      <p>As observed from Tables III and IV, there are a total of 11
questions in the survey questionnaire, which we made mandatory
for each student to answer. To obtain the score of the pre- and
post-game survey questions, we assign a weight of 1 mark to each
question. For every wrong response +0 marks are given, and for
each correct response selected +1/n marks are given (n is the
number of correct answers for that question).</p>
      <p>As mentioned above, questions in the survey have both single
and multiple correct answers; therefore the value of n varies
from 1 to 3 (n=1 for Q2, 3, 5, 7, 8, 9, 10, 11; n=2 for Q4; n=3
for Q1 and Q6). Using this scoring criterion, we made a bar graph
to see the improvement in score before and after playing the game
(refer to Figure 11). It can be seen that the mean score of
students in the pre-game survey is 4.44, which increases to 7.75
in the post-game survey. Thus, there is an average improvement of
74.54%, i.e. 3.31 in score. In terms of absolute score, a maximum
improvement of 5.67 (Student ID 10) and a minimum improvement of
1.5 (Student ID 2) can be seen.</p>
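      <p>A short sketch of the survey scoring rule described above (ours; question ids and choices are illustrative), together with a check of the reported mean improvement:</p>
      <preformat>
# Each question is worth 1 mark: every correct choice selected earns +1/n
# (n = number of correct answers for that question); wrong choices earn 0.
def survey_score(responses, answer_key):
    """responses and answer_key map question id to a set of choices."""
    score = 0.0
    for q, correct in answer_key.items():
        n = len(correct)
        selected = responses.get(q, set())
        score += sum(1 / n for choice in selected if choice in correct)
    return score

print(survey_score({1: {"a", "c"}}, {1: {"a", "c", "d"}}))  # 2/3 of Q1's mark

# Reported means: 4.44 (pre-game) to 7.75 (post-game) on the 11-mark survey.
print((7.75 - 4.44) / 4.44 * 100)   # about 74.5%, consistent with the reported 74.54%
      </preformat>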
      <p>Figure 12 is a bar graph representing how survey scores vary
for each question, before and after game play. The values in
Figure 12 are the per-question values that we obtained from the
responses of the 17 students who took part in the
evaluation.</p>
      <p>We follow the same scoring criteria that we use to draw
Figure 11. Figure 12 explores the improvement trend for each
question, before and after game play. As visible from the graph,
there are some questions which could be categorized as easy, as
more than 50% of students answered them correctly in the pre-game
survey (Q1, 3, 5 and 9). Similarly, there is another set of
questions which students found hard, and only 30% or fewer could
answer correctly (Q2, Q4, Q6, Q7 and Q8). A mean pre-game score
of 6.86 is observed for these questions, which rises to 11.98 in
the post-game survey.</p>
      <p>Thus an average improvement of 74.63% (5.12 in absolute
terms) is observed, which is similar to that observed in Figure
11.</p>
      <p>In absolute terms, a maximum improvement of 12 (Q7) and a
minimum improvement of 2 (Q9 and Q11) is observed.</p>
      <p>We perform the post-game survey once the students are done
playing the game. It is divided into two sections. The first
section contains all the questions specified in Tables III and
IV, to test students' learning after game play. The second
section requires the player to rate the game on different aspects
such as the learning done throughout the game, the score and
analysis provided at the end of the game, the motivation to try
again after a strategy or decision failure, and the introduction
of new concepts or practices which the student was not aware of.
Figure 13 presents a box plot showing the distribution of ratings
provided by the players in these categories. It can be seen that
learning done throughout the game has a median of 4, with most of
the rating distribution spread from 3.75 to 4 with 7 outliers (3,
5), as visible from Figure 13. Similar information can be
obtained about the other aspects from Figure 13.</p>
    </sec>
    <sec id="sec-8">
      <title>V. CONCLUSION</title>
      <p>We describe an educational and interactive software
engineering simulation game to teach important peer code review
concepts and best practices. We define 12 learning objectives
covering various aspects of peer code review and design our game,
as well as its scoring system, to teach these pre-defined
learning goals. We evaluate the effectiveness of our approach by
conducting experiments involving 17 undergraduate and graduate
students. We observe a variety of responses and behaviours across
players. We found that students lack basic knowledge of peer code
review and its standard industrial practices. Our experiments
reveal that there is a significant improvement in the knowledge
of the participants after playing the game. We observe that
students find a simulation game a more engaging and interesting
platform for learning.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>E. O.</given-names>
            <surname>Navarro</surname>
          </string-name>
          and
          <string-name>
            <surname>A. van der Hoek</surname>
          </string-name>
          , “
          <article-title>Simse: An educational simulation game for teaching the software engineering process</article-title>
          ,”
          <source>in Proceedings of the 9th Annual SIGCSE Conference on Innovation and Technology in Computer Science Education, ITiCSE '04</source>
          , pp.
          <fpage>233</fpage>
          -
          <lpage>233</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>N. bin</given-names>
            <surname>Ali</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Unterkalmsteiner</surname>
          </string-name>
          , “
          <article-title>Use of simulation for software process education: a case study</article-title>
          ,”
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Mittal</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Sureka</surname>
          </string-name>
          , “
          <article-title>Process mining software repositories from student projects in an undergraduate software engineering course</article-title>
          ,” in
          <source>Companion Proceedings of the 36th International Conference on Software Engineering, ICSE Companion</source>
          <year>2014</year>
          , (New York, NY, USA), pp.
          <fpage>344</fpage>
          -
          <lpage>353</lpage>
          , ACM,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>R.</given-names>
            <surname>Mishra</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Sureka</surname>
          </string-name>
          , “
          <article-title>Mining peer code review system for computing effort and contribution metrics for patch reviewers</article-title>
          ,” in
          <source>Mining Unstructured Data (MUD), 2014 IEEE 4th Workshop on</source>
          , pp.
          <fpage>11</fpage>
          -
          <lpage>15</lpage>
          ,
          <year>Sept 2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>P. C.</given-names>
            <surname>Rigby</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. M.</given-names>
            <surname>German</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Cowen</surname>
          </string-name>
          , and M.-A. Storey, “
          <article-title>Peer review on open-source software projects: Parameters, statistical models, and theory</article-title>
          ,”
          <source>ACM Trans. Softw. Eng. Methodol.</source>
          , vol.
          <volume>23</volume>
          , pp. 35:1-35:33, Sept.
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>McIntosh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kamei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Adams</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Hassan</surname>
          </string-name>
          , “
          <article-title>An empirical study of the impact of modern code review practices on software quality</article-title>
          ,”
          <source>Empirical Software Engineering</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>44</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Sripada</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Reddy</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Sureka</surname>
          </string-name>
          , “
          <article-title>In support of peer code review and inspection in an undergraduate software engineering course</article-title>
          ,” in
          <source>Software Engineering Education and Training (CSEET), 2015 IEEE 28th Conference on</source>
          , pp.
          <fpage>3</fpage>
          -
          <lpage>6</lpage>
          , May
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>R.</given-names>
            <surname>Mittermeir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bollin</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Wakounig</surname>
          </string-name>
          , “
          <article-title>Ameise: An interactive environment to acquire project-management experience</article-title>
          ,”
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>C.</given-names>
            <surname>Gresse von Wangenheim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Savi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A. F.</given-names>
            <surname>Borgatto</surname>
          </string-name>
          , “
          <article-title>Deliver! an educational game for teaching earned value management in computing courses</article-title>
          ,” vol.
          <volume>54</volume>
          , pp.
          <fpage>286</fpage>
          -
          <lpage>298</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>N.</given-names>
            <surname>Petalidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Gregoriadis</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Chronakis</surname>
          </string-name>
          , “
          <article-title>Promasi a project management simulator</article-title>
          ,”
          <source>in Proceedings of the 2011 15th Panhellenic Conference on Informatics, PCI '11</source>
          , pp.
          <fpage>33</fpage>
          -
          <lpage>37</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Jain</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Boehm</surname>
          </string-name>
          , “
          <article-title>Simvbse: Developing a game for value-based software engineering</article-title>
          ,”
          <source>in Proceedings of the 19th Conference on Software Engineering Education and Training</source>
          , pp.
          <fpage>103</fpage>
          -
          <lpage>114</lpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A.</given-names>
            <surname>Baker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. O.</given-names>
            <surname>Navarro</surname>
          </string-name>
          , and
          <string-name>
            <surname>A. van der Hoek</surname>
          </string-name>
          , “
          <article-title>An experimental card game for teaching software engineering processes</article-title>
          ,” in
          <source>The Journal of Systems and Software</source>
          , pp.
          <fpage>3</fpage>
          -
          <lpage>16</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>K.</given-names>
            <surname>Shaw</surname>
          </string-name>
          and J. Dermoudy, “
          <article-title>Engendering an empathy for software engineering</article-title>
          ,”
          <source>in Proceedings of the 7th Australasian Conference on Computing Education, ACE'05</source>
          , pp.
          <fpage>135</fpage>
          -
          <lpage>144</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>M. B. Alexandre Dantas</surname>
            and
            <given-names>C.</given-names>
          </string-name>
          <article-title>Wernere´, “A simulation-based game for project management experiential learning</article-title>
          ,
          <source>” in Proceedings of the 16th International Conference on Software Engineering and Knowledge Engineering</source>
          , SEKE'
          <volume>04</volume>
          , pp.
          <fpage>19</fpage>
          -
          <lpage>24</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A.</given-names>
            <surname>Drappa</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Ludewig</surname>
          </string-name>
          , “
          <article-title>Simulation in software engineering training</article-title>
          ,”
          <source>in Proceedings of the 22nd International Conference on Software Engineering, ICSE '00</source>
          , pp.
          <fpage>199</fpage>
          -
          <lpage>208</lpage>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16] “
          <article-title>Coverity scan: 2012 open source integrity report</article-title>
          ,”
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>R. B.</given-names>
            <surname>Myerson</surname>
          </string-name>
          ,
          <source>Game Theory</source>
          . Harvard University Press,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>K.</given-names>
            <surname>Jha</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>Iyer</surname>
          </string-name>
          , “
          <article-title>Commitment, coordination, competence and the iron triangle</article-title>
          ,”
          <source>International Journal of Project Management</source>
          , vol.
          <volume>25</volume>
          , no.
          <issue>5</issue>
          , pp.
          <fpage>527</fpage>
          -
          <lpage>540</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>