<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Using Mini-Projects to Teach Empirical Software Engineering</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Michael Felderer</string-name>
          <email>michael.felderer@uibk.ac.at</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marco Kuhrmann</string-name>
          <email>kuhrmann@acm.org</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Clausthal University of Technology</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Innsbruck</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <fpage>412</fpage>
      <lpage>421</lpage>
      <abstract>
<p>Empirical studies have become a central element of software engineering research and practice. Yet, teaching the instruments of empirical software engineering is challenging, since students need to understand the theory of the scientific method and also have to develop an understanding of the application of those instruments and of their benefits. In this paper, we present and evaluate an approach to teaching empirical software engineering with course-integrated mini-projects. In mini-projects, students conduct small empirical studies, e.g., surveys, literature reviews, controlled experiments, and data mining studies, in collaborating teams. We present the approach through two implementations at two universities, as a self-contained course on empirical software engineering and as part of an advanced software engineering course, with 101 graduate students in total. Our evaluation shows a positive learning experience and an improved understanding of the concepts taught. More than half of the students consider empirical studies helpful for their later careers. Finally, a qualitative coding and a statistical analysis showed the proposed approach to be beneficial, but also revealed challenges of the scientific work process, e.g., data collection activities whose effort was underestimated by the students.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>
Empirical software engineering aims at making
software engineering claims measurable, i.e., at analyzing
and understanding phenomena in software engineering,
at evaluating software engineering approaches and
solutions, and at grounding decision-making processes
in evidence. For this, an extensive portfolio of
instruments for empirical software engineering has been
developed. For instance, Wohlin et al. [
        <xref ref-type="bibr" rid="ref26">27</xref>
        ] provide a
collection of instruments, e.g., controlled experiments,
surveys and case studies, to be used for empirical
studies in software engineering. Kitchenham et al. [
        <xref ref-type="bibr" rid="ref14">13</xref>
        ]
extended these instruments by a detailed guideline for
conducting systematic reviews. For most of the basic
instruments used in empirical software engineering
today, extended and more detailed (pragmatic)
guidelines exist, such as for systematic reviews [
        <xref ref-type="bibr" rid="ref17 ref27">16, 28</xref>
        ],
systematic mapping studies [
        <xref ref-type="bibr" rid="ref22 ref23">23, 24</xref>
        ], multi-vocal
reviews [
        <xref ref-type="bibr" rid="ref11 ref12">10, 11</xref>
        ], or surveys [
        <xref ref-type="bibr" rid="ref13 ref20">12, 21</xref>
        ]. All these
instruments are meant to support researchers and
practitioners alike in conducting empirical studies and to
ground their work and decisions in evidence.
      </p>
      <p>
        Conducting empirical studies is challenging and
requires careful preparation and a disciplined work
approach. Quite often, students consider empirical
studies to be of little to no help when it comes to software
development and project work, since their relation to
actual development tasks is not obvious. Yet, many
of today’s applications rely on data, e.g., machine
learning systems like text and speech recognition, IoT
devices, and autonomous cars. Empirical methods
as such are about data analysis and, thus, provide
a suitable approach to teach data analysis—or data
engineering in general—which is a core competence
in data-intensive applications. Furthermore, modern
software development paradigms, such as DevOps
including continuous integration and deployment,
utilize data, e.g., to analyze a system’s performance, to
predict defects, and to make informed decisions in
the development process as practiced in continuous
experimentation [
        <xref ref-type="bibr" rid="ref5">5</xref>
]. Therefore, it is necessary for
teachers to open the students’ minds to a rigorous
and evidence-based work approach.
      </p>
      <p>In this paper, we present and evaluate the concept of
course-integrated mini-projects to teach empirical
software engineering instruments. Our approach helps
students learn how to conduct empirical studies and
understand the instruments and challenges coming
along with such studies. Collaborating project teams
conducting small empirical studies form the basis of
our approach. We implemented the approach in two
courses at the University of Southern Denmark (2016,
68 students) and the University of Innsbruck (2017, 33
students). Mini-projects allow students to learn
empirical instruments by practically applying them. Our
evaluation shows a positive learning experience and
an improved understanding of (empirical) software
engineering concepts. More than half of the students
perceive empirical studies as helpful for their later
careers. Our evaluation also shows that notably data
collection activities (e.g., for surveys and experiments)
are underestimated by the students. Our findings thus
lay the foundation for improving research-oriented
courses that require data and data analysis.</p>
      <p>The remainder of the paper is organized as follows:
Section 2 gives an overview of background and
related work. Section 3 describes the mini-project
approach, and Section 4 presents the approach’s
evaluation based on two implementations at the University
of Southern Denmark and the University of Innsbruck,
respectively. We conclude the paper and discuss future
work in Section 5.
</p>
    </sec>
    <sec id="sec-2">
      <title>2 Background and Related Work</title>
      <p>
        Using empirical studies in software engineering
education is not a new idea [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. However,
empirical studies—notably (controlled) experiments—are
mainly used as a tool to support research using
students as subjects, but have received little appreciation as a
teaching tool in software engineering in the first place.
That is, students only get in touch with empirical
studies as subjects in an empirical inquiry, and they
have to carry out tasks, e.g., in an experiment as for
instance reported in [
        <xref ref-type="bibr" rid="ref19 ref7 ref9">7, 8, 18, 20</xref>
        ]. Yet, teaching
empirical software engineering as a subject requires a
setup in which empirical studies are the main
subject or at least provide a significant contribution to
a course. In this regard, Wohlin [
        <xref ref-type="bibr" rid="ref25">26</xref>
        ] proposes three
levels for integrating empirical studies in software
engineering courses: (i) integration in software
engineering courses, (ii) as a separate course, and (iii)
as part of a research method course. Wohlin
mentions that introducing empirical software engineering
will provide more opportunities to conduct empirical
studies in student settings, but that educational and
research objectives need to be carefully balanced.
Dillon [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] comes to the same conclusion and considers
a successful observation of a phenomenon as part of
an empirical study not to be an end in itself. Students
need time to get familiar with ideas and concepts
associated with the phenomenon under observation.
Finally, Parker [
        <xref ref-type="bibr" rid="ref21">22</xref>
        ] considers experiments distinctive
and more participative. Students are likely to
remember lessons associated with experiments.
      </p>
      <p>
        In this paper, we present an approach that considers
empirical studies major subjects of a course and that
uses such studies as a teaching tool. Referring to
established learning models such as Bloom’s Taxonomy of
Learning [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] and Dale’s Cone of Learning [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], we aim
to include as many active learning parts as possible
in the courses. Still, we use passive learning methods
to transfer knowledge about theoretical basics, such
as methods and their application contexts.
Addressing the active learning levels, however, is challenging.
In “ordinary” software engineering education, project
courses are used to train software project work. For
empirical studies, it is required that the students carry
out actual research to practice the application of the
empirical instruments. In our previous work [
        <xref ref-type="bibr" rid="ref18">17–19</xref>
        ],
we presented different, self-contained classroom
experiments and developed a guideline to select the
best-fitting study type for a specific context [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. In [
        <xref ref-type="bibr" rid="ref15">14</xref>
        ], we
introduced a teaming model that helps implement
empirical studies in larger project courses.
      </p>
      <p>
        We contribute a generalized concept grounded in
[
        <xref ref-type="bibr" rid="ref15 ref6">6, 14</xref>
        ] that allows for including empirical studies as
course units. We implemented and evaluated our
approach in a course on empirical software
engineering and an advanced course on software engineering
and demonstrate how to implement and scale
course-integrated empirical studies.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3 Course-Integrated Mini-Projects</title>
      <p>
        We present the course-integrated mini-projects
approach in Section 3.1. The presentation includes the
description of the team setups, project and task
descriptions, and examples for which we present details
in Section 3.2. Section 3.3 demonstrates two
integration strategies: the first integration strategy is a
self-contained course on empirical software
engineering [
        <xref ref-type="bibr" rid="ref15">14</xref>
        ] and the second strategy is a topic-specific
part of an advanced software engineering course.
      </p>
      <sec id="sec-3-1">
        <title>3.1 Mini-Projects and Project Teams</title>
        <p>
          Figure 1 shows the general organization model for the
mini-project approach. A MiniProject has a Topic, a
Schedule, and optional Reference Literature and
Input Data. It is always carried out using at least one
Method, e.g., an experiment [
          <xref ref-type="bibr" rid="ref26">27</xref>
          ], a case study [
          <xref ref-type="bibr" rid="ref24">25</xref>
          ],
or a survey [
          <xref ref-type="bibr" rid="ref20">21</xref>
          ]. Finally, every mini-project consists
of a Project Team and an Advisory Team, which we
describe in more detail in the following.
        </p>
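The class model in Figure 1 can be mirrored directly in code. The following is a minimal Python sketch: the class and attribute names follow the figure, the multiplicities are noted in comments rather than enforced, and nothing here stems from the original course material.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Sketch of the organization model from Figure 1. Names follow the figure;
# multiplicities such as "1..*" are documented in comments, not enforced.

@dataclass
class Advisor:
    name: str
    is_external: bool = False  # teacher of the course vs. external advisor

@dataclass
class AdvisoryTeam:
    advisors: List[Advisor]  # 1..* advisors, usually one or two persons

@dataclass
class ProjectTeam:
    students: List[str]      # 2..* students per team
    kind: str = "practice"   # "practice", "method", or "service"

@dataclass
class MiniProject:
    topic: str                      # Topic [1]
    schedule: List[str]             # Schedule [1], e.g., a list of deadlines
    methods: List[str]              # Method [1..*], e.g., "survey"
    reference_literature: List[str] = field(default_factory=list)  # [*]
    input_data: List[str] = field(default_factory=list)            # [*]
    project_team: Optional[ProjectTeam] = None
    advisory_team: Optional[AdvisoryTeam] = None
```

A mini-project is thus fully described by its topic, schedule, methods, optional materials, and the two teams attached to it.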
        <p>[Figure 1: Organization model of the mini-project approach. A Mini Project has a Topic [1], a Schedule [1], one or more Methods [1..*], Reference Literature [*], and Input Data [*]. An Advisory Team (1..* Advisors, each a Teacher or an External Advisor) provides advice on request, and a Project Team (2..* Students; a Practice Team, Method Team, or Service Team) produces a Result, working in an isolated, joint, or shared research design.]</p>
        <p>Advisory Teams An advisory team bundles all
advisors involved in a specific mini-project—usually one
or two persons. An Advisor is either the teacher of
the course or an external advisor, such as an external
project topic sponsor [
          <xref ref-type="bibr" rid="ref15">14</xref>
          ]. Besides offering and
promoting project topics, advisors regularly interact with
the project teams, for which they handle individual
support requests, and they provide general technical
and methodical support.</p>
        <p>Project Teams A project team is composed of all students
working on a specific problem. Project teams can
interact with each other. We distinguish the three
interaction styles isolated, joint, and shared, which are
explained in Table 1. Furthermore, we distinguish
three types of project teams according to the type
of task they are working on: a Practice Team
performs an “active” task, e.g., a development task or a
research task. A Method Team deals with
methodological expertise, i.e., it develops competencies regarding
specific (scientific) methods and offers “consultancy
services” to practice teams in terms of applying a
specific method “right”. Finally, a Service Team develops
skills in more general topics, such as data analysis or
presentation, and offers respective “services” to other
teams—practice teams and method teams alike.</p>
        <p>Table 1. Interaction styles of project teams:</p>
        <p>Isolated: The “normal” way of doing a mini-project is
the isolated way of working. Isolated means
that a project team has a self-contained task
that can be worked on without any interaction
with other teams.</p>
        <p>Joint: This style is applied if project teams
collaboratively work on a joint (research) project. A
complex project is broken down into a
number of smaller projects. Project teams thus
have to be coordinated in terms of task
distribution, scheduling, and result synthesis.</p>
        <p>Shared: This style is applied if project teams
competitively work on the same (research) topic. Two
or more teams are assigned the same task;
team-specific methods can be varied, and
results can be compared. That is, this style helps
conducting controlled experiments or
implementing independently conducted studies.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2 Project and Task Descriptions</title>
        <p>Every mini-project is supposed to produce at least
one Result. In this section, we provide a blueprint
for a 1-page project- and task description, which also
illustrates the manifestation of the different attributes
of the class Mini Project (Figure 1).</p>
        <p>For every project, a description that includes tasks,
dates, and expected results is necessary. Table 2
provides a summary and a description of the task-description
items that we consider relevant.</p>
        <p>Table 2. Task-description items:</p>
        <p>Metadata: This section contains all information
relevant to a task, e.g., the hand-in date.</p>
        <p>Title: A project needs a telling title and an ID.</p>
        <p>Context: This section briefly describes the context of
the project and provides a short summary of the basic
tasks. Recommendation: the context section should be
treated as an abstract, such that it can be used as a
teaser and a small piece of information, e.g., in a
course management system.</p>
        <p>Work Description: The detailed work description
contains at least: 1. a detailed task list, 2. a list of
input/reference material, and 3. a list of deliverables to
be shipped. Note: The level of detail depends on the
actual task, i.e., for an “explorative” task, the
description needs to be more open, while a specific
development task requires a more detailed task description.</p>
        <p>Schedule: The basic schedule lists all deadlines and
the respectively expected results.</p>
        <p>Related Projects: Our concept allows for collaborative
and competitive work (Figure 1 and Table 1). If such a
collaborative/competitive work style is implemented,
this section provides the information about the other
teams. If method or service teams provide useful
services to a project, such teams are referred to here, too.</p>
        <p>Literature: This section lists selected reference
literature relevant to a project.</p>
        <p>
          The work description requires special attention, as it
comprises the detailed activity list, the input material,
and the description of the expected results. The
expected results are specified right here; alternatively, a
separate catalog of results has to be provided, e.g.,
including templates and mandatory/recommended
outlines. Relevant types of project outcomes are, e.g.,
(research) data (if students conduct a research task,
analyzed data that is necessary for the project
documentation must always be complemented with the
original raw data), essays or reports, presentations,
tutorials, and software. The second important item of
the task description is the list of related projects. For
instance, if a task is a collaborative task, this list refers
to all related projects that contribute to the overall
project goal. Furthermore, this list also refers to those
method and service teams that provide useful support,
e.g., if the project is concerned with developing and
conducting a survey, this list can refer to a method
team that focuses on the theoretical aspects of survey
research. Finally, the task description can also be used
to develop a checklist for the final submission. This
checklist helps students check whether their delivery
package is complete, and it helps teachers validate the
delivery and grade its components. Figure 2 shows
a practically used example of a task description as
described in Table 2. This task description is taken
from the course given at the University of Southern
Denmark and describes a survey research task, which
was performed collaboratively with two external
researchers [
          <xref ref-type="bibr" rid="ref10">9</xref>
          ]. This particular project was a
collaborative project: two teams (No. 15 and 16;
see the related-projects part) were assigned one task,
but had to conduct the survey with different target
groups. Each team submitted its own data and report,
but both teams gave only one joint presentation.
        </p>
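The submission checklist mentioned above lends itself to a simple completeness check. A minimal sketch, assuming the delivery package is a flat list of deliverable names; the required items below are illustrative placeholders, not the actual course checklist.

```python
# Sketch: validate a delivery package against a submission checklist.
# The checklist items are illustrative examples only.

REQUIRED_DELIVERABLES = ["report.pdf", "slides.pdf", "raw_data.csv"]

def missing_items(delivery_package, required=REQUIRED_DELIVERABLES):
    """Return the checklist items absent from a delivery package."""
    present = set(delivery_package)
    return [item for item in required if item not in present]

def is_complete(delivery_package, required=REQUIRED_DELIVERABLES):
    """True if the delivery package contains every required item."""
    return not missing_items(delivery_package, required)
```

Students can run the check before handing in; teachers can reuse the same list when validating and grading the delivery.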
      </sec>
      <sec id="sec-3-3">
        <title>3.3 Course-Integration Strategies</title>
        <p>We describe two implementation strategies for the
presented approach using two graduate courses. For
these implementations, we provide a short summary
of the respective courses and their learning goals, and
we provide an overview of the projects implemented
in the respective courses (Table 3). Furthermore, this
section lays the foundation for the approach’s
evaluation, which is presented in Section 4.</p>
      </sec>
      <sec id="sec-3-5">
        <title>3.3.1 Implementation as a Self-Contained Empirical Software Engineering Course</title>
        <p>A course on the Scientific Method (SCM, University
of Southern Denmark, SDU, 2016) implemented the
presented approach as a self-contained master-level
course. Figure 3 illustrates the overall organization of
the SCM course showing the introduction parts and
the active learning/project parts.</p>
        <p>[Table 3: Study types covered by the projects include theory (tutorial), experiment, survey, systematic review, mapping study, simulation, and data mining study.]</p>
        <p>
          The goal of the SCM course was to teach empirical
software engineering as the main subject by letting the
students perform small studies themselves [
          <xref ref-type="bibr" rid="ref15">14</xref>
          ]. Specifically, the major learning goals of the SCM course were
defined as follows:</p>
        <p>After the course, students (i) know the basic
terminology and the key concepts of the scientific method,
(ii) know and understand the most important empirical
research methods, and (iii) have shown their ability
to practically apply one research method and to conduct
and report on a small research project.</p>
        <p>In total, 30 research topics were presented to the
students. According to their preferences, students could
apply for up to three topics, which built the
foundation for the final team setup. Finally, 20 projects were
started with two to three students each. Besides the
theoretical topics, five research methods were covered
by the projects (Table 3).</p>
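One way to turn such topic applications into a team setup is a greedy, preference-based assignment. The following sketch illustrates one possible procedure under the constraints described above (up to three wishes per student, small teams per topic); it is an assumption for illustration, not the procedure actually used in the course.

```python
# Sketch: greedy assignment of students to topics based on ranked
# preferences. Illustrative only; the actual team building in the
# course may have differed (e.g., manual balancing by the teacher).

def assign_topics(preferences, max_team_size=3):
    """preferences: dict mapping student -> ordered list of topic wishes.

    Returns a dict mapping topic -> list of assigned students; students
    whose wishes are all full end up under the key None for manual
    placement. Minimum team sizes are not enforced here.
    """
    teams = {}
    for student, wishes in preferences.items():
        placed = False
        for topic in wishes:
            if len(teams.get(topic, [])) < max_team_size:
                teams.setdefault(topic, []).append(student)
                placed = True
                break
        if not placed:
            teams.setdefault(None, []).append(student)
    return teams
```

Running the sketch on a handful of students shows the greedy behavior: once a topic is full, later applicants fall through to their next wish.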
        <p>
          The SCM course implements the concept from
Figure 1 as follows: method teams became theory teams
for those students who did not want to carry out
a study, but wanted to learn more details about a
specific method or technique, e.g., the systematic
review method [
          <xref ref-type="bibr" rid="ref14">13</xref>
          ]. Service teams became cross-cutting
teams that built up a specific expertise and consulted
theory and practice teams. The 20 teams were
connected with each other, e.g., a theory team supported
one or many practice teams, and both were supported
by cross-cutting teams. The teacher supervised the
individual teams as well as the groups of collaborating
teams. The teams were formed right in the first
session of the course and, thus, the projects became the
main subjects to build the learning experience upon.
        </p>
      </sec>
      <sec id="sec-3-7">
        <title>3.3.2 Integration in an Advanced Software Engineering Course</title>
        <p>A course on Advanced Software Engineering (ASE,
University of Innsbruck, UI, 2017) implemented the
presented approach as part of a master-level course on
software engineering in which empirical studies
complemented the (technical) software engineering topics.
These technical software engineering topics were
organized around the concept of models in software
engineering and covered software process models
(including agile process models), modeling languages
including UML and DSLs, model transformations as
well as predictive models, e.g., for defect prediction.
Figure 4 illustrates the overall organization of the ASE
course showing the introduction parts and the active
project parts.</p>
        <p>The overall goal of this course was to teach students
advanced topics in software engineering and to let
students experience the value that empirical
studies have in supporting software engineering
activities. Specifically, the major learning goals of the ASE
course were defined as follows:</p>
        <p>After the course, students (i) know and understand
different advanced software engineering concepts,
(ii) know the basic empirical research methods and know
how to utilize empirical studies in the different software
engineering activities, and (iii) have shown their ability
to practically apply one research method to a specific
software engineering activity and to conduct and
report on a small research project.</p>
        <p>In total, eight topics were proposed to the students,
and students could apply for the topics. Finally,
13 projects were started with two to three students
each. The projects covered four research methods and
six topics (Table 3).</p>
        <p>[Figure 4: Organization of the ASE course. Technical software engineering lectures (overview of software engineering, empirical methods in software engineering, software process models, modeling languages, model transformations, predictive models) are complemented by topic-specific empirical studies and tutorials. The project track comprises an overview of empirical methods in software engineering (experiments, surveys, data mining, secondary studies), the presentation of project topics, topic selection and team building, initial project feedback, weekly standups on project progress, final project presentations, final project feedback, and the project evaluation.]</p>
        <p>The concept from Figure 1 was implemented as
follows: the 13 project teams were formed as practice
teams. Since the empirical studies were designed to
complement selected topics of the ASE course, no
explicit method teams or service teams were formed.
That is, all practice teams worked on specific topics
and developed the technical skills and parts of the
required methodological skills themselves. Additional
methodological skills were delivered to the teams by
the teachers and guest lecturers, who also acted as
advisors. Teams were formed when the mini-projects
were assigned to the students.
</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4 Evaluation</title>
      <p>In this section, we present the research design and
evaluation strategy in Section 4.1, the results and a
discussion in Section 4.2, and threats to validity in
Section 4.3.</p>
      <p>[Table 4: Research questions. Do course-integrated
mini-projects help students better understand the role
of empirical studies? We aim to study the general
attitude towards empirical studies, i.e., whether students
change their attitude once they have actively conducted
an empirical study. For this, we investigated two detailed
questions: Do mini-projects change the attitude towards
empirical studies? Do course-integrated empirical studies
help in understanding challenges (revealing
misconceptions)? What are the perceived pros and cons of the
mini-project approach? We aim to study the
dis/advantages perceived by students who participated in
mini-projects.]</p>
      <sec id="sec-4-2">
        <title>4.1 Research Design and Evaluation Strategy</title>
        <p>We evaluated our approach in two master-level
courses at two universities and by surveying the
participating students. In this section, we present our
research questions, outline the survey instrument, and
describe the data collection strategies.</p>
        <p>Research Questions To evaluate the proposed
mini-project approach, we aim at answering the research
questions listed in Table 4. Our three top-level
research questions address the (general) learning
experience, the usefulness of the approach in terms of
improving the understanding of the role of empirical
studies, and the perception concerning the pros and
cons of the approach presented.</p>
        <p>
          Data Collection To collect the data, we (initially)
developed two online questionnaires for the SCM
course based on Google Forms [
          <xref ref-type="bibr" rid="ref15">14</xref>
          ]. The first
questionnaire was used in a mid-term evaluation; the second
(extended) questionnaire was used for the final
evaluation. While preparing the ASE course, we revised
both questionnaires and conducted the first data
collection before we started the mini-projects in the ASE
course, and the second data collection, again, in the
course’s final evaluation, when the mini-projects had
been finished. All four questionnaires, including a
summary, are available online (https://kuhrmann.files.wordpress.com/2018/11/appendix-draft.pdf).
        </p>
        <p>Our questionnaire design allows for a two-stage data
collection that helps observe changing student
perceptions and evaluate the courses over time. The
questionnaires share a number of questions that allow
for comparing the courses and the implementations of our
approach, and for qualitatively analyzing the student
feedback. As a lesson learned from the SCM data collection,
the two ASE questionnaires put more emphasis on the
single phases of the scientific workflow, e.g., by
specifically asking for challenges and difficulties regarding
the design of research instruments and conducting
the data collection. Different from the questionnaires
used in the SCM course, in the ASE questionnaires
students were asked to provide nicknames (to preserve
anonymity), such that individual students could be
tracked to evaluate specific ratings and their
development over time.</p>
        <p>Analysis Procedures All four questionnaires
produce quantitative and qualitative data. For the
quantitative analysis, we primarily use descriptive statistics
to analyze the four measurements individually, over
time per course, and for analyzing both courses.
Furthermore, due to the questionnaire’s evolution, for
the ASE course we could conduct additional
inferential statistical analyses, e.g., hypothesis testing and
correlation analysis.</p>
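The descriptive part of such an analysis can be done with the standard library alone; the inferential analyses mentioned above (hypothesis tests, correlation analyses) would typically use a dedicated statistics package. A minimal sketch for summarizing one Likert-type item, assuming a 5-point agreement coding (1 = fully disagree … 5 = fully agree); the coding is an assumption, not taken from the questionnaires.

```python
import statistics
from collections import Counter

# Sketch: descriptive statistics for one questionnaire item measured on
# an assumed 5-point agreement scale.

def describe_item(ratings):
    """Return sample size, mean, median, and per-scale-point counts."""
    return {
        "n": len(ratings),
        "mean": statistics.mean(ratings),
        "median": statistics.median(ratings),
        "counts": dict(Counter(ratings)),
    }
```

Applying this per item, per measurement point, and per course yields the individual, over-time, and cross-course views described above.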
        <p>To qualitatively analyze the data, we used the
free-text answers provided by the students. For the ASE
course, an analysis of general learning and learning
outcomes was performed using the questions for the
expected learning outcomes, and the questions about
the learning regarding the mini-project topics and
the way of conducting empirical research (Appendix;
variables MP6–MP8). An overall analysis of the course
as such (in both courses) was performed using the
questions for the courses’ appropriateness, the lectures
and exercises, and the perceived relation to practice
(Appendix; variables GC2–GC4; interpreted as school
grades). The coding of the feedback into categories
was jointly performed by the two authors.</p>
        <p>
          Validity Procedures To mitigate threats and to
ensure the validity of the instrument, we reused a
questionnaire design that was already applied to other
courses and that received an external quality
assurance [
          <xref ref-type="bibr" rid="ref16">15, 18</xref>
          ]. The original questionnaire design was
extended by specific questions to evaluate the
suitability of the mini-project approach.
        </p>
        <p>Demographics In total, 68 students were enrolled
in the SCM course, of which 39 students participated
in the initial evaluation and 38 in the final evaluation.
In the ASE course, 33 students were enrolled, and 29
students participated in both evaluations. From the
29 ASE-students, 27 provided a nickname that was
used in the subject-based analyses. Table 5 gives an
overview of the general course evaluation. Students
of both courses perceived the course complexity, speed
and volume as moderate and saw a good relation of
the course to practice.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.2 Results and Discussion</title>
        <p>This section presents the findings of the evaluation of
our proposed course-integrated mini-project approach.
The following sections present the findings according
to the research questions described in Table 4.</p>
      </sec>
      <sec id="sec-4-4">
        <title>4.2.1 RQ 1: Improved Work Approach</title>
        <p>With RQ1, we aim to study if course-integrated
mini-projects help students understand the value of a
structured scientific work approach. To better structure
the findings, we defined three sub-questions (Table 4),
which we discuss in the following.</p>
        <p>RQ 1.1: Support for a better learning This
sub-question is addressed by the answers to the
statement: “The mini-projects improve the learning
experience”, which was quantitatively analyzed.</p>
        <p>Figure 5 shows the results (taken from the final
evaluation) for both courses and shows that 95% of
the SCM students consider the mini-project approach
to contribute to an improved perceived learning
experience (3% each rate the teaching format neutral or
less effective than other teaching formats). For the
ASE course, 59% consider the mini-project approach
more effective, and 21% each rate it neutral or less
effective. In summary, mini-projects contribute to an
improved perceived learning experience, especially in
the SCM setting, but also in ASE, where mini-projects
were only one part of the course.
RQ 1.2: Support for a better understanding of
concepts A key to providing value to the students is to
make software engineering concepts better/easier to
understand. For this, students were asked to rate the
statement: “The mini-projects helped me understanding
concepts better”, i.e., whether or not the understanding
of concepts of interest has been improved. In this
context, an investigation of the role of empirical studies is
provided in Sect. 4.2.2. Again, the majority of the
students (92% for the SCM course and 69% for the ASE
course; Figure 6) considers the mini-project approach
advantageous for gaining a better understanding of
software engineering concepts.</p>
        <p>RQ 1.3: Perceived Learning The question of
perceived learning is answered using five statements
(Appendix, MP2:4–MP2:8). In particular, we were
interested in the perceived impact on the students’ later
careers (“The practiced scientific work approach will help
me in my later career.”), in the shareable expertise built
in the course (“I built a specific exercise that I could
share with other teams.”), and in a retrospective rating
of the group work in the mini-projects (looking back:
“contributed to my learning experience”, “team work
[..] was good” and “collaboration [..] was good”).</p>
        <p>Figure 7 shows the aggregated results for the
perceived learnings. Approximately 63% of the SCM
students and 59% of the ASE students think that the
courses provide take-aways that will have a positive
impact on their later careers. Concerning the
shareable expertise, 53% of the SCM students state that
they have obtained knowledge and expertise that they
can share with others; 29% are indifferent. In the ASE
course, even though the course has more “practical”
elements, only 41% of the students think that they
built a shareable expertise, but 48% are indifferent.</p>
        <p>Concerning the general perceived learning
experience and the teamwork within the project team, the
vast majority of the students rate the courses as good
and very good. However, the cross-team collaboration
shows a different picture—notably in the SCM course
in which interdisciplinary work was enforced by the
course design. In the SCM course, 29% of the students
considered the cross-team collaboration good to very
good, but 55% rated the cross-team collaboration bad
to very bad. Analyzing this phenomenon, we found
that the necessity for the different teams to interact
with each other to obtain required knowledge from
other teams, while also trying to keep their own
schedules, was the most disappointing aspect. On the
other hand, we found an “understanding” for this kind
of work, which reflects reality in interdisciplinary
collaboration and, thus, students eventually considered
this a significant learning. Considering the ASE course,
we wanted to learn whether a similar behavior can be
observed. As Figure 7 shows, cross-team collaboration is still
considered a problem, even though the heterogeneity of
the project teams and thus the need to collaborate
was reduced.</p>
        <p>An in-depth analysis of the perceived learnings of
mini-projects was performed by qualitatively coding
the responses of the free-form text questions (GC1:
“What was your major take-home asset [..]?”, MP7:
“What did you learn about the topic of your research
project?” and MP8: “What did you learn about
empirical research?”). For GC1, 26 students from the SCM
course provided feedback. In the ASE course, 27
students provided feedback for MP7 and MP8. In total,
we extracted 38 statements from the SCM course and
60 statements from the ASE course (for both
questions). The students’ statements were categorized
based on keywords, and the threshold for building a
category was set to three references.</p>
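        <p>The keyword-based categorization with its three-reference threshold can be sketched as follows. This is a minimal illustration with hypothetical keywords; the actual statements and keyword assignments are not reproduced in the paper:</p>

```python
from collections import Counter

# Hypothetical keyword assignments, one per extracted statement.
# (The students' actual statements are not published in the paper.)
statement_keywords = [
    "surveys", "data collection", "research questions", "surveys",
    "teamwork", "data collection", "surveys", "latex",
]

counts = Counter(statement_keywords)
# A category is only formed if a keyword reaches three references.
categories = {kw: n for kw, n in counts.items() if n >= 3}
print(categories)  # {'surveys': 3}
```

<p>Keywords below the threshold would either be merged into broader categories or dropped; the paper does not detail this step, so the sketch keeps it simple.</p>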
        <p>Table 6 provides the condensed qualitative
feedback on the perceived learnings in eight categories:
topic specific (i.e., mini-project topics), empirical
methods (e.g., experiments), reporting findings (from
studies), importance and meaning (of empirical research),
data management (in studies), effort (to plan/conduct a
study), technical skills, and soft skills.
Summarized, topic-specific learnings (e.g., the application
of DSLs or comments in programming languages) as
well as learnings related to the application of
empirical methods (e.g., the formulation of research questions
or the application of specific empirical methods like
surveys) were frequently mentioned. Also, data
management (i.e., the collection, preparation, and analysis
of data) and the reporting of results of empirical studies are
highlighted. Students experienced that conducting an
empirical study causes effort and might be a complex
endeavor (“It is hard to get results, which have a strong
meaning”). On the other hand, students also built an
understanding of the importance and the meaning of
empirical research in software engineering (“The topic
exists and is very useful if done correctly”). Finally,
students hardly report learnings regarding technical skills
(e.g., using LaTeX or R). However, manifold learnings
about soft skills (e.g., reviewing techniques) are
reported, notably concerning teamwork and cross-team
collaboration (see also Figure 7).</p>
      </sec>
      <sec id="sec-4-5">
        <title>4.2.2 RQ 2: Improved Understanding</title>
        <p>This research question aims at investigating whether
students built an understanding of the role of empirical
studies, specifically, whether they consider empirical studies
a valuable instrument that complements the technical
software engineering activities in a beneficial way.
RQ 2.1: Changed Attitude towards Empirical
Studies To learn about the students’ understanding of
the value of empirical studies, we asked the students
whether their view on empirical studies has changed
once they actively conducted an empirical study
themselves (“I like the mini-project part”).</p>
        <p>Figure 8 shows that 84% of the SCM students
changed their view on science and the value of
empirical studies after conducting an empirical study
themselves. The ASE course provides a different
picture: still, 52% of the students changed their view, but
almost 38% did not change their view on science and
empirical studies.
RQ 2.2: Challenges of Empirical Studies Besides
the general perception of science and empirical
studies, we are also interested in specific challenges. For
this, we revised the questionnaire (see Appendix) and
added questions that explicitly address the scientific
work process.</p>
        <p>Table 7 lists the activities of the scientific work
process together with the test statistics: Definition of
research questions (V = 80); Working with scientific
literature (V = 55.5); Design of research instruments
(V = 68); Implementation of instrument (V = 43);
Data collection (V = 135); Performing data analysis
(V = 107); Writing the report (V = 106).</p>
        <p>To study the perceived challenges students face
when implementing empirical studies, we asked the
students to rate the different parts of the scientific
work process (Appendix, variables MP5:1–MP5:7; see
also Table 7). Furthermore, 27 students enrolled in
the ASE course provided a nickname, which we used
to investigate challenges over time by evaluating the
initial and the final questionnaires. We performed
a Wilcoxon signed-rank test to check whether there are
significant differences between the initial perception
of the scientific work process and the final one after
the study has been performed. Table 7 shows the test
results for the various parts of the work process.</p>
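        <p>The test statistic V reported in Table 7 can be computed as in the following sketch (standard-library Python only; this is an illustrative implementation, not the authors’ analysis script, and the per-student rating data is not published):</p>

```python
def wilcoxon_signed_rank_V(initial, final):
    """Wilcoxon signed-rank statistic V: the sum of the ranks of the
    positive differences (final - initial). Zero differences are
    dropped and tied absolute differences receive average ranks.
    (Sketch only; the p-value computation is omitted.)"""
    diffs = [f - i for i, f in zip(initial, final) if f != i]
    # Rank the absolute differences, averaging the ranks of ties.
    order = sorted(range(len(diffs)), key=lambda k: abs(diffs[k]))
    ranks = [0.0] * len(diffs)
    j = 0
    while j != len(diffs):
        k = j
        while k + 1 != len(diffs) and abs(diffs[order[k + 1]]) == abs(diffs[order[j]]):
            k += 1
        avg = (j + k) / 2 + 1  # average of the 1-based ranks j+1 .. k+1
        for m in range(j, k + 1):
            ranks[order[m]] = avg
        j = k + 1
    # V: sum of the ranks belonging to positive differences.
    return sum(r for d, r in zip(diffs, ranks) if d > 0)
```

<p>For each activity in Table 7, such a function would be applied to the paired initial and final challenge ratings of the 27 students who provided a nickname.</p>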
        <p>The results show that only for the activity data
collection there is a significant difference (p &lt; 0.05).
This indicates that the perceived difficulties and challenges
regarding the data collection changed for the
participating students. The Spearman rank correlation
coefficient for the initial and final feedback on the level of
challenges regarding the data collection is low (ρ =
0.2775), which further indicates that there is only a
weak positive correlation between the data
collection challenges perceived initially and the
challenges perceived at the end. Figure 9 shows the
uncorrelated perceived challenges regarding the different
activities in the scientific work process (collected in
the ASE course; initial and final questionnaire).</p>
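        <p>The Spearman rank correlation used above can likewise be sketched with average ranks for ties (a stdlib-only illustration, under the assumption that the 5-point ratings are coded numerically; not the authors’ analysis script):</p>

```python
def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation computed on the
    ranks of the observations, with average ranks assigned to ties."""
    def avg_ranks(v):
        order = sorted(range(len(v)), key=lambda k: v[k])
        r = [0.0] * len(v)
        i = 0
        while i != len(v):
            j = i
            while j + 1 != len(v) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average of the 1-based ranks i+1 .. j+1
            for m in range(i, j + 1):
                r[order[m]] = avg
            i = j + 1
        return r

    rx, ry = avg_ranks(x), avg_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

<p>Applied to the students’ paired initial and final ratings for the data collection activity, this computation corresponds to the reported ρ = 0.2775.</p>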
        <p>We conclude that the data collection is an activity
in empirical studies for which challenges can be easily
over- and underestimated. When teaching empirical
software engineering, it is therefore important to put
special emphasis on the important role of data
collection and its challenges to prevent students from
over- or underestimating the required effort.</p>
      </sec>
      <sec id="sec-4-6">
        <title>4.2.3 RQ 3: Perceived Dis-/Advantages</title>
        <p>The third research question aims to investigate the
perceived advantages (“pros”) and disadvantages
(“cons”) of the mini-project approach. For this, we
qualitatively analyzed the students’ feedback by
coding the responses of the free-form text questions (GC2:
“Up to 5 things that were good” and GC3: “Up to 5
things that were bad”).</p>
        <p>In the SCM course, data for GC2 and GC3 was
collected in the initial and the final questionnaire. For the
ASE course, data was collected in the final
evaluation only. Hence, for the data analysis presented in
the paper at hand, we only consider the data
collected in both final evaluations. In the SCM course, 29
students provided comments in the final evaluation.
Respectively, 21 ASE students provided feedback on
perceived dis-/advantages. In total, we extracted 72
pro- and 42 con-statements from the SCM feedback,
and we extracted 54 pro- and 23 con-statements from
the ASE feedback. Both feedback sets were
categorized and analyzed based on keywords (qualitative
coding), whereas the threshold for a category was set
to three mentions. Table 8 provides the aggregated
qualitative feedback on the perceived pros and cons of
the mini-project approach in nine categories grouped
by SCM, ASE, and in total.</p>
        <p>General Perception In summary, the mini-project
approach was perceived as very positive. For both courses,
SCM and ASE, the students identified considerably
more pros than cons regarding the mini-projects. In both
settings, i.e., SCM (a self-contained empirical software
engineering course) and ASE (part of a software
engineering course), the obtained research-related
and technical skills were perceived as very positive. The
topics covered in the courses were positively
evaluated as well; especially in the ASE course in which
mini-projects were integrated as a part of a software
engineering course and focused on current (hot)
topics in software engineering. Feedback and knowledge
transfer were especially highlighted and considered
positive in the SCM course with its different types of
collaborating teams (dedicated practice, method and
service teams), in-depth introductions to empirical
methods, and introductions and exercises in scientific
reading and writing. Also, group work was generally
considered positive, yet, the ASE course with its more
uniform teams was perceived more advantageous. As
already discussed in Sect. 4.2.2, the setting from the
SCM course suffers from the teams’ heterogeneity and
the necessity to establish a cross-team collaboration,
which caused more effort for the project teams.
Guest Lectures In both courses, guest lectures were
given by external researchers. Considering the
outcomes from Table 8, guest lectures in the ASE course
were considered positive, whereas the guest lectures
in the SCM course received a more indifferent rating
(with a slight tendency towards a negative evaluation).
We argue that this perception is related to the
selection of the speakers rather than the course setting, yet,
this remains subject to further investigation.
Course Organization From the organizational
perspective, the courses were perceived indifferent and a
relatively high number of positive and negative
comments was provided. On the positive side, students
highlighted the organization and structure in general
(“Organization was good”), and, in particular, also
the course assessment mode (“Fair grading system”)
and the sequence of activities (“The mini-projects were
structured well—good planning of when to do the work,
when to make a presentation and when to turn in the
paper”). On the negative side, the overall flow was
mentioned (“Things started to slow down way too much
after the first 5 lectures”) as well as the speed of the
lecture (“A little slow lectures”) and the general
scheduling and coordination of the lecture with other
obligations (“During examination time, time overlap with
studying”).</p>
        <p>Effort Finally, the effort caused by the courses was
perceived negative in both settings. Feedback for the
SCM course says “The volume does not fit well a 5 ECTS
point course” and, respectively, for the ASE course
“Sometimes a single task was too big”. This feedback
reflects a side-effect of project work that typically causes
more effort than closed tasks—or even “listen-only”
classes. However, such comments motivate a revision
of the projects, e.g., splitting tasks into smaller units
to better support continuous work and to reduce the
perceived effort in a short time frame, but still keep
the learning effect of performing empirical research,
as one SCM comment emphasizes: “in my
opinion learning the scientific method without working
with the methods it’s only knowing about the methods,
not learning them.”</p>
      </sec>
      <sec id="sec-4-7">
        <title>4.3 Threats to Validity</title>
        <p>We discuss issues that may have threatened the
construct, internal, and external validity, as well as
the measures taken to mitigate them.</p>
        <p>
          Construct Validity The construct validity might be
threatened by the two different implementations of
the approach presented and the instrument used for
its evaluation. Although the two courses differed with
respect to the applied integration strategy for
mini-projects, for both courses, we used the same yet
tailored questionnaire and combined the responses. To
increase construct validity, we developed the
questionnaire from an external source [
          <xref ref-type="bibr" rid="ref16">15</xref>
          ], which both
authors reviewed and evolved. Furthermore, we
collected and combined quantitative and qualitative data
to answer and discuss the different research questions.
        </p>
        <p>Due to the overall setup, the questionnaires differed
between the courses. Hence, some analyses like the
individual-based investigation of challenges over time
(RQ 2.2) were possible in the ASE course only.
Furthermore, analyses regarding perceived learnings of
the students had to be performed using different
questions (SCM: GC1, ASE: MP7 and MP8; see Appendix),
which was handled using a multi-staged coding
process that resulted in common categories.</p>
        <p>Internal Validity The internal validity might be
threatened by the rather low number of participants
and the participants’ self-reporting, which both might
affect the relationship between course-integrated
mini-projects and the investigated effects on learning and
understanding of concepts and the role of empirical
studies in software engineering. To mitigate these
threats, we studied the integration of mini-projects
in two settings with an acceptable number of
participants (SCM: 39 and ASE: 29). We also introduced the
questionnaire to the students to minimize the risk of
misinterpretation.</p>
        <p>To triangulate the results of quantitative analysis
and to investigate the relationship between
course-integrated mini-projects and their effects holistically,
we also applied qualitative analysis to analyze the
responses of the participating students.</p>
        <p>External Validity The external validity might be
threatened by the issue of the rather low number of
settings in which the course was performed and
evaluated. However, we implemented and evaluated the
course for each of the two course integration strategies
proposed in Section 3.3, i.e., self-contained
empirical software engineering course and integration in
software engineering course. Two researchers from
two different institutions were involved in preparing,
conducting and evaluating the courses. Furthermore,
we combined quantitative and qualitative data to get
a broader view on teaching empirical software
engineering with course-integrated mini-projects.</p>
        <p>The participants’ self-reporting might also affect the
generalizability of the results. To mitigate this threat,
we introduced the questionnaire to the students to
minimize the risk of misinterpretation. Since the
paper at hand is an initial study, we consider further
in-depth analyses, e.g., of course artifacts like final
reports, and replications of the courses as well as the
transfer of the approach presented and its evaluation
as future work.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5 Conclusion</title>
      <p>In this paper we presented and evaluated an approach
to teach empirical software engineering with
course-integrated mini-projects. In such mini-projects,
students conduct small empirical studies in
collaborating teams within a software engineering course or
a research methods course. We illustrated the
approach by presenting two integration strategies: first
a self-contained course on empirical software
engineering given at the University of Southern Denmark
and second as part of an advanced software
engineering course given at the University of Innsbruck.
Both implementations were complemented by a study
that showed a positive learning experience and an
improved understanding of (empirical) software
engineering concepts.</p>
      <p>More than half of the participating students state
as a perceived learning of the course that empirical
studies are helpful for their later careers (systematic
and evidence-driven work). In addition, a statistical
analysis revealed that especially the data collection
activities are underestimated by the students, which
allows for future improvements of university courses.
We consider this aspect critical not only for empirical
software engineering in particular, but also due to the
increased importance of Data Science, Machine
Learning and Artificial Intelligence courses or even programs
in general. In all these settings, effective and efficient
data collection and preparation for further analysis
is essential. Hence, we encourage other teachers to
put special emphasis on all data-related activities, not
only the data analysis.</p>
      <p>In summary, students provided an overall positive
qualitative feedback on the course-integrated
mini-projects as well as on the skills achieved and their
relevance. This motivates to further explore and
disseminate the presented approach. However, on the
downside, the students pointed out the overall flow of
the courses and the perceived high effort to perform
mini-projects, which requires a further refinement of
the courses, e.g., by splitting it into smaller tasks of
uniform granularity. In the future, we will therefore revise
our approach accordingly and further disseminate it.
We also plan to perform further in-depth analyses of
course artifacts, e.g., the students’ final reports, as
well as replications of the courses.</p>
      <p>Finally, this paper presents an approach that has
been implemented twice and it also provides initial
data. To get further insights and to improve the data
available, we cordially invite other teachers to adapt
and integrate our approach into their courses and to
share their experiences.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>L. W.</given-names>
            <surname>Anderson</surname>
          </string-name>
          and
          <string-name>
            <given-names>D. R.</given-names>
            <surname>Krathwohl</surname>
          </string-name>
          , editors.
          <article-title>A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives, Abridged Edition</article-title>
          .
          <source>Pearson, 1st edition</source>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>V.</given-names>
            <surname>Basili</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Selby</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Hutchens</surname>
          </string-name>
          .
          <article-title>Experimentation in software engineering</article-title>
          .
          <source>Trans. on Software Engineering</source>
          ,
          <volume>12</volume>
          (
          <issue>7</issue>
          ):
          <fpage>733</fpage>
          -
          <lpage>743</lpage>
          ,
          <year>1986</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>E.</given-names>
            <surname>Dale</surname>
          </string-name>
          .
          <article-title>Audiovisual methods in teaching</article-title>
          .
          <source>Dryden Press, 3 edition</source>
          ,
          <year>1969</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J.</given-names>
            <surname>Dillon</surname>
          </string-name>
          .
          <article-title>A Review of the Research on Practical Work in School Science</article-title>
          .
          <source>Technical report, King's College</source>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>F.</given-names>
            <surname>Fagerholm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Guinea</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Mäenpää</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Münch</surname>
          </string-name>
          .
          <article-title>Building blocks for continuous experimentation</article-title>
          .
          <source>In Proceedings of the 1st International Workshop on Rapid Continuous Software Engineering, RCoSE</source>
          <year>2014</year>
          , pages
          <fpage>26</fpage>
          -
          <lpage>35</lpage>
          , New York, NY, USA,
          <year>2014</year>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>F.</given-names>
            <surname>Fagerholm</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kuhrmann</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Münch</surname>
          </string-name>
          .
          <article-title>Guidelines for using empirical studies in software engineering education</article-title>
          .
          <source>PeerJ Computer Science</source>
          ,
          <volume>3</volume>
          (
          <issue>e131</issue>
          ),
          <year>September 2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>D.</given-names>
            <surname>Fucci</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Turhan</surname>
          </string-name>
          .
          <article-title>A replicated experiment on the effectiveness of test-first development</article-title>
          .
          <source>In International Symposium on Empirical Software Engineering and Measurement</source>
          , pages
          <fpage>103</fpage>
          -
          <lpage>112</lpage>
          . IEEE, Oct 2013.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>D.</given-names>
            <surname>Fucci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Turhan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Oivo</surname>
          </string-name>
          .
          <article-title>Impact of process conformance on the effects of test-driven development</article-title>
          .
          <source>In International Symposium on Empirical Software Engineering and Measurement</source>
          , pages
          <fpage>10:1</fpage>
          -
          <lpage>10:10</lpage>
          . ACM,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>V.</given-names>
            <surname>Garousi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Felderer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kuhrmann</surname>
          </string-name>
          , and
          <string-name>
            <given-names>K.</given-names>
            <surname>Herkiloğlu</surname>
          </string-name>
          .
          <article-title>What industry wants from academia in software testing? Hearing practitioners' opinions</article-title>
          .
          <source>In Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering, EASE'17</source>
          , pages
          <fpage>65</fpage>
          -
          <lpage>69</lpage>
          , New York, NY, USA,
          <year>2017</year>
          . ACM.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>V.</given-names>
            <surname>Garousi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Felderer</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M. V.</given-names>
            <surname>Mäntylä</surname>
          </string-name>
          .
          <article-title>The need for multivocal literature reviews in software engineering: complementing systematic literature reviews with grey literature</article-title>
          .
          <source>In Proceedings of the 20th International Conference on Evaluation and Assessment in Software Engineering, page 26. ACM</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>V.</given-names>
            <surname>Garousi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Felderer</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M. V.</given-names>
            <surname>Mäntylä</surname>
          </string-name>
          .
          <article-title>Guidelines for including grey literature and conducting multivocal literature reviews in software engineering</article-title>
          .
          <source>Information and Software Technology</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kasunic</surname>
          </string-name>
          .
          <article-title>Designing an effective survey</article-title>
          .
          <source>Technical report</source>
          , Carnegie Mellon University Pittsburgh Software Engineering Institute,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>B. A.</given-names>
            <surname>Kitchenham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Budgen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Brereton</surname>
          </string-name>
          .
          <article-title>Evidence-Based Software Engineering and Systematic Reviews</article-title>
          . CRC Press,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kuhrmann</surname>
          </string-name>
          .
          <article-title>Teaching empirical software engineering using expert teams</article-title>
          .
          <source>In SEUH</source>
          , pages
          <fpage>20</fpage>
          -
          <lpage>31</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kuhrmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Fernández</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Münch</surname>
          </string-name>
          .
          <article-title>Teaching software process modeling</article-title>
          .
          <source>In International Conference on Software Engineering</source>
          , pages
          <fpage>1138</fpage>
          -
          <lpage>1147</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kuhrmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Méndez Fernández</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Daneva</surname>
          </string-name>
          .
          <article-title>On the pragmatic design of literature studies in software engineering: an experience-based guideline</article-title>
          .
          <source>Empirical Software Engineering</source>
          ,
          <volume>22</volume>
          (
          <issue>6</issue>
          ):
          <fpage>2852</fpage>
          -
          <lpage>2891</lpage>
          ,
          <year>Dec 2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kuhrmann</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Münch</surname>
          </string-name>
          .
          <article-title>Distributed software development with one hand tied behind the back: A course unit to experience the role of communication in GSD</article-title>
          .
          <source>In 1st Workshop on Global Software Engineering Education (in conjunction with ICGSE '16)</source>
          . IEEE,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>K.</given-names>
            <surname>Labunets</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Janes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Felderer</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F.</given-names>
            <surname>Massacci</surname>
          </string-name>
          .
          <article-title>Teaching predictive modeling to junior software engineers: seminar format and its evaluation (poster)</article-title>
          .
          <source>In Proceedings of the 39th International Conference on Software Engineering Companion</source>
          , pages
          <fpage>339</fpage>
          -
          <lpage>340</lpage>
          . IEEE Press,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>J.</given-names>
            <surname>Linåker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Sulaman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. M.</given-names>
            <surname>de Mello</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Höst</surname>
          </string-name>
          .
          <article-title>Guidelines for conducting surveys in software engineering</article-title>
          .
          <source>Technical report</source>
          , Lund University,
          <year>January 2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>J.</given-names>
            <surname>Parker</surname>
          </string-name>
          .
          <article-title>Using laboratory experiments to teach introductory economics</article-title>
          . Working paper, Reed College, http://academic.reed.edu/economics/parker/ExpBook95.pdf, accessed
          <year>2014</year>
          -10-23.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>K.</given-names>
            <surname>Petersen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Feldt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mujtaba</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Mattsson</surname>
          </string-name>
          .
          <article-title>Systematic mapping studies in software engineering</article-title>
          .
          <source>In International Conference on Evaluation and Assessment in Software Engineering</source>
          , pages
          <fpage>68</fpage>
          -
          <lpage>77</lpage>
          . ACM,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>K.</given-names>
            <surname>Petersen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Vakkalanka</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Kuzniarz</surname>
          </string-name>
          .
          <article-title>Guidelines for conducting systematic mapping studies in software engineering: An update</article-title>
          .
          <source>Inf. Softw. Technol.</source>
          ,
          <volume>64</volume>
          :
          <fpage>1</fpage>
          -
          <lpage>18</lpage>
          ,
          <year>August 2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>P.</given-names>
            <surname>Runeson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Höst</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rainer</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>Regnell</surname>
          </string-name>
          .
          <source>Case Study Research in Software Engineering: Guidelines and Examples</source>
          . John Wiley &amp; Sons,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>C.</given-names>
            <surname>Wohlin</surname>
          </string-name>
          .
          <article-title>Empirical software engineering: Teaching methods and conducting studies</article-title>
          .
          <source>In Proceedings of the International Workshop on Empirical Software Engineering Issues: Critical Assessment and Future Directions</source>
          , volume
          <volume>4336</volume>
          <source>of LNCS</source>
          , pages
          <fpage>135</fpage>
          -
          <lpage>142</lpage>
          . Springer,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>C.</given-names>
            <surname>Wohlin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Runeson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Höst</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. C.</given-names>
            <surname>Ohlsson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Regnell</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Wesslén</surname>
          </string-name>
          .
          <source>Experimentation in Software Engineering</source>
          . Springer Science &amp; Business Media,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Babar</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Tell</surname>
          </string-name>
          .
          <article-title>Identifying relevant studies in software engineering</article-title>
          .
          <source>Information and Software Technology</source>
          ,
          <volume>53</volume>
          (
          <issue>6</issue>
          ):
          <fpage>625</fpage>
          -
          <lpage>637</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>