<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A Decision Support Platform for Guiding a Bug Triager for Resolver Recommendation Using Textual and Non-Textual Features</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ashish Sureka</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Himanshu Kumar Singh</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Manjunath Bagewadi</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Abhishek Mitra</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rohit Karanth</string-name>
        </contrib>
        <aff>Siemens Corporate Research and Technology, India</aff>
      </contrib-group>
      <pub-date>
        <year>2015</year>
      </pub-date>
      <fpage>23</fpage>
      <lpage>30</lpage>
      <abstract>
        <p>It is widely believed among researchers that software engineering methods and techniques based on mining software repositories (MSR) can provide a sound, empirical basis for software engineering tasks. However, the main hurdles to adoption of these techniques are organizational or people-centric: lack of access to data, organizational inertia, a general lack of faith in results achieved without human intervention, and a tendency of experts to feel that their inability to arrive at optimal decisions is rooted in someone else's shortcomings, in this case the person who files the bug. We share our experiences in developing a use case for applying such methods to the common software engineering task of bug triaging within an industrial setup. We complement the well-researched technique of using the textual content of bug reports with additional measures designed to improve the acceptance and effectiveness of the system. Specifically, we present: A) the use of non-textual features to factor in the decision-making process that a human would follow; B) effectiveness metrics that provide a basis for comparing the results of the automated system against the existing practice of relying on human decision making; and C) the reasoning or justification behind the results, so that human experts can validate and accept them. We present these non-textual features and some of the metrics, and discuss how they can address the adoption concerns for this specific use case.</p>
      </abstract>
      <kwd-group>
        <kwd>Bug Fixer Recommendation</kwd>
        <kwd>Bug Triaging</kwd>
        <kwd>Issue Tracking System</kwd>
        <kwd>Machine Learning</kwd>
        <kwd>Mining Software Repositories</kwd>
        <kwd>Software Analytics</kwd>
        <kwd>Software Maintenance</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. PROBLEM DEFINITION AND AIM</title>
      <p>Bug Resolver Recommendation, Bug Assignment or
Triaging consists of determining the fixer or resolver of an issue
reported to the Issue Tracking System (ITS). Bug assignment
is an important activity in both the OSS (Open Source Software)
and CSS/PSS (Closed or Proprietary Source Software) domains,
as assignment accuracy has an impact on the mean time to
repair and the project team effort incurred. Bug resolver
assignment is non-trivial in a large and complex software setting,
especially with globally distributed teams, where several
bugs may be reported on a daily or weekly basis, increasing the
burden on the triagers. One of the primary ways resolvers
are identified for open bug reports is through
a decision by a Change Control Board (CCB), whose members
represent various aspects of the software project such as
project management, development, testing and quality control.
The CCB usually works as a collective decision-making body
that reviews incoming bugs and decides whom to assign each one to,
whether more information is required, whether a bug is irrelevant,
or whether its behavior needs to be observed further. The
decisions made by the CCB are knowledge-intensive,
requiring prior knowledge of the software system, the
expertise of the developers, team structure and composition, and
developer workload. In some instances, to optimize
the time of the entire CCB, a pre-CCB review is conducted by
individual members of the board on an assigned subset of bugs,
and the individual recommendations are reviewed by the full
CCB. The average time to triage a bug in such a process can
be captured as follows:</p>
      <p>t = T<sub>pre-CCB</sub>/M + T<sub>CCB</sub>/N</p>
      <p>
        Here, t denotes the average time it takes to triage a bug,
given M committee members each taking T<sub>pre-CCB</sub> time
individually to assess their subset of bugs and T<sub>CCB</sub> time together
to discuss and finalize recommendations for N bugs. From the
above, it is clear that any method that reduces
any of M, T<sub>CCB</sub> or T<sub>pre-CCB</sub> has the potential to increase
overall efficiency. Research shows that manual assignment of
bug reports to resolvers, without any support from an expert
system, results in several incorrect assignments [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ][
        <xref ref-type="bibr" rid="ref2">2</xref>
        ][
        <xref ref-type="bibr" rid="ref3">3</xref>
        ][
        <xref ref-type="bibr" rid="ref4">4</xref>
        ][
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
Incorrect assignment is undesirable and inefficient, as it delays
bug resolution due to reassignments. While there have been
recent advancements in solutions for automatic bug
assignment, the problem is still not fully solved [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ][
        <xref ref-type="bibr" rid="ref2">2</xref>
        ][
        <xref ref-type="bibr" rid="ref3">3</xref>
        ][
        <xref ref-type="bibr" rid="ref4">4</xref>
        ][
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
Furthermore, the majority of studies on automatic bug assignment
are conducted on OSS data, and there is a lack of empirical
studies on PSS/CSS data. In addition to the lack of studies on
industrial or commercial project data, the application of
non-textual features such as developer workload, experience and
collaboration network to the task of automatic bug assignment
is relatively unexplored. The work presented in this paper is
motivated by the need to develop a decision support system for
bug resolver recommendation based on the needs of triagers.
The specific aims of the work presented in this paper are:
1) To build a decision support system for guiding and
assisting triagers in the task of automatic bug assignment;
this involves applying textual features (terms in bug reports)
to build a classification model.
2) To use non-textual features (components, developer
workload, experience, collaboration network, process
map) to contextualize the model.
3) To provide insights into bug fixing efficiency, defect
proneness and trends in time-to-repair through visual
analytics and a dashboard.
4) To build the system in a user-centric manner by providing
the justification and reasoning behind the recommended
assignments.
      </p>
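      <p>As a concrete illustration, the average triage time expression above can be evaluated directly. The following minimal sketch uses hypothetical member and bug counts, not data from our projects:</p>

```python
def average_triage_time(t_pre_ccb: float, t_ccb: float, m: int, n: int) -> float:
    """Average time to triage one bug: pre-CCB assessment time divided
    across M members, plus joint CCB discussion time divided across N bugs."""
    return t_pre_ccb / m + t_ccb / n

# Hypothetical numbers: 5 members, 60 min of pre-CCB review each,
# and 90 min of joint CCB time for 30 bugs.
print(average_triage_time(60.0, 90.0, m=5, n=30))  # 15.0 minutes per bug
```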
      <p>The rest of the paper is structured as follows: in Section II we
discuss and argue that a user-centric approach to building such
recommendation systems incorporates the elements necessary
to address the above goals. Next, we discuss some of the
contextualization measures for the model, specifically the use of
practitioner survey results and a process map. Section IV
describes some of the metrics and measures that accompany
the system and how they can be used. Section V presents early
results from applying the system to two sets of data obtained
from actual industrial projects, one active for 2 years and the
other for 9 years.</p>
    </sec>
    <sec id="sec-2">
      <title>II. USER CENTERED DESIGN AND SOLUTION ARCHITECTURE</title>
      <p>We create a User-Centered Design considering the
objectives and workflow of CCB. Our main motivation is to ensure a
high degree of usability and hence we give extensive attention
to the needs of our users. Figure 1 shows a high-level overview
of the four features incorporated in our bug assignment decision
support system. We display the Top K recommendations (K is
a parameter which can be configured by the administrator),
which is the primary goal of the recommender system. In
addition to the Top K recommendations, we present the justification
and reasoning behind the proposed recommendation.</p>
      <p>We believe that displaying justification is important as the
decision maker needs to understand the rule or logic behind the
inferences made by the expert system. We display the textual
similarity or term overlap and component similarity between
the incoming bug report and the recommended bug report as
justification to the end-user. We show the developer collaboration
network as one of the outputs of the recommendation system.
The node size in the collaboration network represents the
number of bugs resolved, edge distance or thickness represents
the strength of collaboration (number of bugs co-resolved),
and the node color represents role. As shown in Figure 1,
we display the developer workload and experience to the
Triager as complementary information assisting the user to
make triaging decisions. Figure 1 illustrates all four factors
influencing triaging decisions (Top K Recommendation,
Justification and Reasoning, Collaboration Network and Developer
Workload and Experience) which connects with the results
of our survey and interaction with members of the CCB in
our organization. Figure 2 shows the high-level architecture
illustrating key components of the decision support system.
We adopt a platform-based approach so that our system can
be customized across various projects using project based
customization and fine-tuning. The architecture consists of
a multi-step processing pipeline from data extraction (from
the issue tracking system) as back-end layer to display as
the front-end layer. As shown in Figure 2, we implement
adaptors to extract data from Issue Tracking System (ITS)
used by the project teams and save into a MySQL database.
We create our own schema to save the data in our database
and implement functionality to refresh the data based on a
pre-defined interval or triggered by the user. Bug reports
consist of free-form text fields such as title and description.
We apply a series of text pre-processing steps on the bug
report title and description before they are used for model
building. We remove non-content-bearing terms (so-called
stop terms, such as articles and prepositions) and apply word
stemming using the Porter Stemmer (term normalization). We
create a domain specific Exclude List to remove terms which
are non-discriminatory (for example, common domain terms
like bug, defect, reproduce, actual, expected and behavior).
We create an Include List to avoid splitting of phrases into
separate terms such as OpenGL Graphics Library, SQL Server
and Multi Core. We first apply the Include List and extract
important phrases and then apply the domain specific exclude
list. Include and Exclude Lists are customizable from the User
Interface by the domain expert. The terms extracted from
the title and description of the bug reports represent
discriminatory features for the task of automatic bug assignment
(based on the hypothesis that there is a correlation between
the terms and the resolver). The next step in the processing
pipeline is to train a predictive model based on a machine
learning framework. We use Weka, a widely used
Java-based machine learning toolkit, for model building
and application. We embed Weka within our system and invoke
its functionality using the Java API. We train Random
Forest and Naive Bayes classification models and use a voting
mechanism to compute the classification score of the ensemble,
rather than individual scores, to make the final predictions. We
also extract the component of the bug report as a categorical
feature as we observe a correlation between the component
and the resolver.</p>
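      <p>A minimal sketch of the pre-processing order described above: the Include List is applied first to protect phrases, then stop terms and the domain-specific Exclude List are removed, then terms are normalized. The list contents and the simple suffix-stripping stemmer below are illustrative stand-ins for the project-specific lists and the actual Porter Stemmer:</p>

```python
# Illustrative stand-ins for the project-specific lists.
INCLUDE = {"sql server": "sql_server", "multi core": "multi_core"}  # phrases kept whole
EXCLUDE = {"bug", "defect", "reproduce", "actual", "expected", "behavior"}
STOP = {"the", "a", "an", "in", "on", "is", "and", "of", "to", "while"}

def toy_stem(term: str) -> str:
    """Illustrative suffix-stripper standing in for the Porter Stemmer."""
    for suffix in ("ing", "es", "s"):
        if term.endswith(suffix) and len(term) > len(suffix) + 2:
            return term[: -len(suffix)]
    return term

def preprocess(text: str) -> list:
    text = text.lower()
    for phrase, token in INCLUDE.items():  # apply the Include List first
        text = text.replace(phrase, token)
    terms = [t for t in text.split() if t not in STOP and t not in EXCLUDE]
    return [toy_stem(t) for t in terms]    # term normalisation

print(preprocess("Crashing bug in SQL Server while indexing tables"))
```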
      <p>In terms of implementation, we create an
Attribute-Relation File Format (ARFF) file that describes the list of
training instances (terms and components as predictors and the
resolver as the target class). As shown in Figure 3, we
extract the developer collaboration network, information on
prior work experience with the project, and workload from the
ITS. The ITS contains the number of bugs resolved by every
developer from the beginning of the project. It also
contains information about open bugs and the assignees
of the respective open bugs. We use closed and open bug
status information and the assignee field to compute the prior
experience of a developer and the current workload with
respect to bug resolution. Similarly, the collaboration network
between developers is determined by extracting information
from the bugs' lifecycles. The front-end layer implementation
consists of D3.js, JavaScript, Java Servlets and Java Server
Pages (JSP).</p>
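      <p>The derivation of experience, workload and collaboration edges described above can be sketched as follows. The record layout and the names involved are assumptions for illustration, not the actual ITS schema:</p>

```python
from collections import Counter
from itertools import combinations

# Simplified stand-in for the bug records extracted from the ITS:
# (bug_id, status, assignees).
bugs = [
    (1, "closed", ["alice", "bob"]),
    (2, "closed", ["alice"]),
    (3, "open",   ["bob"]),
    (4, "closed", ["alice", "carol"]),
]

experience = Counter()  # bugs resolved since project start
workload = Counter()    # currently assigned open bugs
collab = Counter()      # co-resolution counts, i.e. edge weights

for _, status, assignees in bugs:
    if status == "closed":
        experience.update(assignees)
        for pair in combinations(sorted(assignees), 2):
            collab[pair] += 1
    else:
        workload.update(assignees)

print(experience["alice"], workload["bob"], collab[("alice", "bob")])  # 3 1 1
```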
      <p>Since our goal is to solve problems encountered by
practitioners and model the system as closely as possible on the
actual processes and workflows of the CCB, we conduct a survey of
experienced practitioners to better understand their needs. We
survey 5 senior committee members belonging
to the Change Control Board (CCB) of our organization's
software product lines. The average experience (in the CCB) of the
respondents was 7.5 years. In our organization, a CCB consists
of members in various roles: project manager,
product manager, solution architect, quality assurance leader,
developers, and testers. The survey respondents had served in
various roles and been active members of the bug triaging process.
Hence the survey responses come from representatives in charge
of various aspects such as development, quality control and
management. The objective of our survey was to gain insights
into the factors influencing the change board's triaging decisions.</p>
    </sec>
    <sec id="sec-3">
      <title>IV. PRACTITIONER’S SURVEY</title>
      <p>Figure 3 shows the 5 questions in our questionnaire and the responses received. Each response is based on a 5-point scale (1 being low and 5 being high).</p>
      <p>[Fig. 3. Survey Results of Practitioners in Industry on Factors Influencing Bug Resolver Recommendation Decision. EX: Resolver Experience with the Project, WL: Resolver Workload, CP: Bug Report Component, TD: Bug Report Title and Description, PS: Bug Report Priority and Severity]</p>
      <p>Figure 3 reveals that
there are clearly multiple factors and tradeoffs involved in
making a triaging and bug assignment decision. We observe
that the bug report title and description and the available resolvers'
experience with the project are the two most important factors
influencing the triaging decision (both having a score of 3.8 out
of 5). The priority and severity of the bug, as well as the component
assigned to the bug, are also considered quite important, with a
score of 3.4. The current workload of the resolvers as a criterion
influencing the bug triaging decision received a score of 2.4 out
of 5, the lowest amongst all 5 factors. The survey
results support our objective of developing a bug resolver
recommendation decision support system based on multiple
factors (such as priority and severity of the bug report and
current workload of the available resolvers) and not just based
on matching the content of the bug report with the resolved
bug reports of fixers.</p>
      <p>We present our case study on a real-world project using
IBM Rational ClearQuest as the Issue Tracking System.
ClearQuest keeps track of the entire bug lifecycle (from reporting to
resolution), state changes, and comments posted by project
team members. We consider three roles: Triager, Developer
and Tester. A comment can be posted, and the state of a bug
changed, by the Triager, Developer or Tester. Figure 4
shows the 9 possible states of a bug report in ClearQuest and
the 81 possible transitions. A comment (in the ClearQuest
Notes Log) recording a state transition from Submitted to
In-Work contains the Triager's and the Developer's IDs (in the from
and to fields). Similarly, an In-Work to Solved state transition
contains the Developer's and Tester's IDs. We parse the ClearQuest
Notes Log and annotate each project member ID with one of
the three roles: developer, tester or triager. We then remove
testers and triagers and consider only the developers as bug
resolvers for the purpose of predictive model building. This
step of inferring developers is crucial since triagers and testers
frequently comment on the bug reports, and their comments
should not skew the results.</p>
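      <p>The role-annotation step can be sketched as follows. The transition records and user IDs are hypothetical; the from/to rules follow the description above:</p>

```python
# Hypothetical Notes Log records: (from_state, to_state, from_id, to_id).
transitions = [
    ("Submitted", "In-Work", "tina", "dave"),
    ("In-Work", "Solved", "dave", "tom"),
]

roles = {}
for src, dst, frm, to in transitions:
    if (src, dst) == ("Submitted", "In-Work"):  # triager hands the bug to a developer
        roles[frm], roles[to] = "triager", "developer"
    elif (src, dst) == ("In-Work", "Solved"):   # developer hands the bug to a tester
        roles[frm], roles[to] = "developer", "tester"

# Keep only developers as candidate resolvers for model training.
resolvers = sorted(u for u, r in roles.items() if r == "developer")
print(resolvers)  # ['dave']
```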
    </sec>
    <sec id="sec-4">
      <title>V. DECISION SUPPORT SYSTEM USER INTERFACE</title>
      <sec id="sec-4-1">
        <title>A. Recommendation and Settings</title>
        <p>Figure 5 shows the snapshot of the decision support system
displaying the Top K recommendation, score for each
recommendation, prior work experience and the current work load
of the proposed resolver. Figure 5 also shows the
collaboration network of the developers. Nodes in the collaboration
network can be filtered using the check-boxes provided in
the screen. The confidence values shown in Figure 5 are
probability estimates for each of the proposed resolvers. The
confidence values or probability estimates across
all possible resolvers (and not just the Top K) sum
to 1. We display the probability estimates, and not just the
rank, to provide additional information on the strength of the
correlation between the resolver and the incoming bug report.
Figure 6 shows a snapshot of the settings page consisting of
five tabs: resolvers, components, training duration, include
and exclude lists, and train model. We describe and provide a
screenshot for only one of the tabs due to limited space in the
paper. We apply a platform-based approach and provide a
configurable settings page so that the decision support system
can be customized according to specific projects. As shown
in Figure 6, a user can add, rename and modify
component names. A software system evolves over a period
of time and undergoes architectural changes: new components
get added, and components get merged or renamed. We provide
a facility for the user to make sure that the model built on the
training data is in sync with the software system architecture.
Similar to the component configuration, we provide a tab to
customize the resolver list. For example, if a developer has left
the organization, then their information can be deleted through
the Resolver tab to ensure that their name is not shown
in the Top K recommendations. The training instances and the
amount of historical data on which to train the predictive
model can also be configured. The predictive model should
be representative of current practice, and hence we provide
a facility for the user to re-train the model on a recent
dataset.</p>
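      <p>The confidence values discussed above (probability estimates that sum to 1 over all resolvers) can be produced by a simple soft-voting scheme over the two classifiers. The sketch below averages two hypothetical per-resolver probability distributions and may differ from our exact weighting:</p>

```python
def vote(p_rf: dict, p_nb: dict) -> dict:
    """Average the class-probability estimates of the two models
    (simple soft voting over Random Forest and Naive Bayes outputs)."""
    classes = set(p_rf) | set(p_nb)
    scores = {c: (p_rf.get(c, 0.0) + p_nb.get(c, 0.0)) / 2 for c in classes}
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}  # normalise to sum to 1

def top_k(probs: dict, k: int) -> list:
    """Rank resolvers by ensemble probability and keep the top K."""
    return sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Hypothetical per-resolver probability estimates from the two models.
probs = vote({"alice": 0.7, "bob": 0.2, "carol": 0.1},
             {"alice": 0.5, "bob": 0.4, "carol": 0.1})
print(top_k(probs, 2))  # alice first, then bob
```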
      </sec>
      <sec id="sec-4-2">
        <title>B. Visual Analytics on Bug Resolution Process</title>
        <p>
          In addition to the Top K recommendation, justification,
developer collaboration network [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ], developer prior work
experience and current workload, we also present interactive
visualizations on the bug resolution process. Francalanci et
al. [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] present an analysis of the performance characteristics
(such as continuity and efficiency) of the bug fixing process.
They identify performance indicators (bug opening and closing
trend) reflecting the characteristics and quality of bug fixing
process. We apply the concepts presented by Francalanci et al.
[
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] in our decision support system. They define bug opening
trend as the cumulated number of opened and verified bugs
over time. In their paper, closing trend is defined as the
cumulated number of bugs that are resolved and closed over
time [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ][
          <xref ref-type="bibr" rid="ref8">8</xref>
          ].
        </p>
        <p>Figure 7 displays the opening and closing trend for the Issue
Tracking System dataset used in our case-study. At any instant
of time, the difference between the two curves (interval) can
be computed to identify the number of bugs which are open at
that instant of time. We notice that the debugging process is of
high quality as there is no uncontrolled growth of unresolved
bugs (the curve for the closing trend grows nearly as fast or
has the same slope as the curve for the opening trend).</p>
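          <p>The opening and closing trends, and the open-bug count as the gap between the two curves, can be sketched as follows (the per-quarter counts are illustrative, not our project data):</p>

```python
from itertools import accumulate

# Illustrative per-quarter counts of opened and closed bugs.
opened = [10, 12, 8, 15]
closed = [6, 11, 10, 14]

opening_trend = list(accumulate(opened))  # cumulative opened/verified bugs
closing_trend = list(accumulate(closed))  # cumulative resolved/closed bugs

# Open bugs at each instant: the gap between the two curves.
open_now = [o - c for o, c in zip(opening_trend, closing_trend)]
print(open_now)  # [4, 5, 3, 4]
```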
        <p>Figure 8 shows a combination of a Heat Map and a Horizontal
Bar Chart providing insights into the defect proneness of a
component (in terms of the number of bugs reported) and the
duration to resolve each reported bug. We observe that the
bug fixing time for the Atlas Valves component is relatively
low in comparison to the sDx component. UBE,
Volume Review and Workflow are the three components on
which the maximum number of bugs have been reported. The
information presented in Figure 8 is useful to the CCB as the
bug resolver recommendation decision is also based on the
buggy component and the defect proneness of the component.
Figure 9 shows a spectrum of Box plots across various years
and quarters displaying descriptive statistics and a five-number
summary of the time taken to fix a bug (bug resolution time).
The spectrum of Box plots provides insights to the CCB into
the changes in the distribution of resolution time over several
quarters or time periods.</p>
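        <p>The five-number summary behind each Box plot can be computed per quarter as follows; the resolution times below are illustrative:</p>

```python
import statistics

def five_number_summary(times):
    """Min, Q1, median, Q3 and max of bug resolution times (in days)."""
    q1, median, q3 = statistics.quantiles(times, n=4)  # default exclusive method
    return min(times), q1, median, q3, max(times)

# Illustrative resolution times for one quarter.
print(five_number_summary([2, 3, 5, 7, 8, 12, 20]))  # (2, 3.0, 7.0, 12.0, 20)
```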
        <p>
          Figure 10 shows a bubble chart displaying component
diversity and trends in the average number of developers
needed to resolve a bug across the project time-line. Figure
10 reveals that the component diversity was high in the July
and October quarters of 2013, which means that the
reported bugs were spread across various components. We
infer that the component diversity decreased in the April and July
quarters of 2014, which means that the majority of the
bugs were reported within a small number of components.
We also present insight into the average number of developers
needed to resolve a bug. We first compute the average number
of developers needed to resolve a bug over the entire 2
years (the dataset period) and then color-code the bubble for
each quarter depending on whether its value is above or below
the average. Figure 11 displays the counts for each
of the 81 possible state transitions. Figure 11
is a Heat Map in which every cell is color-coded according
to the number of transitions represented by the cell. The Heat
Map is useful to the CCB in gaining insights into process
anti-patterns and inefficiencies. For example, reopened bugs
increase maintenance costs, degrade the overall user-perceived
quality of the software, and lead to unnecessary rework by
busy practitioners [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. Figure 11 reveals several cases of bug
re-opening (such as Solved-to-In-Work and Terminated-to-In-Decision
transitions).
        </p>
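        <p>The transition counts behind the Heat Map cells, including the Solved-to-In-Work reopen anti-pattern, can be sketched as follows (the log records are illustrative):</p>

```python
from collections import Counter

# Illustrative (from_state, to_state) pairs parsed from the Notes Log.
log = [("Submitted", "In-Work"), ("In-Work", "Solved"),
       ("Solved", "In-Work"), ("In-Work", "Solved"), ("Solved", "Closed")]

counts = Counter(log)  # one Heat Map cell per (from, to) pair
reopened = counts[("Solved", "In-Work")]  # reopen anti-pattern
print(counts[("In-Work", "Solved")], reopened)  # 2 1
```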
      </sec>
    </sec>
    <sec id="sec-5">
      <title>VI. EMPIRICAL ANALYSIS AND RESULTS</title>
      <p>We conduct a series of experiments on real-world data from
Siemens product lines to evaluate the effectiveness of our approach.
We conduct experiments on two projects to investigate the
generalizability of our approach. One of the projects is an image-processing
based product (Project A) deployed in a Computed
Tomography Scan Machine.</p>
      <p>Project A started in 2012 and has
772 bugs reported till November 2014. Out of the 772 bug
reports present in the Issue Tracking System, 345 have been
solved and validated. We found that 236 issues have been
closed due to either being duplicate bug reports or bug reports
invalidated by the triager due to insufficient information to
reproduce the bug. At the time of conducting the experiments,
a total of 78 members (project managers, product managers, testers,
developers and test leads) were working on the project. The second
project (Ultra-Sound Clinical Workflow Management System)
is a relatively larger project (Project B) which started in 2005
and there are 17267 bugs reported till October 2014. Out
of 17267 reported bugs, 12438 are resolved. A total of 253
professionals have worked on the project during the past 9 to
10 years. We consider only the resolved bugs for the purpose
of conducting our experiments. N-fold cross validation with
N = 10 and K ranging from 1 to 10 is used for computing the precision and
recall performance evaluation metrics. The formulae used for
calculating precision@K and recall@K in information retrieval
systems are as follows (where K is the number of developers
in the ranked list):</p>
      <p>recall@K = (1/B) Σ<sub>i=1..B</sub> |P<sub>i</sub> ∩ R<sub>i</sub>| / |R<sub>i</sub>|</p>
      <p>precision@K = (1/B) Σ<sub>i=1..B</sub> |P<sub>i</sub> ∩ R<sub>i</sub>| / |P<sub>i</sub>|</p>
      <p>In these formulae, B denotes the number of bug reports, R<sub>i</sub> represents the set of actual resolvers
for bug i and P<sub>i</sub> is the set of predicted resolvers for that bug.</p>
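      <p>The precision@K and recall@K computation above can be sketched as follows; the predicted rankings and actual resolver sets are illustrative:</p>

```python
def precision_recall_at_k(predicted, actual, k):
    """Macro-averaged precision@K and recall@K over B bug reports.
    predicted: ranked resolver lists; actual: sets of true resolvers."""
    b = len(actual)
    prec = rec = 0.0
    for p, r in zip(predicted, actual):
        hits = len(set(p[:k]).intersection(r))  # |P_i intersect R_i|
        prec += hits / len(p[:k])               # divided by |P_i|
        rec += hits / len(r)                    # divided by |R_i|
    return prec / b, rec / b

# Illustrative rankings and ground-truth resolver sets.
predicted = [["alice", "bob", "carol"], ["bob", "dave", "alice"]]
actual = [{"alice"}, {"alice", "carol"}]
print(precision_recall_at_k(predicted, actual, k=2))  # (0.25, 0.5)
```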
      <p>The calculated values are shown in Tables I and II.
We observe that the recall values increase as we increase K
(which is quite intuitive). At K = 10, Project A has a recall
of 0.734 with 345 solved bugs, whereas Project B has a recall
of 0.905 with the 1000 latest solved bugs. We observe
that the precision values are highest at K = 2 in both
projects. This is because in both projects the average number
of resolvers per bug is very close to 2.</p>
      <p>We conduct a manual analysis and visual inspection of a
large number of bug reports and identify several instances
in which a bug report is assigned to a bug fixer based on
prior experience, workload, recent activity and severity and
not just based on the closest match in terms of problem
area expertise. We observe that in several cases the top
recommended resolver (by our prediction model purely based
on similar content-based recommendation) does not get the
bug assigned due to factors such as workload and prior work
experience of developers with the project incorporated in our
decision support tool but not within the Decision Tree and
Naive Bayes based classification model. In one of the bug
reports (status transition from In-Work to In-Work), we see
developers' comments such as:</p>
      <p>“Due to workload issues, Alan was able to solve it only partially,
and the resolver needs to be updated.”</p>
      <p>“Since the bug is related to the SRC component, and Todd has
experience in solving SRC-related bugs, assign
the bug to Todd instead of Ramesh.”</p>
      <p>“The bug is high priority; assign it to Abhishek.”</p>
      <p>“Assign partial work to Rashmi and partial work to Manju.”</p>
      <p>“Please assign this bug to me (I have been working on it
recently).”</p>
      <p>Our manual inspection of several bug reports and the
threaded discussions across two active projects in our
organization demonstrates that factors beyond content-based
assignment need to be presented to the decision maker
(as incorporated in our proposed decision-support system)
for making better triaging decisions. To enable
projects to track the effectiveness and benefits of using the
recommendation system, we propose simple process metrics,
as shown in Figure 12. The metrics are calculated for Project
A, considering the data from the bugs that have already been
resolved.</p>
    </sec>
    <sec id="sec-6">
      <title>VII. CONCLUSIONS</title>
      <p>Our survey results demonstrate that there are multiple
factors influencing the triaging decision. Terms in the bug report
title and description, as well as resolver experience with the
project, are the two most important indicators for making bug
assignment decisions. Our interaction with practitioners in our
organization reveals that the justification or reasoning behind a
recommendation, the developer collaboration network, and developer
work experience and workload are also important and useful
information in addition to the Top K recommendations.
Descriptive statistics, trends and graphs on bug fixing efficiency,
bug opening and closing trends, mean time to repair and
defect proneness of components are also important,
complementary information for the Change Control Board when
making triaging decisions. We demonstrate the effectiveness
of our approach by conducting experiments on real-world
CSS/PSS data from our organization and report encouraging
accuracy results. We conclude that an ensemble of classifiers
consisting of Decision Tree and Naive Bayes learners,
incorporating factors such as workload, prior work experience,
recent activity and severity of bugs, is an effective mechanism
for the task of automatic bug assignment.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>G.</given-names>
            <surname>Bortis</surname>
          </string-name>
          and A. v. d. Hoek, “
          <article-title>Porchlight: A tag-based approach to bug triaging,”</article-title>
          <source>in Proceedings of the 2013 International Conference on Software Engineering, ICSE '13</source>
          , pp.
          <fpage>342</fpage>
          -
          <lpage>351</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>X.</given-names>
            <surname>Xia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>B.</given-names>
            <surname>Zhou</surname>
          </string-name>
          , “
          <article-title>Accurate developer recommendation for bug resolution</article-title>
          ,” in Reverse Engineering (WCRE),
          <year>2013</year>
          20th Working Conference on, pp.
          <fpage>72</fpage>
          -
          <lpage>81</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>X.</given-names>
            <surname>Xie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Q.</given-names>
            <surname>Wang</surname>
          </string-name>
          , “
          <article-title>Dretom: Developer recommendation based on topic models for bug resolution</article-title>
          ,”
          <source>in Proceedings of the 8th International Conference on Predictive Models in Software Engineering, PROMISE '12</source>
          , pp.
          <fpage>19</fpage>
          -
          <lpage>28</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>W.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Q.</given-names>
            <surname>Wang</surname>
          </string-name>
          , “
          <article-title>Drex: Developer recommendation with k-nearest-neighbor search and expertise ranking</article-title>
          ,”
          <source>in Software Engineering Conference (APSEC), 2011 18th Asia Pacific</source>
          , pp.
          <fpage>389</fpage>
          -
          <lpage>396</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Tamrawi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. T.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Al-Kofahi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T. N.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          , “
          <article-title>Fuzzy set and cache-based approach for bug triaging</article-title>
          ,”
          <source>in Proceedings of the 19th ACM SIGSOFT Symposium and the 13th European Conference on Foundations of Software Engineering</source>
          , pp.
          <fpage>365</fpage>
          -
          <lpage>375</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Sureka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Goyal</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Rastogi</surname>
          </string-name>
          , “
          <article-title>Using social network analysis for mining collaboration data in a defect tracking system for risk and vulnerability analysis</article-title>
          ,”
          <source>in Proceedings of the 4th India Software Engineering Conference, ISEC '11</source>
          , (New York, NY, USA), pp.
          <fpage>195</fpage>
          -
          <lpage>204</lpage>
          , ACM,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>C.</given-names>
            <surname>Francalanci</surname>
          </string-name>
          and
          <string-name>
            <given-names>F.</given-names>
            <surname>Merlo</surname>
          </string-name>
          , “
          <article-title>Empirical analysis of the bug fixing process in open source projects</article-title>
          ,” in
          <source>Open Source Development, Communities and Quality</source>
          , pp.
          <fpage>187</fpage>
          -
          <lpage>196</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Lal</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Sureka</surname>
          </string-name>
          , “
          <article-title>Comparison of seven bug report types: A case-study of google chrome browser project</article-title>
          ,”
          <source>in Software Engineering Conference (APSEC)</source>
          ,
          <year>2012</year>
          19th Asia-Pacific, vol.
          <volume>1</volume>
          , pp.
          <fpage>517</fpage>
          -
          <lpage>526</lpage>
          ,
          Dec
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>E.</given-names>
            <surname>Shihab</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ihara</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kamei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Ibrahim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ohira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Adams</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hassan</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>K.-i.</given-names>
            <surname>Matsumoto</surname>
          </string-name>
          , “
          <article-title>Studying re-opened bugs in open source software</article-title>
          ,” in
          <source>Empirical Software Engineering</source>
          , pp.
          <fpage>1005</fpage>
          -
          <lpage>1042</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>