<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Vrije Universiteit Brussel, Pleinlaan 2, Brussels, Belgium</institution>
          <institution>Universidad Abierta Interamericana, Av. Montes de Oca 745, Buenos Aires, Argentina</institution>
          <institution>Université catholique de Louvain, Place Sainte Barbe 2, Louvain-la-Neuve, Belgium</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>Code critics are a recommendation facility of the Pharo Smalltalk IDE. They signal controversial implementation choices such as code smells at class and method level. They aim to promote the use of good and standard coding idioms for increased performance or a better use of object-oriented constructs. This paper studies relations among code critics by analyzing co-occurrences of code critics detected on the Moose system, a large and mature Smalltalk application. Based upon this analysis, we present a critique on code critics, as a first step towards an improved grouping of code critics that identifies issues at a higher level of abstraction, by combining lower-level critics that tend to co-occur, as well as improvements in the definition of the individual critics.</p>
      </abstract>
      <kwd-group>
        <kwd>code critics</kwd>
        <kwd>bad smells</kwd>
        <kwd>co-occurrence</kwd>
        <kwd>Smalltalk</kwd>
        <kwd>Pharo</kwd>
        <kwd>empirical software engineering</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>A plethora of code recommendation tools exists to support developers when
coding a software system. Whereas some of these recommendations remain at a
high level of abstraction (e.g., low coupling and high cohesion), others are much
more specific (e.g., `classes should not have more than 6 methods').</p>
      <p>
        Research on recommendation systems to detect and correct controversial
implementation choices typically follows a top-down approach. Recommendations
defined at a high level of abstraction are refined into the detection of more
concrete symptoms until a straightforward detection strategy is reached. Different
recommendation approaches exist that detect issues like design flaws [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] or
antipatterns [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. While these approaches discover similar issues, they often vary
significantly in the heuristics, metrics and thresholds they use. These differences
have various causes. Heuristics are incomplete by definition. The definition of
many metrics remains open to interpretation, resulting in different tools that
may provide different results for the same metric. And the thresholds used tend to
be either absolute values that cannot be reused across different applications, or
relative values whose cut point may be arbitrary. For these reasons, it is difficult
to justify that concrete detection strategies, and how they are combined into
higher-level recommendations, accurately represent all and only those entities
that a higher-level recommendation aims to capture.
      </p>
      <p>As opposed to defining high-level recommendations as an ad-hoc
combination of lower-level issues, this paper presents a first step towards `discovering'
higher-level recommendations from a detailed analysis of the occurrence of more
specific low-level ones. More specifically, our analysis is based on a study and
possible interpretation of the co-occurrence of low-level recommendations in
several applications.</p>
      <p>The low-level issues analyzed in this particular paper are the so-called code
critics. Code critics are a list of detectors for harmful implementation choices in
Pharo Smalltalk that signal certain defects or performance issues in Smalltalk
source code, mainly in methods and classes. Each critic is defined with a short
name and a rationale that explains why that implementation choice could be
harmful and, in some cases, also proposes a refactoring. An example of such a
code critic is the critic named `Instance variables not read AND written' with
rationale:
"Checks that all instance variables are both read and written. If an
instance variable is only read, you can replace all of the reads with nil,
since it couldn't have been assigned a value. If the variable is only
written, then we don't need to store the result since we never use it. This
check does not work for the data model classes, or other classes which
use the instVarXyz:put: messages to set instance variables."</p>
      <p>Although code critics sometimes report false positives (like the instVarXyz:
put: messages mentioned in the rationale of the critic above), the Code Critics
browser allows one to `ignore' each reported result individually. Results that
have been ignored are saved within the image1, so that the system remembers
that they have been ignored and does not present them again to the developer
when the same code critics are checked again later.</p>
      <p>Each code critic belongs to one of the following categories: Unclassified
rules, Style issues, Coding Idiom Violations, suggestions for Optimization,
Design Flaws, Potential Bugs, actual Bugs and likely Spelling errors. For instance,
the code critic named `Instance variables not read AND written' is categorized
as an Optimization issue.</p>
      <p>This paper is structured as follows: Section 1 detailed the problem and
context of low-level recommendation tools. Section 2 introduces the concept of code
critics in more detail, and Section 3 shows how we define the distance function to
calculate whether code critics co-occur in the analyzed application. Section 4 presents
critiques on individual critics and several patterns of co-occurring critics. Section
5 concludes our work and presents some future work.
1 Smalltalk systems store the entire program and its state in an image file.</p>
    </sec>
    <sec id="sec-2">
      <title>An Introduction to Code Critics</title>
      <p>Although Pharo's Critic Browser is designed to be launched by a developer from
a menu in the IDE, the tool can also be run programmatically to analyze part
of the image with a selected set of critics. In our experiment, we analyzed 120
code critics, 27 applied to classes, and 93 applied to methods. We excluded the
category of Spelling rules, which check the spelling of comments and identifiers
of classes, methods and variables. We are less interested in these rules as they
do not refer to either the structure or design of the source code, and tend to
generate quite some noise in the results.2</p>
      <p>Id Critic name
CC01 A metamodel class does not override a method that it should override
CC02 Class not referenced
CC03 Class variable capitalization
CC04 Defines = but not hash
CC05 Excessive inheritance depth
CC06 Excessive number of methods
CC07 Excessive number of variables
CC08 Instance variables defined in all subclasses
CC09 Instance variables not read AND written
CC10 Method defined in all subclasses, but not in superclass
CC11 No class comment
CC12 Number of addDependent: messages &gt; removeDependent:
CC13 Overrides a `special' message
CC14 References an abstract class
CC15 Refers to class name instead of `self class'
CC16 Sends `questionable' message
CC17 Subclass responsibility not defined
CC18 Variable is only assigned a single literal value
CC19 Variable referenced in only one method and always assigned first
CC20 Variables not referenced</p>
      <p>Table 1. Some of the most frequent class-level critics and their identifiers.
2 For the same reason they do not even appear in recent versions of the Critic Browser.</p>
      <p>We also excluded all code related to tests, because critics about test code often lead
to false positives. Test code tends to adhere to other idioms than ordinary code.
For instance, test code often contains duplicated code between test methods
(due to similar calls to `assert' or other testing methods). Moreover, test code
often contains trial-and-error code to deal with all cases to be tested, which is
typically not considered good practice in normal code.</p>
    </sec>
    <sec id="sec-3">
      <title>Critiques on individual and co-occurring code critics</title>
      <p>Our analysis generates two boolean tables per package: one for its classes and
another for its methods. Each table shows which source code entities suffer from
which critics. Each column represents a method or class of the package, and
each row represents which entities are in the result set of a code critic. E.g.,
suppose we analyze the following class-level code critics in the package Compiler
(which is part of the analyzed distribution): `Instance variables not read AND
written' (CC09), `Sends `questionable' message' (CC16), `Excessive number of
variables' (CC07), `Excessive number of methods' (CC06) and `Variables not
referenced' (CC20). Table 3 presents the results3, where the rows identify the
critiqued entities for a corresponding critic in the analyzed package. In other
words, critiqued(c, p) is a sequence of boolean values ⟨c(e1), c(e2), …, c(en)⟩,
where c(ei) = true if and only if ei is the i-th entity in package p (by alphabetic
order on its fully qualified name) and ei is in the result set of code critic c.</p>
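<p>The boolean vectors described above can be sketched in a few lines of code. The entity names, critic identifiers, and result sets below are hypothetical placeholders for illustration, not actual Moose classes or the real Critic Browser API:</p>

```python
# Build the boolean table described above: one vector per critic, with one
# position per entity of the package (sorted by fully qualified name).
# All names below are hypothetical examples.

def critiqued(result_set, entities):
    """Boolean vector <c(e1), ..., c(en)> for one critic's result set."""
    return [e in result_set for e in entities]

entities = sorted(["BlockNode", "Encoder", "Parser"])  # entities of package p
results = {
    "CC09": {"BlockNode", "Parser"},  # entities flagged by each critic
    "CC20": {"BlockNode"},
}
table = {cid: critiqued(rs, entities) for cid, rs in results.items()}
# e.g. table["CC09"] == [True, False, True]
```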
      <p>Next, we calculate the distance between pairs of critics based on the
entities they critique. The distance between two code critics c1 and c2, for a given
package p, is calculated by counting the number of classes or methods where the
critics do not match (XOR of the critiqued entities), over the number of classes
or methods that violate one or both of the critics being analyzed (OR of the
critiqued entities). This distance value varies between zero and one. Values close
to zero mean that a pair of critics tends to affect the same source code entities.
3 To limit the size of the example, this table presents only a subset of all classes that
were critiqued. However, for the sake of the example, in order to illustrate how the
approach works, we ask the reader to assume that the classes shown in Table 3 are
all the critiqued classes in the package.
[Table 3 omitted: boolean critic results for a subset of classes of the Compiler package.]</p>
      <p>Dp(c1, c2) = |critiqued(c1, p) ⊕ critiqued(c2, p)| / |critiqued(c1, p) ∨ critiqued(c2, p)|</p>
      <p>For instance, Table 5 calculates the distance between `Instance variables not
read AND written' and `Variables not referenced' based on the presented
example. The resulting distance, shown as a shaded cell in Table 4, is 0.83 (i.e., 5/6)
because their results differ in five classes, but coincide in one class (BlockNode).
Therefore, the critics have low co-occurrence for the results of this package.</p>
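<p>The distance function can be sketched directly from the definition above. The two boolean vectors here are hypothetical, chosen so that the critics differ on five entities and coincide on one, reproducing the 5/6 ≈ 0.83 distance of the CC09/CC20 example:</p>

```python
# Distance between two critics: count of mismatches (XOR) over the count of
# entities flagged by at least one of the two critics (OR).

def distance(v1, v2):
    xor = sum(a != b for a, b in zip(v1, v2))
    union = sum(a or b for a, b in zip(v1, v2))
    return xor / union if union else 0.0  # guard: neither critic flags anything

cc09 = [True, False, True, False, True, True]   # hypothetical vectors
cc20 = [True, True, False, True, False, False]
d = distance(cc09, cc20)  # 5 mismatches / 6 flagged entities = 0.8333...
```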
      <p>Using the boolean Table 3 and the distance Table 4, we proceed to discard pairs
of code critics that do not seem interesting for our analysis, based on three
criteria. First, pairs with high distances (greater than 0.9) are discarded as they tend
not to co-occur often and therefore are likely to represent accidental matches.
Secondly, we discard pairs of critics that always occur together (distance zero)
in the same source code entities, because they are likely to represent alternative
implementations of the same code critic. Thirdly, we exclude all pairs of critics for
which one of the code critics covers more than 90% of all source code entities
analyzed, because as a consequence of their high coverage they will show a strong
correlation with nearly all other code critics and thus generate significant noise
in the results. In our example, all distances are kept in our analysis. The choice
of thresholds of 0.9 and 90% was based on initial experiments where we tried to
determine what values would constitute a good cut point to discard less relevant
pairs of critics. However, these thresholds should be reevaluated when applying
the approach to other code critics, other applications, or different programming
languages.</p>
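<p>A minimal sketch of these three discard criteria, assuming a precomputed distance per pair and a coverage fraction per critic (the function and variable names are ours, not the tool's):</p>

```python
# Keep a pair of critics only if: distance <= 0.9 (not an accidental match),
# distance > 0 (not duplicate implementations of the same critic), and
# neither critic flags more than 90% of the analyzed entities.

def keep_pair(c1, c2, dist, coverage, max_dist=0.9, max_cov=0.9):
    d = dist[(c1, c2)]
    if d > max_dist or d == 0:
        return False
    return coverage[c1] <= max_cov and coverage[c2] <= max_cov

dist = {("CC09", "CC20"): 5 / 6, ("CC06", "CC16"): 0.95}  # hypothetical data
coverage = {"CC09": 0.05, "CC20": 0.04, "CC06": 0.60, "CC16": 0.30}
# keep_pair("CC09", "CC20", dist, coverage) -> True
# keep_pair("CC06", "CC16", dist, coverage) -> False (distance above 0.9)
```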
    </sec>
    <sec id="sec-4">
      <title>Identified patterns</title>
      <p>Based on the raw results of our initial analysis, this section presents some
interesting critiques which we have observed. Since this is preliminary research, we
do not claim these critiques to be exhaustive or complete. In the text below, we
use the word critique to denote the identified patterns in our analysis, and critic
to refer to Pharo's code critics. We present our critiques as patterns, consisting
of a short name, a description and some concrete examples. The patterns are
divided into two categories. The first category describes the critiques
discovered by analyzing individual code critics. Note that we limited our analysis of
individual code critics to those that appear in at least one of the non-discarded
co-occurrences. The second category describes the critiques which stem from
the observed correlations between pairs of code critics (extracted from their
co-occurrence as explained in Section 3).</p>
      <sec id="sec-4-1">
        <title>Critiques on Individual Critics</title>
        <p>Here we present our critiques on the individual class-level critics of Table 1.
Misleading name. Some code critics have misleading names and should be
improved. For example, `References an abstract class' (CC14) is misleading.
According to the name, a developer could assume that the code critic identifies
a class B that is referencing an abstract class A. But in fact it detects the
opposite, namely an abstract class A being referred to from somewhere within
the analyzed application. A better name would thus be `Abstract class being
referenced'. The name `Instance variables not read AND written' (CC09) is ill
chosen too because, looking at how this code critic is implemented, it refers to
instance variables which are EITHER read-only, write-only, OR not referenced
at all. A better name for this code critic could therefore be `Instance variables
not fully exploited'.</p>
        <p>Too general. Some critics are too general and could be split into several more
specific critics. For example, the critic `Instance variables not read AND written'
(CC09) mentioned above could be split into three different critics
(`unreferenced instance variables', `only written instance variables', `only read instance
variables'). The critics `Overrides a special message' (CC13) and `Sends
`questionable' message' (CC16) are about specific messages and could be split into
separate critics for each of those messages. This would lead to many
individual critics, but they could be presented as a common group to the user,
allowing them to inspect or ignore the details of the individual underlying critics
if they desire to do so.</p>
        <p>Too tolerant. We also observed that, despite the fact that some critics seem
meaningful and well-defined, they produce mostly false positives. This happens
because there are often cases where it is acceptable not to adhere to some critics.
However, when a critic produces mainly such false positives, we can wonder
whether it is useful to keep the critic. Nevertheless, our results might be biased,
since we analyzed only one rather well-designed framework (Moose).</p>
        <p>An example of such a critic is `Refers to class name instead of `self class' '
(CC15), for which we discovered mostly acceptable deviations. For example, in
Smalltalk it is quite common and acceptable in methods for checking equality
to write anObject isKindOf: X, to verify that the type of anObject is indeed
of a particular class X (and not some subclass). Similarly, the expression self
class == X is often used to check if a given instance of this class is indeed of
class X. Another case is when you write X new, because you want to be sure
to create an instance of X and not of one of its subclasses. A last example is
when you write an expression like X allSubclasses to refer to the root X of a
relevant class hierarchy, and you want to manipulate the individual classes.</p>
        <p>
          Many of the critics which are too tolerant could be refined further in order to
avoid catching some of the false positives they produce. For example, if we
consider CC15 again, we note that it often regards an expression like isKindOf: X
used in a method implemented by class X as problematic, but in fact isKindOf:
self class would be even more problematic, because it would get a different
meaning in subclasses. This could be solved by making the critic take into
account this case, or any of the other cases above, as known exceptions to the critic.
Too restrictive. Whereas some critics are too tolerant, others are too
restrictive and could miss interesting cases. For example, `Excessive inheritance depth'
(CC05) uses a threshold of 10 as depth level, but may miss other cases of
excessive depth such as classes with inheritance depth 9. Obviously, there is no
perfect threshold, but we found 20 additional classes with a depth of at least 9
(as compared to only 10 classes with a depth of at least 10) that should have
been reported. We assume the threshold was set high in order to avoid producing
too many results, making it harder for the user to process all reported results.
Redundant representation of results. Another source of noise in the results
could be the amount of results produced by the critic, even if none of them
are false positives. Sometimes, it would suffice to present the results differently
to avoid such noise. For example, consider `Excessive inheritance depth' (CC05)
again. Currently, it reports all leaf classes of hierarchies that suffer from the
critic. But this generates many unnecessary results. It suffices to know the root
of the hierarchy to start fixing the problem (and additionally, this could allow
the user to lower the threshold so that the critic becomes less restrictive too).
Missing critics. Some important critics seem to be missing from the list of
code critics. For example, there seem to be few or no critics related to
inheritance issues [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], such as local behavior in a class with respect to its superclass or
subclasses, or good reuse of superclass behavior and state. Local behavior
identifies methods defined and used in the class that are not overridden in subclasses,
often representing internal class behavior, and Reuse of superclass behavior and
state identifies concrete methods that invoke superclass methods by self or
super sends, not redefining behavior of the class. Code critics regarding inheritance
could identify bad practices when implementing hierarchies.
        </p>
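<p>The threshold sensitivity of `Excessive inheritance depth' (CC05) discussed above can be illustrated with a small sketch that computes inheritance depth over a toy hierarchy (the chain of classes is hypothetical, not drawn from Moose) and compares the cutoffs 9 and 10:</p>

```python
# depth(c) = number of superclasses above c; a CC05-style check flags classes
# whose depth reaches a fixed threshold. The chain A0 <- A1 <- ... <- A9 is a
# hypothetical hierarchy whose deepest class has depth 9.

def depth(cls, parent):
    d = 0
    while parent[cls] is not None:  # walk up to the root
        cls = parent[cls]
        d += 1
    return d

parent = {f"A{i}": (f"A{i-1}" if i else None) for i in range(10)}

flagged_at_10 = [c for c in parent if depth(c, parent) >= 10]  # nothing flagged
flagged_at_9 = [c for c in parent if depth(c, parent) >= 9]    # only "A9"
```

With the cutoff at 10 the hierarchy goes unreported; lowering it to 9 catches the deepest class, mirroring the 20 additional classes found at depth 9 in the text.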
        <p>Good critics. Whereas in this paper we focused mainly on negative critiques on
code critics, we can remark that there are useful and well-designed code critics
too. Our ultimate goal is to keep the good critics while identifying those that
can be improved, in order to come up with a new and better-structured set of
code critics. For example, `Defines = but not hash' (CC04) shows all classes that
override = but not hash. If method hash is not overridden, then the instances
of such classes cannot be used in sets. The implementation of Set assumes that
equal elements have the same hash code. Another example is `Method defined
in all subclasses, but not in superclass' (CC10) which detects classes defining a
same method in all subclasses, but not as an abstract or default method in the
superclass. This critic helps us find similar code that might be occurring in all
the subclasses and that should be pulled up into the superclass.</p>
      </sec>
      <sec id="sec-4-2">
        <title>Patterns of Co-occurring Critics</title>
        <p>Now that we have described some critiques based on an analysis of
individual code critics, we discuss some critiques derived from our analysis of the
co-occurrence of pairs of code critics.</p>
        <p>Redundant Critics. Critics are redundant when they detect the same
problem. This happens for critics that come in two versions: one which just detects
the problem and another one which detects it and at the same time proposes an
automated refactoring for the problem. An example of this is `detect:ifNone: -&gt;
anySatisfy:' (MC01) versus `Uses detect:ifNone: instead of contains:' (MC09).
Whereas MC01 offers an automated restructuring, in spite of its name MC09
only detects the problem. Although we did discover such cases in our
experiment where we ran the critics directly, Pharo's Critic Browser would only use
one of these critics, to avoid presenting repeated results to the user. Observe
that the solution suggested by critic MC09 differs from the solution proposed by
MC01, which can be confusing. Given that a same critic could have several
possible refactorings, it would therefore be better to keep refactoring and detection
strategies separated, and to have only one detection strategy per critic.
Indirect Correlation. This occurs when the results of two critics overlap
significantly, without them having a common root cause. For instance, the following
two correlations seem to occur essentially because one of the critics (CC06)
generates so many results. They are `Excessive number of methods' (CC06) vs.
`Excessive number of variables' (CC07), and `Sends `questionable' message' (CC16)
vs. `Excessive number of methods' (CC06).</p>
        <p>Overlap Requires Splitting. A third pattern occurs when two critics produce
overlapping results because they have a common root cause. It would be good to
split such critics such that the common part becomes one separate critic and the
non-overlapping parts become other critics. For instance, `Instance variables not
read AND written' (CC09) is overlapping with `Variables not referenced' (CC20)
because both critics detect unreferenced instance variables. While CC09 should
be split as explained in section 4.1 (too general), CC20 could be split in a critic
for class variables and one for instance variables. The critic for `unreferenced
instance variables' would then become a common subcritic for both CC09 and
CC20.</p>
        <p>Overlap Requires Merging. This pattern occurs when two code critics that
regularly occur together could be combined into a single more specific critic.
For instance, in the Smalltalk language, methods are grouped in method
protocols representing the purpose of the method. Instance creation methods like
new, for example, are put in the `instance-creation' protocol. The method-level
critic `Inconsistent method classification' (MC02) is triggered when methods
are wrongly classified, and `Unclassified methods' (MC08) is reported when no
protocol was assigned to a method. These critics coincide when an overridden
method is unclassified whereas the method it overrides was classified. From the
point of view of critic MC02, it is considered an inconsistent classification since
the classifications of the parent and child method are different, whereas from the
point of view of critic MC08 the child method is unclassified. Combining them
in a new dedicated critic `Inconsistently unclassified methods' makes sense,
because there is an easy refactoring that could be associated with this particular
combination of critics, namely to classify the child method in the same protocol
as the parent one. For cases where the critics do not overlap, the original critics
MC02 and MC08 should still be reported.</p>
        <p>Same niche. Sometimes, code critics seem to correlate just because they both
refer to a specific kind of source entity. For example, the two independent critics
on abstract classes `References an abstract class' (CC14) and `Subclass
responsibility not defined' (CC17) often appear together, simply because they are the
only ones that both apply to abstract classes. (This pattern could be considered
as a specific case of Indirect Correlation.)
Almost subset. This pattern occurs when the result set for one critic in
practice almost always seems to be a subset of that for another critic. For example,
the results for code critic `Variable referenced in only one method and always
assigned first' (CC19) refer to the same variables reported by `Instance variables
not read AND written' (CC09). Indeed, if a variable is used only in one method
and always assigned first (CC19), it is likely that this variable will not be read
in that same method (or any other method) and thus is reported by CC09 too.
Ill-defined critic. Correlations between two critics may arise because one of
them is ill-defined. If the ill-defined critic were fixed, the correlation would
probably disappear. For example, `Refers to class name instead of `self class' ' (CC15)
correlates with `Sends `questionable' message' (CC16), because CC15 often gives
false positives related to the use of isKindOf:, which is also one of the
questionable messages. If we fixed CC15 to avoid those false positives, this correlation
would likely disappear.</p>
        <p>Noisy correlation. This pattern describes critics that seem to be correlated
to many other critics and therefore produce too much noise. They could better
be removed if the overlap with another critic is not strong (likely to be only
accidental matches). For example, `Excessive number of methods' (CC06) has
this problem, because the more methods a class has, the higher the chance that
the class suffers from other critics as well.</p>
        <p>High-level critics. Whereas in this section we analyzed the co-occurrence
of critics mainly by focusing on their shortcomings, in forthcoming research
we will analyze the results more in-depth and will also identify good, desired
or expected correlations between critics. For example, the correlation between
`Utility methods' (MC10) and `Law of Demeter' (MC03) is not unexpected as it
may indicate an imperative (non object-oriented) programming style.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Discussion, Conclusion and Future Work</title>
      <p>This paper presented our initial results of an analysis of low-level code critics
detected on the Moose system, a large and mature Smalltalk application. The
results of this analysis can help us identify which low-level critics could benefit
from redefinition or refactoring so that they would provide more accurate or
meaningful results, as well as how to combine them into more high-level critics
to improve the recommendations they provide.</p>
      <p>As future work, we plan to provide a more in-depth analysis, including a
deeper analysis of the method-level critics, and propose concrete improvements,
combinations and refactorings of the existing code critics. This analysis could
then be repeated iteratively, to further improve the improved critics, again by
analyzing their correlations, until we eventually reach a stable group of proposed
critics.</p>
      <p>
        Finally, although in this paper we focused on Pharo Smalltalk's code critics
only, we believe the ideas and approach presented in this paper to be easily
generalizable to other code checking tools and programming languages. To confirm
this, we have started to analyze other code checking tools for similar correlations
and improvements: CheckStyle [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], PMD [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] and FindBugs [
        <xref ref-type="bibr" rid="ref11 ref4">4, 11</xref>
        ] for Java, Splint
[
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] or Cppcheck [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] for C, Pylint [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] for Python, FxCop for .NET, PHP Mess
Detector [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] for PHP and Android Lint [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] for Android programming. For each of
these tools, we performed an initial analysis on a single application. We observed
that, in spite of the fact that some of these tools focus on checks that are quite
di erent from Pharo's code critics, our approach could still be used to analyze
those tools. Whereas for most tools we indeed found many examples similar to
the critique patterns mentioned in this paper, for some tools we discovered only
very few correlations. This could be due to the particular applications that were
analyzed (indeed, in our analysis of the 51 packages of Moose too, there were
a few packages that did not have many critics). Or it could suggest that, while
the approach remains applicable, it may be less relevant for some of the tools
we analyzed. This may for example be the case for tools that are already quite
mature and offer a stable and orthogonal set of checks. More experiments are
needed to confirm this. This may be the topic of a forthcoming paper.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1. Android Lint. http://tools.android.com/tips/lint. Accessed: 2015-03-30.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2. Checkstyle. http://checkstyle.sourceforge.net. Accessed: 2015-03-30.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3. Cppcheck. http://cppcheck.sourceforge.net/. Accessed: 2015-03-30.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4. FindBugs. http://findbugs.sourceforge.net. Accessed: 2015-03-30.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5. MOOSE. http://www.moosetechnology.org/. Accessed: 2015-03-30.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6. PHPMD. http://phpmd.org/. Accessed: 2015-03-30.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7. PMD. http://pmd.sourceforge.net/. Accessed: 2015-03-30.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8. Pylint. http://www.pylint.org/. Accessed: 2015-03-30.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9. Splint. http://www.splint.org/. Accessed: 2015-03-30.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10. G. Arevalo,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ducasse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gordillo</surname>
          </string-name>
          , and
          <string-name>
            <given-names>O.</given-names>
            <surname>Nierstrasz</surname>
          </string-name>
          .
          <article-title>Generating a catalog of unanticipated schemas in class hierarchies using formal concept analysis</article-title>
          .
          <source>Inf. Softw. Technol.</source>
          ,
          <volume>52</volume>
          (
          <issue>11</issue>
          ):
          <volume>1167</volume>
          –
          <fpage>1187</fpage>
          ,
          Nov
          .
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>D.</given-names>
            <surname>Hovemeyer</surname>
          </string-name>
          and
          <string-name>
            <given-names>W.</given-names>
            <surname>Pugh</surname>
          </string-name>
          .
          <article-title>Finding bugs is easy</article-title>
          .
          <source>In Companion to the 19th Annual ACM SIGPLAN Conference on Object-oriented Programming Systems, Languages, and Applications</source>
          ,
          OOPSLA
          <year>2004</year>
          , pages
          <fpage>132</fpage>
          –
          <fpage>136</fpage>
          . ACM,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>R.</given-names>
            <surname>Marinescu</surname>
          </string-name>
          .
          <article-title>Detecting design flaws via metrics in object oriented systems</article-title>
          .
          <source>In Proc. of the Technology of Object-Oriented Languages and Systems (TOOLS)</source>
          , pages
          <fpage>173</fpage>
          –
          <fpage>182</fpage>
          .
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <given-names>N.</given-names>
            <surname>Moha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.-G.</given-names>
            <surname>Gueheneuc</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Leduc</surname>
          </string-name>
          .
          <article-title>Automatic generation of detection algorithms for design defects</article-title>
          .
          <source>In Proc. of the Int'l Conf. on Automated Software Engineering (ASE)</source>
          , pages
          <fpage>297</fpage>
          –
          <fpage>300</fpage>
          . IEEE Computer Society,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>