<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Modified Delta Maintainability Model of Object-Oriented Software</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Pavlo Skladannyi</string-name>
          <email>p.skladannyi@kubg.edu.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Olena Nehodenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Svitlana Shevchenko</string-name>
          <email>s.shevchenko@kubg.edu.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oksana Zolotukhina</string-name>
          <email>zolotukhina.oks.a@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vitalii Nehodenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Borys Grinchenko Kyiv University</institution>
          ,
          <addr-line>18/2 Bulvarno-Kudryavska str., Kyiv, 04053</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>State University of Telecommunications</institution>
          ,
          <addr-line>7 Solomenska str., Kyiv, 03110</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <fpage>117</fpage>
      <lpage>124</lpage>
      <abstract>
        <p>Modern software systems are increasingly integrated into vital areas of society, from managing critical infrastructure to piloting vehicles, which makes reducing the number of possible software defects one of the most important priorities. The pace at which social processes and technologies develop creates a need for adaptation, which in turn requires software adjustments. Analysis of the various definitions and aspects of maintainability, as well as of established models and approaches to measuring object-oriented software, therefore remains a relevant issue: it makes it possible to identify ways of improving the efficiency of assessment by methods of statistical analysis. This work proposes a modification of the Delta Maintainability Model (DMM) that expands the set of measurable properties of the source code. The stability and effectiveness of measuring changes in object-oriented software are demonstrated by a comparative analysis of source code changes, which makes it possible to measure maintainability in processes built on the methodological approaches of continuous delivery and continuous integration. At the same time, interpretation of the assessment results makes it possible to establish causal relationships and eliminate shortcomings.</p>
      </abstract>
      <kwd-group>
        <kwd>Quality of software</kwd>
        <kwd>delta maintainability model</kwd>
        <kwd>DMM</kwd>
        <kwd>SIG maintainability model</kwd>
        <kwd>SIGMM</kwd>
        <kwd>object-oriented software</kwd>
        <kwd>methods of statistical analysis</kwd>
        <kwd>Mozilla Rhino</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Research by C. Jones shows a steady
increase in the share of engineers engaged in
software maintenance work, from 9.09% in
1950 to 72.73% in 2000, with a projected
share of 77.27% in 2025 [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The
requirements to increase the level of quality and
maintainability are the reason for many studies
and the constant search for software
measurement and evaluation methodologies.
Software maintainability is a well-researched
topic [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        Most studies proceed from the
importance of software maintainability,
emphasizing its relationship with adaptability
and its role in reducing the number of defects [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Software
quality attributes, and maintainability in
particular, differ from other properties in
the complexity of the factors that determine
them, so adjusting them is almost always
difficult and long-term [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        Scientific studies devoted to software
quality problems include the publications of
C. Jones [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], R. Plösch, H. Gruber,
C. Korner, M. Saft [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], A. Madi, O.K. Zein [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>Improving the quality and maintainability of
object-oriented software by reducing the
number of defects is important for any
development.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Maintainability in Modern Software Quality Models</title>
      <p>Software quality models define
maintainability as one of the main attributes.
Most methods of measuring maintainability
consider not only maintainability itself but are
also part of broader quality models.
Maintainability is a complex concept that has
been repeatedly examined by studies offering
different formulations and measurement
approaches. However, most of the studied
approaches exhibit, to some extent, the
following shortcomings:</p>
      <list list-type="bullet">
        <list-item><p>Ambiguity of terminology and definitions of quality criteria.</p></list-item>
        <list-item><p>Abstractness and the absence of definite formulations and measurement methods.</p></list-item>
        <list-item><p>Complexity or impossibility of interpretation.</p></list-item>
        <list-item><p>Difficulty of conducting a causal analysis of measurement results.</p></list-item>
      </list>
      <p>
        McCall’s model [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] consists of a 3-level
hierarchy and determines the relationship
between software quality attributes.
Maintainability, as an internal quality factor, is
related to product modification and is defined
as the amount of effort required to identify and
correct a program defect in the operating
environment. Maintainability, like all internal
quality factors, is measured indirectly through
related software properties: simplicity, brevity,
informativeness, and modularity.
      </p>
      <p>It is proposed to measure these properties
by ranking them from 1 (goal not achieved) to
10 (excellent implementation), without
indicating specific metrics or measurement
methods. Fig. 1 shows how, according to the
model, maintainability is determined through
the levels of the hierarchy.</p>
      <p>
        The Quality Model for Object-Oriented
Design (QMOOD) was proposed by Bansiya
and Davis in 2002 [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. This model consists of a 4-level
hierarchical structure. The model also includes
a set of metrics designed to measure quality
attributes. The definition of quality attributes,
with some modifications, is based on the
software quality model ISO 9126 [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. The
model also defines design attributes and their
corresponding metrics. Design attributes, in
turn, are related to quality attributes.
Maintainability, defined by ISO 9126
as a quality attribute, presumes a certain stage
of software completion; the model therefore
focuses only on its sub-characteristic,
understandability, which allows it to be used
at earlier stages of development.
      </p>
      <p>
        The QMOOD model establishes connections
between design properties and quality attributes [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and defines basic metrics for
measuring the design properties, listed below.
      </p>
      <p>Design Size in Classes (DSC) is the number
of all classes provided by the program design.</p>
      <p>Number of Hierarchies (NOH) is the number
of class hierarchies in the program design.</p>
      <p>Average Number of Ancestors (ANA) is the
average number of classes from which a class
inherits; it is calculated by counting the classes
along all inheritance paths from the root
class(es) to all classes of the inheritance
structure.</p>
      <p>Data Access Metric (DAM) is a value
between 0 and 1 and is the ratio of the number
of private (protected) attributes of a class to the
total number of attributes defined by the class.</p>
      <p>Direct Class Coupling (DCC) is the number
of classes on which a class directly depends,
either through attribute declarations or through
parameters passed to its methods.</p>
      <p>
        Cohesion Among Methods of a class (CAM)
is a value in the range from 0 to 1 that captures
the interdependence of class methods based on
the lists of method parameters [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. The metric
is calculated by summing the intersection of the
method parameters with the maximally
independent set of parameters of all types in the
class.
      </p>
      <p>Measure of Aggregation (MOA) measures
the degree of the part-whole relationship
implemented through attributes. The value is
the count of declared attributes whose types are
user-defined classes.</p>
      <p>Measure of Functional Abstraction (MFA)
ranges from 0 to 1 and is the ratio of the number
of methods inherited by a class to the total
number of methods accessible by member
methods of the class.</p>
      <p>Number of Polymorphic Methods (NOP) is
the number of methods that can exhibit
polymorphic behavior.</p>
      <p>Class Interface Size (CIS) is the number of
public methods of a class.</p>
      <p>Number of Methods (NOM) is the number
of all methods defined in a class.</p>
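      <p>Several of these design metrics can be computed directly from source code. The following sketch, in Python (the language later used for the practical validation), derives NOM, CIS, and DAM for each class in a module; since Python has no access modifiers, it follows the common convention of treating underscore-prefixed names as non-public, which is an assumption of this illustration rather than part of QMOOD.</p>

```python
import ast

SOURCE = """
class Account:
    def __init__(self):
        self._balance = 0      # non-public by convention
        self.owner = "anon"    # public

    def deposit(self, amount):
        self._balance += amount

    def _audit(self):
        pass
"""

def class_metrics(source):
    """Compute NOM, CIS, and DAM for every class in a module."""
    results = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ClassDef):
            methods = [n for n in node.body
                       if isinstance(n, (ast.FunctionDef, ast.AsyncFunctionDef))]
            public = [m for m in methods if not m.name.startswith("_")]
            # Attributes assigned via "self.<name>" anywhere in the class body.
            attrs = {t.attr for t in ast.walk(node)
                     if isinstance(t, ast.Attribute)
                     and isinstance(t.value, ast.Name)
                     and t.value.id == "self"
                     and isinstance(t.ctx, ast.Store)}
            hidden = {a for a in attrs if a.startswith("_")}
            results[node.name] = {
                "NOM": len(methods),    # Number of Methods
                "CIS": len(public),     # Class Interface Size
                "DAM": len(hidden) / len(attrs) if attrs else 1.0,
            }
    return results

print(class_metrics(SOURCE))  # {'Account': {'NOM': 3, 'CIS': 1, 'DAM': 0.5}}
```

      <p>Metrics such as DCC and MFA can be derived from the same syntax tree by additionally resolving base classes and attribute types.</p>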
      <p>
        The quality model ISO/IEC 25010:2016
“Systems and software engineering. Systems
and software Quality Requirements and
Evaluation (SQuaRE). System and software
quality models”
[
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ] is a replacement for the ISO 9126
standard [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] (Fig. 2). The standard extends the
quality model with two new high-level
characteristics: compatibility (a new
characteristic) and security (a sub-characteristic
of functionality in the previous
standard).
      </p>
      <p>The standard also introduced changes to the
structure of maintainability. New
sub-characteristics, “modularity” and
“reusability,” have been introduced, while the
sub-characteristics “changeability” and
“stability” were replaced by the new
sub-characteristic “modifiability.” The standard
provides the following definitions:</p>
      <list list-type="bullet">
        <list-item><p>Maintainability is defined as the degree of
effectiveness and efficiency with which a
product or system can be modified by the
designated maintenance personnel. Maintainability
can be construed either as the inherent ability of a
product or system to facilitate maintenance work
or as quality in use.</p></list-item>
        <list-item><p>Modularity is the degree to which a system
or computer program is composed of discrete
components such that a change in one component
has minimal impact on other components.</p></list-item>
        <list-item><p>Reusability is the degree to which an asset
can be used in more than one system or in the
construction of other assets.</p></list-item>
        <list-item><p>Analysability is defined as the degree of
effectiveness and efficiency with which it is
possible to assess the effect on the product or
system of an intended change to one or more of its
parts, to diagnose deficiencies or causes of
failures, or to identify the parts to be
modified.</p></list-item>
        <list-item><p>Modifiability is the degree of effectiveness
and efficiency with which a product or system can
be modified without introducing defects or
degrading the existing quality of the product.
Modifiability is a combination of changeability
and stability.</p></list-item>
        <list-item><p>Testability is the degree of effectiveness
and efficiency with which test criteria can be
established for a system, product, or component,
and with which tests can be performed to determine
whether those criteria have been met.</p></list-item>
      </list>
    </sec>
    <sec id="sec-4">
      <title>3. The SIG Maintainability Model (SIG-MM)</title>
      <p>
        The model is independent of the
programming language and software
architecture, has indicators that are easy to
understand and explain, and is based on
relationships between the system-level quality
characteristics defined by the ISO 9126-1
standard [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] with the characteristics of the
source code properties and their metric
indicators. To a large extent, this approach is
based on the need to identify causal
relationships between source code properties
and maintainability, because the latter is a
complex and multi-component quality attribute.
      </p>
      <p>The simple symbolic scale ++ / + / o / – / ––
is used to rank the results of evaluating each
particular property of the code.</p>
      <p>The size of the program is one of the
simplest and most direct indicators of
maintainability, because the larger the size, the
more effort is required for cognitive perception,
making changes, and testing.</p>
      <p>
        The indicator is set for each language
separately on the basis of research [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] that
determines the relationship between the average
number of lines of code (LOC) per function
point for a given programming language
and the number of function points that can be
produced by one person in one month.
      </p>
      <p>For the purposes of the model under
consideration, the size of the program is
determined by the person-years required to
create the program (Table 2).</p>
      <p>Thus, a software product requiring 160
person-years is considered too large, which for
systems implemented in Java corresponds to
roughly 1.3 million lines of code, or 2.6 million
for the COBOL language.</p>
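      <p>The conversion from lines of code to person-years can be sketched as a two-step calculation through function points. The productivity figures below are hypothetical, chosen only so that 1.3 million Java LOC lands near the 160 person-year threshold mentioned above; the actual values come from the language productivity research cited in the text.</p>

```python
def person_years(loc, loc_per_fp, fp_per_person_month):
    """Person-years of effort implied by a code base of `loc` lines,
    estimated through function points (FP)."""
    function_points = loc / loc_per_fp
    return function_points / (fp_per_person_month * 12)

# Hypothetical productivity figures (NOT the values of Table 2):
# 53 LOC per FP for Java, 12.8 FP per person-month.
size = person_years(1_300_000, loc_per_fp=53, fp_per_person_month=12.8)
print(round(size))  # 160
```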
      <p>Complexity per program unit is a property
of the source code, defined as the degree of
internal complexity of the source code units of
which the program consists (Table 3).</p>
      <p>Complexity per program unit is then
aggregated by determining the share of lines of
code at each risk level as a percentage. That is,
if the program consists of 20,000 lines of code,
and the sum of the lines of code of the program
units with a high complexity risk is 1,000 lines,
the aggregate value for the high-risk category
will be 5% (Table 4).</p>
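      <p>The aggregation step described above can be sketched as follows; the complexity bands used by the stand-in classifier are assumed illustrative values, since the actual bands are those given in Table 3.</p>

```python
def risk_profile(units):
    """Aggregate per-unit complexity into a risk profile: the percentage
    of total LOC that sits in units of each risk level.

    `units` is a list of (loc, cyclomatic_complexity) pairs."""
    def category(cc):
        # Assumed illustrative complexity bands (Table 3 holds the real ones).
        if cc <= 10:
            return "low"
        if cc <= 20:
            return "moderate"
        if cc <= 50:
            return "high"
        return "very high"

    total = sum(loc for loc, _ in units)
    profile = {"low": 0, "moderate": 0, "high": 0, "very high": 0}
    for loc, cc in units:
        profile[category(cc)] += loc
    return {k: 100.0 * v / total for k, v in profile.items()}

# The worked example from the text: 20,000 LOC in total, 1,000 of which
# sit in high-risk units, giving 5% for the high-risk category.
print(risk_profile([(19_000, 5), (1_000, 30)]))
# {'low': 95.0, 'moderate': 0.0, 'high': 5.0, 'very high': 0.0}
```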
      <sec id="sec-4-3">
        <title>Ranking of Risk Profiles</title>
        <p>[Table 4: ranks ++ / + / o / – / –– defined by
the maximum relative amount of lines of code
(LOC), %, allowed in each complexity risk
category.]</p>
        <p>Thus, for example, if the rank of the
program is defined as “+”, the amount of lines
of code with very high risk does not exceed 5%,
with high risk 15%, and 50% of lines of code
are within the limits of moderate risk.</p>
        <p>Duplication (or cloning of code) reduces the
cognitive perception of the program, hampers
the making of changes, and needlessly increases
the size of the program. It is defined as the
repetition of a block of code of more than 6
lines, where spaces at the beginning of the lines
are not taken into account when determining a
repetition (Table 5).</p>
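        <p>A rough sketch of this duplication check scans the file with 6-line windows, since any repeated block longer than 6 lines necessarily contains a repeated 6-line window; stripping leading whitespace mirrors the definition above. This is an illustrative simplification, not the model's reference implementation.</p>

```python
from collections import defaultdict

WINDOW = 6  # a repeated block longer than 6 lines always contains
            # a repeated 6-line window, so 6-line windows are scanned

def duplicated_lines(source):
    """Count lines that belong to a 6-line window occurring more than
    once in the file; leading whitespace is ignored, as in the model."""
    lines = [ln.lstrip() for ln in source.splitlines()]
    windows = defaultdict(list)
    for i in range(len(lines) - WINDOW + 1):
        windows[tuple(lines[i:i + WINDOW])].append(i)
    duplicated = set()
    for positions in windows.values():
        if len(positions) > 1:
            for i in positions:
                duplicated.update(range(i, i + WINDOW))
    return len(duplicated), len(lines)

# A 6-line block repeated once, the second copy indented: 12 of the
# 13 lines are flagged as duplicated.
block = "\n".join(f"stmt_{i}()" for i in range(6))
source = block + "\nunique()\n    " + block.replace("\n", "\n    ")
print(duplicated_lines(source))  # (12, 13)
```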
        <p>The size of the program unit is an important
indicator, because large program units require
more effort to maintain; this indicator also
indirectly reflects possible complexity. It is
defined as the number of lines of code (LOC),
followed by size categorization and ranking
similar to the definition of complexity per
program unit.</p>
        <p>Unit testing is calculated as the relative
coverage of the program by unit tests. This
practice is not static analysis and belongs to
dynamic code analysis.</p>
        <p>The overall assessment of the system is
calculated by averaging the indicators of each
property of the source code.</p>
        <p>For example: program size is rated as small
“++”, with very high complexity per program
unit “––”, high duplication and program unit
size “–”, and moderate testing “o”. Accordingly,
the analyzability and stability of such software
are average, while changeability and testability
are low, which averages out to a low “–”
maintainability assessment. More importantly,
these results support the determination of
causal relationships. Thus, in order to increase
maintainability, refactoring should be aimed at
program units of very high complexity in order
to reduce it, at reducing the size of the program
units, and at removing duplication.</p>
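        <p>The averaging in the example above can be reproduced as follows. The mapping of code properties to sub-characteristics is the commonly cited SIG mapping, assumed here only for illustration; with it, the example's ratings do average out to an overall “–”.</p>

```python
RANK = {"++": 2, "+": 1, "o": 0, "-": -1, "--": -2}
SYMBOL = {v: k for k, v in RANK.items()}

# Assumed property-to-sub-characteristic mapping (the commonly cited
# SIG mapping), used here only to reproduce the worked example.
MAPPING = {
    "analysability": ["volume", "duplication", "unit size", "unit testing"],
    "changeability": ["complexity", "duplication"],
    "stability":     ["unit testing"],
    "testability":   ["complexity", "unit size", "unit testing"],
}

def maintainability(properties):
    """Average property ranks into sub-characteristic scores, then into
    an overall symbolic maintainability rating."""
    scores = {sub: sum(RANK[properties[p]] for p in props) / len(props)
              for sub, props in MAPPING.items()}
    overall = sum(scores.values()) / len(scores)
    return scores, SYMBOL[round(overall)]

scores, overall = maintainability({
    "volume": "++", "complexity": "--", "duplication": "-",
    "unit size": "-", "unit testing": "o",
})
print(scores, overall)  # changeability -1.5, testability -1.0 -> overall '-'
```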
      </sec>
    </sec>
    <sec id="sec-5">
      <title>4. A Modified Delta Maintainability Model of Object-Oriented Software</title>
      <p>The SIG Maintainability Model (SIG-MM)
is not without certain disadvantages: it is based
on the measurement of the entire source code of
the software product, which causes weak
representativeness of the measurement results
when changes to the source code are minor.</p>
      <p>
        An example of this is entry #402331 in the
bug registration system of the Mozilla Rhino
software product [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. The specified defect was
fixed by commit #262602 [
        <xref ref-type="bibr" rid="ref15">15</xref>
          ], which, on the
SIG-MM measurement scale ranging from –5
to 5, received a rating of –0.007. This result
does not indicate any significant change in
maintainability, which is not true, because the
200 lines of code introduced by the change have
a significant negative impact on
maintainability. However, relative to the size of
the entire Mozilla Rhino code base against
which the change was evaluated, the indicator
yields an unrepresentative result [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
      </p>
      <p>
        The Delta Maintainability Model (DMM)
does not have the above shortcoming, but it
does not take into account the specifics of the
object-oriented programming paradigm. It is
intended to compare and analyze partial
changes to the source code rather than the
program as a whole. The
model integrates with version control systems,
which allows combining it with DevOps tools
for continuous assessment of the source code.
Risk ranking is based on the threshold values of
the Maintainability Model (SIG-MM) [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
      </p>
      <p>
        This article proposes a modification of the
delta Maintainability Model (DMM) by
expanding the measurable properties of the
source code, determining their relationship with
the sub-characteristics of maintainability
defined by ISO/IEC 25010:2016, and
determining measurement methods and
threshold values [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ].
      </p>
      <p>In what follows, the basic model is denoted
DMM and its resulting indicator DMMS; the
proposed model is denoted DMM+ and its
indicator, respectively, DMMS+.</p>
      <p>
        The calculation follows the procedure
established for the Delta
Maintainability Model (DMM) [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] with the
following changes in definitions:
      </p>
      <p>RC = {low, high};</p>
      <p>CP = {class Connectivity, class Difficulty,
method Difficulty, the Number of methods,
method Size, the Number of parameters,
Duplication, module Dependency}.</p>
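      <p>In the spirit of the definitions above, a delta score for a single commit can be sketched as the share of low-risk changed lines (RC = {low, high}) among all changed lines, computed per code property. The classifier and its threshold below are hypothetical stand-ins, not values from Table 6.</p>

```python
def delta_score(changed_lines, classify):
    """Share of low-risk changed lines among all changed lines of a
    commit; `classify` maps a line's features to 'low' or 'high'
    (RC = {low, high})."""
    total = low = 0
    for line in changed_lines:
        total += 1
        if classify(line) == "low":
            low += 1
    return low / total if total else 1.0

# Hypothetical classifier for a single property, method Size: a changed
# line is low-risk when its enclosing method has at most 15 LOC (an
# assumed threshold, not a value from Table 6).
classify = lambda line: "low" if line["method_loc"] <= 15 else "high"
commit = [{"method_loc": 10}, {"method_loc": 40},
          {"method_loc": 12}, {"method_loc": 8}]
print(delta_score(commit, classify))  # 0.75
```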
      <p>
        T. G. S. Filó and M. Bigonha [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]
conducted research on 111 software systems
and proposed threshold
values for object-oriented software metrics
(Table 6).
      </p>
      <p>Practical validation of the proposed model
was carried out by analyzing open source
software products implemented using the
object-oriented programming paradigm, with a
development process that uses version control
systems, a significant number of development
participants, and a long history of code
changes.</p>
      <p>Because analysis of the source code of
software products requires examining the
abstract syntax tree (AST), all software
products were selected with the requirement
that the object-oriented part be implemented in
a common programming language, in order to
simplify the implementation of the analysis
application.</p>
      <p>For the comparative analysis, six open
source software products of different functional
purposes, with the object-oriented part
implemented in the Python programming
language, were chosen.</p>
      <p>The scope of the analysis covers only
changes to files (modules) that contain
instructions in the Python programming
language and have the extension “*.py”;
changes to files with unit testing instructions
are not taken into account.</p>
      <p>To carry out the research and
measurements, a program with a command line
interface was implemented that measures the
DMM and DMM+ model indicators
simultaneously.</p>
      <p>
        According to the results of the analysis,
Pearson correlation coefficients ranging
from 0.77 to 0.86 demonstrate a strong positive
correlation between DMMS and DMMS+
values. Thus, the DMMS+ metrics, along with
DMMS, reflect source code changes that affect
maintainability. The values of the DMMS
indicators were confirmed and validated [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] in the
course of empirical research.
      </p>
      <p>Correlation of the DMMS and DMMS+
indicators according to the analysis of 1000
changes made in each repository:
1. Tensorflow r = 0.86.
2. Sentry r = 0.8.
3. Django r = 0.82.
4. Odoo r = 0.84.
5. Saleor r = 0.77.
6. Zulip r = 0.77.</p>
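      <p>The reported coefficients can be reproduced for any pair of score series with the standard Pearson formula; the two series below are hypothetical commit scores used only to exercise the calculation.</p>

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two score series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical DMMS and DMMS+ scores for five commits.
dmms = [0.9, 0.4, 0.7, 0.2, 0.8]
dmmsp = [0.8, 0.5, 0.6, 0.3, 0.9]
print(round(pearson(dmms, dmmsp), 2))  # 0.93
```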
      <p>Despite the positive correlation, the
indicators fluctuate in terms of the mean
absolute difference. Fig. 3 shows the mean
absolute difference between the DMMS and
DMMS+ indicators based on the results of the
analysis of 1000 changes made to each of the
repositories.</p>
    </sec>
    <sec id="sec-8">
      <title>5. Conclusions</title>
      <p>Summarizing the above considerations, this
work carried out an analysis of existing models
and metrics for measuring object-oriented
software and determined their advantages and
disadvantages.</p>
      <p>A modified delta maintainability model of
object-oriented software was proposed by
expanding the measurable properties of the
source code and determining their relationship
with the maintainability sub-characteristics
defined by ISO/IEC 25010:2016. Measurement
methods and threshold values were defined.
Practical validation of the proposed model was
carried out by analyzing open source software
products implemented using the object-oriented
programming paradigm, with a development
process that uses version control systems, a
significant number of development participants,
and a long history of code changes.</p>
      <p>The stability and effectiveness of measuring
changes in object-oriented software were
demonstrated by means of a comparative
analysis of changes made to the source code,
which makes it possible to measure
maintainability in processes built on the
methodological approaches of continuous
delivery and continuous integration. At the
same time, interpretation of the evaluation
results makes it possible to establish causal
relationships and eliminate shortcomings.</p>
    </sec>
    <sec id="sec-9">
      <title>6. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>C.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <source>The Economics of Software Maintenance in the Twenty First Century</source>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>V.</given-names>
            <surname>Grechaninov</surname>
          </string-name>
          , et al.,
          <article-title>Decentralized Access Demarcation System Construction in Situational Center Network</article-title>
          ,
          <source>in Workshop on Cybersecurity Providing in Information and Telecommunication Systems II</source>
          , vol.
          <volume>3188</volume>
          , no.
          <issue>2</issue>
          ,
          <year>2022</year>
          , pp.
          <fpage>197</fpage>
          -
          <lpage>206</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>V.</given-names>
            <surname>Buriachok</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Sokolov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Skladannyi</surname>
          </string-name>
          ,
          <article-title>Security Rating Metrics for Distributed Wireless Systems</article-title>
          ,
          <source>in 8th International Conference on “Mathematics. Information Tech-nologies. Education</source>
          ,” vol.
          <volume>2386</volume>
          ,
          <year>2019</year>
          , pp.
          <fpage>222</fpage>
          -
          <lpage>233</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Kipchuk</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          , et al.,
          <source>Investigation of Availability of Wireless Access Points based on Embedded Systems</source>
          , in IEEE International Scientific-Practical Conference Problems of Infocommunications,
          <source>Science and Technology</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>246</fpage>
          -
          <lpage>250</lpage>
          . doi: 10.1109/picst47496.2019.9061551.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <article-title>The Cost of Poor Software Quality in the US: A 2020 Report</article-title>
          , The Consortium for Information &amp; Software Quality (CISQ)
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Albrecht</surname>
          </string-name>
          , Measuring Application Development Productivity,
          <source>in Proceedings of the Joint SHARE, GUIDE, and IBM Application Development Symposium</source>
          ,
          <year>1979</year>
          , pp.
          <fpage>83</fpage>
          -
          <lpage>92</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>McCall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Richards</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Walters</surname>
          </string-name>
          ,
          <source>Factors in Software Quality</source>
          , vol. III,
          <article-title>Preliminary Handbook on Software Quality for an Acquisition Manager</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>V. R.</given-names>
            <surname>Basili</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. C.</given-names>
            <surname>Briand</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. L.</given-names>
            <surname>Melo</surname>
          </string-name>
          ,
          <article-title>A validation of Object-Oriented Design Metrics as Quality Indicators</article-title>
          ,
          <source>IEEE Transactions on Software Engineering</source>
          , vol.
          <volume>22</volume>
          , no.
          <issue>10</issue>
          ,
          <year>1996</year>
          , pp.
          <fpage>751</fpage>
          -
          <lpage>761</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9] ISO/IEC 9126-1:2001. Software Engineering.
          <source>Product Quality. Part 1: Quality Model</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J.</given-names>
            <surname>Bansiya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Davis</surname>
          </string-name>
          ,
          <article-title>A Class Cohesion Metric for Object-Oriented Designs</article-title>
          ,
          <source>J. Object-Oriented Programming</source>
          , vol.
          <volume>11</volume>
          , no.
          <issue>8</issue>
          ,
          <year>1999</year>
          , pp.
          <fpage>47</fpage>
          -
          <lpage>52</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>S.</given-names>
            <surname>Cohen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Nutt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Sagiv</surname>
          </string-name>
          ,
          <article-title>Deciding Equivalences Among Conjunctive Aggregate Queries</article-title>
          ,
          <source>J. ACM 54</source>
          ,
          <year>2007</year>
          . doi: 10.1145/1219092.1219093.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <source>ISO/IEC 25010:2011 Systems and Software Engineering. Systems and Software Quality Requirements and Evaluation. System and Software Quality Models</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <source>ISO/IEC 25010:2016 Systems and Software Engineering</source>
          .
          <article-title>Requirements for the Quality of Systems and Software Tools and its Evaluation (SQuaRE). Models of System and Software Quality (in Ukrainian)</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14] Rhino Graveyard.
          <source>Bug 402331</source>
          , https://bugzilla.mozilla.org/show_bug.cgi?id=402331.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] Mozilla / Rhino. Fix bug 402331, https://github.com/mozilla/rhino/commit/262602.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16] Software Productivity Research LLC,
          <article-title>Programming Languages Table, ver. 2006b</article-title>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>M.</given-names>
            <surname>di Biase</surname>
          </string-name>
          , et al.,
          <article-title>The Delta Maintainability Model: Measuring Maintainability of Fine-Grained Code Changes</article-title>
          ,
          <source>in 2019 IEEE/ACM International Conference on Technical Debt (TechDebt)</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>113</fpage>
          -
          <lpage>122</lpage>
          . doi: 10.1109/TechDebt.2019.00030.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>I.</given-names>
            <surname>Heitlager</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Kuipers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Visser</surname>
          </string-name>
          ,
          <article-title>A Practical Model for Measuring Maintainability</article-title>
          ,
          <source>in 6th International Conference on the Quality of Information and Communications Technology</source>
          ,
          <year>2007</year>
          , pp.
          <fpage>30</fpage>
          -
          <lpage>39</lpage>
          . doi: 10.1109/QUATIC.2007.8.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>M.</given-names>
            <surname>di Biase</surname>
          </string-name>
          , et al.,
          <source>The Delta Maintainability Model: Measuring Maintainability of Fine-Grained Code Changes Technical Report</source>
          , J. Cohen (Ed.), Special issue: Digital Libraries, vol.
          <volume>39</volume>
          ,
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>T. G. S.</given-names>
            <surname>Filó</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bigonha</surname>
          </string-name>
          .
          <article-title>A Catalogue of Thresholds for Object-Oriented Software Metrics</article-title>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>