CEUR Workshop Proceedings, Vol-2286, invited_paper_4: https://ceur-ws.org/Vol-2286/invited_paper_4.pdf
          A focus group for operationalizing software
          sustainability with the MEASURE platform
              Nelly Condori-Fernandez                         Alessandra Bagnato                          Eva Kern
          Universidade da Coruña, Spain                             Softeam                  Leuphana University Lueneburg
   Vrije Universiteit Amsterdam, The Netherlands                 Paris, France               Environmental Campus Birkenfeld
            n.condori.fernandez@udc.es                   alessandra.bagnato@softeam.fr                   Germany
             n.condori-fernandez@vu.nl                                                             eva.kern@leuphana.de



   Abstract—Measuring the sustainability of software products is still in the early stages of development. However, there are different approaches to assessing sustainability issues of software and its engineering, including metrics with a practical orientation as well as more theoretical models covering software sustainability. As one step towards bringing existing approaches together, this paper presents a focus group study conducted to find out to what extent the quality attributes related to technical sustainability can be measured by using the existing metrics available on the MEASURE platform. Our first results show that the extent of measurability varies across the software development phases. Functional correctness, robustness, maturity, and testability are the most measurable quality attributes.
   Index Terms—technical sustainability, measurement, focus group, software metrics

                      I. INTRODUCTION

   Assessment based on the notion of sustainability, as a software quality property, is still emerging and poorly understood [1]. Consequently, how software should be assessed against sustainability concerns is still immature, even though it is attracting increasing attention from both research and practice.
   This is especially the case when it comes to the technical sustainability of software. According to [2], [3], technical sustainability has the central objective of long-term usage of systems and their adequate evolution under changing surrounding conditions and respective requirements. However, so far, there is a knowledge gap on how to transfer theoretical knowledge into practical routines [4]–[7]. Here, software measurement can help in creating transparency about software properties and in providing information on sustainability issues of software to developers. Sustainability issues of software are discussed in more detail in [8]–[10]. Thus, in this paper, we concentrate on bringing together the metrics of a measurement platform, the MEASURE platform, and aspects of a Software Sustainability Model [11]. In doing so, we combine practical and scientific approaches to assessing the technical sustainability of software products.
   The paper is structured as follows: Section II presents the MEASURE platform and the Software Sustainability Model, which form the background of the focus group workshop. The design of the focus group, including a description of the participants, research questions, and methods, is introduced in Section III. Section IV discusses the validity of the research, before the results of our study are summarized and discussed in Section V.

                      II. BACKGROUND

A. The MEASURE platform
   The MEASURE ITEA3 consortium (Softeam R&D, 2017) [12] aims to develop a comprehensive set of tools for automated and continuous measurement over all stages of the software development life cycle (specification, design, development, implementation, testing, and production). It includes the development of better metrics and of ways to analyse the big data produced by continuous measurements, the validation of those metrics through the integration of the metrics and tools into running processes at various industrial partners, and the creation of decision support tools for project managers through the visualization of the collected data.
   The European MEASURE ITEA 3 project develops a framework of metrics [13], [14] bottom-up with a list of industry partners and integrates them into a systematic structure, helping to create a reference for companies to improve their measurement across all phases of the software development life cycle. MEASURE's work is based on the OMG's Structured Metrics Metamodel (SMM) [15]. The MEASURE platform consists of a web application that allows users to deploy, configure, collect, compute, store, combine, and visualize measurements by executing software measures that may be defined according to the SMM specification. The MEASURE project can develop a body of knowledge that shows software engineers why, how, and when to measure the quality of processes, products, and projects. Nowadays, an emergent quality property of software systems is sustainability. Although there is an urgent worldwide demand for innovative solutions and smart applications for a sustainable society, sustainability measurement and assessment remains a big challenge. The MEASURE project developed a set of 150 metrics related to different aspects of software engineering. With the work done within this paper and the focus group, we contribute to addressing sustainability from a multi-dimensional perspective on the entire software development life cycle. Figure 1 illustrates a typical dashboard in the MEASURE platform.
                                         Fig. 1: Dashboard of the MEASURE platform


B. The software sustainability-quality model
   Lago et al. [3] and Venters et al. [16] agree on defining software sustainability in terms of multiple and interdependent dimensions (e.g. economic, technical, social, environmental, individual). Several efforts have been made to define software sustainability in terms of quality requirements (e.g. [10], [16]–[19]). For instance, Condori-Fernandez and Lago [19] provided (i) a detailed characterization of each software sustainability dimension, which is a first step towards its respective operationalization, and (ii) a list of direct dependencies among the four sustainability dimensions: economic, technical, social, and environmental.
   The economic dimension aims to ensure that software-intensive systems can create economic value. It is taken care of in terms of budget constraints and costs as well as market requirements and long-term business objectives that get translated or broken down into requirements for the system under consideration. The social dimension aims to allow current and future generations to have equal and equitable access to resources in a way that preserves their socio-cultural characteristics and achieves a healthy and modern society. The environmental dimension seeks to avoid that software-intensive systems harm the environment they operate in. The technical dimension is concerned with supporting long-term use and appropriate evolution/adaptation of software-intensive systems in constantly changing execution environments. Based on these definitions, quality attributes (QA) that contribute to the corresponding sustainability dimensions of software-intensive systems were identified [19]. As a result of this characterization per sustainability dimension in terms of quality attributes, and of the identification of direct dependencies, a software sustainability-quality model was proposed, which can be found in [11].

                III. FOCUS GROUP STUDY DESIGN

A. Goal and research questions
   The goal of our focus group study, expressed according to the Goal/Question/Metric template, is as follows:
   Analyze metrics from the MEASURE platform and the Software Sustainability-Quality Model [11]
   for the purpose of operationalizing quality attributes that contribute to technical sustainability
   from the viewpoint of software engineers (researchers or practitioners)
   in the context of the MeGSuS workshop1.
   Our focus group study represents an early assessment exercise of the MEASURE platform. We define the following research question:
   RQ1: To what extent can the MEASURE platform be useful for measuring technical sustainability?
   To determine the potential usefulness of MEASURE for operationalizing the sustainability-quality attributes, we derived three specific questions from our research question and put them to our participants:
   RQ1.1: Do you agree with the contribution of the selected quality attributes as contributors to technical sustainability?
   RQ1.2: In which phase of the software development life cycle do you think it would be feasible to measure the list of quality attributes?

   1 http://eseiw2018.wixsite.com/megsus18
   RQ1.3: Which metrics from the MEASURE platform can be useful for measuring technical sustainability?

B. Participants
   For answering our research question, we considered it advisable that our participants should have strong competence in software measurement, as well as an interest in research topics related to software sustainability. Both criteria were satisfied by our eight participants, attendees of the MeGSuS workshop. Two of them were practitioners. All of them contributed to a workshop focusing on software measurement and thereby showed their interest in the topic.

C. Instrumentation and data collection
   The focus group study was organized in four small groups. To run the study, the following instrumentation was distributed among the groups:
   • Technical sustainability definition
   • List of quality attributes and corresponding definitions of the attributes
   • Metrics from the MEASURE platform, whose definitions were accessible via a wiki website2
   After reading and clarifying the definitions, the participants selected the phase of software development they felt most familiar with. Regarding our first two specific questions, verbal data was collected, whereas for our third question, a large sheet of paper containing a grid was used by each focus group.

Fig. 2: Matrix used for mapping selected metrics with the quality attributes (QA)

   As shown in Figure 2, participants used an "X" to represent the relation "M can measure QA" or "QA can be measured by M". Those "X" marks enclosed by a circle were used to identify a set of basic metrics that can measure a QA.
   Table I shows the twenty-two quality attributes of technical sustainability that were analyzed by our focus group participants.

TABLE I: Technical sustainability-quality attributes identified in [19]

      ID      Characteristics          Quality attributes
      QA1     Functional suitability   Functional correctness
      QA2     Compatibility            Interoperability
      QA3     Reliability              Availability
      QA4     Functional suitability   Functional appropriateness
      QA5     Satisfaction             Usefulness
      QA6     Reliability              Fault tolerance
      QA7     Maintainability          Modifiability
      QA8     Satisfaction             Trust
      QA9     Context coverage         Context completeness
      QA10    Effectiveness            Effectiveness
      QA11    Robustness               Robustness
      QA12    Portability              Adaptability
      QA13    Performance efficiency   Time behaviour
      QA14    Maintainability          Modularity
      QA15    Maintainability          Testability
      QA16    Reliability              Recoverability
      QA17    Compatibility            Coexistence
      QA18    Reliability              Maturity
      QA19    Efficiency               Efficiency
      QA20    Survivability            Survivability
      QA21    Performance efficiency   Capacity
      QA22    Security                 Integrity

D. Procedure
   As shown in Figure 3, the procedure of our focus group study involves the following four phases:
   1) Preparation phase: This phase has two objectives: i) to get a common understanding of what software sustainability means regarding the technical sustainability dimension, and ii) to decide which sustainability dimensions are going to be used in the next phase. This phase was carried out by the organizers of the focus group, consisting of one moderator and two assistants.
   After a discussion (held before running the focus group), and considering also the time allocated for this study as part of the MeGSuS workshop, the researchers decided to work with the technical sustainability dimension.
   The activities of the following phases were carried out during the focus group itself.
   2) Phase 1: What?: The objective of this phase is to validate the contribution of the corresponding QAs to the technical sustainability dimension. Thus, in this phase, participants answered RQ1.1. The moderator briefly introduced the motivation of the focus group and presented an overview of the sustainability-quality model as well as a plan of the activities to be carried out. The outcome of this phase is a list of selected QAs to be analyzed in the following phases. The average time taken for this phase was about 10 minutes.
   3) Phase 2: When?: The objective of this second phase is to discuss in which phases of the software life cycle the selected qualities could be measured. Thus, in this phase, based on the participants' experience, RQ1.2 was answered. The average time taken was about 5 minutes.
   4) Phase 3: How?: The objective of this third phase is to assess the usefulness of the metrics from the MEASURE platform. Thus, in this phase, participants answered RQ1.3.

   2 https://github.com/ITEA3-Measure/Measures/wiki
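The grid in Figure 2 is essentially a binary relation between metrics (M) and quality attributes (QA). As a minimal illustration (not part of the study's instrumentation; the metric names are hypothetical placeholders), such a mapping can be represented and inverted programmatically to count how many metrics cover each QA, which is the same aggregation the analysis in Section V relies on:

```python
# Sketch of the Fig. 2 grid: each metric maps to the set of QAs it was
# marked ("X") as able to measure. Metric names are hypothetical.
mapping = {
    "Requirements completeness": {"QA1", "QA4"},
    "Defect density":            {"QA1", "QA18"},
    "Test coverage":             {"QA15"},
}

def metrics_per_qa(mapping):
    """Invert the grid: for each QA, the set of metrics that can measure it."""
    inverted = {}
    for metric, qas in mapping.items():
        for qa in qas:
            inverted.setdefault(qa, set()).add(metric)
    return inverted

# Number of metrics covering each QA (the quantity plotted per phase later on).
counts = {qa: len(ms) for qa, ms in metrics_per_qa(mapping).items()}
```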
                                                    Fig. 3: Focus group procedure


   It took approximately 25 minutes. All participants of the four focus groups shared their mapping results, emphasizing the reasoning behind the mappings, pointing out difficulties in understanding the purpose of some metrics, and discussing open questions on the connections between the issues. In this phase, we were open to new metrics that could be suggested by the participants. However, due to time restrictions, this data was not collected.

                  IV. THREATS TO VALIDITY

   We identified the following threats to validity [20] of our study.
   • External validity. It is the ability to generalize the results from a sample to a population. As focus groups tend to use rather small, homogeneous samples, generalization is the main limitation of our study. Our study involved four mini-groups, with people from different countries, but most of them were researchers. To mitigate this threat, we are going to replicate this first focus group, involving more groups representing a diverse sample of people.
   • Internal validity. It is strengthened by a moderator providing an appropriate amount of guidance without introducing any of his/her own opinion or stifling free expression. In order to reduce this threat, the moderator used introductory material (PowerPoint slides) for contextualizing the focus group study.
   • Construct validity. It is concerned with whether the focus group is actually measuring what it is trying to measure. In our focus group, we focused on investigating the coverage and measurability aspects. By using two different existing approaches - one with a more practical orientation and one theoretical model - having a common focus, the direction of the focus group was specifically predefined. This ensured that the focus of the discussion was also set on the technical sustainability dimension.

                 V. RESULTS AND DISCUSSION

   In order to answer our main research question related to the usefulness of the MEASURE platform for measuring the technical sustainability dimension, we analyzed the data collected from each focus group (see matrix, Figure 6). The usefulness of the platform is analyzed regarding coverage and measurability aspects, which are discussed as follows.

A. Analyzing the coverage of the MEASURE platform
   Considering the total number of metrics available on the MEASURE platform [21], [22], [13], which are organized by software development phase, Table II shows the percentage of software metrics selected by the participants of each mini-focus group as useful for measuring any QA of the technical sustainability dimension. According to these results, we observe that all metrics available for the specification phase (100%) could be related to the corresponding QAs, whereas only 25% of the 51 metrics for the implementation phase were related.
   Next, we present the metrics selected by each mini-focus group.
   1) Metrics for the specification phase: Table III, "Selected metrics of the specification phase", shows the 10 selected specification phase metrics that were mapped with the QAs of the technical sustainability dimension.
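The coverage percentages discussed above (and tabulated in Table II) are simple ratios of selected to available metrics per phase; a minimal sketch reproducing them from the reported counts:

```python
# Per-phase coverage of the MEASURE metrics (Table II):
# selected metrics / total available metrics, rounded to whole percent.
selected = {"Specification": 10, "Design": 16, "Implementation": 13, "Testing": 15}
total    = {"Specification": 10, "Design": 35, "Implementation": 51, "Testing": 22}

coverage = {phase: round(100 * selected[phase] / total[phase]) for phase in selected}
# -> {"Specification": 100, "Design": 46, "Implementation": 25, "Testing": 68}
```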
TABLE II: Percentage of metrics selected by the focus group participants per development phase

   Phase            Number of          Total         Percentage
                    selected metrics   of metrics
   Specification    10                 10            100%
   Design           16                 35             46%
   Implementation   13                 51             25%
   Testing          15                 22             68%

   2) Metrics for the design phase: Table IV, "Selected metrics of the design phase", shows the 18 selected design phase metrics that were mapped with the QAs of the technical sustainability dimension.
   3) Metrics for the implementation phase: Table V, "Selected metrics of the implementation phase", shows the 13 selected implementation phase metrics that were mapped with the QAs of the technical sustainability dimension.
   4) Metrics for the testing phase: Table VI, "Selected metrics of the testing phase", shows the 21 selected testing phase metrics that were mapped with the QAs of the technical sustainability dimension.
   As shown in the matrix (Figure 2 in Appendix), the specification, design, implementation, and testing metrics were associated with the quality attributes presented in Table I.

B. Analyzing the measurability of the quality attributes
   Considering the twenty-two QAs of the technical sustainability dimension (see Table I), Figure 4 shows the percentages of QAs that can be measured by at least one of the selected metrics. Most of the QAs can be measured at the specification phase (82%, 18 of 22 QAs), followed by design (59%, 13 of 22 QAs), testing (32%, 7 of 22 QAs), and implementation (23%, 5 of 22 QAs).

Fig. 4: Percentage of measurable quality attributes per phase

   This indicates that most of the QAs related to technical sustainability can be qualified as measurable. In order to represent the extent of measurability for each development phase, we calculated the number of available metrics selected from the platform for measuring each QA (see Figure 5). According to these results, we observe that our participants found that functional correctness, robustness, and maturity can be measured by a good number of metrics at the testing phase (13 metrics). In the case of the specification phase, functional correctness and functional appropriateness in particular are of high importance, i.e. covered by many metrics (5 of 10 metrics). Efficiency is connected with most of the metrics of the design phase, while this quality attribute is not covered in the other phases analyzed. Overall, functional suitability is connected to many of the proposed metrics across the analyzed software development phases.

                     VI. CONCLUSIONS

   In this paper, we describe the results of the focus group designed to discuss "What, when and how to measure software sustainability". The authors organized the study within MeGSuS 2018: the 4th International Workshop on Measurement and Metrics for Green and Sustainable Software Systems [23].
   Through the focus group, we found a good number of metrics from the MEASURE platform that were selected as "potentially" useful for measuring quality attributes of the technical sustainability dimension along certain phases of the software development life cycle (i.e. specification, design, implementation, testing). This result provides evidence of the coverage of the MEASURE platform for the specification, design, implementation, and testing phases.
   Moreover, the study has also shown that most of the technical sustainability-quality attributes are measurable. The results are summarized in Figure 5, from which we highlight the following:
   • Metrics for the specification phase were distributed among the various QAs, with the most metrics related to QA1 and QA4.
   • Metrics for the design phase were distributed among the various QAs, with the most metrics related to QA19, QA15, and QA14.
   • Metrics for the implementation phase focus, according to the results of the focus group, on a limited number of QAs (QA15, QA7, QA22) related to the technical sustainability dimension. The subgroup considered that maintainability/testability, maintainability/modifiability, and security were the QAs associated with the highest number of implementation metrics (see Table V).
   • Metrics for the testing phase focus on a limited number of QAs (QA1, QA11, QA18) related to the technical sustainability dimension. The subgroup considered that functional suitability, robustness, and reliability were the QAs associated with the highest number of testing metrics (see Table VI).
   Functional correctness, robustness, maturity, and testability are the most measurable quality attributes considering the four phases. The focus group acknowledged that the technical sustainability dimension [19] could be operationalized by the metrics implemented in the MEASURE platform. To validate these results, we are going to replicate the study to find both similarities and differences.
                  Fig. 5: Measurability: Number of metrics per quality attribute related to technical sustainability
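The per-phase measurability percentages in Figure 4 follow the same pattern as the coverage figures: the share of the 22 technical sustainability QAs covered by at least one selected metric. A quick check, using the per-phase QA counts reported in Section V-B:

```python
# Measurability per phase (Fig. 4): share of the 22 technical
# sustainability QAs measurable by at least one selected metric.
measurable_qas = {"Specification": 18, "Design": 13, "Testing": 7, "Implementation": 5}
TOTAL_QAS = 22

measurability = {p: round(100 * n / TOTAL_QAS) for p, n in measurable_qas.items()}
best_phase = max(measurability, key=measurability.get)  # phase with widest QA coverage
```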


   Another item of future work for the team is to provide MEASURE visualization dashboards to support users in the evaluation of the technical sustainability of a given software artifact.

                     ACKNOWLEDGMENT

   We thank all the participants who took part in our focus group study: Jérôme Rocheteau from the Nantes site of the Institut catholique d'arts et métiers (Icam), Birgit Penzenstadler from California State University Long Beach, Shola Oyedeji from Lappeenranta University of Technology, Denisse Muñante from University of Bordeaux, Diogo Silveira Mendoa from Pontifical Catholic University of Rio de Janeiro, and Thibault Beziers la Fosse from Laboratoire des Sciences du Numérique de Nantes, Software Modeling Group (LS2N-NAOMOD).
   The research presented in this paper is partially funded by the ITEA3 Project no. 14009, MEASURE (started 1st December 2015 and running till 31st August 2019).

                       REFERENCES

 [1] P. Lago, "Software and sustainability [inaugural lecture]," http://dare.ubvu.vu.nl, Jan. 2016.
 [2] B. Penzenstadler and H. Femmer, "A generic model for sustainability with process- and product-specific instances," in Proceedings of the 2013 Workshop on Green in/by Software Engineering. ACM, 2013, pp. 3–8.
 [3] P. Lago, S. A. Koçak, I. Crnkovic, and B. Penzenstadler, "Framing sustainability as a property of software quality," Communications of the ACM, vol. 58, no. 10, pp. 70–78, 2015.
 [4] S. Selyamani and N. Ahmad, "Green computing: the overview of awareness, practices and responsibility among students in higher education institutes," J. Inf. Syst. Res. Innov, 2015.
 [5] R. Chitchyan, C. Becker, S. Betz, L. Duboc, B. Penzenstadler, N. Seyff,
 [7] C. Pang, A. Hindle, B. Adams, and A. E. Hassan, "What do programmers know about software energy consumption?" IEEE Software, vol. 33, no. 3, pp. 83–89, 2016.
 [8] C. Calero and M. Piattini, "Introduction to green in software engineering," in Green in Software Engineering. Springer, 2015, pp. 3–27.
 [9] L. M. Hilty and B. Aebischer, "ICT for sustainability: An emerging research field," in ICT Innovations for Sustainability. Springer, 2015, pp. 3–36.
[10] E. Kern, L. M. Hilty, A. Guldner, Y. V. Maksimov, A. Filler, J. Gröger, and S. Naumann, "Sustainable software products – towards assessment criteria for resource and energy efficiency," Future Generation Computer Systems, vol. 86, pp. 199–210, 2018.
[11] N. Condori-Fernandez and P. Lago, A Sustainability-Quality Model (version 1.0). VU Technical Report, Nov. 2018. [Online]. Available: https://research.vu.nl/en/publications/a-sustainability-quality-model-version-10
[12] Softeam R&D, "MEASURE project website," http://measure.softeam-rd.eu/, Oct. 2017, last accessed on 2018-11-01.
[13] A. Abherve, A. Bagnato, A. Stefanescu, and A. Baars, "GitHub project for the MEASURE platform," https://github.com/ITEA3-Measure/MeasurePlatform/graphs/contributors, Sep. 2017, last accessed on 2018-11-01.
[14] A. Abherve, "GitHub project for the SMM Measure API library," https://github.com/ITEA3-Measure/SMMMeasureApi, Aug. 2017, last accessed on 2018-11-01.
[15] Object Management Group, "The Software Metrics Meta-Model Specification 1.1.1," http://www.omg.org/spec/SMM/1.1.1/, Apr. 2016, last accessed on 2018-11-01.
[16] C. Venters, L. Lau, M. Griffiths, V. Holmes, R. Ward, C. Jay, C. Dibsdale, and J. Xu, "The blind men and the elephant: Towards an empirical evaluation framework for software sustainability," Journal of Open Research Software, vol. 2, no. 1, 2014.
[17] C. Calero, M. Á. Moraga, and M. F. Bertoa, "Towards a software product sustainability model," CoRR, vol. abs/1309.1640, 2013. [Online]. Available: http://arxiv.org/abs/1309.1640
[18] A. Raturi, B. Penzenstadler, B. Tomlinson, and D. Richardson, "Developing a sustainability non-functional requirements framework," in Proceedings of the 3rd International Workshop on Green and Sustainable Software, ser. GREENS 2014. New
     and C. C. Venters, “Sustainability design in requirements engineering:           York, NY, USA: ACM, 2014, pp. 1–8. [Online]. Available:
     state of practice,” in Proceedings of the 38th International Conference          http://doi.acm.org/10.1145/2593743.2593744
     on Software Engineering Companion. ACM, 2016, pp. 533–542.                  [19] N. Condori-Fernandez and P. Lago, “Characterizing the contribution
 [6] I. Manotas, C. Bird, R. Zhang, D. Shepherd, C. Jaspan, C. Sadowski,              of quality requirements to software sustainability,” Journal of Systems
     L. Pollock, and J. Clause, “An empirical study of practitioners’ perspec-        and Software, vol. 137, pp. 289 – 305, 2018. [Online]. Available:
     tives on green software engineering,” in Software Engineering (ICSE),            http://www.sciencedirect.com/science/article/pii/S0164121217302984
     2016 IEEE/ACM 38th International Conference on. IEEE, 2016, pp.             [20] R. A. Krueger, Focus groups : a practical guide for applied
     237–248.                                                                         research / Richard A. Krueger ; foreword by Michael Quinn Patton.
     Sage Publications Newbury Park, Calif, 1988. [Online]. Available:
     http://www.loc.gov/catdir/enhancements/fy0654/87033413-t.html
[21] A. Abherve and A. Bagnato, “Repository of measures specification in SMM,” https://github.com/ITEA3-Measure/Measures/wiki, Aug. 2017, last accessed on 2018-12-01.
[22] A. Bagnato and A. Abherve, “Repository of measure implementations,”
     https://github.com/ITEA3-Measure/Measures, Aug. 2017, last accessed
     on 2018-12-01.
[23] A. Bagnato, N. Condori-Fernandez, and E. Kern, “4th Workshop on Measurement and Metrics for Green and Sustainable Software Systems (MeGSuS18), October 9, 2018, Oulu, Finland,” http://eseiw2018.wixsite.com/megsus18, Aug. 2018, last accessed on 2018-12-01.
                                                         APPENDIX

Fig. 6: Mapping between quality attributes related to the technical sustainability dimension and metrics from the MEASURE platform. (X = “the metric can measure the quality attribute”, ? = “the connection needs to be discussed”)
               TABLE III: Selected metrics of the specification phase

ID    Short name                                        Description
SM1   Number of Requirements                            Total number of requirements defined in the selected scope.
SM2   Number of Tests                                   Total number of tests defined in the selected scope.
SM3   Requirements Satisfaction Quality Indice          Percentage of requirements that have been satisfied.
SM4   Requirement Traceability To Implementation Indice Percentage of requirements that are traced to an implementation.
SM5   Requirement Coverage Indice                       The average number of requirements tracing an architecture model.
SM6   Requirement Complexity Indice                     The average number of sub-requirements defined to refine an existing requirement.
SM7   Number Of Risks                                   Total number of risks defined in the selected scope.
SM8   Number Of Business Rules                          Total number of business rules defined in the selected scope.
SM9   Number Of Goals                                   Total number of goals defined in the selected scope.
SM10  Requirement Traceability To Test Indice           Percentage of requirements tracing a test model.
                    TABLE IV: Selected metrics of the design phase

ID    Short name                        Description
DM1   Class Complexity Index            The number of direct subclasses of a class. A class implementing an interface counts as a direct child of that interface.
DM2   Package Dependencies Ratio        The average number of dependencies from a package.
DM3   Number of Methods                 Total number of methods defined in the selected scope.
DM4   Software Component Decomposition  The number of software components identified in an application architecture.
DM5   Number of Classes                 Total number of classes in the selected scope.
DM6   Number of Interfaces              Total number of interfaces in the selected scope.
DM7   Number of Methods                 Total number of methods defined in the selected scope.
DM8   Number of Components              Total number of components defined in the selected scope.
DM9   Number of Packages                Total number of packages defined in the selected scope.
DM10  Class Dependency Ratio            The average number of dependencies from a class.
DM11  Package Dependency Ratio          The average number of dependencies from a package.
DM12  Model Abstractness Index          The percentage of abstract classes (and interfaces) divided by the total number of types in a package.
DM13  Number of Fields                  Total number of fields defined in the selected scope.
DM14  Number of Use Cases               Total number of use cases defined in the selected scope.
DM15  Number of Actors                  Total number of actors defined in the selected scope.
DM16  Number of Component Types         Total number of interface component types in the Java Modelio model. Along with the total number of data types, this metric provides an idea of the functional richness of the modelled application.
DM17  Number of Aggregated Components   Total number of aggregated components in the Java Modelio model. Along with the number of composed components, this metric reflects the usage of software decomposition in the modelled application.
DM18  Number of Composed Components     Counts the number of interfaces annotated @ComposedComponent in the Java model.
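The Model Abstractness Index (DM12) is a simple ratio of abstract types to all types in a package. A minimal sketch of the computation follows; the function and parameter names are ours for illustration, not the MEASURE platform's API.

```python
# Hypothetical helper computing the Model Abstractness Index (DM12):
# abstract classes and interfaces over the total number of types in a package.
def abstractness_index(n_abstract_classes: int, n_interfaces: int, n_types: int) -> float:
    """Return the share of abstract types in a package, as a percentage."""
    if n_types == 0:
        return 0.0  # avoid division by zero for an empty package
    return 100.0 * (n_abstract_classes + n_interfaces) / n_types

# Example: a package with 2 abstract classes, 3 interfaces, 10 types in total.
print(abstractness_index(2, 3, 10))  # 50.0
```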
             TABLE V: Selected metrics of the implementation phase

ID    Short name                                  Description
IM1   Cognitive Complexity                        Defines how hard it is to understand the code's control flow.
IM2   Security Rating                             Defined as A = 0 vulnerabilities, B = at least 1 minor vulnerability, C = at least 1 major vulnerability, D = at least 1 critical vulnerability, E = at least 1 blocker vulnerability.
IM3   Security remediation effort on new code     Effort to fix all vulnerability issues found on the code changed in the leak period.
IM4   Security remediation effort                 Effort to fix all vulnerability issues.
IM5   Reliability rating                          A = 0 bugs, B = at least 1 minor bug, C = at least 1 major bug, D = at least 1 critical bug, E = at least 1 blocker bug.
IM6   Reliability remediation effort              Effort to fix all bug issues.
IM7   Reliability remediation effort on new code  Effort to fix all bug issues found on the code changed in the leak period.
IM8   File complexity                             Average complexity by file.
IM9   Code Smells                                 Number of code smells.
IM10  New issues by severity                      Number of new issues with severity (blocker, critical, major, minor).
IM11  New Issues                                  Number of new issues.
IM12  Maintainability rating                      Rating given to the project based on the value of its Technical Debt Ratio.
IM13  Technical debt                              Effort to fix all maintainability issues.
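The letter ratings in IM2 and IM5 are driven by the single worst issue present. The following sketch illustrates that mapping; the function name, severity labels as input, and list-based interface are our assumptions, not the MEASURE platform's implementation.

```python
# Illustrative mapping from the worst issue severity to an A-E rating,
# following the rating scheme described for IM2 (vulnerabilities) and IM5 (bugs).
SEVERITY_TO_RATING = {
    "blocker": "E",
    "critical": "D",
    "major": "C",
    "minor": "B",
}

def issue_rating(severities):
    """Return the rating implied by the worst severity present; 'A' means no issues."""
    for severity in ("blocker", "critical", "major", "minor"):  # worst first
        if severity in severities:
            return SEVERITY_TO_RATING[severity]
    return "A"

print(issue_rating([]))                  # A
print(issue_rating(["minor", "major"]))  # C: a major issue outweighs a minor one
```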
                                 TABLE VI: Selected metrics of the testing phase

ID    Metric                            Description
TM1   Condition Coverage                On each line of code containing boolean expressions, condition coverage answers the question: “Has each boolean expression been evaluated both to true and false?” It is the density of possible conditions in flow control structures that have been followed during unit test execution.
TM2   Condition Coverage On New Code    Same as Condition Coverage, restricted to new / updated source code.
TM3   Condition Coverage Hits           List of covered conditions.
TM4   Conditions By Line                Number of conditions by line.
TM5   Covered Conditions By Line        Number of covered conditions by line.
TM6   Coverage                          A mix of Line coverage and Condition coverage. Its goal is to provide an even more accurate answer to the question: “How much of the source code has been covered by the unit tests?”
TM7   Coverage On New Code              Same as Coverage, restricted to new / updated source code.
TM8   Line Coverage                     On a given line of code, line coverage answers the question: “Has this line of code been executed during the execution of the unit tests?” It is the density of lines covered by unit tests.
TM9   Line Coverage On New Code         Same as Line Coverage, restricted to new / updated source code.
TM10  Line Coverage Hits                List of covered lines.
TM11  Lines To Cover                    Number of lines of code which could be covered by unit tests.
TM12  Lines To Cover On New Code        Number of lines of code which could be covered by unit tests, restricted to new / updated source code.
TM13  Skipped Unit Tests                Number of skipped unit tests.
TM14  Uncovered Conditions              Number of conditions which are not covered by unit tests.
TM15  Uncovered Conditions On New Code  Number of conditions which are not covered by unit tests, restricted to new / updated source code.
TM16  Uncovered Lines On New Code       Number of lines which are not covered by unit tests, restricted to new / updated source code.
TM17  Unit Tests                        Number of unit tests.
TM18  Unit Tests Duration               Time required to execute all the unit tests.
TM19  Unit Test Errors                  Number of unit tests that have failed.
TM20  Unit Test Failures                Number of unit tests that have failed with an unexpected exception.
TM21  Unit Test Success Density Percent Test success density = (Unit tests - (Unit test errors + Unit test failures)) / Unit tests * 100