=Paper= {{Paper |id=Vol-2843/shortpaper36 |storemode=property |title=Automation of control over the formation of skills in the development of software documentation using a group expert assessment (short paper) |pdfUrl=https://ceur-ws.org/Vol-2843/shortpaper036.pdf |volume=Vol-2843 |authors=Ivan S. Polevshchikov }} ==Automation of control over the formation of skills in the development of software documentation using a group expert assessment (short paper)== https://ceur-ws.org/Vol-2843/shortpaper036.pdf
    Automation of control over the formation of skills in the
     development of software documentation using a group
                      expert assessment *

                                       Ivan S. Polevshchikov1,2
    1
      Perm National Research Polytechnic University, 29, Komsomolsky prospekt, Perm, 614990,
                                       Russian Federation
    2
      Moscow State University of Food Production, 11, Volokolamskoe shosse, Moscow, 125080,
                                       Russian Federation
                             i.s.polevshchikov@gmail.com



          Abstract. The article is devoted to the development of e-learning tools and
          distance learning technologies for the training of IT specialists. A methodology
          for the group expert assessment of software documentation quality has been
          developed. The methodology makes it possible, on the basis of mathematical
          methods, to monitor the formation of skills in IT specialists. Building on the
          methodology, a prototype subsystem for group expert assessment of software
          documentation quality has been developed for an automated system that
          processes information on the development of competencies in the training of IT
          specialists. Using the subsystem reduces the labor intensity of the experts'
          work. The developed methodology and subsystem can be used: to monitor the
          formation of professional skills of trainees during training in educational
          organizations and during in-house training in IT companies and IT departments
          of enterprises; and to assess the quality of real-world software documentation
          tasks performed by novice IT specialists.

          Keywords: IT specialist, Software documentation, Knowledge and skills con-
          trol, Group peer review, Automated training systems.


1         Introduction

Creating documentation is an integral part of the life cycle of complex software
system development [1-2]. The quality of the documentation produced at each stage
of the life cycle affects the execution of subsequent stages and the outcome of
software product development as a whole.
   Various criteria are used to assess the quality of software documentation (examples
of which are requirements for software systems, test cases, defect reports, etc. [1]),
with the aim of subsequently eliminating deficiencies.


*
    Copyright © 2021 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
An IT specialist responsible for developing software documentation should know
these criteria and be able to apply them in the execution of software projects.
   Accordingly, professional training or retraining of IT specialists should provide for
the formation and control of this knowledge and these skills. In particular, previous
studies [3] proposed a methodology for assessing, by a single expert, the quality of
exercises in the development of test documentation.
   An urgent task is to extend this methodology to the assessment by several experts
of exercises in the development of program documentation. For example, several
experts may be involved: in an educational institution, in assessing Olympiad tasks on
the development of program documentation; or at an enterprise, in assessing the
documentation prepared by a novice IT specialist (the experts can be the most
qualified specialists). Involving several experts makes it possible to take into account
the opinions of different specialists about the quality of the document created by the
trainee.
   The results of research aimed at solving this problem are presented below.


2      Analysis of methods for assessing the quality of software
       documentation

The works of some authors present the desired properties of software documentation.
These properties can be used as criteria for assessing the quality of its compilation.
For example, Kulikov S. highlights the following properties of test documentation [1]:
─ for software requirements: completeness, atomicity, consistency, continuity, un-
  ambiguity, feasibility, relevance, traceability, modifiability, ranking, verifiability;
─ for test cases: correct technical language; balance between specificity and general-
  ity; balance between simplicity and complexity; ensuring a high probability of er-
  ror detection; sequence of actions to achieve a single goal; lack of unnecessary ac-
  tions; non-redundancy in relation to other test cases; the ability to most clearly
  demonstrate the identified error; traceability; possibility of reuse; compliance with
  accepted design templates and company traditions;
─ for defect reports: all fields filled with accurate and correct information; correct
  technical language; specificity in the description of the steps; no unnecessary
  actions or overly long descriptions of actions; no duplicates; obviousness and
  clarity; traceability; a separate report for each new defect; compliance with
  accepted design templates and company traditions.

   Orlov S. in [2] identifies the following properties of detailed requirements for
software systems: traceability, testability, unambiguity, priority, completeness,
consistency.
    This review shows that the criteria are largely similar for different types of
software documentation. In addition, each organization may use its own assessment
criteria.
    The use of computer technologies in the training of IT specialists, and in particular
of automated systems for assessing knowledge and skills, makes it possible to
improve the process of forming professional competencies [4-10]. Based on the
results of previous studies [3], a methodology for expert assessment (by a single
expert) of the quality of exercises in the development of software (in particular, test)
documentation was developed. This methodology serves as the basis for developing
software modules of an automated system for the control of knowledge and skills in
the training of IT specialists.
    Based on the analysis of criteria for assessing software documentation quality, of
existing approaches to automating the control of professional knowledge and skills,
and of modern mathematical methods [3; 11], it is proposed to extend the previously
created methodology [3] to the problem of document assessment by a group of
experts (specialists).


3      Methodology for group expert assessment of software
       documentation in the training of IT specialists

Let us consider the created methodology for the group expert assessment of software
documentation in the training of IT specialists, using the example of evaluating a
practical task on the development of test documentation when training beginner IT
specialists at an enterprise. The proposed methodology includes the following steps:
Step No. 1. A group of experts prepares a practical task (exercise) for subsequent
completion by trainees. Consider an example in which the exercise is to develop a
defect report for a program.
   During the preparation of the task, it is necessary to determine a set of quality
indicators for assessing its completion: $A = \{a_i \mid i = \overline{1, N_{ind}}\}$, where $a_i$ is an
individual quality indicator and $N_{ind}$ is the total number of indicators.
   For the example under consideration, we choose 5 quality indicators based on the
recommendations presented in [1] for creating defect reports: $a_1$ – filling of all fields
with accurate and correct information; $a_2$ – correct technical language; $a_3$ – the
specificity of the description of the steps; $a_4$ – absence of unnecessary actions and
overly long descriptions of actions; $a_5$ – traceability.
Step No. 2. The weights $w_i$ of the quality indicators (showing the significance of each
indicator $a_i$ in assessing the performance of the task) are determined by a group of
$N_{exp}$ experts using the following algorithm (based on the direct assessment method
[3; 11]):
   2.1. Each $j$-th expert ($j = \overline{1, N_{exp}}$) assigns to the $i$-th quality indicator an
assessment of its significance $b_{ji}$, measured on a certain scale (for example, a
10-point scale). The result is a matrix $B = (b_{ji})$.
   For example, as a result of the evaluation of the $N_{ind} = 5$ indicators described
above by $N_{exp} = 3$ experts, the following matrix was obtained:

$$B = \begin{pmatrix} 10 & 10 & 9 & 4 & 6 \\ 3 & 7 & 1 & 4 & 1 \\ 2 & 1 & 6 & 1 & 10 \end{pmatrix}$$
   2.2. The formula $w_{ji} = b_{ji} \Big/ \sum_{g=1}^{N_{ind}} b_{jg}$ calculates the weight of the $i$-th quality
indicator based on the assessment of the $j$-th expert. The result is a matrix of weights
$W = (w_{ji})$. In particular, from the matrix $B$ in the example above we get:

$$W = \begin{pmatrix} 0.2564 & 0.2564 & 0.2308 & 0.1026 & 0.1538 \\ 0.1875 & 0.4375 & 0.0625 & 0.2500 & 0.0625 \\ 0.1000 & 0.0500 & 0.3000 & 0.0500 & 0.5000 \end{pmatrix}$$
                                                                  
   2.3. The initial values of the competence coefficients are set the same for each
$j$-th expert (at iteration $t = 0$): $k_j^0 = 1/N_{exp} = 0.3333$.
   2.4. Go to the next iteration (increase t by 1).
   2.5. The group assessment of the weight of each $i$-th quality indicator at the $t$-th
iteration is calculated: $w_i^t = \sum_{j=1}^{N_{exp}} w_{ji} k_j^{t-1}$. Here $\sum_{i=1}^{N_{ind}} w_i^t = 1$.

   2.6. The normalization factor is calculated: $\lambda^t = \sum_{i=1}^{N_{ind}} \sum_{j=1}^{N_{exp}} w_i^t w_{ji}$.

   2.7. The expert competence coefficients are calculated:

   $k_j^t = \frac{1}{\lambda^t} \sum_{i=1}^{N_{ind}} w_i^t w_{ji}$ for $j = \overline{1, N_{exp} - 1}$;

   $k_j^t = 1 - \sum_{v=1}^{N_{exp}-1} k_v^t$ for $j = N_{exp}$ (according to the normalization condition $\sum_{j=1}^{N_{exp}} k_j^t = 1$).

   2.8. The condition $\max_i \lvert w_i^t - w_i^{t-1} \rvert \le \varepsilon$ is checked, where $\varepsilon$ is a prescribed
calculation accuracy (for example, $\varepsilon = 0.001$ [11]). If the condition holds, the process
of finding the group assessments of the weights ends; otherwise, return to step 2.4.
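Steps 2.3–2.8 form a fixed-point iteration between the group weights and the expert competence coefficients. A minimal NumPy sketch of these steps (the function name and the `max_iter` safeguard are my own additions, not from the paper):

```python
import numpy as np

def group_weights(W, eps=0.001, max_iter=100):
    """Iteratively estimate group indicator weights and expert competence
    coefficients from the per-expert weight matrix W (N_exp x N_ind),
    following steps 2.3-2.8 of the methodology."""
    n_exp, n_ind = W.shape
    k = np.full(n_exp, 1.0 / n_exp)        # step 2.3: equal initial competences
    w_prev = np.zeros(n_ind)
    for _ in range(max_iter):
        w = k @ W                          # step 2.5: group weights w_i^t
        lam = np.sum(w * W)                # step 2.6: normalization factor
        k = (W @ w) / lam                  # step 2.7: competence coefficients
        k[-1] = 1.0 - k[:-1].sum()         # last one via the normalization condition
        if np.max(np.abs(w - w_prev)) <= eps:  # step 2.8: stopping rule
            break
        w_prev = w
    return w, k

# The per-expert weight matrix W from the step 2.2 example.
W = np.array([
    [0.2564, 0.2564, 0.2308, 0.1026, 0.1538],
    [0.1875, 0.4375, 0.0625, 0.2500, 0.0625],
    [0.1000, 0.0500, 0.3000, 0.0500, 0.5000],
])
w, k = group_weights(W)
print(np.round(w, 4), np.round(k, 4))
```

The returned group weights sum to 1, and the competence coefficients reward experts whose individual weights lie close to the group consensus.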
Step No. 3. A group of $N_{stud}$ trainees (for example, $N_{stud} = 4$) performs the practical
task in the allotted time.
Step No. 4. Each expert checks the completed practical task. As a result, we obtain a
set of matrices $D = \{D_j \mid j = \overline{1, N_{exp}}\}$, where $D_j = (d_{jqi})$ and $d_{jqi} \in [0; 1]$ is the
assessment by the $j$-th expert of the task performed by the $q$-th trainee
($q = \overline{1, N_{stud}}$) according to the $i$-th quality indicator. An example of the matrix
with the assessments of the first expert:

$$D_1 = \begin{pmatrix} 0.90 & 0.75 & 0.75 & 0.50 & 0.80 \\ 0.50 & 0.30 & 0.65 & 0.40 & 0.50 \\ 0.75 & 0.60 & 0.40 & 0.60 & 0.75 \\ 0.25 & 0.25 & 0.50 & 0.40 & 0.50 \end{pmatrix}$$
Step No. 5. Based on the spread of the assessments given by the experts at step 4 for
each quality indicator, the generalized weights $\tilde{w}_{ji}$ of the quality indicators are
calculated. The sequence of actions for calculating these weights:
   5.1. The average assessments for each $i$-th quality indicator are calculated,
yielding the matrix $D_{avg} = (\bar{d}_{ji})$ (here $j = \overline{1, N_{exp}}$, $i = \overline{1, N_{ind}}$), where
$\bar{d}_{ji} = \frac{1}{N_{stud}} \sum_{q=1}^{N_{stud}} d_{jqi}$.

   5.2. The scatter values for each $i$-th quality indicator are calculated, yielding the
matrix $R = (R_{ji})$, where $R_{ji} = \dfrac{\sum_{q=1}^{N_{stud}} \lvert d_{jqi} - \bar{d}_{ji} \rvert}{N_{stud} \cdot \bar{d}_{ji}}$.
   5.3. The sums of the scatter values are calculated, yielding the matrix
$R_{sum} = (R_j)$, where $R_j = \sum_{i=1}^{N_{ind}} R_{ji}$.
   5.4. The weights $w'_{ji}$ based on the scatter of the assessments obtained at step 4
are calculated, yielding the matrix $W' = (w'_{ji})$, where $w'_{ji} = R_{ji} / R_j$.
   5.5. The generalized weights of the quality indicators themselves are calculated,
yielding the matrix $\widetilde{W} = (\tilde{w}_{ji})$, where $\tilde{w}_{ji} = \alpha w_i + \beta w'_{ji}$. Here $\alpha$ and $\beta$ are the
significance coefficients of the weights $w_i$ and $w'_{ji}$, respectively; for example,
$\alpha = \beta = 0.5$ [3, 11].
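Steps 5.1–5.5 can be sketched as follows. Only the matrix $D_1$ comes from the paper's example; the other two expert matrices and the group weights `w_group` are invented here purely for illustration:

```python
import numpy as np

# Assessments d_jqi: N_exp = 3 experts x N_stud = 4 trainees x N_ind = 5
# indicators. D[0] reproduces the example matrix D_1; D[1] and D[2] are
# made-up matrices for the other two experts.
D = np.array([
    [[0.90, 0.75, 0.75, 0.50, 0.80],
     [0.50, 0.30, 0.65, 0.40, 0.50],
     [0.75, 0.60, 0.40, 0.60, 0.75],
     [0.25, 0.25, 0.50, 0.40, 0.50]],
    [[0.85, 0.70, 0.80, 0.55, 0.75],
     [0.45, 0.35, 0.60, 0.45, 0.55],
     [0.70, 0.55, 0.45, 0.55, 0.70],
     [0.30, 0.20, 0.45, 0.35, 0.45]],
    [[0.95, 0.80, 0.70, 0.45, 0.85],
     [0.55, 0.25, 0.70, 0.35, 0.45],
     [0.80, 0.65, 0.35, 0.65, 0.80],
     [0.20, 0.30, 0.55, 0.45, 0.55]],
])

d_avg = D.mean(axis=1)                            # step 5.1: averages over trainees
R = np.abs(D - d_avg[:, None, :]).sum(axis=1) / (D.shape[1] * d_avg)  # step 5.2
R_sum = R.sum(axis=1, keepdims=True)              # step 5.3: row sums R_j
w_scatter = R / R_sum                             # step 5.4: scatter-based weights
w_group = np.array([0.18, 0.26, 0.19, 0.13, 0.24])  # group weights w_i (illustrative)
alpha = beta = 0.5
w_gen = alpha * w_group + beta * w_scatter        # step 5.5: generalized weights

print(np.round(w_gen, 4))
```

Since the group weights and each row of scatter-based weights both sum to 1, each row of generalized weights also sums to 1 when α + β = 1.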
Step No. 6. The complex assessments of the task performed by each $q$-th trainee are
calculated (for each $j$-th expert), yielding the matrix $D_{cmp} = (L_{jq})$, where
$L_{jq} = \sum_{i=1}^{N_{ind}} \tilde{w}_{ji} d_{jqi}$.
Step No. 7. The group assessments of the tasks performed by each $q$-th trainee are
calculated (based on the complex assessments of each $j$-th expert):
   7.1. The initial values of the competence coefficients are set the same for each
$j$-th expert (at iteration $h = 0$): $k_j^0 = 1/N_{exp} = 0.3333$.
   7.2. Go to the next iteration (increase $h$ by 1).
   7.3. The group assessment of each $q$-th trainee at the $h$-th iteration is calculated:
$L_q^h = \sum_{j=1}^{N_{exp}} L_{jq} k_j^{h-1}$.
   7.4. The normalization factor is calculated: $\lambda^h = \sum_{q=1}^{N_{stud}} \sum_{j=1}^{N_{exp}} L_q^h L_{jq}$.

   7.5. The expert competence coefficients are calculated:

   $k_j^h = \frac{1}{\lambda^h} \sum_{q=1}^{N_{stud}} L_q^h L_{jq}$ for $j = \overline{1, N_{exp} - 1}$;

   $k_j^h = 1 - \sum_{v=1}^{N_{exp}-1} k_v^h$ for $j = N_{exp}$ (according to the normalization condition $\sum_{j=1}^{N_{exp}} k_j^h = 1$).

   7.6. The condition $\max_q \lvert L_q^h - L_q^{h-1} \rvert \le \varepsilon$ is checked, where $\varepsilon$ is a prescribed
calculation accuracy (for example, $\varepsilon = 0.001$ [11]). If the condition holds, the process
of finding the group assessments of the trainees ends; otherwise, return to step 7.2.
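Steps 7.1–7.6 repeat the iteration of step 2, now over the complex assessments $L_{jq}$. A minimal NumPy sketch (the matrix of complex assessments below is illustrative, not from the paper):

```python
import numpy as np

def group_assessments(L, eps=0.001, max_iter=100):
    """Steps 7.1-7.6: alternate between group assessments of the trainees and
    expert competence coefficients, starting from equal competences.
    L is the N_exp x N_stud matrix of complex assessments L_jq from step 6."""
    n_exp, n_stud = L.shape
    k = np.full(n_exp, 1.0 / n_exp)            # step 7.1: equal initial competences
    L_prev = np.zeros(n_stud)
    for _ in range(max_iter):
        L_grp = k @ L                          # step 7.3: group assessments L_q^h
        lam = np.sum(L_grp * L)                # step 7.4: normalization factor
        k = (L @ L_grp) / lam                  # step 7.5: competence coefficients
        k[-1] = 1.0 - k[:-1].sum()             # last one via the normalization condition
        if np.max(np.abs(L_grp - L_prev)) <= eps:  # step 7.6: stopping rule
            break
        L_prev = L_grp
    return L_grp, k

# Illustrative complex assessments L_jq (3 experts x 4 trainees).
L = np.array([
    [0.74, 0.47, 0.62, 0.38],
    [0.71, 0.49, 0.59, 0.35],
    [0.77, 0.45, 0.66, 0.41],
])
L_grp, k = group_assessments(L)
print(np.round(L_grp, 3), np.round(k, 3))
```

The resulting group assessments weight each expert's opinion by how consistent it is with the emerging consensus; here the first trainee clearly outscores the fourth.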
    The higher the group assessment, the better the quality of the document.
    The functional requirements for the subsystem of the automated training system
(ATS) that implements the assessment of exercises according to the proposed
methodology are presented in the UML use case diagram in Figure 1 (based on
improvements to a similar diagram from [3]).
    As shown in Fig. 1, the leader of the expert team has access to all the functions of
a regular expert. In addition, the leader has functions for determining the group
assessments of the weights and of document quality, and for compiling a final list of
comments and recommendations for the trainee (taking into account the comments
and recommendations of the other experts). The subsystem simplifies many
labor-intensive calculations (use cases 4.1, 5.1, 8.1, 10.1); the final decision on the
grades is still made by a person. The group assessments of document quality (use
case 10) can be converted to a different scale.
    This methodology and the ATS subsystem can also be used, for example, in the
defense of coursework and final qualification works by university students (future IT
specialists). Moreover, in addition to the criteria assessing the quality of the
documentation itself, other criteria can be taken into account, for example, the quality
of the report and presentation.
    Fig. 1. Functional requirements for the ATS subsystem for group assessment of documents.


4        Conclusion
Thus, according to the results of the study, the following conclusions were made:

─ A methodology for the group expert assessment of software documentation quality
  has been developed. The methodology makes it possible, on the basis of
  mathematical methods, to monitor the formation of skills in IT specialists;
─ Building on the methodology, a prototype subsystem for group expert assessment
  of software documentation quality has been developed for an automated system
  that processes information on the development of competencies in the training of
  IT specialists. Using the subsystem reduces the labor intensity of the experts'
  work;
─ The developed methodology and subsystem can be used:
   • to monitor the formation of professional skills of trainees during training in
     educational organizations and during in-house training in IT companies and IT
     departments of enterprises;
   • to assess the quality of real-world software documentation tasks performed by
     novice IT specialists.


5        Acknowledgments

The research is supported by a scholarship of the President of the Russian Federation
for young scientists and post-graduate students (No. SP-100.2018.5), awarded by the
Grants Council of the President of the Russian Federation.
References
 1. Kulikov, S.S.: Software Testing. Basic Course. Four Quarters, Minsk (2017).
 2. Orlov, S.A.: Software Engineering. Textbook for Universities. 5th edn., updated and
    expanded. Third Generation Standard. Piter, St. Petersburg (2016).
 3. Fayzrakhmanov, R.A., Polevshchikov, I.S., Bobrova, I.A.: Improving the Process of Train-
    ing Specialists in the Development of Program Documentation Based on Automated As-
    sessment of the Quality of Skills Formation. In: Proceedings of the XVIII All-Russian
    Scientific and Practical Conference "Planning and Provision of Personnel Training for the
    Industrial and Economic Complex of the Region" (November 20-21, 2019), pp. 21–25.
    Publishing House of ETU "LETI", St. Petersburg (2019).
 4. Bouhnik, D., Carmi, G.: E-learning Environments in Academy: Technology, Pedagogy
    and Thinking Dispositions. Journal of Information Technology Education: Research, 11,
    201–219 (2012).
 5. Kovacic, Z., Green, J.: Automatic Grading of Spreadsheet and Database Skills. Journal of
    Information Technology Education: Innovations in Practice, 11, 53–70 (2012).
 6. Lisitsyna, L.S., Smetyuh, N.P., Golikov, S.P.: Models and Methods for Adaptive Man-
    agement of Individual and Team-Based Training Using a Simulator. IOP Conference Se-
    ries: Earth and Environmental Science, 66(1), 012010 (2017).
 7. Alshammari, M.T., Qtaish, A.: Effective Adaptive E-Learning Systems According to
    Learning Style and Knowledge Level. Journal of Information Technology Education: Re-
    search, 18, 529–547 (2019).
 8. Candel, C., Vidal-Abarca, E., Cerdán, R., Lippmann, M., Narciss, S.: Effects of timing of
    formative feedback in computer‐assisted learning environments. J Comput Assist Learn,
    36(5), 718–728 (2020).
 9. Chatwattana, P., Phadungthin, R.: Web-based virtual laboratory for the promotion of self-
    directed learning. Global Journal of Engineering Education, 21(2), 157–164 (2019).
10. Gero, A., Stav, Y., Wertheim, I., Epstein, A.: Two-tier multiple-choice questions as a
    means of increasing discrimination: case-study of a basic electric circuits course. Global
    Journal of Engineering Education, 21(2), 139–144 (2019).
11. Gudkov, P.A.: Methods of Comparative Analysis. Tutorial. Penza State University
    Publishing House, Penza (2008).