<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Information Technologies and Security, December</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Military Institute of Telecommunications and Informatization named after Heroes of Kruty</institution>
          ,
          <addr-line>Knyaziv Ostrozkyh Street 45/1, Kyiv, 01011</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Taras Shevchenko National University</institution>
          ,
          <addr-line>Volodymyrska Street 64/13, Kyiv, 01601</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>19</volume>
      <issue>2024</issue>
      <fpage>0000</fpage>
      <lpage>0001</lpage>
      <abstract>
        <p>An approach to assessing the level of assimilation of educational material by education seekers is proposed. The approach uses adaptive Item Response Theory (IRT) models to evaluate knowledge on the basis of the one-parameter Rasch model, to design a combined testing procedure, and to verify the balance of test tasks. This enables the objective level of preparedness of the education seeker to be determined by the testing procedure, and the quality of the prepared test tasks to be assessed.</p>
      </abstract>
      <kwd-group>
        <kwd>knowledge assessment</kwd>
        <kwd>adaptive testing</kwd>
        <kwd>Rasch parametric model</kwd>
        <kwd>test task complexity</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Modern higher education institutions in developed European countries are actively working to
improve the education system [1] so that it can integrate into the European educational and scientific
space [2]. One of the driving mechanisms for such improvement is ensuring fair competition in the
educational services market with an appropriate quality of higher education.</p>
      <p>Training high-quality specialists should involve not only modern methods and means of
acquiring new knowledge, but also an objective assessment of their level of assimilation [3]. The
effectiveness of the evaluation depends on compliance with the didactic principles of systematicity
and objectivity, as emphasized in the work [4, 5].</p>
      <p>Currently, a relatively wide range of research addresses the assessment of education seekers'
knowledge. Analysis of this research has shown that the knowledge-control results produced by
existing systems are not sufficiently informative and objective. An important parallel direction of
improvement is the control of data errors to enhance the reliability of processing results, a
methodology explored in detail in related works [6]. To ensure maximum informativeness and
objectivity of the control results, testing procedures must use test tasks that are uniformly selected
in terms of complexity. This can be achieved using adaptive testing.</p>
      <p>Adaptive testing (AT) is a type of test assessment in which the sequence of test task presentation
(including task complexity) and the number of tasks depend on the student's answers to previous test
tasks [7, 8]. The adaptive approach makes it possible to consider the individual capabilities of
students, assess their cognitive skills, increase the accuracy of determining their knowledge level,
utilize both rating and interval scales for evaluating test results, and enhance student motivation to
study [9].</p>
      <p>However, the adaptive approach has the following limitations: the presence of linear processes in
testing procedures; the absence of feedback; the lack of ranking of test participants who gave the
correct answer to the same number of test tasks, which affects the objectivity of the assessment [10];
and the imbalance of test tasks in terms of content and complexity, which affects their quality [11].</p>
      <p>Therefore, to ensure proper informativeness and objectivity of the control results, it is proposed
to assess the level of assimilation of educational material by education seekers based on adaptive
Item Response Theory (IRT) knowledge assessment models, with the possibility of ranking test
participants who gave the correct answer to the same number of test tasks, which are balanced in
content and complexity.</p>
      <p>In the proposed approach to assessing the level of assimilation of educational material by
education seekers, adaptive IRT models of knowledge assessment are combined with the following
procedures: organizing the test assessment of knowledge; determining the conditions under which
the complexity level of a test task must change; and determining the balance of test tasks in the test
(test quality).</p>
    </sec>
    <sec id="sec-2">
      <title>2. Adaptive testing model</title>
      <p>The adaptive testing model involves: a testing organization procedure (determining the initial level
of knowledge of the education seeker, the sets of test tasks and their levels of complexity, the
estimated weights of correct and incorrect answers to a test task, and the form in which the final
testing results are obtained); a procedure for conducting testing (establishing the complexity level of
the task from which testing begins, the method for assessing the correctness of the answer to the
current test task, and the rules (conditions) governing the order in which test tasks change and the
completion of the procedure to obtain the final result); and a method for determining the balance of
test tasks in the test (test quality).</p>
      <sec id="sec-2-1">
        <title>2.1. Organization and conduct of testing</title>
        <p>The organization and implementation of knowledge testing should be examined through the lens of
the following classification groups of adaptive testing methods:
1. Pyramidal Testing is an adaptive form of testing where the complexity of the test task
changes according to the correctness of the answer to the previous one. At the beginning of
the testing procedure, the starting level of complexity of the test task is set. The rule of
dividing the complexity scale of test tasks in half at each stage is applied.</p>
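        <p>As an illustration, the halving rule described above can be sketched as a binary-search-style routine over a normalized difficulty scale; the function name and the [0; 1] scale are assumptions for the example:</p>

```python
# Hypothetical sketch of the pyramidal halving rule: after each answer the
# remaining complexity interval is halved, like a binary search over difficulty.
def next_difficulty(low, high, correct):
    """Return the new (low, high) difficulty interval and the next task level."""
    mid = (low + high) / 2
    if correct:           # correct answer -> move into the harder half
        low = mid
    else:                 # incorrect answer -> move into the easier half
        high = mid
    return low, high, (low + high) / 2
```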
        <p>The total number of questions in pyramidal testing can be adjusted according to the needs of
teachers. Still, the formula 4n is typically used, where n represents the number of test tasks for each
student.</p>
        <p>The drawbacks of this testing procedure include the significant resources required for its
development and the necessity of independently assessing item complexity, which can potentially
compromise the objectivity of the results obtained.
2. FlexiLevel is a type of adaptive testing aimed at comparing each test participant's qualification
level with the degree of complexity of the test tasks given to them. The testing procedure begins
with an arbitrary level of complexity. After that, depending on the test participant's answer, they are
given a task of a higher or lower level of complexity, so that the procedure gradually approaches the
real level of preparedness of the test participant.</p>
        <p>The primary objective of this procedure is to utilize a fixed-branching algorithm to adjust the
difficulty level of test items.</p>
        <p>To determine the complexity of test tasks, initial statistical data is collected based on
fixed-form testing procedures. The next step involves determining the test item's complexity
parameter by calculation according to the formula:</p>
        <p>q_j = 1 − R_j / N_j,
where: R_j – the count of tested participants who successfully completed this test item;
N_j – the count of tested participants who answered this test item.</p>
        <p>The advantage of such testing is the simplicity of the algorithm. Such tests demonstrate
significantly less variability in the test items presented to each test taker, and an identical number of
items is administered to test takers who follow the same sequence of correct or incorrect
responses.</p>
        <p>The system may produce incorrect results if the question base is small or does not cover all
difficulty levels. FlexiLevel testing often focuses on assessing individual aspects of knowledge
rather than their integrated application. As a result, students who know the material well but have
gaps in individual topics may receive a lower score.
3. Stradaptive (stratified adaptive) – stratified adaptive testing. This procedure is based on Alfred
Binet’s strategy of using peak tests. In such tests, there is low variability in task complexity, and
tasks are distributed over a narrow range of complexity. The significance of peak tests in the
stratified adaptive testing procedure is to provide a more accurate measurement of the student's
abilities. A stratified adaptive test selects tasks stratified, or organized, into scaled, progressively
more difficult strata by difficulty level.</p>
        <p>Typically, peak tests are located at the beginning of the testing procedure and are used to
assess the starting abilities of the education seeker. Their number is variable and depends on
the purpose of the test. After the initial knowledge of the education seeker is determined, the
assessment procedure switches to adaptive mode, and test tasks are provided according to
the selected adaptive algorithm.</p>
        <p>The advantages of this procedure include the accuracy of assessment, time efficiency, task
balance, flexibility in test design, and participant psychological comfort. The disadvantages include
the complexity of development and implementation. Stratified testing operates within specific
strata, which may limit flexibility in adaptation. If a participant makes mistakes in the initial stages
of testing, they may be moved to a lower stratum, where the tasks become too easy for them. This
may lead to an underestimation of their real knowledge.</p>
        <p>Therefore, each of the above procedures has certain peculiarities in its application, which
determine its advantages and limitations. This necessitates the use of a combined approach in the
organization and conduct of testing in the proposed model.</p>
        <p>According to the procedure for organizing testing, the initial level of preparedness of the
education seeker is assumed to be average. It is proposed that three sets of homogeneously selected
test tasks be formed, differing in complexity. The estimated weight of a correct answer to a test task
is 1, and that of an incorrect one is 0. The method of obtaining final results involves calculating the
ratio of correctly answered questions to the total number of test tasks, taking into account their
complexity.</p>
        <p>According to the testing procedure, the complexity level of the task from which testing begins
is set to medium, which is typical of the pyramidal approach. When answering test tasks, the student
drives an adaptive branching algorithm that selects test tasks of different levels of complexity. If the
student answers the test task correctly, the complexity level of the next task increases; if incorrectly,
it decreases (flexible procedure).</p>
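        <p>As a recap, the classical item-difficulty statistic collected for the FlexiLevel procedure above (the share of participants who did not complete the item) can be sketched as follows; the function and argument names are assumptions for the example:</p>

```python
def item_difficulty(completed, answered):
    """Classical difficulty estimate: q = 1 - R/N, where R is the count of
    participants who completed the item and N the count who answered it.
    Higher values mean a more difficult item."""
    if answered == 0:
        raise ValueError("item has no recorded answers")
    return 1.0 - completed / answered
```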
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Evaluating the correct answer to a test task</title>
        <p>The processing of the condition for the need to complicate (ease) the complexity of the test task,
depending on the current answer of the education seeker, will be entrusted to the appropriate
procedure that assesses the probability of the education seeker's correct answer to the current test
task.</p>
        <p>Currently, Rasch and Birnbaum's logistic models [12] are widely used to determine such a
probability [13].</p>
        <p>The one-parameter Rasch model (1PL) utilizes two values: θ – the knowledge level of the test
taker, and δ – the task's complexity level, with θ ∈ (−∞; +∞) and δ ∈ (−∞; +∞) [14]. The model
gives the probability that a test participant with a level of preparedness θ will correctly complete a
test task of complexity δ. The likelihood of success depends, in fact, only on one parameter – the
difference θ − δ – and is calculated by the following formula:</p>
        <p>P(θ) = e^(θ − δ) / (1 + e^(θ − δ)), (2)</p>
        <p>where: p_i – the proportion of correct answers to test tasks; q_i = 1 − p_i – the proportion of
incorrect answers to test tasks;
i = 1, 2, … , m – the number of testing participants;
j = 1, 2, … , n – the number of test tasks.</p>
        <sec id="sec-2-2-1">
          <title>At the same time:</title>
          <p>lim P(θ) = 1 as (θ − δ) → +∞; lim P(θ) = 0 as (θ − δ) → −∞. (3)</p>
          <p>P = 0.5 if θ = δ.</p>
          <p>The one-parameter Rasch model is most effective for analyzing dichotomous tasks, where the
answer can only be binary: "correct" or "incorrect". This model considers only one parameter – task
difficulty (δ), which makes it ideal for tasks where additional characteristics such as guessing level
or discrimination need not be considered. The model assumes that all items have the same level of
discrimination (i.e., all items are equally good at distinguishing between strong and weak test takers).
The one-parameter Rasch model is well-suited for tests with a limited number of items, as it requires
fewer parameters to be estimated. This model is best suited for open-ended test items where there is
no way to guess the correct answer.</p>
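          <p>A minimal sketch of the one-parameter Rasch probability above, showing the P = 0.5 point at θ = δ (the function name is illustrative):</p>

```python
import math

def rasch_1pl(theta, delta):
    """1PL Rasch probability of a correct answer: P = e^(θ-δ) / (1 + e^(θ-δ))."""
    return math.exp(theta - delta) / (1.0 + math.exp(theta - delta))
```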
          <p>In the two-parameter Rasch model (2PL), the adaptive test consists of n test items. In the model,
the response variables U_j, j = 1, 2, …, n, take the value one if the answer is correct and zero if it is
incorrect, that is:</p>
          <p>U_j = {0, if the answer is incorrect; 1, if the answer is correct}.</p>
        </sec>
        <sec id="sec-2-2-2">
          <title>This model looks like this:</title>
          <p>P(U_j = 1|θ) = e^(a_j(θ − δ_j)) / (1 + e^(a_j(θ − δ_j))), (4)</p>
          <p>where: a_j – differentiating ability (discriminative power) of the j-th test task (discrimination
coefficient).</p>
          <p>Discrimination helps to select tasks that best distinguish between different levels of knowledge
or skills. It allows you to identify tasks that effectively distinguish between test takers with high and
low levels of expertise. The model is suitable for organizing and presenting test tasks where the test
taker must choose one correct answer from several proposed ones.</p>
          <p>If the test contains open-ended questions or questions that do not have clearly defined
right/wrong answers, the 2PL model may be less suitable. For tasks where it is essential to consider
additional factors (for example, individual characteristics of test participants), more complex models
may be required (for example, a three-parameter model).</p>
          <p>By Birnbaum's logistic model (3PL), the probability of a correct response P(U_j = 1|θ) to the j-th
question for a participant with knowledge level θ is calculated by the formula:</p>
          <p>P(U_j = 1|θ) = c_j + (1 − c_j) · e^(1.7·a_j(θ − δ_j)) / (1 + e^(1.7·a_j(θ − δ_j))), (5)</p>
          <p>where: c_j – the probability of guessing the correct answer when performing the j-th test task.</p>
          <p>This model takes into account three key parameters for each test item: difficulty (helps to select
items that match the participant's current level), discrimination ability (allows you to choose items
that best distinguish between different levels of knowledge), and guess probability (taken into
account when scoring, especially when the items are multiple choice).</p>
          <p>The 3PL model involves calibrating tasks before testing to ensure assessment accuracy, even with
a fixed test format. The model is most suitable for tests where participants select one or more correct
answers from a set of provided options.</p>
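          <p>For comparison, the 2PL and 3PL response functions can be sketched as follows; the function names are illustrative, and the 1.7 scaling factor follows the Birnbaum formula above:</p>

```python
import math

def p_2pl(theta, delta, a):
    """2PL: the discrimination coefficient a_j scales the distance (θ - δ_j)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - delta)))

def p_3pl(theta, delta, a, c):
    """3PL (Birnbaum): guessing probability c_j sets a floor on the response
    curve; the conventional 1.7 factor aligns the logistic with the normal ogive."""
    return c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - delta)))
```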
          <p>A limitation of the 3PL model is that it is difficult to calibrate (for accurate application of the
model, a large amount of data is required to estimate task parameters), is sensitive to task quality (if
tasks have low discriminatory power or inadequate complexity, the model may produce inaccurate
results), and has limitations for test tasks with partial scores (scoring on score scales).</p>
          <p>In the proposed adaptive testing model, the single-parameter Rasch model will be used to evaluate
the correct answer to a test task and determine the condition for changing the test task's complexity
level.</p>
          <p>This is because it is most suitable for tests with dichotomous tasks, equal task discrimination, a
small number of tasks, and a low level of guessing.</p>
          <p>The probability of a correct answer, Pj(θ), is obtained during testing. If Pj(θ) ≥ 0.5, the task's
difficulty level is increased; otherwise, it is decreased.</p>
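          <p>The condition for changing the task's complexity level can be sketched as follows, assuming the three-level scale (“easy”, “medium”, “difficult”) used in the proposed model:</p>

```python
import math

LEVELS = ["easy", "medium", "difficult"]  # assumed three-level scale

def next_level(theta, delta, current):
    """Raise the difficulty level when the 1PL probability P(θ) >= 0.5,
    lower it otherwise; the level is clamped at the ends of the scale."""
    p = math.exp(theta - delta) / (1.0 + math.exp(theta - delta))
    i = LEVELS.index(current)
    if p >= 0.5:
        return LEVELS[min(i + 1, len(LEVELS) - 1)]
    return LEVELS[max(i - 1, 0)]
```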
        </sec>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Determining the quality of test tasks</title>
        <p>To assess the quality of selected test tasks, which should reflect the structural hierarchy of the
training model in the academic discipline, it is necessary to address the problem of test composition,
specifically to evaluate the internal consistency (balance) of tasks within the test.</p>
        <p>In this case, the expert needs to assess the effectiveness of the scheme proposed by the developer
and the method used to arrange tasks in the test. The concept of balance involves the proportional
filling of the test with tasks of varying levels of complexity.</p>
        <p>Thus, the analysis of the test composition reveals the degree of harmonious presentation of key
elements of the academic discipline's content, the adequacy of their reflection in the test, and the
appropriate level of differential difficulty of the test tasks.</p>
        <p>Currently, the following approaches and methods are most common in the field of assessing the
quality of test tasks:
1. Methods based on the criteria of mathematical and statistical methods of analyzing test task
statistical characteristics (properties). They enable the detection of hidden defects in test
tasks. Models of dispersion, factor, cluster, discriminant analysis, and time series regression
analysis are used.
2. Methods that use expert assessment of the consistency of tasks in the test. The procedure for
expert assessment of test quality involves the following stages: semantic capacity evaluation,
prediction of test task success, and comprehensive examination of test quality and task
performance, which includes approbation testing. The goal is to establish, verify, and evaluate
the test's measuring capabilities on representative samples.
3. Methods based on analyzing the degree of balance of test items. The calculated correlation
coefficient (balance) is used as a numerical indicator of the validity of the test.</p>
        <p>Among the methods of the third group, the Chelyshkova method of checking the balance of test
weight coefficients attracts special attention [15]. This method has proven very effective in
identifying extreme test tasks, that is, those that are too difficult or too easy. For this, the balance of
their weight coefficients is checked. This check is carried out to analyze the complexity of test tasks
and inform the expert about the feasibility of replacing extreme questions in the test, which
negatively affect the objectivity of assessing the student's knowledge [16].</p>
        <p>Thus, preference is given to a combined method of organizing adaptive testing [17]. The
single-parameter Rasch model will be used to determine the condition for changing the complexity of the
test task. Experimentally selected coefficients provide the optimal value of the indicator of
differential ability for test tasks, the balance of which is achieved using the method of M.
Chelyshkova.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Knowledge assessment based on the proposed adaptive model</title>
      <sec id="sec-3-1">
        <title>Organizing and conducting a knowledge assessment test involves the following steps:</title>
        <p>1. Initialization of the testing procedure. The task is randomly selected from a set of tasks
corresponding to the specified difficulty level (“easy”, “medium”, or “difficult”). The first task
is always of average difficulty. Depending on the answer to it, the next task will be proposed
by the adaptive algorithm as easier or more difficult than the previous one:</p>
      </sec>
      <sec id="sec-3-2">
        <title>Calculate the current number of points BS (8), the maximum possible number of points BA (9) for the testing procedure, and the current success rate of the education seeker S (10).</title>
        <p>x_i ∈ X⃗ ∪ Y⃗ ∪ Z⃗, i = 1, …, N,
where: x_i – test task of the appropriate level of complexity;
X⃗, Y⃗, Z⃗ – sets of tasks of difficulty levels “easy”, “medium”, “difficult”;
N – number of test tasks.</p>
      </sec>
      <sec id="sec-3-4">
        <title>In the model, the response variables take the value one if the answer is correct and zero if it is incorrect, that is:</title>
      </sec>
      <sec id="sec-3-5">
        <title>4. Calculation of the current logit of the level of knowledge of the education seeker θi (13) and the logit of the level of difficulty of the task δi (14). 5. Deciding on the need to move to another level of task complexity.</title>
        <p>f(x_i) ∈ {0; 1}.</p>
        <p>BS = Σ_{i=1..N} f(x_i)·k_i. (8)</p>
        <p>BA = Σ_{i=1..N} k_i. (9)</p>
        <p>S = (BS / BA) · 100%. (10)</p>
        <p>p_i = (Σ_{j=1..N} f(x_j)) / N. (11)</p>
        <p>q_i = 1 − p_i. (12)</p>
        <p>θ_i = ln(p_i / q_i). (13)</p>
        <p>δ_i = ln(q_i / p_i). (14)</p>
        <p>where: k_i – weight of the test task,</p>
      </sec>
      <sec id="sec-3-6">
        <title>N – number of test tasks.</title>
        <p>where: f(x_i) – the answer to the test task of the appropriate level of complexity;
k_easy = 1, k_medium = 2, k_difficult = 3;</p>
      </sec>
      <sec id="sec-3-7">
        <title>The success rate of passing the test indicates the percentage of points received by the student at this stage, compared to the maximum number of points he could have if he had answered all the proposed test tasks correctly, S (10).</title>
      </sec>
      <sec id="sec-3-8">
        <title>Determining the current proportion of true pi (11) and false qi (12) answers.</title>
      </sec>
      <sec id="sec-3-9">
        <title>A feature of this step is that after each answer to the test task, the probability of the student's correct answer is calculated, Pj (θ) (15), and, based on its value, determines the transition to the appropriate level of task complexity from the experimentally established range.</title>
        <p>A threshold value of the probability of a student's correct answer, Pj (θ) = 0.5, serves as a
condition for changing the task's difficulty level during the testing process. The task's
difficulty level increases when Pj (θ) ≥ 0.5 and decreases otherwise.</p>
      </sec>
      <sec id="sec-3-10">
        <title>The higher the probability of an answer, the higher the level of difficulty of the task, and</title>
        <p>vice versa. The essence of this operation is to ensure the correspondence between the task's
difficulty level and the student's knowledge level, thereby enhancing assessment objectivity.
This correspondence is particularly crucial, as the discriminant ability of the item refers to
its capacity to distinguish learners based on their level of preparedness. The higher the
discriminant ability, the better the division of learners into those who are prepared and those
who are not.</p>
        <p>( ) =  ( −  )</p>
        <p>1+ ( −  ).</p>
      </sec>
      <sec id="sec-3-11">
        <title>Organization of verification of the conditions for completing testing.</title>
      </sec>
      <sec id="sec-3-12">
        <title>Below is a schematic representation of the testing process, taking into account the change</title>
        <p>in the weight of the test task, and for the case where the final level of success is determined
by the condition of completing the total number of test items (Fig. 1).</p>
      </sec>
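      <p>The scoring quantities of the procedure above – the current points BS, the maximum points BA, the success rate S, and the logit of the knowledge level – can be sketched as follows; taking the proportion of correct answers over all presented tasks is an assumption for the example:</p>

```python
import math

def score_test(answers, weights):
    """answers: f(x_i) in {0, 1}; weights: k_i in {1, 2, 3}."""
    BS = sum(f * k for f, k in zip(answers, weights))  # points earned
    BA = sum(weights)                                  # maximum possible points
    S = BS / BA * 100                                  # success rate, %
    p = sum(answers) / len(answers)                    # share of correct answers
    # Logit of the knowledge level; guarded against p = 0 or p = 1.
    theta = math.log(p / (1 - p)) if 0 < p < 1 else 0.0
    return BS, BA, S, theta
```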
    </sec>
    <sec id="sec-4">
      <title>4. Checking the balance of test tasks</title>
      <p>A balance check of their weight coefficients is performed to find extreme test tasks, i.e., those that
are too difficult or too easy. This check is performed to analyze the complexity of the tasks and
inform the expert about the feasibility of replacing extreme questions in the test that negatively
affect the objectivity of the student's knowledge assessment.</p>
      <p>The balance of the weight coefficients of test tasks is proposed to be checked using the
Chelyshkova balance method.</p>
      <sec id="sec-4-2">
        <title>Checking the balance of test item weights includes the following steps:</title>
      </sec>
      <sec id="sec-4-3">
        <title>1. Organization of the formation of the evaluation matrix.</title>
        <p>In any testing process, the calculated statistical estimates θ̃ and δ̃ will differ from the exact
values. In this sense, the estimates are certain functions of the initial random values of the
elements of the test results matrix.</p>
      </sec>
      <sec id="sec-4-4">
        <title>This matrix contains numerical definitions of the indicator gradation.</title>
        <p>This matrix is a table, with the rows corresponding to the number of education seekers
(N) and the columns corresponding to the number of indicator variables (M). In the case of
testing educational achievements, the indicator variables are test tasks. A number that
characterizes the education seeker's response to this task is at the intersection of the rows
and columns. In this work, a binary matrix is used, a matrix of test results for the
dichotomous case, where the education seeker's responses are characterized by two values:
0 and 1. Zero corresponds to an incorrect answer, and one to a correct answer.</p>
      </sec>
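      <p>A minimal sketch of computing the proportions of correct answers from such a binary results matrix (rows – education seekers, columns – test tasks):</p>

```python
def proportions(matrix):
    """matrix[i][j] = 1 if student i answered item j correctly, else 0.
    Returns per-student and per-item proportions of correct answers."""
    N = len(matrix)       # rows: education seekers
    M = len(matrix[0])    # columns: test tasks
    p_students = [sum(row) / M for row in matrix]
    p_items = [sum(matrix[i][j] for i in range(N)) / N for j in range(M)]
    return p_students, p_items
```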
      <sec id="sec-4-5">
          <title>Determining the proportion of correct pi and incorrect qi answers.</title>
      </sec>
      <sec id="sec-4-6">
          <title>The proportion of correct answers pi for the i-th student is defined as the ratio of their</title>
        <p>number of correct responses to the total number of items answered.</p>
        <p>Similarly, the proportion of correct answers for the j-th test item pj is calculated as the
ratio of the total number of correct responses by all students to the aggregate number of
answers to that item. In both scenarios, the proportion of incorrect answers qj is obtained by
subtracting the proportion of correct answers from unity.</p>
          <p>p_i = (Σ_{j=1..M} f(x_ij)) / M, q_i = 1 − p_i. (16)</p>
          <p>p_j = (Σ_{i=1..N} f(x_ij)) / N, q_j = 1 − p_j. (17)</p>
        <p>Calculate the initial level of knowledge of the education seeker θ_i0 (18) and the initial level of
difficulty of tasks δ_j0 (18).</p>
        <p>Determining the average level of preparedness of education seekers θ̄ and the average
difficulty level of tasks δ̄ (19).</p>
        <p>θ_i0 = ln(p_i / q_i), δ_j0 = ln(q_j / p_j). (18)</p>
        <p>θ̄ = (Σ_{i=1..N} θ_i0) / N, δ̄ = (Σ_{j=1..M} δ_j0) / M. (19)</p>
        <p>Calculation of the variances S_θ² and S_δ² to reduce the parameters to a single scale of standard
estimates, and calculation of the slope coefficients a_θ and a_δ.</p>
      </sec>
      <sec id="sec-4-7">
        <title>Due to the influence of various random factors, the estimates of the parameters θ and δ</title>
        <p>obtained from several samples will, of course, differ. If the sample size is large enough, one
can calculate stable values of the parameters θ and δ, which will be the most effective
estimates and can be accepted as objective estimates of θ and δ.</p>
        <p>Thus, the question arises of finding the mathematical expectations and variances of these
random variables. It is necessary that the mathematical expectation of the corresponding
estimates coincide with the corresponding exact values, and the variance of the forecast is
minimal.</p>
          <p>S_θ² = (Σ_{i=1..N} (θ_i0 − θ̄)²) / (N − 1), S_δ² = (Σ_{j=1..M} (δ_j0 − δ̄)²) / (M − 1). (20)</p>
          <p>a_θ = (1 + S_δ²/2.89) / (1 + S_θ²/8.35), a_δ = (1 + S_θ²/2.89) / (1 + S_δ²/8.35). (21)</p>
      </sec>
      <sec id="sec-4-8">
          <title>Calculation of the values of the parameter estimates θi and δj on a single interval scale.</title>
          <p>θ_i = a_θ·θ_i0 + θ̄, δ_j = a_δ·δ_j0 + δ̄. (22)</p>
        <p>Calculation of the balance indicator of the test task complexity (23).</p>
        <p>Σ_δ = Σ_{j=1..M} δ_j. (23)</p>
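        <p>A heavily simplified sketch of the balance check: item-difficulty logits are estimated from the binary results matrix and summed, a total near zero suggesting the test is balanced in difficulty. The clipping constant eps is an assumption to keep the logits finite:</p>

```python
import math

def balance_check(matrix, eps=1e-6):
    """Estimate item-difficulty logits δ_j = ln(q_j / p_j) from a binary
    results matrix (rows: students, columns: items) and return their sum.
    Proportions are clipped by eps so the logarithm stays finite."""
    N, M = len(matrix), len(matrix[0])
    deltas = []
    for j in range(M):
        p = sum(matrix[i][j] for i in range(N)) / N
        p = min(max(p, eps), 1 - eps)
        deltas.append(math.log((1 - p) / p))
    return sum(deltas)
```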
        <p>The determination of the final grade according to the traditional learning scheme (unsatisfactory,
satisfactory, good, very good, excellent) is carried out using conversion scales based on the student's
level of success.</p>
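        <p>Such a conversion scale can be sketched as follows. The threshold percentages below are purely hypothetical assumptions chosen for illustration; the paper does not specify the boundary values of the conversion scale.</p>

```python
# Hypothetical conversion scale: the threshold percentages are illustrative
# assumptions, not values given in this work.
def traditional_grade(success_percent):
    if success_percent >= 90:
        return "excellent"
    if success_percent >= 82:
        return "very good"
    if success_percent >= 74:
        return "good"
    if success_percent >= 60:
        return "satisfactory"
    return "unsatisfactory"
```

        <p>Under these assumed thresholds, a success rate of 41.67% would map to an unsatisfactory grade.</p>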
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Practical use</title>
      <p>To evaluate a user who has passed the test, it is necessary to determine their success and compare
this value with those set by the developer when creating the training task. Based on this comparison,
a score can be assigned according to the traditional training scheme.</p>
      <sec id="sec-5-1">
        <title>The input conditions for executing the algorithm are: 3*N – total number of tasks, N=10;</title>
      </sec>
      <sec id="sec-5-2">
        <title>A set of tasks of difficulty levels “easy”, “medium”, and “difficult”.</title>
        <p>={x1, x2, ... , xN}={1, 2, ... , 10};
Y ={y1, y2, ... , yN}={1, 2, ... , 10};
Z ={z1, z2, ... , zN}={1, 2, ... , 10};
The set of answer options q={0;1}.</p>
      </sec>
      <sec id="sec-5-3">
        <title>The number of students is 3.</title>
        <p>N – the number of test questions, N = 20. The complexity weights of easy, medium, and
difficult tasks are k_easy = 1, k_medium = 2, k_difficult = 3.</p>
        <p>The set of task complexity weights K ={k1, k2, k3}={1;2;3};</p>
      </sec>
      <sec id="sec-5-4">
        <title>Success rate, as determined by test results (%), to evaluate the rating indicator of the education seeker on the preference scale.</title>
      </sec>
      <sec id="sec-5-5">
        <title>Assessment according to the traditional learning scheme (unsatisfactory, sufficient, satisfactory, good, very good, excellent).</title>
      </sec>
      <sec id="sec-5-6">
        <title>An indicator of the balance of the level of difficulty of tasks. Calculation results: 1. Success according to testing results.</title>
        <p>R1 = ∑j kj·xj(1) = 2·1 + 2·1 + 3·0 + 2·1 + 2·0 + 1·1 + 1·1 + 2·1 + 2·1 + 3·0 + 2·1 + 2·1 +
+ 1 + 1 + 2 + 2 + 3 + 0 = 25.</p>
        <p>P1 = (R1/60)·100% = (25/60)·100% ≈ 41.67%. The success indicators Pi were computed
similarly for i = 1, …, 3.</p>
        <p>The proportions of correct answers pj and incorrect answers qj = 1 − pj were then determined
for each test task; for example, p1 = 0.7 and q1 = 1 − p1 = 1 − 0.7 = 0.3.</p>
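        <p>The weighted-score and success-rate calculation above can be reproduced with a short sketch. The helper names are ours; the weight and answer vectors list the terms recoverable from the worked example for student 1.</p>

```python
def weighted_score(weights, answers):
    # R_i = sum over tasks of k_j * x_j, where x_j is 0 or 1.
    return sum(k * x for k, x in zip(weights, answers))

def success_percent(score, max_score):
    # Success as the share of the maximum attainable weighted score.
    return 100.0 * score / max_score

# Terms recoverable from the worked example for student 1; the remaining
# terms of the sum contribute 1 + 1 + 2 + 2 + 3 + 0 = 9, giving R1 = 25.
weights = [2, 2, 3, 2, 2, 1, 1, 2, 2, 3, 2, 2]
answers = [1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1]
partial = weighted_score(weights, answers)  # 16
```

        <p>Then success_percent(25, 60) ≈ 41.67, matching the rating indicator of student 1.</p>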
        <p>(Table: the number of test tasks in terms of difficulty.)</p>
      </sec>
      <sec id="sec-5-7">
        <title>Construction of a series of preferences for education seekers.</title>
        <p>Based on the calculations, the rating indicators of the success of education seekers P1, P2,
and P3 were determined, allowing us to construct an objective series of precedence:</p>
        <p>Student 2 (61.67%) &gt; Student 1 (41.67%) &gt; Student 3 (33.33%).</p>
      </sec>
      <sec id="sec-5-8">
        <title>Construction of a series of test task priorities by complexity. To determine the balance of test tasks, we will determine the proportion of correct answers to the j-th test task for the i-th student. For example,</title>
        <p>3
0.7
0.3
4
0.4
0.6
5
0
1
6
1
0
7
1
0
8
0.7
0.3
9
0.7
0.3
10
0.4
0.6
11
1
0
12
1
0
13
0.4
0.6
14
0.7
0.3
15
0.7
0.3
16
1
0
17
0.7
0.3
18
1
0
19
1
0
20
0.4
0.6</p>
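        <p>From the per-task proportions above, both the difficulty ordering and the poorly balanced tasks can be derived mechanically. A sketch in Python; the proportion for task 2 is assumed to be 1, consistent with the constructed difficulty series.</p>

```python
# Per-task proportions of correct answers p_j (task 2 assumed p = 1).
p = {1: 0.7, 2: 1.0, 3: 0.7, 4: 0.4, 5: 0.0, 6: 1.0, 7: 1.0, 8: 0.7,
     9: 0.7, 10: 0.4, 11: 1.0, 12: 1.0, 13: 0.4, 14: 0.7, 15: 0.7,
     16: 1.0, 17: 0.7, 18: 1.0, 19: 1.0, 20: 0.4}

# Group tasks by p_j, easiest group (highest proportion correct) first.
levels = sorted(set(p.values()), reverse=True)
series = [sorted(j for j in p if p[j] == v) for v in levels]

# Tasks whose proportion of correct answers does not exceed 0.5 are
# candidates for reformulation.
to_reword = sorted(j for j in p if p[j] <= 0.5)
```

        <p>This reproduces the series of task difficulty and flags tasks 4, 5, 10, 13, and 20 for reformulation.</p>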
      </sec>
      <sec id="sec-5-9">
        <title>We built a series of test tasks in order of difficulty:</title>
        <p>(2, 6, 7, 11, 12, 16, 18, 19) &lt; (1, 3, 8, 9, 14, 15, 17) &lt; (4, 10, 13, 20) &lt; (5).</p>
      </sec>
      <sec id="sec-5-10">
        <title>We notice that the questions are not well-balanced.</title>
        <p>Calculate the initial level of knowledge of the education seeker θi0 and the initial level of difficulty
of tasks βj0.</p>
        <p>For example, for p1 = 0.7 and q1 = 0.3, ln((p1 + 1)/(q1 + 1)) = ln(1.7/1.3) ≈ 0.27.</p>
        <p>For questions 4, 5, 10, 13, and 20, a reformulation is required to ensure that the proportion of
accurate responses exceeds 0.5.</p>
        <p>As a result of practical implementation, performance rating indicators were obtained, allowing for
a sufficiently objective ranking of students on a preference scale. Test tasks requiring reformulation
were identified based on task complexity balance indicators.</p>
        <p>The Windows application implementing the testing logic of the proposed approach was developed
in the Microsoft Visual Studio integrated development environment [18] using the high-level
object-oriented programming language Microsoft Visual C# [19] and the .NET Framework class
library and platform [20]. Figure 2 illustrates the screen forms that demonstrate the operation of
the test information system in test-passing mode, as well as the process of obtaining the test result.</p>
      </sec>
      <sec id="sec-5-11">
        <title>Examples of modern adaptive testing systems include:</title>
      </sec>
      <sec id="sec-5-12">
        <title>Khan Academy: uses adaptive exercises and tests that are adjusted based on student performance [21].</title>
      </sec>
      <sec id="sec-5-13">
        <title>Duolingo: employs adaptive algorithms for language learning, covering grammar, vocabulary, and listening tests [22].</title>
      </sec>
      <sec id="sec-5-14">
        <title>ALEKS (Assessment and Learning in Knowledge Spaces) is based on the Knowledge Space</title>
      </sec>
      <sec id="sec-5-15">
        <title>Theory, analyzes the student's knowledge structure, and offers an individual learning path [23].</title>
      </sec>
      <sec id="sec-5-16">
        <title>Carnegie Learning Adaptive Learning Platform: platforms for creating adaptive learning modules with integrated testing [24].</title>
      </sec>
      <sec id="sec-5-17">
        <title>Pearson MyLab: commercial solutions for higher education featuring adaptive tests and learning tracks [25].</title>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions</title>
      <p>An approach to assessing the level of educational material assimilation by education seekers is
proposed.</p>
      <p>According to the procedure for organizing testing, the education seeker's starting level of
preparedness is assumed to be average. It is proposed that three sets of uniformly selected test tasks
be formed, differing in complexity. The weight of a correct answer to a test task is 1; of an incorrect
one, 0. The final result is obtained by calculating the ratio of correctly answered questions to the
total number of test tasks, taking into account their complexity.</p>
      <p>According to the testing procedure, the complexity level of the test task with which testing
begins is set to medium. As the student answers, an adaptive branching algorithm selects test tasks
of different complexity levels. If the student answers a test task correctly, the complexity level of
the next test task increases; if incorrectly, it decreases (a flexible procedure).</p>
      <p>In the proposed adaptive testing model, the single-parameter Rasch model will be used in the
procedure for evaluating the correct answer to a test task and determining the condition for changing
the test task's complexity level.</p>
      <p>This is because it is most suitable for tests with dichotomous tasks, equal task discrimination, a
small number of tasks, and a low level of guessing.</p>
      <p>The probability of a correct answer Pj(θ) is obtained during testing. If Pj(θ) ≥ 0.5, the task's
difficulty level is increased; otherwise, it is decreased.</p>
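      <p>This evaluation step can be sketched as follows. A minimal illustration in Python; the function names and the three-level encoding are ours, and the scaling constant 1.7 is an assumption consistent with the constants 2.89 = 1.7² used in the scaling formulas.</p>

```python
import math

def p_correct(theta, beta):
    # One-parameter Rasch (1PL) probability of a correct answer for a student
    # with preparedness theta on a task of difficulty beta (1.7 assumed).
    return 1.0 / (1.0 + math.exp(-1.7 * (theta - beta)))

EASY, MEDIUM, HARD = 0, 1, 2

def next_level(level, theta, beta):
    # Flexible procedure: raise the difficulty level when P_j(theta) >= 0.5,
    # lower it otherwise, staying within the three available levels.
    if p_correct(theta, beta) >= 0.5:
        return min(level + 1, HARD)
    return max(level - 1, EASY)
```

      <p>For a student and task of equal logit values, Pj(θ) = 0.5, so the difficulty level is raised.</p>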
      <p>Experimentally selected coefficients provide the differentiating ability (discriminative power) of
test tasks, the balance of which is achieved by the method of M. Chelyshkova. This enables the
determination of the student's objective level of preparedness and the assessment of the quality of
preparation for test tasks.</p>
      <p>Compared to existing counterparts, the described approach enables the objective ranking of
learners based on test evaluation results, offering high clarity and moderate computational
complexity.</p>
      <sec id="sec-6-1">
        <title>The authors have not employed any Generative AI tools.</title>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>G.</given-names>
            <surname>Nagy</surname>
          </string-name>
          ,
          <article-title>Performance Measurement and Quality Assurance in Higher Education: Application of DEA, AHP, and Bayesian Models, Trends High</article-title>
          .
          <source>Educ</source>
          .
          <volume>4</volume>
          (
          <issue>3</issue>
          ) (
          <year>2025</year>
          )
          <article-title>54</article-title>
          . doi:
          <volume>10</volume>
          .3390/higheredu4030054.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Drozdova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dado</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Malcik</surname>
          </string-name>
          , E. Mehllova,
          <article-title>Open education at universities, quo vadis</article-title>
          ,
          <source>in: Proceedings of the 2013 IEEE 11th International Conference on Emerging eLearning Technologies and Applications (ICETA)</source>
          , IEEE,
          <year>2013</year>
          , pp.
          <fpage>299</fpage>
          -
          <lpage>304</lpage>
          . doi:
          <volume>10</volume>
          .1109/ICETA.
          <year>2013</year>
          .
          <volume>6674407</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>O.</given-names>
            <surname>Vitchenko</surname>
          </string-name>
          ,
          <article-title>Problems of Realizing the Development Potential of Assessing Learners in Higher Military Educational Institutions in Different Training Systems</article-title>
          , in: S. M.
          <string-name>
            <surname>Salcutsan</surname>
          </string-name>
          (Ed.),
          <article-title>Materials of the Scientific-Practical Seminar</article-title>
          , Kyiv, Oct.
          <volume>28</volume>
          ,
          <year>2020</year>
          ,
          <source>CP "Komprint"</source>
          ,
          <year>2020</year>
          , p.
          <fpage>116</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>B.</given-names>
            <surname>Díaz-Lauzurica</surname>
          </string-name>
          ,
          <article-title>Active Learning Methodologies for Increasing the Interest and Engagement in Computer Science Subjects in Vocational Education and Training</article-title>
          ,
          <source>Education Sciences 15(8)</source>
          (
          <year>2025</year>
          )
          <article-title>1017</article-title>
          . doi:
          <volume>10</volume>
          .3390/educsci15081017.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Onyshchenko</surname>
          </string-name>
          et al.,
          <article-title>Method for detection of the modified DdoS cyber attacks on a web resource of an Information and Telecommunication Network based on the use of intelligent systems</article-title>
          ,
          <source>in: Proceedings of the 6th International Workshop on Modern Data Science Technologies Workshop</source>
          , MoDaST'
          <year>2024</year>
          ,
          <string-name>
            <surname>CEUR</surname>
          </string-name>
          , Lviv, Ukraine,
          <year>2024</year>
          , pp.
          <fpage>219</fpage>
          -
          <lpage>235</lpage>
          . URL: https://ceur-ws.
          <source>org/</source>
          Vol-
          <volume>3723</volume>
          /paper12.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>V.</given-names>
            <surname>Krasnobayev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kuznetsov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Yanko</surname>
          </string-name>
          , T. Kuznetsova,
          <article-title>The data errors control in the modular number system based on the nullification procedure</article-title>
          ,
          <source>in: Proceedings of the 3rd International Workshop on Computer Modeling and Intelligent Systems, CMIS-2020</source>
          , CEUR, Zaporizhzhia, Ukraine, (
          <year>2020</year>
          ), pp.
          <fpage>580</fpage>
          -
          <lpage>593</lpage>
          . doi:
          <volume>10</volume>
          .32782/cmis/2608-
          <fpage>45</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>K.</given-names>
            <surname>Rajamanickam</surname>
          </string-name>
          and
          <string-name>
            <given-names>V.</given-names>
            <surname>Kathiravan</surname>
          </string-name>
          ,
          <article-title>Adaptive assessment system for generating sequential test sheets using item response theory</article-title>
          ,
          <source>in: Proceedings of the International Conference on Pattern Recognition, Informatics and Mobile Engineering (ICPRIME</source>
          <year>2013</year>
          ), Salem, India,
          <year>2013</year>
          , pp.
          <fpage>120</fpage>
          -
          <lpage>124</lpage>
          . doi:
          <volume>10</volume>
          .1109/ICPRIME.
          <year>2013</year>
          .
          <volume>6496458</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>C.</given-names>
            <surname>Nie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Leung and K. -Y. Cai</surname>
          </string-name>
          , Adaptive Combinatorial Testing,
          <year>2013</year>
          13th International Conference on Quality Software, Najing, China,
          <year>2013</year>
          , pp.
          <fpage>284</fpage>
          -
          <lpage>287</lpage>
          , doi: 10.1109/QSIC.
          <year>2013</year>
          .
          <volume>22</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>L.</given-names>
            <surname>Na</surname>
          </string-name>
          and
          <string-name>
            <given-names>G.</given-names>
            <surname>Chenhua</surname>
          </string-name>
          ,
          <source>Scientific Study on the Learning Motivation of Higher Vocational Students in Shanghai Based on SPSS 11.5</source>
          ,
          <issue>2021</issue>
          2nd International Conference on Education,
          <article-title>Knowledge and Information Management (ICEKIM), Xiamen</article-title>
          , China,
          <year>2021</year>
          , pp.
          <fpage>75</fpage>
          -
          <lpage>80</lpage>
          , doi: 10.1109/ICEKIM52309.
          <year>2021</year>
          .
          <volume>00025</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S. O.</given-names>
            <surname>Naumenko</surname>
          </string-name>
          ,
          <article-title>Foreign Experience in Using Computerized Adaptive Testing as a SubjectOriented Tool for Evaluating Educational Outcomes of Learners</article-title>
          ,
          <source>in: Impact of Innovation on Science: Fundamental and Applied Aspects</source>
          , Verona, Italy, Jun.
          <volume>26</volume>
          ,
          <year>2020</year>
          , vol.
          <volume>2</volume>
          , pp.
          <fpage>41</fpage>
          -
          <lpage>45</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>L. T.</given-names>
            <surname>Hung</surname>
          </string-name>
          and
          <string-name>
            <given-names>N. T.</given-names>
            <surname>Ha</surname>
          </string-name>
          , Experimental Research and Application of Computerized Adaptive Tests to assess
          <source>Learners' Competencies, 2021 3rd International Conference on Computer Science and Technologies in Education (CSTE)</source>
          , Beijing, China,
          <year>2021</year>
          , pp.
          <fpage>69</fpage>
          -
          <lpage>74</lpage>
          , doi: 10.1109/CSTE53634.
          <year>2021</year>
          .
          <volume>00021</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>F. B.</given-names>
            <surname>Baker</surname>
          </string-name>
          ,
          <source>The Basics of Item Response Theory</source>
          , 2nd ed.,
          <source>ERIC Clearinghouse on Assessment and Evaluation</source>
          , (
          <year>2001</year>
          ). URL: https://files.eric.ed.gov/fulltext/ED458219.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>S.</given-names>
            <surname>Yuan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zhao</surname>
          </string-name>
          and
          <string-name>
            <given-names>H.</given-names>
            <surname>Bai</surname>
          </string-name>
          ,
          <article-title>The principle of Rasch model and compare with the other models</article-title>
          ,
          <source>2011 International Conference on Computer Science and Service System (CSSS)</source>
          ,
          <year>Nanjing</year>
          ,
          <year>2011</year>
          , pp.
          <fpage>3731</fpage>
          -
          <lpage>3733</lpage>
          , doi: 10.1109/CSSS.
          <year>2011</year>
          .
          <volume>5974856</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>D.</given-names>
            <surname>Triono</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Sarno</surname>
          </string-name>
          and
          <string-name>
            <given-names>K. R.</given-names>
            <surname>Sungkono</surname>
          </string-name>
          ,
          <article-title>Item Analysis for Examination Test in the Postgraduate Student's Selection with Classical Test Theory</article-title>
          and Rasch Measurement Model, 2020
          <source>International Seminar on Application for Technology of Information and Communication (iSemantic)</source>
          , Semarang, Indonesia,
          <year>2020</year>
          , pp.
          <fpage>523</fpage>
          -
          <lpage>529</lpage>
          , doi: 10.1109/iSemantic50169.
          <year>2020</year>
          .
          <volume>9234204</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>S.</given-names>
            <surname>Maslovskyi</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Sachenko</surname>
          </string-name>
          ,
          <article-title>Adaptive test system of student knowledge based on neural networks</article-title>
          ,
          <source>2015 IEEE 8th International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS)</source>
          , Warsaw, Poland,
          <year>2015</year>
          , pp.
          <fpage>940</fpage>
          -
          <lpage>944</lpage>
          , doi: 10.1109/IDAACS.
          <year>2015</year>
          .
          <volume>7341442</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>N.</given-names>
            <surname>Albalushi</surname>
          </string-name>
          and
          <string-name>
            <given-names>W. S.</given-names>
            <surname>Awad</surname>
          </string-name>
          ,
          <article-title>Generating Questions Bank for Adaptive Assessment Using Machine Learning Techniques: Review, 2024 International Conference on IT Innovation and Knowledge Discovery (ITIKD), Manama</article-title>
          , Bahrain,
          <year>2025</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          , doi: 10.1109/ITIKD63574.
          <year>2025</year>
          .
          <volume>11004696</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <surname>C.</surname>
          </string-name>
          <article-title>McClenen, Development of Adaptive Formative Assessment System Using Computerized Adaptive Testing and Dynamic Bayesian Networks</article-title>
          ,
          <source>Applied Sciences</source>
          <volume>10</volume>
          (
          <issue>22</issue>
          ) (
          <year>2020</year>
          )
          <article-title>8196</article-title>
          . URL: https://www.mdpi.com/2076-3417/10/22/8196/pdf . doi:
          <volume>10</volume>
          .3390/app10228196.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>Microsoft</given-names>
            <surname>Visual</surname>
          </string-name>
          <article-title>Studio software development tool</article-title>
          . URL: https://visualstudio.microsoft.com/ .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <article-title>High-level programming language Microsoft Visual C#</article-title>
          .NET. URL: https://learn.microsoft.com/dotnet/csharp/language-reference/language-specification/ .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <surname>Microsoft</surname>
          </string-name>
          .
          <article-title>NET platform</article-title>
          . URL: https://dotnet.microsoft.com/ .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>Khan</given-names>
            <surname>Academy</surname>
          </string-name>
          <article-title>Blog</article-title>
          . URL: https://blog.khanacademy.org/ .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>Duolingo</given-names>
            <surname>English Test Technical</surname>
          </string-name>
          <article-title>Manual</article-title>
          . URL: https://englishtest.duolingo.com/technicalmanual .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <surname>ALEKS</surname>
          </string-name>
          <article-title>(Assessment and Learning in Knowledge Spaces) platform</article-title>
          . URL: https://www.aleks.com/about_aleks .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <article-title>Carnegie Learning Adaptive Learning Platform</article-title>
          . URL: https://www.carnegielearning.com/adaptive-learning-platform/ .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <article-title>Pearson MyLab platform</article-title>
          . URL: https://www.pearson.com/en-us/subject-catalog/p/mylab.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>