<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Optimization of electronic test parameters in learning management systems</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yevhen Palamarchuk</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Olena Kovalenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Vinnytsia National Technical University</institution>
          ,
          <addr-line>Khmelnytske shose, 95, 21021, Vinnytsia</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The article presents the results of research on the procedures for creating and adjusting tests in learning management systems, based on an active dialogue with students and on automated analysis of test results. The authors studied the procedures for creating, evaluating the quality of, and adjusting tests in the JetIQ electronic learning management system. To evaluate the optimized method of creating and adjusting tests, the authors use modules for assessing test quality, feedback modules, and modules for analyzing answers to questions. Students are assessed by tests several times: by topic, at the intermediate control of knowledge (colloquium), and at the final control of knowledge (exam). This approach makes it possible to change the procedure of the final assessment of knowledge, to adjust questions, and to select the most correct ones to combine in the exam. The resulting student activity profile allows the teacher to be more objective when using test scores. The results of evaluating the method of optimizing test adjustment procedures are also presented; they indicate a significant effect in saving time.</p>
      </abstract>
      <kwd-group>
        <kwd>learning management information system</kwd>
        <kwd>knowledge testing module</kwd>
        <kwd>quality feedback module</kwd>
        <kwd>answer analysis module</kwd>
        <kwd>"smart test"</kwd>
        <kwd>JetIQ VNTU</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The learning management system should be an information ecosystem and cover all educational
processes. The ecosystem principle involves the reuse of information that is entered
into the system once. Among the various modules of a learning management system, taking the
authors' system JetIQ VNTU as an example, the teacher's and student's offices can be
distinguished first of all.</p>
      <p>They form the basis of the information ecosystem. Educational processes are also automated
using the electronic dean's office module. Information is provided through the news system
and the JetIQ sites of the departments. The teacher's office of the learning management system
contains various modules. The "My repository" module is used to upload electronic resources.
Access for students under the educational programs of the specialty is provided by the
Navigator of educational resources of the discipline module. The IQ test module is
used to control knowledge by testing students.</p>
      <p>Improving testing modules in the learning management system is a topical issue for
educational institutions that actively use testing tools. The format of distance and blended learning
involves the use of tests both during training and during control activities.</p>
      <p>
        Known practical approaches to forming test tasks and improving testing tools in
learning management systems include tools for the quantitative and qualitative
assessment of students' knowledge and the analysis of
learning outcomes for forming test groups [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Among them are tests and quizzes, exercises,
written works, individual interviews, and special activity reports [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>The testing module is one of the most complex and intelligent modules, for the following
reasons:
• The testing module should make it possible to create questions of different types
and to conduct testing in different modes - training and examination.
• The testing module should make it possible to assess the quality of the test according to
certain parameters; ideally, such an assessment should be carried out automatically.
• Test validation and verification procedures should also be automated.</p>
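      <p>The requirements above can be sketched as a minimal data model. This is an illustrative Python sketch; the names and types are assumptions, not the actual implementation of the IQ test module:</p>

```python
# Minimal illustrative data model for a testing module supporting different
# question types and testing modes (training vs. examination).
from dataclasses import dataclass, field
from enum import Enum, auto

class QuestionType(Enum):
    SINGLE_CHOICE = auto()      # one correct answer
    MULTIPLE_CHOICE = auto()    # several correct answers are chosen
    MATCHING = auto()           # questions and answers are paired
    COMPUTATIONAL = auto()      # randomized inputs, computed answer

class TestMode(Enum):
    TRAINING = auto()           # repeatable, with instant feedback
    EXAMINATION = auto()        # graded control activity

@dataclass
class Question:
    text: str
    qtype: QuestionType
    answers: list = field(default_factory=list)
    correct: set = field(default_factory=set)  # indices of correct answers

@dataclass
class Test:
    topic: str
    mode: TestMode
    questions: list = field(default_factory=list)

quiz = Test(topic="Databases", mode=TestMode.TRAINING)
quiz.questions.append(Question("Which SQL keyword filters rows?",
                               QuestionType.SINGLE_CHOICE,
                               answers=["WHERE", "ORDER BY", "GROUP BY"],
                               correct={0}))
print(len(quiz.questions))
```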
      <p>
        In various learning management system platforms (Moodle, Collaborator, etc.) [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ], as well as
in some test software applications, the issue of creating and combining tests is only partially
solved, and the procedures for their validation, verification, and quality assessment are not
considered at all. That is why research aimed at improving the procedures for creating and
adjusting tests is relevant.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Best practice involves using tests to assess students' input knowledge and, during
training, to reinforce acquired knowledge and skills. In addition, it is advisable to use the same
questions in different tests to identify errors and peculiarities [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>
        The practice of actively using feedback tests, especially with instant feedback, opens a new
level of "student-teacher" dialogue and helps the teacher form a better understanding
of the student's knowledge and activity [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Research shows that organizing feedback
online or in blended learning creates an active learning environment, and the quality
and detail of the feedback, together with the accumulated statistics, also determine the level of
both the course and the learning [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>Active use of learning management systems involves various tools for assessing the
knowledge and activity of the student. Student activity data can be used effectively
both to motivate learning and to improve the quality of testing.</p>
      <p>
        Distance and blended learning involves the active use of formative and final assessment of
students’ knowledge through tests with built-in modules for the accumulation of statistics and
feedback. Such tools for analytical evaluation of the obtained results are partially implemented
in different LMSs, but they are not used to their full potential. One example is the use of analytical assessment
in a civil engineering course at an Australian university [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>Student activity and its assessment form the basis of the student profile. The profile is based
not only on test statistics but also on other student activities: in particular, students can receive
statuses, awards, and medals for activity, for evaluated dialogues, for the number of tests
performed, for the qualitative assessment of knowledge, etc.</p>
      <p>This comprehensive approach to assessing and motivating students makes it possible to form a
student profile and to obtain quantitative and qualitative assessments of the study of the discipline.</p>
      <p>Providing students with information about their achievements activates motivation
mechanisms, increases their activity in the e-learning system, and adds
additional feedback loops.</p>
      <p>This makes it possible to run procedures that actively verify the quality of test questions based on
teacher-student interaction. Regular training during the academic year is optimal in terms of the
time needed to gather information on the quality of test questions and to identify those that do
not meet the requirements.</p>
      <p>
        The experience of developing and implementing JetIQ VNTU, a system for learning management
and for supporting the scientific and methodological activities of teachers [
        <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
        ], allows us to draw
conclusions about the need for automated modules for assessing the quality of tests and for
automating verification procedures and incident detection. How tests are used at the
university, at the faculty, and by individual teachers depends on the level of implementation of the
learning management system and on the university's policy of using tests to assess knowledge
[
        <xref ref-type="bibr" rid="ref1 ref11">1, 11</xref>
        ].
      </p>
      <p>The general data of the student’s activity include completed tests, tasks, files sent to teachers,
participation in lectures, use of lecture material, dialogues in chat and forum, etc. (Fig. 1).</p>
    </sec>
    <sec id="sec-3">
      <title>3. Proposed Methods and Materials</title>
      <p>The well-known trial-and-error method is actively used by teachers around the world when
creating tests. The main phases of this method are presented in Fig. 1. By a test we will mean
a pool of questions on a particular topic and/or on the entire discipline. This approach has its
historical roots in paper-based testing, but even today the test-improvement phase remains
manual when the test is created in an electronic system that has no special modules for quality
assessment and question verification.</p>
      <p>Development. The duration of this phase depends on many parameters:
1. The total number of questions in the tests;
2. Areas of knowledge;
3. Availability of formulated closed and open answers;
4. Availability of calculation parts in the test questions;
5. Availability of graphics in questions and answers;
6. Use of questions in which several correct answers must be chosen;
7. Use of matching questions, in which questions and answers are paired.</p>
      <p>It is extremely difficult to identify the main factor influencing test development time,
so we can only rely on rough estimates of test development based on the experience
of VNTU teachers.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Case study</title>
      <p>Consider the features of the procedures for creating and adjusting the test using special electronic
modules of the JetIQ VNTU system.</p>
      <p>The teacher creates a test and publishes it in the electronic test system. Students take this test
and receive test results, which form an information base for the automatic analysis of test data.</p>
      <p>All the data obtained on the evaluation of the quality of tests, errors in them, the distribution
of scores in the questions form an information base for the adjustment of the test.</p>
      <p>If the tests are presented on paper, are static in an electronic testing system, or the
system has no analytical units for assessing test quality, then all the procedures
for adjusting the tests are carried out by the teacher manually. After adjustment, the test can be
re-applied.</p>
      <p>Improving the quality of tests is a repetitive procedure that is performed cyclically. If
tests are formed by topic and students are assessed several times, such an adjustment
can be made according to the results of each test. The level of test quality increases gradually
through the following steps:
• Erroneous questions are corrected after the errors are identified by the teacher and/or students;
• Questions for which mainly or mostly wrong answers were given are reviewed: such questions
are not qualitatively formulated and need to be adjusted;
• To increase the level of quality assessment, the teacher should design the test in
such a way that, on the one hand, the student's answers to the questions allow the level of
knowledge to be assessed as accurately as possible and, on the other hand, the percentage of
guessed correct answers is minimized.
In our opinion, this criterion is best met by questions with randomized input conditions
and a computational test program. It is also important to have a large enough number of
questions; by our estimates, there should be at least 80-100 questions per
credit of the discipline.</p>
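      <p>The criterion of randomized input conditions with a computational test program can be illustrated by a short sketch. The question, formula, and names below are hypothetical and are not taken from JetIQ:</p>

```python
# Illustrative generator for a question with randomized input conditions and a
# computed correct answer - the kind of question that resists answer-guessing.
import random

def make_ohms_law_question(rng):
    """Generate one randomized instance: given U and R, the student computes I."""
    voltage = rng.randint(10, 240)            # randomized input condition
    resistance = rng.choice([5, 10, 20, 50])  # randomized input condition
    correct = round(voltage / resistance, 2)  # computed by the test program
    text = f"A circuit has U = {voltage} V and R = {resistance} Ohm. Find I (A)."
    return text, correct

def check_answer(student_answer, correct, tol=0.01):
    # accept any answer within the tolerance of the computed value
    return not abs(student_answer - correct) > tol

rng = random.Random(42)
text, correct = make_ohms_law_question(rng)
print(text)
print(check_answer(correct, correct))
```

      <p>Because each student receives a different generated instance, the pool of effectively distinct questions grows without extra authoring effort.</p>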
      <p>The experience of using the testing unit in the JetIQ VNTU system shows that such
adjustment is widely used during the training process. Tests are used not only on individual
topics but also in intermediate control measures, and the corrected questions form the basis for
combining the final exam tests.</p>
      <p>Also, many student activities are recorded in the e-learning system. These include completed
tests, tasks sent to teachers, participation in lectures, use of lecture material, dialogues in the
chat and forum, etc. (Fig. 3).</p>
      <p>Such work with tests motivates students to increase their activity in
the e-learning system and gives them a sense of partnership in correcting questions.
In general, feedback on the correctness of questions is one of the types of dialogue
with the student during testing. This comprehensive approach to assessing and motivating
students makes it possible to form a student profile and to understand how active a student
was during the study of the discipline.</p>
      <p>The student's profile is based not only on statistical data from passing tests but also on the
student's other activities, in particular the badges and medals received for activity, the evaluation
of dialogues, the number of tests passed, the qualitative assessment of knowledge, etc.</p>
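      <p>The profile aggregation described above can be sketched as follows. This is an illustrative model; the fields and scoring rules are assumptions, not the JetIQ data model:</p>

```python
# Illustrative aggregation of student activities into a simple profile.
from dataclasses import dataclass, field

@dataclass
class StudentProfile:
    name: str
    tests_passed: int = 0
    badges: list = field(default_factory=list)
    dialogue_scores: list = field(default_factory=list)

    def record_test(self, score, passing=60):
        # count the attempt and award a badge for a passing score
        self.tests_passed += 1
        if score >= passing:
            self.badges.append("test-passed")

    def activity_summary(self):
        avg_dialogue = (sum(self.dialogue_scores) / len(self.dialogue_scores)
                        if self.dialogue_scores else 0.0)
        return {"tests": self.tests_passed,
                "badges": len(self.badges),
                "avg_dialogue": avg_dialogue}

profile = StudentProfile("Student A")
profile.record_test(85)
profile.dialogue_scores.append(4.0)
print(profile.activity_summary())
```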
      <p>Let us estimate the duration of the process of optimizing the quality of a test when an
electronic learning system is used. The duration of the creation stage is similar to that of
creating tests on any medium and in any electronic system. The main time is spent on
formulating the questions and answer options, preparing calculation tasks and matching tasks,
preparing and embedding graphics, checking the formed tests, and reviewing them as students
will see them.</p>
      <p>Each subsequent iteration consists of three intervals: the testing time of the students; the
time in which the modules of the electronic system form the results of the analysis of the
quality of the test and its components; and the time in which, according to this analysis, the
teacher forms the necessary changes and enters them into the system. Because the number of
errors and inaccuracies found in the previous stage is reduced at each correction, the duration
of the subsequent correction phases decreases, which is why the time allotted for training tests
should not be limited. Repeated passing of tests in the material-study mode is recommended for
students: it allows the teacher to make adjustments almost continuously, right up to the exams.
The results of the adjustment increase the quality of the test and decrease the variance in the
answers to its questions.</p>
      <p>Let us calculate the test optimization time. Let T_c be the time to create the test and T_i the
time to correct questions in the i-th iteration. The total time to create and optimize the test
over n iterations is
T = T_c + T_1 + T_2 + ... + T_n. (1)</p>
      <p>Let us perform a situational calculation of the required number of adjustments to the test
questions. Let the test have Q_0 questions that need adjustment. The test will be considered
adjusted when the number of remaining incorrect questions falls below one, Q_n &lt; 1.</p>
      <p>Let us estimate the adjustment time on the n-th phase of the iteration, taking into account
the types of questions. For a test with a total number of questions N, the results of verification
on the n-th iteration form a general adjustment Q, which consists of the following types of
questions: Q_a, questions with an ambiguous answer; Q_g, questions of easy guessing; Q_w,
questions with incorrect wording; Q_r, questions with incorrect answers:
Q = Q_a + Q_g + Q_w + Q_r. (2)</p>
      <p>We will assume that in the process of correction the teacher may make mistakes with
probabilities p_a, p_g, p_w, p_r according to each type of question. Then the number of correctly
adjusted questions can be calculated as
Q_f = Q_a ∗ (1 − p_a) + Q_g ∗ (1 − p_g) + Q_w ∗ (1 − p_w) + Q_r ∗ (1 − p_r). (3)</p>
      <p>For each subsequent stage, the calculation of Q_f is similar (the number of questions of each
type after adjustment is indicated in lower case):
Q_f = q_a + q_g + q_w + q_r. (4)</p>
      <p>We introduce correction coefficients p, which characterize the ratio of questions that remain
incorrect to the number fixed, for each type of question. Consider the case where all the
questions belong to the same type, so a single coefficient p applies. After the first adjustment,
the number of incorrect questions is
Q_1 = Q_0 ∗ p. (5)</p>
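      <p>The adjustment model above can be checked numerically. The question counts and correction-error probabilities below are illustrative values, not measured data:</p>

```python
# Numerical sketch of the per-type adjustment model: after n correction passes,
# each type contributes Q * p**n still-incorrect questions.
def remaining_incorrect(counts, error_probs, iterations):
    """Expected number of still-incorrect questions after n correction passes."""
    return sum(q * p ** iterations for q, p in zip(counts, error_probs))

# Q_a, Q_g, Q_w, Q_r: ambiguous, easy-to-guess, badly worded, wrong-answer questions
counts = [8, 5, 12, 4]
# p_a, p_g, p_w, p_r: probability that the teacher's fix is itself wrong
error_probs = [0.2, 0.1, 0.3, 0.1]

for n in range(4):
    print(n, round(remaining_incorrect(counts, error_probs, n), 3))
```

      <p>The count of incorrect questions shrinks geometrically, which is what makes the logarithmic iteration bound below possible.</p>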
      <p>For the general case of n repetitions,
Q_n = Q_0 ∗ p^n. (6)</p>
      <p>In the case of no correction errors (p = 0), a single iteration is sufficient. If p &lt; 1, the
condition for the test to be adjusted,
Q_n = Q_0 ∗ p^n &lt; 1, (7)
can be written as
n &gt; ln Q_0 / ln(1/p). (8)</p>
      <p>From the last formula we can conclude that, in the presence of questions of different types,
the required number of iterations can be represented as
n &gt; ln Q_a / ln(1/p_a) + ln Q_g / ln(1/p_g) + ln Q_w / ln(1/p_w) + ln Q_r / ln(1/p_r). (9)</p>
      <p>We introduce the value Δt, which characterizes the average time the teacher needs to correct
one question of a certain type. Then the total time to adjust the test questions can be written as
T = Δt_a ∗ ln Q_a / ln(1/p_a) + Δt_g ∗ ln Q_g / ln(1/p_g) + Δt_w ∗ ln Q_w / ln(1/p_w) + Δt_r ∗ ln Q_r / ln(1/p_r). (10)</p>
      <p>Note that these calculations are valid only when students pass the incorrect questions a
sufficient number of times m. Therefore, the minimum duration of repeated testing should
depend on this value and on the time interval between the phases of the test T_i − T_{i−1}
(Fig. 4). Let us determine the factors that affect the total number of passes m. For a test with a
total number of questions N, passed by S students, each of whom is randomly offered k
questions, each question is shown on average
S ∗ k / N (11)
times per round of testing; to accumulate m passes of each question,
m ∗ N / (S ∗ k) (12)
rounds are therefore required. To reliably diagnose incorrect questions, the number of
completed tests should be as large as possible, but in practice it is limited; in the ideal case m
reaches its minimum value of 1.</p>
      <p>The duration of the interval between adjustments is then
T_i − T_{i−1} = (m ∗ N / (S ∗ k)) ∗ (t_r + t_a), (13)
where the interval t_r characterizes the period between the possibilities of re-passing the test
and t_a is the average time to answer the test questions.</p>
      <p>For example, exams and control tests can be separated by several months. This significantly
complicates obtaining the necessary amount of data for the reliable diagnosis of incorrect
questions. For training tests, the interval t_r can be significantly lower; in the case of automated
training systems it can be reduced to zero, since no special passage is required - the data are
collected while students train and/or pass intermediate control. The duration of the interval
then becomes
T_i − T_{i−1} = (m ∗ N / (S ∗ k)) ∗ t_a. (14)</p>
      <p>Thus, the optimization of adjustment time is associated with the organization of the periods
in which students have the opportunity to take tests.</p>
      <p>The motivation of the teacher and the students in organizing training tests with a sufficient
number of passes is that students gain knowledge over repeated attempts (rote memorization is
excluded by the content, the different types of tests, and their random mixing), while the
teacher receives statistics for the verification of test questions. For this purpose, the teacher
uses test questions on topics in colloquia or in practical or laboratory classes.</p>
      <p>If the teacher does not use training tests, the adjustment phase increases significantly:
verification statistics will be obtained only after the exam. The correction of test questions then
becomes a posteriori, and its results can be used only for the following groups.</p>
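      <p>The iteration-count and interval estimates above can be sketched numerically. All parameter values below are illustrative:</p>

```python
# Sketch of the required number of correction iterations and of the interval
# between adjustments, following the derivation in this section.
import math

def required_iterations(q0, p):
    """Smallest n with q0 * p**n falling below 1 (p strictly between 0 and 1)."""
    return math.floor(math.log(q0) / math.log(1.0 / p)) + 1

def adjustment_interval(m, n_questions, students, offered, t_answer, t_repeat=0.0):
    """Interval T_i - T_{i-1} = (m*N/(S*k)) * (t_r + t_a)."""
    rounds = m * n_questions / (students * offered)
    return rounds * (t_repeat + t_answer)

# 10 faulty questions, 30% chance a fix is itself wrong
print(required_iterations(10, 0.3))
# 3 passes per question, 100 questions, 30 students, 20 questions offered,
# 15 minutes per attempt, automated training system (t_r = 0)
print(adjustment_interval(m=3, n_questions=100, students=30,
                          offered=20, t_answer=15 * 60))
```

      <p>With the repeat interval t_r reduced to zero, only the answering time t_a limits how quickly diagnostic data accumulates, which matches the conclusion about automated training systems.</p>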
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>To quickly optimize the test parameters, the following conditions must be provided:
1. Form a test by topic and use it in the current learning process for a large number of
students. This allows the test procedure to be run many times and poor-quality
questions to be identified.
2. Such training tests should have no restrictions on repeating the test.
3. The period of data preparation for automated analysis is proportional to the total
number of questions in the tests and inversely proportional to the number of students
taking the tests and to the number of questions offered in each test.</p>
      <p>Static tests that do not change and are not adjusted, especially paper-based ones, are poorly
suited to quality optimization because of the long periods of their application and the difficulty
of accumulating sufficient data.</p>
      <p>Exam tests should be formed by combining training tests that have been verified to a
sufficient level.</p>
      <p>The required quality of tests can be provided by special software modules that accumulate
and process test data and carry out the continuous "testing-processing-adjustment" cycle
within a short test quality assessment period. Such software modules should have mathematical
tools for analyzing the answers to the test questions, assessing the quality of the questions, the
reliability of the results of the whole test, etc.</p>
      <p>The testing module in the learning management system is one of the most complex. The
testing process involves the active work of students and provides them with opportunities
to receive feedback from the teacher on the testing results. In addition, the accumulated
statistical information makes it possible to analyze the results of assessing students' knowledge
and the quality of the tests and of their individual questions.</p>
      <p>The general statistics on the use of test control during 01.09.2019 - 02.11.2020 are as follows:
total number of tests - 3238; answers to the questions of the electronic TestIQ tests - 302365.
Given that only the first modules had been completed and there were no final tests yet, we can
conclude that training tests were actively used and that an analytics base for their adjustment
was accumulated. Training tests are the basis for the formation of examination tests.</p>
      <p>The prospects for the development of the testing module are:
1. Modification of the test evaluation system.
2. Development and implementation of new procedural modules, such as reminders about the
timing of training tests, reports on erroneous test questions, etc.
3. Development of the concept of using elements of artificial intelligence in the testing
modules.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Acknowledgments</title>
      <p>We express our sincere gratitude for the activity and patience to the teachers and students of
VNTU, who participated in the processes of developing and improving the quality of tests in
the JetIQ system.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Serhii</surname>
          </string-name>
          <string-name>
            <surname>Maslovskyi</surname>
          </string-name>
          ,
          <article-title>Adaptive test system of student knowledge based on neural networks</article-title>
          ,
          <source>Proceedings of the 8th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications</source>
          (IDAACS'
          <year>2015</year>
          ), Warsaw, Poland
          <volume>2</volume>
          (
          <year>2015</year>
          )
          <fpage>940</fpage>
          -
          <lpage>944</lpage>
          . doi:10.1109/IDAACS.2015.7341442.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>N.</given-names>
            <surname>Andriotis</surname>
          </string-name>
          ,
          <article-title>LMS tools to help you assess your students' progress</article-title>
          (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Ostroukh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Blinova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Skvortsova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Nikonov</surname>
          </string-name>
          , I. Ivanova, T. Morozova,
          <article-title>Enhancement of testing process in learning management system moodle</article-title>
          ,
          <source>Asian Journal of Applied Sciences</source>
          <volume>7</volume>
          (
          <year>2014</year>
          )
          <fpage>568</fpage>
          -
          <lpage>580</lpage>
          . doi:10.3923/ajaps.2014.568.580.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Collaborator</surname>
          </string-name>
          ,
          <year>2020</year>
          . URL: https://collaborator.biz/en/.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <article-title>[5] Creating tests and managing test questions. Learn how to create and manage a test for your e-learning courses</article-title>
          ,
          <year>2020</year>
          . URL: https://www.docebo.com/knowledge-base/elearning-how-to-create-and-manage-test/.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>V. D.</given-names>
            <surname>Rinaldi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. A.</given-names>
            <surname>Lorr</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Williams</surname>
          </string-name>
          ,
          <article-title>Evaluating a technology supported interactive response system during the laboratory section of a histology course</article-title>
          ,
          <source>Anatomical Sciences Education</source>
          <volume>10</volume>
          (
          <issue>4</issue>
          ) (
          <year>2017</year>
          )
          <fpage>328</fpage>
          -
          <lpage>338</lpage>
          . doi:10.1002/ase.1667.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>West</surname>
          </string-name>
          , W. Turner,
          <article-title>Enhancing the assessment experience: Improving student perceptions, engagement and understanding using online video feedback</article-title>
          .,
          <source>Innovations in Education and Teaching International</source>
          <volume>53</volume>
          (
          <issue>4</issue>
          ) (
          <year>2016</year>
          )
          <fpage>400</fpage>
          -
          <lpage>410</lpage>
          . doi:10.1080/14703297.2014.1003954.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Gamage</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ayres</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Behrend</surname>
          </string-name>
          , et al.,
          <article-title>Optimising moodle quizzes for online assessments</article-title>
          .,
          <source>International Journal of STEM Education</source>
          <volume>27</volume>
          (
          <issue>6</issue>
          ) (
          <year>2019</year>
          ). doi:10.1186/s40594-019-0181-4.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>O.</given-names>
            <surname>Kovalenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Palamarchuk</surname>
          </string-name>
          , Algorithms of blended learning in it education,
          <source>2018 IEEE 13th International Scientific and Technical Conference on Computer Sciences and Information Technologies (CSIT)</source>
          (
          <year>2018</year>
          )
          <fpage>382</fpage>
          -
          <lpage>386</lpage>
          . doi:10.1109/STC-CSIT.2018.8526605.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Palamarchuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kovalenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Yatskovska</surname>
          </string-name>
          ,
          <article-title>Variable assessment of students' knowledge using the test-iq system</article-title>
          ,
          <source>Proceedings of the 9th scientific-practical conference. - Lviv: Publishing House of the Scientific Society. Shevchenko</source>
          (
          <year>2017</year>
          )
          <fpage>188</fpage>
          -
          <lpage>193</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>J.</given-names>
            <surname>Rhode</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Richter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Gowen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Miller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Wills</surname>
          </string-name>
          ,
          <article-title>Understanding faculty use of the learning management system</article-title>
          ,
          <source>Online Learning 21(3)</source>
          (
          <year>2017</year>
          )
          <fpage>68</fpage>
          -
          <lpage>86</lpage>
          . doi:10.24059/olj.v21i3.1217.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>