Computerized adaptive testing in educational electronic environment of maritime higher education institutions

Olena S. Diahyleva 1, Igor V. Gritsuk 1, Olena Y. Kononova 2 and Alona Y. Yurzhenko 1

1 Kherson State Maritime Academy, 20 Ushakova Ave., Kherson, 73009, Ukraine
2 Maritime College of Kherson State Maritime Academy, 19/2 Ushakova Ave., Kherson, 73000, Ukraine

Abstract
The article is devoted to the organization of the modern learning process, namely the use of an innovative technology – computerized adaptive testing – in the educational electronic environment of maritime higher education institutions. An example of an educational electronic environment built on LMS Moodle is presented in the article. The new technological and methodological opportunities it provides are a priority in the developed methods of control and testing of students' knowledge, skills and abilities. A comparative characteristic of using computerized adaptive testing in an educational electronic environment is given according to different criteria: the role of tests in the learning process; methods of training; equipment; presence of problems in the educational process; level of its control; and learning outcomes. The paper also presents examples of activities to form the communicative competency of future maritime professionals, and lists the types of adaptive tests. The research activities were carried out by second-year cadets of the ship engineering department of the Maritime College of Kherson State Maritime Academy. The experiment was devoted to the formation of communicative competence with the help of the electronic environment of a maritime higher education institution. The results of the experiment proved the positive impact of computerized adaptive testing on the communicative competence of future ship engineers. Further investigation of adaptive testing can also be done for the learning systems of maritime education establishments using simulation technologies of virtual, augmented and mixed realities.
Keywords
distance learning, educational electronic environment, maritime higher education, LMS Moodle, computerized adaptive testing, English for specific purposes

CTE 2020: 8th Workshop on Cloud Technologies in Education, December 18, 2020, Kryvyi Rih, Ukraine
konon2017@ukr.net (O. Y. Kononova)
0000-0003-3741-4066 (O. S. Diahyleva); 0000-0001-7065-6820 (I. V. Gritsuk); 0000-0003-0403-7292 (O. Y. Kononova); 0000-0002-6560-4601 (A. Y. Yurzhenko)
© 2020 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org)

1. Introduction

An important role in the organization of the educational process is played by the use of modern information technologies in the process of selection, accumulation, systematization and transfer of knowledge. The new technological and methodological opportunities they provide are a priority in the developed methods of control and testing of students' knowledge, skills and abilities [1]. One of the tools for managing the learning process is test control [2, 3]. That is why one of the problems requiring priority attention is to improve the testing process and to increase its efficiency. Computer testing can be used in three ways: computer testing as an alternative form of test presentation (the options, and, consequently, the order of presentation of tasks, are fixed); computer testing with automatic generation of different test options (options are formed automatically from an existing set of tasks according to rules set by the developer); and computer adaptive testing (an individual set of tasks is formed for each subject during the testing process) [4]. Adaptive testing is generally adequate to modern trends in distance education and opens up new opportunities for improving the efficiency of educational processes [5].
Our common educational model of the adaptive school is based, in fact, on the general ideas of adaptive learning and adaptive knowledge control [6, 7]. The origins of this approach can be traced back to the pedagogical works of Comenius [8], Pestalozzi et al. [9] and Diesterweg [10]. At the center of their pedagogical systems was the Student. In their works, they called for starting instruction from the point at which the student had stopped, because without this "knowledge base" it is impossible to learn new material further. Insufficient awareness of the real level of students' knowledge and of their ability to assimilate the proposed material has become the main reason for the emergence of adaptive systems based on the principle of individualization of learning, since this principle cannot be fully implemented in the traditional classroom form [11]. Analyzing recent research, we can see many scientists who have dealt with the problem of implementing adaptive tests in the educational process. Modern researchers working on the problem of adaptive testing include Albano et al. [12], Austin et al. [13], Cetin-Berber et al. [14], Collares and Cecilio-Fernandes [15], Istiyono et al. [16], Kang et al. [17], Kozmina et al. [18], Lin and Chang [19], Paap et al. [20], Samsudin et al. [21], van der Linden and Choi [22], van der Linden and Ren [23], Wang et al. [24] and Yasuda et al. [25]. Bradác and Klimes [26] created a model of an adaptive e-learning system for significant optimisation of language learning, with the primary focus on the English language, based on Learning Management Systems (LMS). Their adaptive decision-support system enables the automated creation of study variants suited to each individual student's needs, which current LMSs do not enable. Souki et al. [27] have placed an emphasis on preferred ways of studying.
These scientists provided evidence of the potential of their framework for increasing specific aspects of self-regulated learning and students' performance. The use of adaptive testing is usually possible in personalized learning. Thus, Balogh et al. [28], dealing with the personalization of teaching IT subjects from the point of view of a constructivist approach towards the learner, proposed an e-course creation methodology that fulfils the assumptions needed to personalize education. The study of Susanti et al. [29] experimentally proved the advantages of computerised adaptive tests compared with the baseline, a linear test. However, the study of computerized adaptive testing in the training of maritime professionals (teaching English for professional purposes) has not been fully implemented [30]. Therefore, the purpose of this work is to analyze the content of tests and the results of computer testing of students in order to assess the effectiveness of this type of teaching and to improve testing methods in teaching English for professional purposes. The tasks of our work include the following: description of the types of adaptive testing, examples of adaptive testing, analysis of its advantages and disadvantages, and evaluation of testing.

2. Methods

The participants of this research are a total of 90 cadets (male) aged 17–18 from a Ukrainian maritime education establishment (Maritime College of Kherson State Maritime Academy). They are cadets of three departments: navigation, ship engineering and electrical engineering. The participants of the research were asked to study in an educational electronic environment, including study on the Learning Management System (LMS) Moodle and its activities (Forum, Chat, Lesson, Assignment). The materials used in the LMS Moodle e-course are from "Welcome Aboard: coursebook" created by English teachers of Kherson State Maritime Academy [31].
A range of interrelated methods was used to solve the formulated tasks, achieve the goal and verify the hypothesis:

• theoretical: terminological analysis – to define the basic concepts of the investigation, "communicative competence" and "gamification approach"; analysis and generalization of these concepts from the standpoint of different scientific approaches; synthesis, comparison and generalization of theoretical provisions, normative documents, and the experience of teaching English in a professional direction to future ship engineers, to determine the most appropriate approaches for solving the problem; generalization of scientific-theoretical and practical data for the scientific substantiation of the structural training of future ship engineers; modeling for the development of a structural model for the formation of the foreign language communicative competence of future ship engineers based on a gamification approach;
• empirical: pedagogical monitoring of the state of preparation of future ship engineers for professional activity in order to identify the levels of the specified training; experimental verification of the effectiveness of the developed methods of forming the foreign language communicative competence of future ship engineers on the basis of a gamification approach and the pedagogical conditions of its implementation; interviewing; questionnaires; analysis of the results of sea practices;
• statistical: methods of mathematical statistics in order to analyze experimental research data and interpret them.

3. Results

Computer adaptive tests, which are a fast and quite effective way to measure the level of learning, are becoming more and more widely used for input diagnostic, intermediate and final testing [32]. To determine the level of knowledge of the student, initial testing can be conducted, followed by a control test with questions of that level.
This type of adaptive test is more appropriate to conduct at the initial stage of studying a topic, in order to diagnose the initial level of the student's knowledge, with further correction by presenting tasks of optimal complexity. Currently, there are three options for forming an adaptive test. The first one is pyramidal: without a preliminary assessment, each student is given a task of medium difficulty and then, depending on the answer, the next task is formed, with a difficulty step that is lower or higher by a factor of two [33]. When creating computerized adaptive testing of this type, we use LMS Moodle [34], which supports the "Lesson" activity. This activity allows the creation of a single task (adding a page with data); after answering, the student moves to the next page, whose question depends on the answer provided. An example of a test of this kind in the authors' electronic course on LMS Moodle is illustrated below. The first page of the test consists of a task (Read the case and choose the correct answer to the question below), a quasi-professional situation whose authentic text was taken from an Internet newspaper, the question (What have the crewmembers of Pacific Express done during the pirate attack? Were their actions correct or not?), four possible answers, and a progress bar, which is an element of gamification and reflects the percentage of correctly completed tasks [35]. If the answer is correct, the next page will contain the text of the next question and four possible answers to it. An example of the second page can be seen in figure 2. If the answer to the question is incorrect, then the next page contains a repetition of the text, a simpler question (What type of ship was attacked by pirates?) and fewer options for answering it [36]. Figure 3 shows the page displayed after an incorrect answer.
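For illustration, the pyramidal selection rule described above can be sketched in code. This is a minimal sketch, not the authors' Moodle implementation: the item bank keyed by a 0–100 difficulty scale, the starting step of 25 and the halving rule are all assumptions introduced for the example.

```python
# Sketch of pyramidal adaptive selection: start at medium difficulty,
# move up after a correct answer and down after an incorrect one,
# halving the step each time (hypothetical 0-100 difficulty scale).

def pyramidal_test(item_bank, answer_fn, start=50, step=25, min_step=5):
    """Run a pyramidal test; answer_fn(item) returns True when the
    student answers the item correctly."""
    difficulty, results = start, []
    while step >= min_step:
        item = item_bank[difficulty]
        correct = answer_fn(item)
        results.append((difficulty, correct))
        difficulty += step if correct else -step
        step //= 2  # each subsequent adjustment is twice as small
    return results

# A student who answers everything correctly climbs the pyramid:
bank = {d: f"question at difficulty {d}" for d in range(101)}
print(pyramidal_test(bank, lambda item: True))
# → [(50, True), (75, True), (87, True)]
```

The halving step makes the sequence converge quickly on a difficulty region, which is why only a handful of items is needed per student.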
The second method (flexy level) initially uses any level of difficulty, gradually selecting the appropriate one for the student [37]. When using the "Lesson" activity of LMS Moodle, the average level of difficulty is selected when creating the first page of the test and its question. The teacher manages the settings of the "Lesson" activity: the progress bar – so that students can see their progress during completion; the ongoing score – so that they know the grade needed to pass; the menu – so that they can return to a previous page at any moment. The teacher also provides the option to try a question again, chooses the maximum number of attempts, restricts access, and links the activity with the previous one. The linked media chosen by the teacher consists of instructions on how to use "Lesson" activities. The maximum number of answers is four. The time limit is 60 minutes. The activity can be attempted offline using the mobile application. Students are allowed to review their answers. There is a special setting called Competencies which allows the teacher to connect the activity with course competencies from a Competency framework. The Competency framework allows the teacher to add the list of competencies entered by an administrator from the International Convention on Standards of Training, Certification and Watchkeeping and IMO Model Course 3.17 Maritime English [38]. The Competency framework also has a progress bar where students see the list of activities needed to achieve the competencies of the course. When a student answers the questions created by the teacher correctly all the time, the questions become more difficult. If a student makes a mistake at least once, the questions become easier. In the third, stratified method (stratified adaptive), tasks are taken from a bank of tasks divided by levels of difficulty. If the answer is correct, the next task is taken from the upper level; if it is incorrect, it is taken from the lower level [39].

Figure 1: The first page of the electronic adaptive test "Security case".
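The stratified (stratified adaptive) rule lends itself to a similar sketch. The bank layout, the starting level and the fixed test length below are assumptions made for illustration only.

```python
# Sketch of the stratified adaptive method: the task bank is divided
# into difficulty levels; a correct answer takes the next task from the
# upper level, an incorrect one from the lower level.
import random

def stratified_test(levels, answer_fn, start_level=0, n_items=4):
    """levels: list of task lists ordered from easiest to hardest;
    answer_fn(task) returns True when answered correctly."""
    level, history = start_level, []
    for _ in range(n_items):
        task = random.choice(levels[level])
        correct = answer_fn(task)
        history.append((level, correct))
        if correct:
            level = min(level + 1, len(levels) - 1)  # move to the upper level
        else:
            level = max(level - 1, 0)                # move to the lower level
    return history

# A consistently correct student rises to the hardest stratum:
levels = [["easy question"], ["medium question"], ["hard question"]]
print([lvl for lvl, _ in stratified_test(levels, lambda t: True)])
# → [0, 1, 2, 2]
```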
The distinctive features of adaptive testing in comparison with other forms of testing are the following: each subject receives his own set of tasks, so the content and length of the test may differ between subjects; and each subject is evaluated individually (at his own level) with a minimum measurement error [40]. An example of the stratification testing method in an LMS Moodle "Lesson" activity, where a student can choose his level of knowledge about burns by himself, can be seen in figure 4. Such tests have advantages and disadvantages. The advantages of adaptive tests include a reduction in testing time, as the student may be presented with far fewer tasks, while the diagnostic possibilities are not reduced. High measurement accuracy is also achieved; the reliability of test results in this case is the highest. These tests also allow flexible and accurate measurement of students' knowledge, identify topics that are poorly known, and make it possible to ask a number of additional questions about them [41]. The disadvantages include the fact that it is not known in advance how many questions need to be asked to determine the level of knowledge of the student; that if the student did not answer all the questions, the result can only be evaluated by the number of questions answered; and that such tests can be used only on a computer [42].

Figure 2: Security case's page in case of correct first answer.
Figure 3: Incorrect answer page example.

Another important question that arises when using monitoring programs is how to evaluate the results. There are different assessment methods for adaptive testing [43]. The first method is when the assessment is based on the number of correctly performed tasks, without taking into account their complexity. For example, if a student performed correctly 0–40% of tasks – "unsatisfactory", 40–70% of tasks – "satisfactory", 70–90% of tasks – "good", and more than 90% – "excellent".
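The first assessment method above reduces to a simple percentage mapping. A minimal sketch follows; the handling of exact boundary values (e.g. precisely 40% or 90%) is an assumption, since the text gives ranges only.

```python
# Grading by the share of correctly completed tasks, ignoring difficulty
# (the first assessment method): below 40% unsatisfactory, 40-70%
# satisfactory, 70-90% good, above 90% excellent.

def grade_by_percentage(correct, total):
    pct = 100 * correct / total
    if pct < 40:
        return "unsatisfactory"
    if pct < 70:
        return "satisfactory"
    if pct <= 90:
        return "good"
    return "excellent"

print(grade_by_percentage(8, 10))  # 80% of tasks correct → "good"
```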
The second method is when all the questions are first broken down by levels of difficulty, and the correct completion of the next level (given completion of the previous ones) leads to an increase of 1 point (in terms of the traditional five-point grading system). This method takes into account the quality of the test and makes it possible to differentiate students to some extent, even with insufficient adequacy of the test material. When grading by the third method, the best result among the test participants is taken into account and evaluated as "excellent", and all other marks are set relative to this result [44]. The fourth method, like the second, is based on an initial distribution of all questions by levels of complexity, each of which has its own assessment. In contrast to the second method, the student is immediately assigned the level of complexity that he can handle and works within this level. If all tasks are performed correctly, the corresponding grade is given. If the number of incorrect answers exceeds 30%, the level of complexity is reduced [45].

Figure 4: The example of a stratification testing method on LMS Moodle.

When testing, three criteria of test quality are taken into account: reliability, validity and objectivity. The reliability index is characterized by the accuracy and stability of the measurement results when the test is used repeatedly. In order for the test to really perform its functions, it must also comply with the requirement of "validity", i.e. reflect the scientific content of the discipline and be suitable to serve as a means of measurement. The most common reasons for the invalidity of control are copying, hints, "coaching" by tutors, indulgence, excessive demands, and the use of a method in the absence of appropriate conditions [46]. In order to increase the validity of pedagogical control, expert assessments of the control material are usually used.
Objectivity is a criterion that combines reliability, validity and a number of aspects of a psychological, pedagogical, ethical and value nature. When developing a test, all these requirements must be taken into account, but final conclusions can be made only after its repeated practical application [47].

4. Discussion

The analysis of scientific knowledge and pedagogical practice demonstrates that student tests can be effective. By analyzing the success data after using the tests in the electronic course, one can observe that the current state of formation of the communicative competence of 2nd-year future ship engineers of the Maritime College of KSMA in 2019–2020 is better compared with the 2018–2019 academic year. According to the results, we see an increase in success (by 17%) and in knowledge quality (by 9%). The qualitative indicator of success was taken as the number of students graded "good" and "very good", multiplied by 100% and divided by the total number of students. The absolute success indicator was taken as the number of students graded "good", "very good" and "sufficient", multiplied by 100% and divided by the total number of students. The data was taken from the processing of control testing results on the LMS Moodle of Kherson State Maritime Academy (Stop and check activities and Progress tests). Stop and check is testing conducted at the end of every module. Progress tests are conducted twice a semester.

Figure 5: Comparison of statistics for the 2018–2019 academic year and the 2019–2020 one.

The surveys completed by cadets of Kherson State Maritime Academy were created by English teachers in Google Forms. Links to the surveys were located in the LMS Moodle English for professional purpose courses. The results of the survey showed the data graphically represented in figure 6. The cadets were asked to answer a few questions on the usefulness of the new system of computerized adaptive testing in the e-course of English for specific purposes.
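The two success indicators defined above can be computed directly from a list of grades. This is a minimal sketch; the grade labels are assumptions mirroring the wording of the definitions.

```python
# Qualitative indicator: share of students graded "good" or "very good";
# absolute success indicator: share graded "good", "very good" or
# "sufficient"; both expressed as percentages of the whole group.

def quality_indicator(grades):
    n = sum(g in ("good", "very good") for g in grades)
    return 100 * n / len(grades)

def success_indicator(grades):
    n = sum(g in ("good", "very good", "sufficient") for g in grades)
    return 100 * n / len(grades)

grades = ["very good", "good", "sufficient", "sufficient", "unsatisfactory"]
print(quality_indicator(grades), success_indicator(grades))  # → 40.0 80.0
```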
The course contained adaptive tests on the following topics: First aid on board; Emergency situations onboard a vessel; Lifesaving appliances and their use; Marine environment and its protection; Maritime security. The computerized adaptive tests were located on LMS Moodle with the help of the Lesson activity, which enables a teacher to deliver content and/or practice activities in an interesting and flexible way. The teacher can also create "branching" exercises where students are presented with content and then, depending on their responses, are directed to specific pages. The content may be text or multimedia [48]. According to the survey results, most students find adaptive testing useful and comfortable for studying English for specific purposes.

Figure 6: The results of the survey conducted among students of the ship engineering department on computerized adaptive testing in English for specific purposes.

5. Conclusions

Today, learning through the use of computer technology has a number of features that significantly distinguish it from traditional learning. Studies have shown that the use of adaptive computer testing technologies can reduce testing time by 50–60%, helps to individualize each test, and gives a more accurate assessment of knowledge and skills. The tasks offered to the student gradually become more complex and ideally suit his knowledge and skills, increasing his motivation to pass the test. The expediency of adaptive control follows from the need to streamline traditional testing. Every teacher understands that a well-prepared student does not need to be given easy tasks, as they do not have significant developmental potential. Similarly, due to the high probability of a wrong answer, it makes no sense to give difficult tasks to a weak student. It is known that difficult and very difficult tasks reduce the learning motivation of many students.
The use of tasks that correspond to the level of preparedness significantly increases the accuracy of measurement and minimizes the time of individual testing to about 5–10 minutes. Adaptive testing makes it possible to deliver tasks by computer at the optimal, approximately 50% level of probability of a correct answer. We see the prospects for our further research in this direction in the study of adaptive testing in the system of in-depth learning.

References

[1] Open Assessment Technologies, Using adaptive testing in digital assessment to support learning, 2020. URL: https://www.taotesting.com/blog/using-adaptive-testing-in-digital-assessment-to-support-learning/.
[2] A. Abdula, H. Baluta, N. Kozachenko, D. Kassim, Peculiarities of using of the Moodle test tools in philosophy teaching, CEUR Workshop Proceedings 2643 (2020) 306–320.
[3] K. Polhun, T. Kramarenko, M. Maloivan, A. Tomilina, Shift from blended learning to distance one during the lockdown period using Moodle: test control of students' academic achievement and analysis of its results, Journal of Physics: Conference Series 1840 (2021) 012053. URL: https://doi.org/10.1088/1742-6596/1840/1/012053. doi:10.1088/1742-6596/1840/1/012053.
[4] V. Avanesov, Scientific problems of test control, MSU, Moscow, 2004.
[5] Computer adaptive testing: Background, benefits and case study of a large-scale national testing programme, 2019. URL: https://tinyurl.com/w5925sct.
[6] O. Haranin, N. Moiseienko, Adaptive artificial intelligence in RPG-game on the Unity game engine, CEUR Workshop Proceedings 2292 (2018) 143–150.
[7] K. Osadcha, V. Osadchyi, S. Semerikov, H. Chemerys, A. Chorna, The review of the adaptive learning systems for the formation of individual educational trajectory, CEUR Workshop Proceedings 2732 (2020) 547–558.
[8] J. Comenius, La conception de l'éducation des jeunes enfants selon, International Journal of Early Childhood 25 (1993) 60–64. doi:10.1007/BF03185620.
[9] J. Pestalozzi, J.
Piaget, F. Froebel, Conversation 4: How do young children learn?, in: Early Childhood Education: History, Philosophy and Experience, 2 ed., SAGE Publications Inc., 2014, pp. 98–104. doi:10.4135/9781446288863.
[10] A. Diesterweg, Wegweiser zur Bildung für deutsche Lehrer: II, volume 2, Bädeker, 1875.
[11] M. Petrova, M. Mintii, S. Semerikov, N. Volkova, Development of adaptive educational software on the topic of "Fractional Numbers" for students in grade 5, CEUR Workshop Proceedings 2292 (2018) 162–192.
[12] A. Albano, L. Cai, E. Lease, S. McConnell, Computerized adaptive testing in early education: Exploring the impact of item position effects on ability estimation, Journal of Educational Measurement 56 (2019) 437–451. doi:10.1111/jedm.12215.
[13] E. Austin, A. Henson, H. Kim, K. Ogle, H. Park, Analysis of computer adaptive testing in a pathopharmacology course, Journal of Nursing Education 60 (2021) 155–158. doi:10.3928/01484834-20210222-06.
[14] D. Cetin-Berber, H. Sari, A. Huggins-Manley, Imputation methods to deal with missing responses in computerized adaptive multistage testing, Educational and Psychological Measurement 79 (2019) 495–511. doi:10.1177/0013164418805532.
[15] C. Collares, D. Cecilio-Fernandes, When I say . . . computerised adaptive testing, Medical Education 53 (2019) 115–116. doi:10.1111/medu.13648.
[16] E. Istiyono, W. Dwandaru, R. Setiawan, I. Megawati, Developing of computerized adaptive testing to measure physics higher order thinking skills of senior high school students and its feasibility of use, European Journal of Educational Research 9 (2020) 91–101. doi:10.12973/eu-jer.9.1.91.
[17] H.-A. Kang, Y. Zheng, H.-H. Chang, Online calibration of a joint model of item responses and response times in computerized adaptive testing, Journal of Educational and Behavioral Statistics 45 (2020) 175–208. doi:10.3102/1076998619879040.
[18] I. Kozmina, D. Lukyantsev, O.
Musorina, Computer adaptive testing as an automated control of students' level of preparedness taking into account their individual characteristics, Institute of Electrical and Electronics Engineers Inc., 2020. doi:10.1109/Inforino48376.2020.9111661.
[19] C.-J. Lin, H.-H. Chang, Item selection criteria with practical constraints in cognitive diagnostic computerized adaptive testing, Educational and Psychological Measurement 79 (2019) 335–357. doi:10.1177/0013164418790634.
[20] M. Paap, S. Born, J. Braeken, Measurement efficiency for fixed-precision multidimensional computerized adaptive tests: Comparing health measurement and educational testing using example banks, Applied Psychological Measurement 43 (2019) 68–83. doi:10.1177/0146621618765719.
[21] M. Samsudin, T. Somchut, M. Ismail, Evaluating computerized adaptive testing efficiency in measuring students' performance in science TIMSS, Jurnal Pendidikan IPA Indonesia 8 (2019) 547–560. doi:10.15294/jpii.v8i4.19417.
[22] W. van der Linden, S. Choi, Improving item-exposure control in adaptive testing, Journal of Educational Measurement 57 (2020) 405–422. doi:10.1111/jedm.12254.
[23] W. van der Linden, H. Ren, A fast and simple algorithm for Bayesian adaptive testing, Journal of Educational and Behavioral Statistics 45 (2020) 58–85. doi:10.3102/1076998619858970.
[24] W. Wang, L. Song, T. Wang, P. Gao, J. Xiong, A note on the relationship of the Shannon entropy procedure and the Jensen–Shannon divergence in cognitive diagnostic computerized adaptive testing, SAGE Open 10 (2020). doi:10.1177/2158244019899046.
[25] J.-I. Yasuda, N. Mae, M. Hull, M.-A. Taniguchi, Optimizing the length of computerized adaptive testing for the force concept inventory, Physical Review Physics Education Research 17 (2021). doi:10.1103/PhysRevPhysEducRes.17.010115.
[26] V. Bradác, C. Klimes, Language e-learning based on adaptive decision-making system, in: Proceedings of the European Conference on e-Learning, ECEL, 2013, pp.
48–57.
[27] A.-M. Souki, F. Paraskeva, A. Alexiou, K. A. Papanikolaou, Developing personalised e-courses: Tailoring students' learning preferences to a model of self-regulated learning, Int. J. Learn. Technol. 10 (2015) 188–202. URL: https://doi.org/10.1504/IJLT.2015.072357. doi:10.1504/IJLT.2015.072357.
[28] Z. Balogh, M. Turcáni, M. Burianová, Personalized learning and current technologies in teaching IT related subjects, in: 2019 International Symposium on Educational Technology (ISET), 2019, pp. 124–126. doi:10.1109/ISET.2019.00034.
[29] Y. Susanti, T. Tokunaga, H. Nishikawa, Integrating automatic question generation with computerised adaptive test, Research and Practice in Technology Enhanced Learning 15 (2020) 9. URL: https://doi.org/10.1186/s41039-020-00132-w. doi:10.1186/s41039-020-00132-w.
[30] B. D. Wright, M. H. Stone, Best test design, Mesa Press, 1979.
[31] V. Kudryavtseva, T. Malakhivska, O. Moroz, Y. Petrovska, O. Frolova, Welcome Aboard: coursebook, STAR, Kherson, 2018.
[32] N. Shapovalova, O. Rybalchenko, I. Dotsenko, S. Bilashenko, A. Striuk, L. Saitgareev, Adaptive testing model as the method of quality knowledge control individualizing, CEUR Workshop Proceedings 2393 (2019) 984–999.
[33] M. B. Chelishkova, Adaptive testing in education (theory, methodology, technology), Research Center for Problems of Preparing Specialists, Moscow, 2001.
[34] I. Mintii, S. Shokaliuk, T. Vakaliuk, M. Mintii, V. Soloviev, Import test questions into Moodle LMS, CEUR Workshop Proceedings 2433 (2019) 529–540.
[35] Computerized adaptive testing, 2020. URL: https://assess.com/adaptive-testing/.
[36] P. Fedoruk, Technology of learning process construction in adaptive systems of distance learning, in: Proceedings of the 12th IASTED International Conference on Computers and Advanced Technology in Education, CATE 2009, St. Thomas, 2009, pp. 228–230.
[37] Y. Tyshchenko, A.
Striuk, The relevance of developing a model of adaptive learning, CEUR Workshop Proceedings 2292 (2018) 109–115.
[38] Model Course 3.17. Maritime English, 2015 ed., International Maritime Organization, 2015.
[39] J. Winkley, Adaptive testing, 2020. URL: https://www.e-assessment.com/news/adaptive-testing/.
[40] V. Vasil'ev, T. Tyagunova, Fundamentals of the culture of adaptive testing, IKAR, Moscow, 2003.
[41] F. M. Lord, Application of Item Response Theory to Practical Testing Problems, Routledge, 2012.
[42] Y. Lebedenko, V. Danyk, P. Krupitsa, Adaptive control of the combined propulsion system, in: 2016 4th International Conference on Methods and Systems of Navigation and Motion Control (MSNMC), 2016, pp. 214–217. doi:10.1109/MSNMC.2016.7783145.
[43] A. Y. Yurzhenko, An e-course based on the LMS Moodle to teach "Maritime English for professional purpose", Information Technologies and Learning Tools 71 (2019) 92–101. URL: https://journal.iitta.gov.ua/index.php/itlt/article/view/2512. doi:10.33407/itlt.v71i3.2512.
[44] H. Popova, A. Yurzhenko, Competency framework as an instrument to assess professional competency of future seafarers, CEUR Workshop Proceedings 2387 (2019) 409–413.
[45] N. Kolesnichenko, T. Hladun, O. Diahyleva, L. Hats, A. Karnaukhova, Increasing students' motivation to learn at tertiary educational institutions, International Journal of Higher Education 9 (2020) 166–175. doi:10.5430/ijhe.v9n7p166.
[46] S. Lavrynenko, L. Krymets, A. Leshchenko, Y. Chaika, O. Holovina, Purpose and features of teaching philosophical disciplines at tertiary educational institutions while training specialists of various knowledge areas, International Journal of Higher Education 9 (2020) 321–331. doi:10.5430/ijhe.v9n7p321.
[47] S. Voloshynov, H. Popova, A. Yurzhenko, E. Shmeltser, The use of digital escape room in educational electronic environment of maritime higher education institutions, CEUR Workshop Proceedings 2643 (2020) 347–359.
[48] Activities, 2020.
URL: https://docs.moodle.org/39/en/Activities.