Forms of Additions to Physical Models of Objects of Study in
Remote Laboratories
Karsten Henkea, Mykhailo Poliakovb, Heinz-Dietrich Wuttkea, Johannes Naua and Oleksii
Poliakovc
a Ilmenau University of Technology, Ilmenau, D-98684, Germany
b Zaporizhzhia Polytechnic National University, vul. Zhukovsʹkoho, 64, Zaporizhzhia, 69063, Ukraine
c National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, prosp. Peremohy, 37, Kyiv, 03056, Ukraine


                 Abstract
                 The paper notes the trend toward wider use of remote laboratories in engineering education.
                 This requires a greater variety of experiments, including experiments with physical models of
                 the objects under study that are part of a remote laboratory. At the same time, the objects of
                 study are becoming more complex and more cognitive, and they acquire properties that are
                 difficult to reproduce on physical models.
                 It is proposed to supplement the physical models of technical systems with virtual add-ons.
                 The forms of these additions and the remote-laboratory interfaces in which they appear are
                 described.
                 The basic forms of reality and functionality of physical models are defined, as well as the
                 forms of augmented reality, augmented functionality and augmented cognition of these models.
                 Examples of additions to the physical models of the GOLDi remote laboratory are given.
                 It is also proposed to change the video-surveillance interface of the physical model during the
                 experiment: the real-time video of the behavior of a physical model is replaced by a
                 demonstration of typical pre-recorded fragments, including fragments filmed on a real object.
                 Methods for synchronizing the display of these fragments with the experiment are described.
                 Such a replacement preserves, and can even strengthen, the feeling of the reality of the
                 experiment even in the absence of a physical model of the object of study.

                 Keywords
                 Remote laboratories, cognitive systems, augmented reality, augmented functionality,
                 augmented cognitivity

1. Introduction
    A relevant argument in favor of distance learning is the continued isolation of university students
during the coronavirus pandemic in 2020 and 2021. The relevance of the transition from teaching the
design of technical systems to the design of cognitive systems (CS) follows from forecasts of an
unfolding cognitive revolution in the world [1]. The hierarchical nesting of the issues involved in
improving physical models of cognitive objects of study in remote laboratories (RL) is illustrated in
Figure 1.
    A cognitive system is defined as a system that uses knowledge to achieve the goals of its
functioning and has cognitive abilities such as perception, planning, learning and reasoning [2]. A
cognitive system includes the subsystems of cognition, activity and cognitiveness, as well as a knowledge base
in various forms, from data to wisdom [3]. Such a system has a hierarchical tree structure with levels of
target, conceptual, behavioral, computational and other controls [4].
    Important elements of distance-learning environments for teaching system design are remote
laboratories [5], which provide a number of services to a remote student. The first is a service for
creating a project of the behavior of the selected physical model of the object of study. The second is a
service for remote monitoring of an experiment with the object of study, during which the student's
project is verified and validated. In the case considered here, a video stream and an animated image
of the object of study are used in the user interface. For this purpose, the RL “Grid of Online Lab Devices
Ilmenau (GOLDi)” [6] has a set of physical models of an elevator, a crane, an automatic line, a
warehouse (manufactured by Staudinger GmbH Automatisierungstechnik [7]) and others. Their
interface with the control board is electromechanical. From the standpoint of the remote laboratory's
computing nodes, these models are represented by sequences of sensor and actuator values. That is,
these models are not cognitive systems.
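    To make this representation concrete, the following minimal sketch shows how such a sensor/actuator trace might be encoded; the record structure and field names are illustrative assumptions, not the actual GOLDi data format.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class ModelSnapshot:
    """One sample of the electromechanical interface of a physical model."""
    t_ms: int                   # time stamp of the sample in milliseconds
    sensors: Tuple[int, ...]    # current binary sensor values (x0, x1, ...)
    actuators: Tuple[int, ...]  # current binary actuator values (y0, y1, ...)

# An experiment trace is then simply a sequence of such snapshots:
trace = [
    ModelSnapshot(t_ms=0,   sensors=(1, 0, 0), actuators=(0, 0)),
    ModelSnapshot(t_ms=250, sensors=(0, 1, 0), actuators=(1, 0)),
]
```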




Figure 1: Nesting issues of improving physical models of cognitive objects of study

    The pedagogical problem with using physical models in teaching systems design lies in the very
limited number of technological operations that link the state of the model with its sensors and
actuators. For example, for the warehouse model of the GOLDi remote laboratory, the typical
operations are “put a box in the cell with coordinates Y, Z” and “take a box from the cell with
coordinates Y, Z”. Most operations performed by the warehouse during the experiment differ only in
the coordinates of the cells, which does not provide the required variety of design tasks for students.
    Another problem of remote laboratories is the small number of simultaneous experiments possible
with physical models of the objects of study. Virtual experiments solve this problem, but in such an
experiment the quality of the animated image of the object is significantly reduced.
    The authors see the way out of this situation in the use of augmentation methods.

2. Research Objectives
   The purpose of this article is to expand the functionality of physical models of objects of study, to
increase the variety of experiments with them, to raise the systems under study to the cognitive level
by using various augmentations, starting with augmented reality, and to improve the interface in
experiments with virtual models.

3. Analysis of Publications
   The obvious advantage of studying objects using their physical models is the reduction in
equipment costs, which is paid for with reduced functionality. The desire to maintain both low cost
and high functionality led to the creation of cyber-physical systems (CPS) [8]. In a CPS, functionality
is extended by the software of a node with a computing resource that is part of the system. For
example, the capabilities of a physical model can be expanded by improving the human-perception
interface of the simulation results. Approximately 20% of the reports at the REV2020 conference
were devoted to this method, called augmented reality (AR), for example [9]. Varieties of this method
differ in the ratio of reality and virtuality of the objects introduced into the field of human perception
[10]. The essence of AR is that, under the influence of virtual facts in the field of perception, the flow
of influences exerted by a person on the object of study changes. For example, the driver of a car,
perceiving augmented reality, begins to drive around virtual obstacles.
   A variation of the AR method is the virtual introduction of an interface with the sensors of the
physical model. In [11], the sensors of a physical model are affected by a virtual flow of elevator calls
from the floors, generated on the computer side of the system. Thus, the AR method does not expand
the functionality of the physical model, but introduces convenient options for using the model in
experiments.
   In [12], a method of augmented functionality (AF) was proposed, according to which functions of
the object of study that are absent in its physical model, and even in the object of study itself, are
realized in software. An example is the added functionality of a traffic light in the form of continuous
monitoring of the performance of its lamps / indicators. The implementation of this AF is associated
with the introduction of additional sensors, events and states into the state machine of the traffic-light
behavior.
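   As an illustration, the following sketch shows how such lamp monitoring could be expressed in code; the current-feedback sensors, signal names and event names are assumptions made for the example, not part of the GOLDi traffic-light model.

```python
def check_lamps(commanded: dict, feedback: dict) -> list:
    """Compare commanded lamp states with current-sensor feedback and return
    failure events for the traffic-light state machine (illustrative sketch).

    commanded: lamp name -> True if the controller switched the lamp on
    feedback:  lamp name -> True if the added current sensor detects load
    """
    events = []
    for lamp, on in commanded.items():
        if on and not feedback.get(lamp, False):
            events.append(("LAMP_FAILED", lamp))    # burnt out or broken circuit
        elif not on and feedback.get(lamp, False):
            events.append(("LAMP_STUCK_ON", lamp))  # welded relay or short circuit
    return events

# Example: the red lamp is commanded on but draws no current.
print(check_lamps({"red": True, "green": False}, {"red": False, "green": False}))
# -> [('LAMP_FAILED', 'red')]
```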
   The AF functions include the following:
   • Accounting for the influence of non-deterministic changes in the technical state of the system's
        elements [13] – the elevator cable breaks, a position sensor does not work, there is no contact
        in the drive circuit, the power supply malfunctions
   • Interaction of the manipulated object with the system – a storage box tells the system its
        number, type, weight or the coordinates of the cell in which it must be placed
   • Interaction of manipulated objects with each other and with the system – the exchange of
        information about the intended movements of cars at an intersection, where the actions of
        the traffic light can change these intentions
   • RL interaction with a student project – online monitoring, diagnostics, correction / blocking
        and analysis of the quality of the student project [13].
   These changes in functionality are characteristic of behavior that is based on additional
information, but they are not related to the intellectual properties of the system.

4. Research Content
   The additions to a cognitive system with a physical model are complex. They include both the
well-known types of augmented entities (AR, AF) and specific augmented entities, which we will call
cognitive (AC). The term “augmented” entity (reality, functionality, cognitivity) implies the definition
of a basic entity with respect to which the additions are made. The contents of the basic and added
entities will be examined using the physical models of the GOLDi laboratory warehouse and
passenger elevator, which are shown in Figure 2.




Figure 2: Appearance of physical models of the Elevator (a), 3‐Axis‐Portal (b), Warehouse (c) and
Production Cell (d) of the GOLDi laboratory [6]
    The basic and augmented entities of these models are manifested in the structure of the model
interfaces, which includes the following interfaces:
    • Parallel interface of electromechanical sensors and actuators with a control board (SAPI)
    • Web-camera interface (video stream and model view control) (WCI)
    • Serial interface for data transfer between the control boards and the laboratory server (SASI)
    • User interface of the object of study (UI)
    • User perception interface (UPI).
    Table 1 shows the forms of the basic reality of the physical models of the GOLDi laboratory
warehouse and passenger elevator.

Table 1
Forms of the basic reality of physical models
 Interface type                     Forms of the basic reality of Warehouse, Elevator
 SAPI                               Parallel binary code of current values of sensors and actuators
 WCI                                Video/ livestream of a physical model during an experiment
 SASI                               Transfer protocol for the current values of sensors and actuators
 MAI                                Visual and virtual model of static and moving visible elements of
                                    the physical model
 USI                                Binary status code for sensors controlled by a remote user

   By basic functionality, we mean the minimum set of functions performed by a fault-free physical
model in normal mode with basic reality. The warehouse model has basic functionality, which
includes the following basic operations:
   • Load a box (cargo) from a warehouse cell or from the input drive into the movable module
        (cargo manipulator)
   • Move the movable module to a given warehouse cell
   • Unload the box (cargo) from the movable module into a warehouse cell or onto the output drive
   • Wait for a box operation.
   The basic functionality of the elevator model includes the following basic operations (a
control-cycle sketch is given after the list):
   • Waiting for elevator calls from the floors
   • Closing the door, moving to the call floor and opening the door
   • Closing the door, moving to the destination floor and opening the door
   • Slowing down when approaching the destination floor
   • Indicating the current position of the elevator.
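   A minimal sketch of how these operations could be composed into one control cycle is shown below; the state names and the dictionary-based signal abstraction are assumptions for illustration and do not reproduce the GOLDi control boards.

```python
# Minimal elevator control cycle (illustrative sketch, not the GOLDi firmware).
WAIT, CLOSE_DOOR, MOVE, OPEN_DOOR = range(4)

def elevator_step(state, sensors):
    """One polling step: returns (next_state, actuator_commands).
    `sensors` is a dict with boolean keys 'call_pending', 'door_closed',
    'at_target', 'door_open' - illustrative stand-ins for the x/y signals."""
    act = {"drive": 0, "door_close": 0, "door_open": 0}
    if state == WAIT and sensors["call_pending"]:
        return CLOSE_DOOR, act
    if state == CLOSE_DOOR:
        act["door_close"] = 1
        return (MOVE, act) if sensors["door_closed"] else (CLOSE_DOOR, act)
    if state == MOVE:
        act["drive"] = 1
        return (OPEN_DOOR, act) if sensors["at_target"] else (MOVE, act)
    if state == OPEN_DOOR:
        act["door_open"] = 1
        return (WAIT, act) if sensors["door_open"] else (OPEN_DOOR, act)
    return state, act

state, _ = elevator_step(WAIT, {"call_pending": True, "door_closed": False,
                                "at_target": False, "door_open": False})
print(state)  # -> 1 (CLOSE_DOOR)
```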
   The augmented realities of the warehouse model include the following (a generator sketch is given
after the list):
   1. An AR flow of incoming and outgoing boxes, which is entered through the USI interface and, as
        a result, manifests itself in the WCI and MAI interfaces. The essence of this AR is to generate a
        stochastic sequence of boxes arriving at the input drive with a random set of parameters,
        such as number, weight, color, etc., as well as to generate random times and requirements for
        the composition of the batch of boxes that must be loaded into the output buffer.
   2. AR failures of the electromechanical elements of the warehouse, which are introduced
        into the SAPI interface. The essence of this AR is to generate a stochastic flow of sensor and
        actuator failures, which should be taken into account in the student's project.
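   The following sketch illustrates how such a stochastic flow of boxes might be generated; the exponential inter-arrival times, parameter ranges and dictionary format are assumptions for the example.

```python
import random

def box_flow(n_boxes, mean_interval_s=20.0, colors=("red", "green", "blue")):
    """AR sketch: a stochastic flow of boxes arriving at the input drive.
    Each box gets a number, a random weight and colour, and an exponentially
    distributed inter-arrival time."""
    t = 0.0
    for number in range(1, n_boxes + 1):
        t += random.expovariate(1.0 / mean_interval_s)
        yield {"t_s": round(t, 1), "number": number,
               "weight_kg": round(random.uniform(0.5, 5.0), 2),
               "color": random.choice(colors)}

def batch_request(known_numbers, size=3):
    """A random requirement for an outgoing batch: which boxes must be shipped."""
    return sorted(random.sample(known_numbers, size))

boxes = list(box_flow(5))
print(boxes)
print(batch_request([b["number"] for b in boxes]))
```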
   Consider the possible augmented realities of the elevator model (a failure-injection sketch follows
the list):
   1. Virtual elevator calls, which are introduced through the USI interface and, as a result, manifest
        themselves in the WCI and MAI interfaces. The essence of this AR is to generate a stochastic
        sequence of elevator calls from different floors, possibly simultaneous, as well as to generate
        destination floors for these calls. At the same time, the queue of users waiting for the arrival
        of the elevator is visualized in the user screens controlled by the WCI and MAI interfaces.
   2. AR failures of the electromechanical elements of the elevator, which are introduced into
        the SAPI interface. The essence of this AR is to generate a stochastic flow of sensor and
        actuator failures, which should be taken into account in the student's project.
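    One possible way to inject such failures into the sensor values read through the SAPI interface is sketched below; the stuck-at fault model and the probability used are illustrative assumptions.

```python
import random

def inject_failures(sensor_bits, stuck_at, p_stuck=0.001):
    """AR sketch: corrupt the sensor vector read through the SAPI interface.
    With probability p_stuck each sensor becomes 'stuck' at its current value
    and stops reacting; `stuck_at` remembers the already failed sensors."""
    out = list(sensor_bits)
    for i, bit in enumerate(sensor_bits):
        if i in stuck_at:
            out[i] = stuck_at[i]       # failed sensor keeps its frozen value
        elif random.random() < p_stuck:
            stuck_at[i] = bit          # new failure: freeze at the current value
    return out

failed = {}
for cycle in range(3):
    reading = inject_failures([1, 0, 0, 1], failed, p_stuck=0.3)
    print(cycle, reading, failed)
```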
    The presence of AR allows the study of more complex systems in which physical models of
control objects are subsystems with augmented functionality. For example, based on the elevator
model with AR and AF, an elevator dispatching system can be built. Likewise, sorting and optimal
cargo-placement systems can be built based on the warehouse model with AR and AF. Consider the
AFs that are needed in an elevator dispatching system:
     1. Analysis of the load of the elevator and optimization of the redistribution of passengers along
          the route of the elevator.
     2. Calculation of the parameters of elevator latency and user waiting time.
     3. Analysis of the parameters of the flow of requests for calling and moving the elevator.
    Systems for sorting and optimal placement of goods based on the warehouse model can have the
following AF:
     1. Obtaining information about the filling of the warehouse cells with boxes and about the
          numbers and parameters of the boxes stored in the warehouse, the timing of sending
          batches of boxes and the composition of the boxes in each batch.
     2. Sorting boxes in the warehouse according to a specified criterion, for example, by numbers,
          arrival time, box parameters or their storage areas – rows and floors of the warehouse.
     3. Sorting boxes in the warehouse when there are restrictions on the permitted rearrangements
          of the boxes, for example, as in the famous game “fifteen” [14].
     4. Analysis of the effectiveness of search algorithms for sorting boxes.
     5. Optimization of the placement of boxes according to various criteria, for example, the
          minimum length of movements of the movable module, the timing of sending boxes, or
          increasing the speed of receiving boxes through the input drive.
    At the same time, we note that the tasks of developing tools for analyzing flows (of boxes, of calls)
and choosing effective algorithms are research tasks for students. In this way, the RL turns from a
design laboratory into a research tool. In [15], an example of an experiment on sorting boxes in the
warehouse according to the principle of the game “fifteen” is given. The initial placement of the
boxes is performed randomly by the warehouse animation program, which also checks that a
solution exists. A greedy graph-search algorithm is used to solve the problem; a compact sketch of
the idea is given below. Animation screens of the initial and final placement of boxes in the
warehouse are shown in Figure 3.




Figure 3: Screens of initial (a) and final (b) placement of boxes in the Warehouse
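
    The idea of this experiment can be illustrated by the following greedy best-first search on a reduced 3x3 "warehouse"; the misplaced-box heuristic and the toy dimensions are assumptions for the sketch, while the real experiment in [15] works on the full 50-cell model.

```python
import heapq

def neighbours(state, cols):
    """States reachable by moving a box into the single empty cell (0)."""
    s, e = list(state), state.index(0)
    for d in (-1, +1, -cols, +cols):
        j = e + d
        if 0 <= j < len(s) and not (d in (-1, +1) and j // cols != e // cols):
            t = s[:]
            t[e], t[j] = t[j], t[e]
            yield tuple(t)

def misplaced(state, goal):
    """Heuristic: number of boxes not yet in their target cells."""
    return sum(1 for a, b in zip(state, goal) if a != b and a != 0)

def greedy_sort(start, goal, cols=3, limit=100000):
    """Greedy best-first search over warehouse states (sketch of the idea in [15])."""
    frontier, seen = [(misplaced(start, goal), start, [start])], {start}
    while frontier and limit:
        limit -= 1
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in neighbours(state, cols):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (misplaced(nxt, goal), nxt, path + [nxt]))
    return None

# 3x3 toy warehouse: box numbers 1..8, 0 marks the single free cell.
path = greedy_sort((1, 2, 3, 4, 0, 6, 7, 5, 8), (1, 2, 3, 4, 5, 6, 7, 8, 0))
print(len(path) - 1, "moves")   # -> 2 moves
```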

    The basic and augmented realities of perception and the functionality of behavior discussed above,
together with the related information base, are the basis for constructing cognitive systems. When
constructing these systems, additional means are used to convert higher forms of knowledge and to
implement goal and scenario management, cognitive perception, planning, learning and reasoning. We
will call the complex of these tools augmented cognition. Consider this complex for systems that
include the physical models of the warehouse and elevator described above.
    Cognitive perception of the results of studying an object is possible, for example, through the
transformation of secondary information about the added reality. For example, let the signal “Input
drive is busy” be received from the receiving device of the warehouse, which the system interprets as
a signal to place a new box in an empty storage cell. The system registers the time of receipt of the
box and determines its number. Then it determines the position of the box in the batch (first,
intermediate, last), and the number of boxes and the intervals between their arrivals, among other
parameters, are calculated. The statistics of the flow of boxes at the entrance to the warehouse train a
neural network, which forms a model of the flow of boxes. This model is used to predict future
arrivals of boxes and to plan their placement. The predicted situations and placement options are
analyzed by a reasoning block, taking into account the possible goals of the system (a perception
sketch follows the list of goals):
    1. Minimize the movement of the movable module during the operations of receiving, sorting
       and sending a batch of boxes (“Save resource”).
    2. Minimize the time for receiving a batch of boxes (“Quick reception”).
    3. Minimize the time it takes to send a batch of boxes (“Quick shipment”).
    4. Sort batches of boxes ("Sort").
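    The perception step described above can be illustrated by the following sketch, which turns raw arrival timestamps into batch structure and a naive flow estimate; the gap threshold and the simple averaging used as a stand-in for the neural-network flow model are assumptions.

```python
def classify_arrivals(arrival_times, batch_gap_s=60.0):
    """Perception sketch: interpret 'Input drive is busy' timestamps as batches
    (first / intermediate / last box) and estimate the mean arrival interval."""
    events, batch = [], []
    for i, t in enumerate(arrival_times):
        gap = None if i == 0 else t - arrival_times[i - 1]
        if gap is None or gap > batch_gap_s:
            if batch:
                events.append(("last", batch[-1]))   # previous batch is finished
            batch = [t]
            events.append(("first", t))
        else:
            batch.append(t)
            events.append(("intermediate", t))
    if batch:
        events.append(("last", batch[-1]))
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    mean_gap = sum(gaps) / len(gaps) if gaps else None  # naive next-arrival predictor
    return events, mean_gap

events, mean_gap = classify_arrivals([0, 12, 25, 130, 141])
print(events)
print("mean interval:", mean_gap)
```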
    The graphic form of reasoning that justifies a change in the goals of the functioning of the
system in relation to the situation with the filling of the warehouse and the flows of incoming,
outgoing and sorted boxes can be represented as an FSM graph.
    The graph of the state machine of scenarios is a graphical interpretation of the reasoning used when
choosing a scenario change in the course of pursuing the chosen goal. Examples of the graphs of the
goal and scenario machines are shown in Figure 4.




Figure 4: Examples of graphs of the goal machine (a) and the scenario machine (b)

    The “Save resource” goal is the basic one: to place the boxes so that the total length of their
movements by the movable module during reception and shipment is minimal. The transitions to
other goals are caused by special conditions: X1 – boxes arrive at the warehouse with short intervals;
X3 – a requirement for quick shipment of a batch of boxes; X5 – priority of storing the boxes in
sorted form. Under condition X1, the boxes are placed in the available storage cells closest to the
input drive. To fulfill condition X3, the boxes to be shipped are moved closer to the output drive.
Under the priority condition X5, the system strives to sort the boxes according to a given criterion.
Conditions X2, X4 and X6, which return the system to the goal “Save resource”, are opposite in
meaning to conditions X1, X3 and X5. As an external constraint, conditions X1, X3 and X5 can only
occur mutually exclusively; otherwise there would be a contradiction.
    The selection of any goal activates the corresponding output of the goal automaton and the
automaton that selects the scenario for achieving it. Figure 4b shows possible scenarios for achieving
the “Quick reception” goal. The priority scenario is “Closer to the exit”, in which the system tries to
place incoming boxes closer to the output drive. However, if the rate of arrival of the boxes is high
(event X2), the system begins to place the boxes near the input drive (the “Closer to the entrance”
scenario), with the intention of rearranging them after the fast reception of the batch of boxes is
completed. Finally, when the free cells run out (event X4), it proceeds to the “To any free places”
scenario.
    The examples of finite state machines of goals and scenarios shown in Figure 4 require further
detailing for practical use, for example, taking into account the priorities of events X1 – X6 (Figure
4a) when they arrive at the input of the machine simultaneously.
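    A minimal encoding of the goal machine of Figure 4a is sketched below; the table-driven form and the pairing of the return events with the goals follow the textual description above and are otherwise assumptions.

```python
# Goal machine sketch (Figure 4a): states are goals, inputs are events X1..X6.
SAVE, RECEIVE, SHIP, SORT = "Save resource", "Quick reception", "Quick shipment", "Sort"

TRANSITIONS = {
    (SAVE, "X1"): RECEIVE,   # boxes arrive with short intervals
    (SAVE, "X3"): SHIP,      # a batch must be shipped quickly
    (SAVE, "X5"): SORT,      # keeping the boxes sorted has priority
    (RECEIVE, "X2"): SAVE,   # X2, X4, X6 are the opposite conditions
    (SHIP, "X4"): SAVE,
    (SORT, "X6"): SAVE,
}

def goal_step(goal, event):
    """Return the next goal; unknown (goal, event) pairs leave the goal unchanged."""
    return TRANSITIONS.get((goal, event), goal)

goal = SAVE
for event in ("X1", "X2", "X5"):
    goal = goal_step(goal, event)
    print(event, "->", goal)
```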
    We estimate the technical efficiency of the proposed methods by the increase in the number of
experiments possible on the existing laboratory equipment. As the indicator of technical efficiency,
we choose the relative increase in the number of experiments
                                                 K = N/Nb,
where N and Nb are the achieved and baseline numbers of experiments, respectively.
    Theoretically, the index K can take values in the range from 0 to ∞. The efficiency of the method
increases with increasing K. The criterion of technical efficiency is K > 1. The economic efficiency of
the proposed methods consists in reducing the cost of purchasing and operating a remote laboratory
per type of experiment. These costs are determined by the formulas
                                                C1b = Cb/Nb,
                                                 C1 = C/N,
where C1b and C1 are the costs per type of experiment for the basic and proposed options, respectively,
and Cb and C are the costs of purchasing and operating a remote laboratory according to the basic and
proposed options, respectively.
    As the indicator of economic efficiency, we choose the relative decrease in the cost of one
experiment
                                   ∆C = C1/C1b = CNb/CbN = (C/Cb)/K.
    If we neglect the additional costs of the software implementation of the additions to the physical
models (that is, C = Cb), then ∆C = 1/K.
    Theoretically, the indicator ∆C can take values in the range from 0 to ∞. The economic efficiency
of the method increases with decreasing ∆C. The criterion of the economic efficiency of the proposed
methods is ∆C < 1.
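    A worked example of these indicators (the numbers are illustrative):

```python
def efficiency(n, n_base, c, c_base):
    """Indicators from the formulas above: K = N/Nb and dC = (C/Cb)/K."""
    k = n / n_base
    return k, (c / c_base) / k

# Doubling the number of experiment types at unchanged cost (C = Cb):
print(efficiency(n=20, n_base=10, c=1.0, c_base=1.0))   # -> (2.0, 0.5)
```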
    When implementing the proposed method of additions in the GOLDi laboratory, it is expected that
the variety of experiments will increase by at least a factor of two. Accordingly, the efficiency of the
experiments according to the proposed indicators will improve by at least the same factor.


5. Additions to the WCI interface
    The WCI interface is the most important element of the remote student's perception for getting a
realistic impression of the remote experiment. Inserting additional AR elements into this interface was
discussed above, for example, images of the queue of passengers waiting for the arrival of the elevator
on a floor, text comments on the image and the results of calculations. These images should be
synchronized with the video stream, taking into account the time delay of video transmission over the
Internet if the GOLDi lab is used in its real experimentation mode.
    As an alternative to this approach, we propose an additional virtual experimentation mode to the
variants published in [16]. In this virtual mode, pre-recorded video fragments of typical actions /
movements / operations of a physical model are used instead of the livestream from a web camera.
The display of these fragments should be synchronized with the operation of the actuators of the
experiment object and the triggering of its sensors.
    Let us consider the essence of the method using the example of experiments with the 4-floor
elevator model of the GOLDi lab. In the operation of this model, the typical fragments of movement
given in Table 2 can be distinguished. These are only the video fragments needed for a correct control
algorithm; if the algorithm programmed during the experiment is wrong, many alternative video
fragments are needed.
    The duration of each fragment is, as a rule, no more than 10 seconds. Therefore, a one-time
download and storage of the video of all fragments (in total, 2-3 minutes of video) on the computer of
a remote student does not require high network speeds or large amounts of memory.
Table 2
Typical fragments of movement of the physical model 4‐floor elevator
         Designation                Description
         Fr1                        Cab doors are open
         Fr2/Fr3/Fr4/Fr5            Cab doors close on the first / second / third / fourth floor
         Fr6/Fr7/Fr8/Fr9            Cab doors open on the first / second / third / fourth floor
         Fr10/Fr11/Fr12             The lift moves up from the first floor to the second / from the
                                    second floor to the third / from the third floor to the fourth
         Fr13/Fr14/Fr15             The lift moves down from the fourth floor to the third / from the
                                    third floor to the second / from the second floor to the first

   In the process of synchronizing the video display with the operation of the physical model of the
4-floor elevator, the sensors and actuators shown in Table 3 [6] are used.

Table 3
Sensors and actuators of the physical model of the 4‐floor‐elevator to synchronize the video display
                                   Variable                   Name
                               x0/x1/x2/x26       Elevator on floor 1/2/3/4
                               x7/x9/x11/x29      Door open Floor 1/2/3/4
                               x8/x10/x12/x30     Door closed Floor 1/2/3/4
                               y0/y1              Drive upwards/downwards
y3/y5/y7/y24      Door floor 1/2/3/4 open
                               y4/y6/y8/y25       Door floor 1/2/3/4 close

   Examples of conditions for displaying video fragments are given in Table 4, and a sketch of their
evaluation follows the table.
   The experiment of moving the elevator from the first floor to the third floor is accompanied by
showing the chain of video fragments Fr1, Fr2, Fr10, Fr11, Fr8 and Fr1.
   The fact that the time for creating the video clips is practically unlimited makes it possible to use a
number of techniques from cinematography to improve the production quality: preliminary
development of the script, better video cameras and stage lighting, duplication of video fragments,
changing shooting angles, alternating shots, zooming the shooting point in and out, various frame
modes, a soundtrack, titles and much more.

Table 4
Examples of conditions for displaying video fragments
            Fragment               Demonstration conditions (sensors and actuators state)
            video         Start                        During                 Stop
            Fr1           Before the experiment                               Run experiment
            Fr2           Run experiment & x0 & y4     y4                     x8
            Fr10          x8 & x0 & y0                 y0                     x1
            Fr11          x10 & x1 & y0                y0                     x2
            Fr8           x12 & x2 & !y0               y7                     x11
            Fr1           x11                          No new run             Run experiment
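
   A minimal sketch of how the conditions of Table 4 might be evaluated in the virtual mode is given below; the dictionary-of-rules encoding and the signal dictionary are assumptions made for the example, while the sensor and actuator names follow Table 3.

```python
def fragment_rules(s):
    """Start/stop conditions of Table 4 over the current signal values
    (dict s with boolean keys 'run', 'x0', 'y4', ...)."""
    return {
        "Fr2":  {"start": s["run"] and s["x0"] and s["y4"],     "stop": s["x8"]},
        "Fr10": {"start": s["x8"] and s["x0"] and s["y0"],      "stop": s["x1"]},
        "Fr11": {"start": s["x10"] and s["x1"] and s["y0"],     "stop": s["x2"]},
        "Fr8":  {"start": s["x12"] and s["x2"] and not s["y0"], "stop": s["x11"]},
    }

def pick_fragment(current, signals):
    """Stop the current fragment when its stop condition fires and switch to the
    first fragment whose start condition holds."""
    rules = fragment_rules(signals)
    if current in rules and rules[current]["stop"]:
        current = None
    if current is None:
        for name, rule in rules.items():
            if rule["start"]:
                return name
    return current

signals = {"run": True, "x0": True, "x1": False, "x2": False, "x8": False,
           "x10": False, "x11": False, "x12": False, "y0": False, "y4": True}
print(pick_fragment(None, signals))   # -> 'Fr2'
```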

   A further development of this method is the use of video fragments of the operation of a real
object instead of, or in combination with, those of a physical model. In this case, the goals pursued are
to increase the realism of the video, to show details of the object that are absent in its model, and to
demonstrate the video in the absence of a physical model, i.e. to conduct the experiment in virtual
mode, among others.
   The composition of the video fragments of the movement of a physical model should visually
reflect the variety of its behaviors during the experiment. The variety of behaviors of the production
cell model (Figure 2d) includes:
   1. The main option for moving a part along a circular route using three continuously moving
   belt conveyors, two rotary tables with conveyor chains, a rail carriage with a conveyor belt and a
   vertical milling machine.
   2. A variation of the main option of movement, in which a belt conveyor stops after the part has
   passed over it and is switched on again before the part approaches it on the next cycle.
   In this case, the resource of the conveyors is saved, which gives the task a practical meaning and
increases the variety of educational experiments with this model.
   This variety is also achieved by changing the direction of movement of parts along the conveyors.
By recording video clips depicting the movement of a part along one or two adjacent nodes of the
production cell (e.g., conveyor belt – rotary table), video support of experiments with virtual models
of objects of study showing elements of real physical models becomes possible. Figure 5 shows a
structural diagram of a combined "warehouse – production cell" model.




Figure 5: Structural diagram of the integrated model: 1, 10 are in‐/ output buffers; 2, 9 are rail
carriages; 3, 5, 8 are conveyor belts; 4, 7 are rotary tables; 6 is a vertical milling machine

    In this model, the circular path of the production cell is closed through the buffers of the warehouse
model (Figure 2c). Video fragments show the work of the warehouse unloading the next box into the
output buffer. The next video fragment then shows the movement of the box on a rail carriage, then a
fragment of the transfer of the box to the belt conveyor, and so on.
    For some objects, the set of video fragments needed to demonstrate all possible movements can be
quite large. The number of options for moving a box with the movable module of the physical
warehouse model of the GOLDi laboratory is equal to the number of 2-permutations of K elements,
where K is the number of cells in the warehouse. For the physical warehouse model of the GOLDi
laboratory, K = 50, and the number of permutations, and hence of video fragments, is K(K - 1) = 2450.
To this we must add the same number of fragments of moving the movable module without a box.
With an average duration of one fragment of 5 seconds, the total duration of these video fragments
will be about 7 hours. Of course, the creation of such a number of video clips requires a significant
amount of time, and in storage they occupy gigabytes of memory.
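    This estimate can be reproduced by a few lines of arithmetic (the 5-second average fragment length is the assumption stated above):

```python
K = 50                                   # cells in the warehouse model
with_box = K * (K - 1)                   # 2450 movements of the module with a box
without_box = K * (K - 1)                # the same number of movements without a box
total_hours = (with_box + without_box) * 5 / 3600
print(with_box, round(total_hours, 1))   # -> 2450, about 6.8 hours
```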
    To assess the effectiveness of the proposed method of improving the WCI interface, we assume
that the quality of perception of the experiment is proportional to the functionality of the experiment
object displayed on the computer screen of the remote student. Let us take the functionality QRO of a
real object as the unit of quality. Then the quality QPM of the video image of an experiment with the
physical model will be less than one, and the quality QAM of the animated model image will be
inferior to QPM (QAM < QPM). It is expected that, as a result of inserting previously recorded video
fragments of the behavior of an object or its physical model into the interface of a virtual experiment,
the quality of perception will increase to the level of QPM.
    Moreover, this increase in quality is achieved without the cost of purchasing and operating
additional physical models. The cost savings of the proposed method of improving the WCI interface
depend on the required number of concurrent experiments. If it is required to conduct G experiments
simultaneously with quality QPM, then the cost of the physical models with the basic method will be
CPM·G, where CPM is the cost of purchasing and operating one physical model. These costs can be
avoided by applying the proposed method of improving the WCI interface.
    It should be noted that it is not possible for all objects to obtain a set of video fragments that
comprehensively describes their visible movements. This applies to physical models whose position is
described in continuous coordinates: robots, cars, quadcopters and portal cranes. The number of
options for possible movements of such models can be practically infinite. But even in these cases,
video clips embedded in the animation screen can improve the quality of the end-user interface.

6. Conclusion
    The method of additions makes it possible to expand the range of creative / design tasks for
students without additional costs for acquiring physical models of the studied systems. The principles
of forming additions to systems with physical models can be useful in formulating educational and
pedagogical problems for students.
    The additions presented for the remote lab are complex and include augmented reality,
functionality and cognition that interact with the underlying properties of the object.
    Augmented reality is implemented both in the user perception interface and in the interface of
hybrid models of the object of study in order to visualize the flow of virtual events associated with the
object of study or control.
    Augmented functionality concerns the behavior and activity of the system under study, taking into
account changes in the technical state of its elements, behavior in emergency situations, quality
control of a student project, and the construction of higher-level systems in which the original physical
model is one of the elements, for example, a dispatching system for two elevators or a system for
sorting goods in a warehouse.
    Augmented cognition improves or creates an object's cognitive abilities in terms of perception,
learning, planning and reasoning in a goal-directed management process.
    The paper presents an example of a virtual experiment with the warehouse model of the GOLDi
laboratory, in which the added functionality implements an algorithm for sorting boxes according to
the principle of the game "fifteen".
    The expected efficiency of experiments using additions is at least twice the baseline in terms of the
proposed indicators.
    A method is proposed for obtaining a video image of an object in the course of a virtual
experiment, based on the use of typical fragments of the object's motion.
    It is expected that, as a result of inserting previously recorded video fragments of the behavior of an
object or its physical model into the interface of a virtual experiment, the quality of perception of the
experiment by a remote student will increase to the level of an experiment with a physical model. At
the same time, no additional costs are required for the purchase and operation of physical models, and
the number of simultaneously conducted experiments does not depend on the number of physical
models.

7. References
[1] G. A. Miller, The cognitive revolution. Trends in Cognitive Sciences, 7, 2003.
[2] D. Vernon, Artificial Cognitive Systems. Carnegie Mellon University Africa 2014.
[3] M. Poliakov, “Cognitive Control Systems: Structures and Models.” Electrical and Computer
    Systems 25 (2017) 387-393. doi:10.15276/eltecs.25.101.2017.46
[4] M. Poliakov, K. Henke, H.–D. Wuttke, Prospects for Constructing Remote Laboratories to Study
    Cognitive IoT Systems in. V. Kharchenko, A. L. Kor, A. Rucinski (Eds), Dependable IoT for
    Human and Industry. Modeling, Architecting, Implementation, RIVER PUBLISHERS. 2018:
    503-513.
[5] A.K.M. Azad, M.E. Auer, V.J. Harward (Eds.), Internet Accessible Remote Laboratories:
     Scalable E-Learning Tools for Engineering and Science Disciplines, Engineering Science
     Reference, 2012.
[6] The Grid of Online Lab Devices Ilmenau (GOLDi). https://goldi-labs.net
[7] Staudinger GmbH Automatisierungstechnik, site. www.staudinger-est.de
[8] H. Giese, B. Rumpe, B. Schätz, J. Sztipanovits, “Science and Engineering of Cyber-Physical
      Systems (Dagstuhl Seminar 11441),” Dagstuhl Reports, vol. 1, no. 11, 2012: 1–22.
[9] N. Schiffeler, V. Varney, E. Borowski, I. Isenhardt, Basic Requirements to Designing
      Collaborative Augmented Reality: Status Quo and First Insights to a User-Centered Didactic
      Concept, in: Proceedings of the 17th International Conference on Remote Engineering and Virtual
      Instrumentation (REV2020), 26–28 February 2020, University of Georgia, Athens, GA, USA,
      pp. 216-230.
[10] M. Poliakov, K. Henke, H.-D. Wuttke, The augmented functionality of the physical models of
      objects of study for remote laboratories, in: M.E. Auer, D.G. Zutin (Eds.), Online Engineering &
      Internet of Things, Proceedings of the 14th International Conference on Remote Engineering
      and Virtual Instrumentation REV’2017, 15-17 March 2017, Columbia University, New York,
      USA, Lecture Notes in Networks and Systems, Vol. 22, Springer, Cham, 2018. doi:
      10.1007/978-3-319-64352-6.
[11] M. Poliakov, T. Larionova, H.-D. Wuttke, K. Henke, Automated testing of physical models in
      remote laboratories by control event streams, in: Proceeding 2016 International Conference on
      Interactive Mobile Communication, Technologies and Learning IMCL’2016, 17–19 October
      2016, San Diego, CA, USA. 94 p., pp. 10–13.
[12] M. Poliakov, T. Larionova, G. Tabunshchyk, A. Parkhomenko, K. Henke. “Hybrid models of
      studied objects using remote laboratories for teaching design of control systems”. Int. J. Online
      Eng. (iJOE) 9, 7–13 (2016). doi:10.3991/ijoe.v12i09.6128.
[13] H.-D. Wuttke, M. Hamann and K. Henke. “Learning analytics in online remote labs.” 2015. 3rd
      Experiment International Conference (exp.at'15) (2015): 255-260: Online Experimentation 2016
      doi:10.1109/EXPAT.2015.7463275.
[14] George F. Luger. Artificial Intelligence: Structures and Strategies for Complex Problem Solving,
      4th. ed., Williams Publishing House, 2003.
[15] Example of sorting boxes in the Warehouse,
URL: https://drive.google.com/drive/folders/1jzUECoI2ZU0nhrHCPQwvYFFpnxHHKhKu
[16] H.-D. Wuttke, K. Henke, R. Hutschenreuter, Digital Twins in Remote Labs. In: Auer M., Ram B.
      K. (eds.) Cyber-physical Systems and Digital Twins. REV’2019. Vol. 80 of Lecture Notes in
      Networks and Systems. Springer. Cham. 2020. doi:10.1007/978-3-030-23162-0_26.