<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <article-id pub-id-type="doi">10.1109/CADSM.2019.8779290</article-id>
      <title-group>
        <article-title>Implementation of reinforcement learning strategies in the synthesis of neuromodels to solve medical diagnostics tasks</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Serhii Leoshchenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrii Oliinyk</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sergey Subbotin</string-name>
          <email>subbotin@zntu.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Viktor Lytvyn</string-name>
          <email>lytvynviktor.a@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleksandr Korniienko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>National university “Zaporizhzhia polytechnic”</institution>
          ,
          <addr-line>Zhukovskogo street 64, Zaporizhzhia, 69063</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2018</year>
      </pub-date>
      <volume>42</volume>
      <fpage>99</fpage>
      <lpage>107</lpage>
      <abstract>
<p>The high level of accuracy reported for artificial neural network (ANN) diagnostic models indicates the prospects for using ANNs in various fields of medicine for the diagnosis and forecasting of diseases. Implementing diagnostic neuromodels in clinical practice can provide effective support for medical decision making, improve the accuracy of disease diagnosis, and speed up the examination of the patient. It is also worth noting that ANNs can serve as models of the subject area under consideration. By changing the input data of the neural network model and observing the behavior of its output signals, it is possible to study the subject area, and to identify and investigate medical patterns that the ANN extracted during training. However, medical tasks are becoming more complicated: the nature of clinical data about the patient changes, the data is constantly updated, and the volume of data grows, as do the hidden connections within it. An additional challenge is the increased requirement for the adaptability and sensitivity of the neuromodel to a particular patient or disease. The reinforcement learning approach demonstrates good training results on incomplete data and in highly specific domains. The paper investigates the possibility of using reinforcement learning strategies for the synthesis of high-precision neuromodels for subsequent use in medical diagnostics.</p>
      </abstract>
      <kwd-group>
<kwd>medical diagnostics</kwd>
        <kwd>neuromodel</kwd>
        <kwd>synthesis</kwd>
        <kwd>reinforcement learning</kwd>
        <kwd>penalty and reward</kwd>
        <kwd>duel</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
<p>
        Moreover, researchers often have to work with more specific tasks that are not so common in mass practice [
        <xref ref-type="bibr" rid="ref3 ref4 ref5 ref6">3-6</xref>
        ]:
- methods for detecting signs of early-stage Alzheimer's disease in MRI images;
- methods that look for anomalies in X-ray images;
- methods for monitoring bedridden patients: cameras in the wards are connected to a program that can recognize a specific situation, such as a patient falling out of bed, in which case the nurses are automatically notified;
- methods for monitoring the workload of operating tables: the program determines how evenly the load is distributed across medical teams in different operating rooms;
- a medical reference book with artificial intelligence: the doctor enters data about the patient, and the program suggests a solution.
      </p>
<p>However, all these tasks face similar problems when implementing an ANN. The first is getting data. A neuromodel designed for any task must be trained on data. To teach it to see an anomaly in an X-ray, or to determine that an image shows cancer rather than pneumonia, it must be shown a great many such images (thousands, hundreds of thousands, millions). The diagnosis must be correctly labeled on all the images, otherwise the program will make more mistakes [7-10].</p>
<p>Many researchers therefore agree that the main difficulty for developers is the lack of homogeneous, high-quality data. A developer cannot simply come to a hospital and take medical data about patients, even when the data are depersonalized, for example, X-rays without a first and last name [7-10]. These data are protected by several laws at once: on medical secrecy, on personal data, and so on. Large Western universities often provide developers with data arrays to guarantee the ability to train a model, but then a data compatibility problem arises. For example, suppose the developers received a database of postoperative X-rays: control images taken after surgery with the patient lying down. Screening studies, however, are mostly imaged with the patient standing, so a system trained on such data cannot be applied: X-rays of a patient lying down and standing are two very different images. There are also always doubts about the reliability and accuracy of other people's data. It is difficult to train models that prompt a doctor toward a decision based on text data: approaches to the treatment of certain diseases may differ from country to country [11-13].</p>
<p>Reinforcement learning is a machine learning approach in which the trained model has no prior information about the system but is able to perform actions in it. Actions move the system to a new state, and the model receives some reward from the system. Such a strategy can therefore be a useful practice for solving medical problems.</p>
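The interaction loop just described can be sketched in a few lines. This is a minimal illustration, not the paper's model: the two-state TinyEnvironment and its reward rule are invented for the example.

```python
import random

class TinyEnvironment:
    """Hypothetical two-state environment invented for illustration."""
    def __init__(self):
        self.state = 0

    def step(self, action):
        # the model's action moves the system to a new state...
        reward = 1.0 if (self.state == 1 and action == 1) else 0.0
        self.state = random.choice([0, 1])
        # ...and the model receives some reward from the system
        return self.state, reward

random.seed(0)
env = TinyEnvironment()
total_reward = 0.0
for _ in range(1000):
    action = random.choice([0, 1])   # the model has no information about the system
    state, reward = env.step(action)
    total_reward += reward
```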
<p>So the main goal of this work is to develop a new method of neuroevolutionary synthesis of neuromodels for medical diagnostics that borrows strategies and mechanisms from reinforcement learning methods. This approach is intended to eliminate most of the disadvantages of neuroevolutionary methods.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Works</title>
<p>In recent years, researchers have observed significant qualitative growth in reinforcement learning technologies. While initially this approach demonstrated good results in game tasks, neuromodels trained with reinforcement learning methods are now actively used for pattern recognition, agent control in robotics, and decision making in continuous tasks [14-20].</p>
<p>Reinforcement learning is sometimes treated not as a separate strategy but as an offshoot of supervised learning, because of how the task is formulated: a real or virtual environment acts as the teacher. However, this is the main flaw of that classification, since the environment in this case reacts to the agent dynamically, and the reaction may be different each time. For example, during training the agent receives information from the environment about where there is no exit, and in this way it studies the surrounding world and learns to find a way out [14-20].</p>
<p>It should be noted that several factors have influenced the rapid development of the reinforcement learning approach [14-20]:
- increased computing speed (powerful distributed and parallel computing systems, the many lightweight threads of modern GPUs);
- a significant increase in the amount of training data available in open repositories (for example, ImageNet);
- the spread of new ANN topologies (CNN, LSTM, GRU);
- the expansion and spread of computing infrastructure (Linux, TCP/IP, Git, ROS, PR2, AWS, AMT, TensorFlow, etc.).</p>
<p>In general, it can be concluded that the main impetus for recent progress has been not new ideas and methods but the intensification of computing, sufficient data, and mature infrastructure. Despite significant practical results, the theoretical basis still remains simple [16-20].</p>
<p>The most common and well-researched reinforcement learning method is Policy Gradient (PG) [21-28]. The popularity of this method is explained by theoretically supported rules for optimizing the expected reward:
- a clear policy;
- transparent rules.</p>
<p>In general, PG can be represented by the diagram in Fig. 1. The method then consists of four basic steps [21-28]:
1. Run N scenarios τ_i with the strategy π_θ(a | s). Here τ_i is a scenario, that is, a sequence of agent states (s_i) and the actions performed in those states (a_i): τ = (s_1, a_1; s_2, a_2; s_3, a_3; …; s_n, a_n), and the behavior of the agent (its further states and actions) is determined by its stochastic strategy π_θ(a_n | s_n).
2. Calculate the arithmetic mean ∇_θ J(θ) ≈ (1/N) Σ_{i=1}^{N} [ (Σ_{t=1}^{T_i} ∇_θ log π_θ(a_t^i | s_t^i)) · (Σ_{t=1}^{T_i} r(a_t^i | s_t^i)) ], where J(θ) is the maximized mathematical expectation of the sum of the agent's winnings over the scenarios τ, ∇_θ J(θ) is the gradient of this function, r(a_t^i | s_t^i) is the reward gained from the action a_t^i in the state s_t^i, and T_i is the step at which the transition to the terminal state occurred.
3. Update θ ← θ + α · ∇_θ J(θ).
4. If the result has not converged to the extremum, repeat from step 1.</p>
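The four steps above can be sketched on a toy two-action problem. This is a minimal sketch assuming a softmax parameterization of π_θ and invented reward probabilities; none of these specifics come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)              # policy parameters, one logit per action
alpha = 0.1                      # learning rate
reward_prob = [0.2, 0.8]         # invented reward probability per action

def policy(theta):
    """Stochastic softmax strategy pi_theta(a)."""
    e = np.exp(theta - theta.max())
    return e / e.sum()

for _ in range(200):
    grad, N = np.zeros(2), 20
    for _ in range(N):                          # step 1: run N scenarios
        p = policy(theta)
        a = rng.choice(2, p=p)
        r = float(rng.random() < reward_prob[a])
        g_log = -p                              # gradient of log softmax
        g_log[a] += 1.0
        grad += g_log * r                       # step 2: grad log pi * reward
    theta += alpha * grad / N                   # step 3: gradient ascent on J
    # step 4: repeat until the reward estimate stops improving
```

After training, the policy should prefer the action with the higher reward probability.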
      <sec id="sec-2-1">
        <title>Generate samples (i.e. run the policy)</title>
      </sec>
      <sec id="sec-2-2">
        <title>Improve the policy</title>
<p>θ ← θ + α · ∇_θ J(θ); the estimated return is Σ_{t=1}^{T_i} r(a_t^i | s_t^i).</p>
      </sec>
      <sec id="sec-2-3">
        <title>Fit a model to estimate return</title>
<p>Further research on reinforcement learning methods led to the more complete and advanced Q-learning method [21-28]. Q-learning is a method that looks up values in a special table measuring how good it is to perform a certain action in a given state (this is measured with a simple scalar value: the larger the value, the better the action). The values stored in the table are called "Q-values". These are estimates of the amount of future reward; in other words, they estimate how much more reward can be obtained before the end of the game by being in state (s_i) and performing action (a_i). The method obtains more information about the environment at every step, and this information is used to update the values in the table [21-28].</p>
        <p>The basic concept of Q-learning is based on the Bellman equation:</p>
        <p>Qs, a  r   max a' Qs', a' ,
(1)</p>
      </sec>
<sec id="sec-2-4">
        <title>Notation of equation (1)</title>
        <p>Q(s, a) is the Q-value of performing action a in state s;
s_i is the agent's state;
a_i is the agent's action;
r is the reward received for the transition;
γ is the discount factor devaluing future rewards.</p>
<p>The equation states that the Q-value for a certain state-action pair should equal the reward received when moving to the new state (by performing this action) plus the value of the best action in the next state. To encode the assumption that receiving a reward now is more valuable than receiving it in the future, the number γ, between 0 and 1 (usually 0.9 to 0.99), is multiplied by the future reward, devaluing future rewards [21-28].</p>
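The tabular update derived from equation (1) can be sketched as follows; the three-state toy MDP is invented for the illustration.

```python
import random

random.seed(1)
gamma = 0.9                 # discount factor (the paper suggests 0.9-0.99)
alpha = 0.5                 # learning rate of the tabular update
n_states, n_actions = 3, 2
Q = [[0.0] * n_actions for _ in range(n_states)]

def step(s, a):
    """Invented deterministic toy MDP: action 1 moves toward state 2, which pays 1."""
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r

for _ in range(2000):
    s = random.randrange(n_states)
    a = random.randrange(n_actions)      # explore state-action pairs at random
    s2, r = step(s, a)
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
```

In every state, the Q-value of moving toward the rewarding state ends up higher than the Q-value of moving away from it.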
      </sec>
<sec id="sec-2-5">
        <title>General Q-Learning</title>
        <p>[Figure: the general Q-learning loop (state → action → new state) together with the Q-table that stores a Q-value for every state-action pair.]</p>
      </sec>
      <sec id="sec-2-10">
<title>Dueling Double Deep Q-Learning</title>
<p>Dueling Double Deep Q-Learning (Dueling DDQN) is the most modern approach to the synthesis of neuromodels based on the principles of reinforcement learning. To approximate the optimal action-value function, a deep Q-network Q(s, a; θ) with parameters θ can be used. To train this network, the following sequence of loss functions is minimized at iteration i: L_i(θ_i) = E_{s,a,r,s'}[(y_i^DQN − Q(s, a; θ_i))²], with the target y_i^DQN = r + γ · max_{a'} Q(s', a'; θ'), updating the parameters by gradient descent: ∇_{θ_i} L_i(θ_i) = E_{s,a,r,s'}[(y_i^DQN − Q(s, a; θ_i)) · ∇_{θ_i} Q(s, a; θ_i)] [21-28].</p>
<p>Dueling DDQN is a state-of-the-art deep Q-learning algorithm built on a dueling architecture that separates the deep Q-network into two streams, a state-value stream V(s) and an advantage stream A(s, a), which are aggregated into the Q-values Q(s, a) used to evaluate the next state. [Figure: dueling network architecture — DNN feature layers, flatten, fully connected streams producing V(s) and A(s, a_1), A(s, a_2), A(s, a_3), and an aggregation layer producing Q(s, a_1), Q(s, a_2), Q(s, a_3).] Prioritized experience replay, i.e. sampling mini-batches of experience that have a large expected impact on learning, further increases efficiency [21-28].</p>
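The aggregation of the two streams can be sketched with plain matrix arithmetic; the feature vector and the weight matrices below are random stand-ins for the DNN layers in the figure, not trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def dueling_q_values(features, w_v, w_a):
    """Aggregate a value stream V(s) and an advantage stream A(s, a) into Q-values."""
    v = features @ w_v            # scalar state value V(s)
    a = features @ w_a            # advantage A(s, a_k), one entry per action
    # subtracting the mean advantage keeps V and A identifiable
    return v + (a - a.mean())

features = rng.normal(size=8)     # stand-in for flattened DNN features
w_v = rng.normal(size=8)          # value-stream weights (illustrative)
w_a = rng.normal(size=(8, 3))     # advantage-stream weights, three actions
q = dueling_q_values(features, w_v, w_a)
```

By construction, the mean of the Q-values equals the state value V(s).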
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Proposed method</title>
<p>As presented in the previous section, reinforcement learning methods have great prospects for solving problems that are poorly formalized, have incomplete data, or involve a dynamic environment. In this work, we propose a method based on the strategies of reinforcement learning methods [29].</p>
<p>So it is proposed:
1. to take the neuroevolutionary synthesis of neuromodels as a basis, but with the addition of two separate neural networks: a network for evaluating and monitoring the environment (NN_glob_crit) and a network for duplicating the parameters of the best agent at the next step (NN_best_clone);
2. the rest of the population will be a set of different individual agents (NN_ind's = NN_ind1, NN_ind2, …, NN_indn);
3. during the synthesis and modification of the structure of the individuals considered as agents in the environment, all information is forwarded to the global network NN_glob_crit, whose task is to compare the current results of the agents with the reference results on the training data and to adjust the penalty or reward for each agent;
4. at the same time, after evaluating the actions of all agents, the agent with the best results at the iteration (Q → max, out_NNind → min) is selected and structurally and parametrically duplicated into the clone network: NN_best_clone = NN_indn(Struct_NN, Param_NN).</p>
<p>The main goal of this step is to compare the results of the next iteration with the previously best ones.</p>
<p>This synthesis approach also assumes the presence of an additional identifier: the reward growth step estimate markQlev [30-34]. Such an identifier helps to avoid areas of local extrema, since if the reward grows by less than the specified value, the best agent in the population can be changed operatively.</p>
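Steps 1-4 together with the markQlev check can be sketched as a plain loop. Everything here is a stand-in: the agents are reduced to a single error number, and critic_score, the mutation rule, and the thresholds are invented for the illustration.

```python
import random

random.seed(0)

def critic_score(agent):
    """Stand-in global critic: rewards agents whose error is close to the reference (0)."""
    return -agent["error"]

population = [{"id": i, "error": random.uniform(0.1, 1.0)} for i in range(5)]
best_clone = None
mark_q_lev = 0.01                    # minimal acceptable reward growth per iteration
prev_best_reward = float("-inf")

for iteration in range(20):
    # neuroevolutionary modification of each individual (stand-in mutation)
    for agent in population:
        agent["error"] = max(0.0, agent["error"] + random.uniform(-0.05, 0.02))
    # the critic adjusts the penalty or reward of every agent
    rewards = [critic_score(a) for a in population]
    best_reward = max(rewards)
    # duplicate the best agent structurally and parametrically into the clone
    best_clone = dict(population[rewards.index(best_reward)])
    # markQlev: if the reward grew too little, replace a member of the population
    if best_reward - prev_best_reward < mark_q_lev:
        worst = rewards.index(min(rewards))
        population[worst] = {"id": worst, "error": random.uniform(0.1, 1.0)}
    prev_best_reward = best_reward
```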
      <p>The general progress of the method is shown in Fig. 4.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Results and Discussions</title>
<p>A data set based on the characteristics of patients with pneumonia was selected for testing; it was recently presented by M.-A. Kim, J. Seok Park, C. W. Lee, and W.-I. Choi [35]. The total sample size is 77490 values. Table 1 shows the characteristics of the data set.</p>
<p>For this task, the developed neuromodels will make it much easier to determine a person's further diagnosis after collecting data on their well-being. Given that pneumonia is one of the most important signs and complications of COVID-19 [36], [37], after additional training on extended data this model can be used to diagnose patients or to predict the further development of disease dynamics.</p>
<p>[Fig. 4: scheme of the proposed method — training data enter the global space, where the critic model (policy, state function, mutation rules) interacts with the clone of the best network and with the population of individuals, each paired with its own accessor. Table 1 lists the number of attributes and the number of instances of the data set.]</p>
<p>The work of the proposed reinforcement learning method (RLNE) will be compared with the modified genetic algorithm neuroevolution method (MGA), whose synthesis targets are an RNN and a DNN [14], [15], [34]. The following metaparameter settings were used for the compared methods.</p>
<p>Analyzing the results, it can be concluded that the proposed method demonstrated a good synthesis time in comparison with MGA for the synthesis of a DNN. This is because the topologically synthesized neuromodels were simpler and their modification required less effort. However, the time results are inferior to MGA for RNN synthesis. A possible explanation is that during RNN synthesis there was no need to clone the best individuals to compare the results, since the presence of recurrent connections makes this process easier.</p>
<p>Another important characteristic is the accuracy of the synthesized solutions. The solutions obtained by RLNE were more accurate on both training and test data, although the difference in error from MGA RNN is not so significant, and the results against MGA DNN were even better. It is likely that deep networks allow hidden connections in the data to be encoded more accurately.</p>
<p>The second stage of the study of experimental results focused on resource consumption during the synthesis of solutions. Special attention was paid to measuring the load on the CPU and RAM [38]. Such monitoring allows a more accurate determination of the load distribution at different iterations of the method's execution. The CPU and RAM load graphs are shown in Fig. 5 and 6, respectively.</p>
<p>When using MGA, in both cases the load on the CPU and RAM was more abrupt but did not exceed 81-82% on average. When using RLNE, the load distribution was more systematic, but it often reached 100%. These indicators are important when designing a parallel approach to synthesis with these methods: the relatively low load allows MGA to be implemented on highly productive GPUs, while the high resource consumption of RLNE, on the contrary, limits this possibility.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
<p>The proposed strategies and method demonstrated an acceptable level of performance. The accuracy of the resulting solution was increased by 6.4% (the error decreased from 0.157 to 0.147). It was also possible to reduce the synthesis time by 8.5% in comparison with analogues (from 8031 s to 7352 s). However, the high level of resource consumption limits the parallelization of the method, which in turn can significantly limit the genetic diversity of individuals. In the future, the main strategies of the proposed method could be implemented in parallel implementations of neuroevolutionary methods for the intelligent maintenance and control of populations of solutions.</p>
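The relative improvements quoted above follow directly from the raw figures:

```python
error_before, error_after = 0.157, 0.147   # solution error before/after
time_before, time_after = 8031, 7352       # synthesis time, seconds

accuracy_gain = (error_before - error_after) / error_before * 100   # ≈ 6.4 %
time_gain = (time_before - time_after) / time_before * 100          # ≈ 8.5 %
```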
<p>An important option for further research may be to simplify the proposed strategy by removing the clone of the best result at each iteration and replacing this approach with the use of individual agents with recurrent connections, while tightening the control imposed by the external global critic network. This approach would allow the critic network's work to be focused on the external data of the environment.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Acknowledgements</title>
<p>The work was carried out with the support of the state budget research projects of the National University "Zaporizhzhia Polytechnic" “Intelligent methods and software for diagnostics and non-destructive quality control of military and civilian applications” (state registration number 0119U100360) and “Development of methods and tools for analysis and prediction of dynamic behavior of nonlinear objects” (state registration number 0121U107499).</p>
    </sec>
    <sec id="sec-7">
      <title>7. References</title>
      <p>[7] G. Paragliola, M. Naeem, Risk management for nuclear medical department using reinforcement learning algorithms, J Reliable Intell Environ 5, pp. 105-113 (2019). doi: 10.1007/s40860-019-00084-z</p>
      <p>[8] Z. Tian, X. Si, Y. Zheng, et al., Multi-step medical image segmentation based on reinforcement learning, J Ambient Intell Human Comput (2020). doi: 10.1007/s12652-020-01905-3</p>
      <p>[9] A. Kishor, C. Chakraborty, W. Jeberson, Reinforcement learning for medical information processing over heterogeneous networks, Multimed Tools Appl 80, pp. 23983-24004 (2021). doi: 10.1007/s11042-021-10840-0</p>
      <p>[10] M. Chitsaz, C. Seng Woo, Software Agent with Reinforcement Learning Approach for Medical Image Segmentation, J. Comput. Sci. Technol. 26, pp. 247-255 (2011). doi: 10.1007/s11390-011-9431-8</p>
      <p>[11] I. Izonin, R. Tkachenko, V. Verhun, K. Zub, An approach towards missing data management using improved GRNN-SGTM ensemble method, Engineering Science and Technology, an International Journal, vol. 24(3) (2021), pp. 749-759. doi: 10.1016/j.jestch.2020.10.005</p>
      <p>[12] I. Izonin, R. Tkachenko, I. Dronyuk, P. Tkachenko, M. Gregus, M. Rashkevych, Predictive modeling based on small data in clinical medicine: RBF-based additive input-doubling method, Math Biosci Eng. 18(3) (2021), pp. 2599-2613. doi: 10.3934/mbe.2021132. PMID: 33892562</p>
      <p>[13] R. Tkachenko, I. Izonin, P. Tkachenko, Neuro-Fuzzy Diagnostics Systems Based on SGTM Neural-Like Structure and T-Controller, in: Babichev S., Lytvynenko V. (eds), Lecture Notes in Computational Intelligence and Decision Making. ISDMCI 2021. Lecture Notes on Data Engineering and Communications Technologies, vol 77 (2022), Springer, Cham, pp. 685-695. doi: 10.1007/978-3-030-82014-5_47</p>
      <p>[14] J.A. Kumar, S. Abirami, Ensemble application of bidirectional LSTM and GRU for aspect category detection with imbalanced data, Neural Comput &amp; Applic (2021). doi: 10.1007/s00521-021-06100-9</p>
      <p>[15] A. Khan, A. Sarfaraz, RNN-LSTM-GRU based language transformation, Soft Comput 23, pp. 13007-13024 (2019). doi: 10.1007/s00500-019-04281-z</p>
      <p>[16] M. Lu, Z. Shahn, D. Sow, F. Doshi-Velez, L.H. Lehman, Is Deep Reinforcement Learning Ready for Practical Applications in Healthcare? A Sensitivity Analysis of Duel-DDQN for Hemodynamic Management in Sepsis Patients, Proceedings of the AMIA Annual Symposium, Rockville, 2020, pp. 773-782. Published 2021 Jan 25.</p>
      <p>[17] M. Hausknecht, P. Stone, Deep Recurrent Q-Learning for Partially Observable MDPs, AAAI Fall Symposia (2015), pp. 1-7.</p>
      <p>[18] T.P. Lillicrap, J.J. Hunt, A. Pritzel, N.M. Heess, T. Erez, Y. Tassa, D. Silver, D. Wierstra, Continuous control with deep reinforcement learning, CoRR (2016), pp. 1-14.</p>
      <p>[19] N. Liu, Y. Liu, B. Logan, et al., Learning the Dynamic Treatment Regimes from Medical Registry Data through Deep Q-network, Sci Rep 9, 1495 (2019). doi: 10.1038/s41598-018-37142-0</p>
      <p>[20] B. Guo, X. Zhang, Q. Sheng, H. Yang, Dueling Deep-Q-Network Based Delay-Aware Cache Update Policy for Mobile Users in Fog Radio Access Networks, IEEE Access, Vol. 8 (2020), pp. 7131-7141. doi: 10.1109/ACCESS.2020.2964258</p>
      <p>[21] A pair of interrelated neural networks in Deep Q-Network, 2020. URL: https://towardsdatascience.com/a-pair-of-interrelated-neural-networks-in-dqn-f0f58e09b3c4</p>
      <p>[22] G. Delétang, Mixing policy gradient and Q-learning, 2019. URL: https://towardsdatascience.com/mixing-policy-gradient-and-q-learning-5819d9c69074</p>
      <p>[23] A. Karpathy, Deep Reinforcement Learning: Pong from Pixels, 2016. URL: http://karpathy.github.io/2016/05/31/rl/</p>
      <p>[24] G. Kesari, Catch me if you can: A simple english explanation of GANs or Dueling neural-nets, 2018. URL: https://towardsdatascience.com/catch-me-if-you-can-a-simple-english-explanation-of-gans-or-dueling-neural-nets-319a273434db</p>
      <p>[25] C. Yoon, Dueling Deep Q Networks. Dueling Network Architectures for Deep Reinforcement Learning, 2019. URL: https://towardsdatascience.com/dueling-deep-q-networks-81ffab672751</p>
      <p>[26] Improvements in Deep Q Learning: Dueling Double DQN, Prioritized Experience Replay, and fixed, 2018. URL: https://www.freecodecamp.org/news/improvements-in-deep-q-learning-dueling-double-dqn-prioritized-experience-replay-and-fixed-58b130cc5682/</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>V.M.</given-names>
            <surname>Adamović</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. Z.</given-names>
            <surname>Antanasijević</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Đ. Ristić</surname>
          </string-name>
          , et al.,
          <article-title>An optimized artificial neural network model for the prediction of rate of hazardous chemical and healthcare waste generation at the national level</article-title>
          ,
          <source>J Mater Cycles Waste Manag 20</source>
          , pp.
          <fpage>1736</fpage>
          -
          <lpage>1750</lpage>
          (
          <year>2018</year>
          ).
<source>doi: 10.1007/s10163-018-0741-6</source>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Johnson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Albizri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Simsek</surname>
          </string-name>
          ,
          <article-title>Artificial intelligence in healthcare operations to enhance treatment outcomes: a framework to predict lung cancer prognosis</article-title>
          ,
          <source>Ann Oper Res</source>
          (
          <year>2020</year>
          ).
          <source>doi: 10.1007/s10479-020-03872-6</source>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.W.Y.</given-names>
            <surname>Khang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.A.J.</given-names>
            <surname>Alsayaydeh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.M.</given-names>
            <surname>Idrus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.A.B.M.</given-names>
            <surname>Gani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.A.</given-names>
            <surname>Indra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.B.</given-names>
            <surname>Pusppanathan</surname>
          </string-name>
          ,
          <article-title>Resource efficient for hybrid fiber-wireless communications links in access networks with multi response optimization algorithm</article-title>
          ,
          <source>ARPN Journal of Engineering and Applied Sciences</source>
          , vol.
          <volume>16</volume>
          (
          <issue>1</issue>
          ) (
          <year>2021</year>
          )
          <fpage>45</fpage>
          -
          <lpage>50</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J.A.J.</given-names>
            <surname>Alsayaydeh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Aziz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.I.A.</given-names>
            <surname>Rahman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.N.S.</given-names>
            <surname>Salim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zainon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.A.</given-names>
            <surname>Baharudin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.I.</given-names>
            <surname>Abbasi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.W.Y.</given-names>
            <surname>Khang</surname>
          </string-name>
          ,
          <article-title>Development of programmable home security using GSM system for early prevention</article-title>
          , vol.
          <volume>16</volume>
          (
          <issue>1</issue>
          ) (
          <year>2021</year>
          )
          <fpage>88</fpage>
          -
          <lpage>97</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>J.A.J.</given-names>
            <surname>Alsayaydeh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.A.Y.</given-names>
            <surname>Khang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.A.</given-names>
            <surname>Indra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.B.</given-names>
            <surname>Pusppanathan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Shkarupylo</surname>
          </string-name>
          ,
          <string-name>
            <surname>A.K.M. Zakir Hossain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Saravanan</surname>
          </string-name>
          ,
          <article-title>Development of vehicle door security using smart tag and fingerprint system</article-title>
          ,
          <source>ARPN Journal of Engineering and Applied Sciences</source>
          , vol.
          <volume>9</volume>
          (
          <issue>1</issue>
          ) (
          <year>2019</year>
          )
          <fpage>3108</fpage>
          -
          <lpage>3114</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J.A.J.</given-names>
            <surname>Alsayaydeh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.A.Y.</given-names>
            <surname>Khang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.A.</given-names>
            <surname>Indra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Shkarupylo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Jayasundar</surname>
          </string-name>
          ,
          <article-title>Development of smart dustbin by using apps</article-title>
          ,
          <source>ARPN Journal of Engineering and Applied Sciences</source>
          , vol.
          <volume>14</volume>
          (
          <issue>21</issue>
          ) (
          <year>2019</year>
          )
          <fpage>3703</fpage>
          -
          <lpage>3711</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>