<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Requirement Analysis Approach to Estimate the Possibility of Software Development Artifacts Reusing Consulting with Artificial Intelligence Technologies</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Olena Chebanyuk</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Carrer de Can Planas</institution>
          ,
          <addr-line>Zona 2, 08193 Bellaterra, Barcelona, Catalonia</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Institut d'Investigació en Intel·ligència Artificial, Campus Universitat Autònoma Barcelona</institution>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>National Aviation University</institution>
          ,
          <addr-line>1, Liubomyra Huzara ave., 03058, Kyiv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Requirement Analysis, AGILE, PlantUML, Artificial Intelligence, Domain Engineering</institution>
          ,
          <addr-line>Model-Driven Engineering</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>This paper proposes a domain engineering approach to estimate the possibility of reusing software development artifacts with the consultation of artificial intelligence technologies. Estimation is done by means of comparing the semantics of software artifacts with the semantics of the requirement specification. The approach is based on the following ideas: two lists of user stories are formed where the first list contains the user stories of software artifacts, while the second list contains user stories of the future project. Then, the semantics of these two lists are compared. Software components with the same semantics from both lists of user stories are marked as possible candidates for reuse. The requirements engineer manages the artificial intelligence that performs all routine tasks for this approach. As a result of these activities, a software developer receives recommendations on which software components need to be reviewed for reuse. The proposed approach addresses one more challenge of using artificial intelligence technologies to generate software models, represented as UML diagrams, despite the limitation that artificial intelligence tools may only generate text answers. The choice of an environment for UML diagram visualization in the case of communication with artificial technologies is grounded. The paper also considers questions about the effectiveness of using different natural language groups, namely Germanic, Romance, and Cyrillic, to support communication with artificial intelligence technologies.</p>
      </abstract>
      <kwd-group>
        <kwd>Requirement Analysis</kwd>
        <kwd>AGILE</kwd>
        <kwd>PlantUML</kwd>
        <kwd>Artificial Intelligence</kwd>
        <kwd>Domain Engineering</kwd>
        <kwd>Model-Driven Engineering</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Artificial Intelligence (AI) technologies open up new opportunities to address current challenges in
software engineering. One such challenge is the semantic analysis of software artifacts to support
the software development lifecycle processes within the Agile methodology. Semantic comparison
increases the accuracy of software reuse procedures.</p>
      <p>This paper initiates a series of works that leverage the fundamentals of Model-Driven
Engineering as a foundation for approaches supporting AI-powered software development lifecycle
processes.</p>
      <p>This work focuses on describing an approach for evaluating the feasibility of reusing software
development artifacts using AI technologies.</p>
      <p>
        According to ISO/IEC/IEEE 24765:2017 standard, a software development artifact is any artifact
related to the software development process. Examples include UML diagrams, interface
screenshots, *.dll files, and test cases [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        The same standard defines a software artifact as any type of software, such as source code, *.dll
files, frameworks used in a project, or executable code [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>Artificial intelligence technologies are envisioned to handle routine tasks of semantic analysis
for various software development artifacts, with a human subsequently verifying the results. This
paper considers Agile requirements analysis within the software product line approach.</p>
      <p>A core principle of Agile is to adapt to changing customer requirements (as outlined in the Agile
Manifesto). Changing requirements necessitate a sequence of actions to update the associated
software development artifacts. This involves requirements verification, validation, and elicitation
(review and editing of software models and requirements specifications). These activities consume a
significant portion of the time spent on requirements analysis.</p>
      <p>AI technologies can take on routine tasks, reducing the time required for artifact analysis and
minimizing human error when updating data and making recommendations for reusing existing
software artifacts in new projects. Semantic comparison of different user stories falls under the
umbrella of semantic search. The main challenge of the semantic search procedure lies in
representing various software development artifacts in a format that enables exact comparisons.
Following the semantic search procedure, the requirements engineer will need to verify the results
of the requirements analysis, while the developer will only need to evaluate the AI's
recommendations.</p>
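      <p>The semantic comparison of two lists of user stories can be illustrated with a minimal sketch. In the absence of an LLM, a lexical similarity measure (difflib's SequenceMatcher from the Python standard library) stands in for the semantic comparison an AI tool would perform; the function name and the threshold of 0.6 are illustrative assumptions:</p>

```python
from difflib import SequenceMatcher

def match_user_stories(artifact_stories, project_stories, threshold=0.6):
    """Mark artifact user stories whose wording is close to a project
    user story as candidates for reuse (a lexical stand-in for the
    semantic comparison an AI tool would perform)."""
    candidates = []
    for artifact_story in artifact_stories:
        for project_story in project_stories:
            ratio = SequenceMatcher(
                None, artifact_story.lower(), project_story.lower()
            ).ratio()
            if ratio >= threshold:
                candidates.append((artifact_story, project_story,
                                   round(ratio, 2)))
    return candidates

artifact = ["As a user, I want to convert spoken words into text."]
project = ["As a user, I want to convert my spoken words into text "
           "to save typing time."]
print(match_user_stories(artifact, project))
```

      <p>In the proposed approach, the similarity scoring would be delegated to the AI tool, with the requirements engineer verifying the matched pairs.</p>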
    </sec>
    <sec id="sec-2">
      <title>2. Literature Review</title>
      <p>Research investigating the principles of communication between users and AI technologies (e.g.,
chatbots) can be organized according to the steps involved in supporting the user-AI tool
communication process. The schema for organizing a conversation between a user and an AI tool is
represented in Figure 1.
1. Sending a request to the chatbot (the first four messages in Figure 1).
2. Processing the user request and extracting its semantics.
3. Searching for an answer to the question that corresponds to the defined semantics.
4. Returning the answer to the client side and interpreting it (the last message in Figure 1).</p>
      <p>Consider a review of papers presenting results on the main steps of the described algorithm.</p>
      <sec id="sec-2-1">
        <title>2.1. Processing User Request and Extracting Semantics of the User’s Query</title>
        <p>
          Formal query creation methods for knowledge bases are based on a series of checks performed
on the user's natural language query. These checks may include the presence of question words,
subordinating conjunctions, semantically colored marker verbs, subject and predicate groups, etc.,
in the input phrase. The program uses the set of results compiled after passing these checks to
automatically construct a formal query from template blocks. Named entities found in the phrase
become input parameters for the queries created in this way [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ].
        </p>
        <p>
          Widely used knowledge systematization approaches utilize ontologies to organize information
about the problem domain. Ontological knowledge bases are effective for systematizing large
amounts of knowledge, particularly in the development of natural language dialogue systems.
Today, there is widespread development of various approaches for information systematization in
the medical field. Examples include the medical rehabilitation support system presented in [
          <xref ref-type="bibr" rid="ref4">3</xref>
          ] and
medical diagnostic software systems [
          <xref ref-type="bibr" rid="ref5">4</xref>
          ].
        </p>
        <p>
          In developing reference dialogue systems, neural network approaches can be combined with the
usage of ontological knowledge bases, thereby increasing each other's effectiveness. So, for
example, as described in [
          <xref ref-type="bibr" rid="ref5">4</xref>
          ], the ontology stores marked fragments of texts that are extracted using
queries formed on the basis of user phrase data. These texts are then utilized to form a response
through a large language model according to the semantic intentions defined in the user's phrase.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Searching for an Answer to the Question that Corresponds to the Defined</title>
      </sec>
      <sec id="sec-2-3">
        <title>Semantics</title>
        <p>A knowledge base can increase the level of accuracy and precision in retrieving answers.</p>
        <p>
          Consider the use of artificial intelligence techniques for requirement analysis, specifically for
analyzing non-functional requirements in security system development [
          <xref ref-type="bibr" rid="ref7">5</xref>
          ]. A requirements
engineer can prepare additional prompts for the AI using the analytical fundamentals of security
key distribution [
          <xref ref-type="bibr" rid="ref9">6</xref>
          ], open and closed key generation for cryptographic algorithms [
          <xref ref-type="bibr" rid="ref11">7</xref>
          ], or ensuring
the safe use of credit cards [
          <xref ref-type="bibr" rid="ref12">8</xref>
          ] or networks to develop software systems that meet high-level
security requirements [
          <xref ref-type="bibr" rid="ref14">9</xref>
          ].
        </p>
        <p>Conclusion from the review. The current level of AI development is sufficient to begin
experiments on applying these technologies to software development within the Agile
methodology.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. The task, the Research Questions, and a Scientific Novelty of the</title>
    </sec>
    <sec id="sec-4">
      <title>Proposed Approach</title>
      <p>Task: to propose an approach to estimate the possibility of reusing software artifacts during requirement analysis, consulting with artificial intelligence technologies.</p>
      <sec id="sec-4-1">
        <title>3.1. Research questions (RQs):</title>
        <p>1. Propose a concept for semantic comparison of different types of software development artifacts, namely software artifacts and user stories.
2. Ground the selection criteria for the environment used to visualize software models (UML diagrams).
3. Provide support for different natural language groups, including Cyrillic, Romance, and Germanic languages.
4. Enable the analysis of source code written in various object-oriented programming languages.
5. Choose effective AI tools for source code analysis and semantic comparison of user stories written in natural languages.</p>
      </sec>
      <sec id="sec-4-2">
        <title>3.2. Scientific novelty of the proposed approach</title>
        <p>The approach addresses the challenge of analyzing the semantics of source code written in
different object-oriented programming languages. The analysis results can be represented in various
natural languages depending on user preferences. To achieve this, the approach combines AI
technologies with established software engineering principles.</p>
        <p>Technologies that provide the scientific novelty of the proposed approach:
● From Model-Driven Engineering, Text-to-Text and Text-to-Model transformations are used: the approach leverages the concept of text-to-model transformation from software engineering to represent software artifact semantics in a structured format.
● From AI technologies, Large Language Model (LLM) prompt engineering is used: the approach employs LLM prompt engineering techniques to guide the extraction of semantic attributes while considering the specific characteristics of the chosen LLM.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>4. Model-Driven Engineering Foundations of the Proposed Approach</title>
      <p>The first step in applying Model-Driven Engineering fundamentals to software development lifecycle management approaches is to analyze modelling environments for text-to-model transformation.</p>
      <p>
        The aim of this analysis is to select the modelling environment with the simplest textual representation of a UML diagram. A simple representation requires minimal effort to teach AI tools to prepare a correct and complete text representation of a UML diagram. Figure 2 represents a classic model-to-model transformation scheme [
        <xref ref-type="bibr" rid="ref15">10</xref>
        ] with proposed names (blue text labels) for the elements that participate in the proposed approach.
      </p>
      <p>The research area is concentrated on generating “Model a” (Ma in Figure 2). “Model a” is a concrete type of software model (UML diagram) for a concrete project. The description of this model must be generated by artificial intelligence tools. The research task is to prove that AI can generate a correct model by interpreting “Metamodel a” correctly. “Metamodel a” in this scheme is a description of the possible elements of a UML diagram. For example, to describe a class diagram, “Metamodel a” supports the description of class diagram elements (classes and interfaces) and all possible relationships between them (inheritance, aggregation, composition, and association).</p>
      <p>The Text-to-Model transformation (elements “Transformation Metamodel”, “Transformation Model”, and “MetaModel b”) is done by the modelling environment.
The following modelling environments were considered:
● Visual Studio plug-in for class diagram generation;
● DrawIO;
● Lucidchart;
● PlantUML;
● ASTAH UML.</p>
      <p>Consider the results of the analysis of the effectiveness of modeling environments for generating
UML diagram text descriptions.</p>
      <p>DrawIO's requirement to set coordinates for UML diagram elements complicates the creation of complex formal requests for AI tools.</p>
      <p>Figure 3 illustrates a portion of the description for a simple class diagram. Note the pointer on
the right side of the figure.</p>
      <p>The next visualization environment considered is Visual Studio. The analysis reveals that the generated descriptions do not encompass all relationships between elements. Figure 4 exemplifies a description of two classes that are linked by inheritance. This description shows that the inheritance relationship is missing; it is represented only in the source code file. Additionally, the complexity of the XML file structure hinders the creation of effective requests for AI technologies.</p>
      <p>Previous versions of Visual Studio utilized supplementary files, such as those with the .layout
extension, to store information about element placement, color, and other visualization attributes.</p>
      <p>
        A similar analysis was conducted for Gleek and PlantUML. Ultimately, PlantUML was selected
due to its simpler, more accurate text notation for representing UML diagrams. This environment
facilitates the formalization of instructions for AI tools in natural language across various types of
natural languages [
        <xref ref-type="bibr" rid="ref16">11</xref>
        ].
      </p>
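      <p>PlantUML's text notation can be produced programmatically, which is part of what makes it convenient as a target for AI-generated descriptions. A minimal sketch of building such a notation in Python follows; the class names and the helper function are illustrative assumptions, not part of the approach itself:</p>

```python
def plantuml_class_diagram(classes, inheritances):
    """Build a PlantUML class-diagram description from a list of class
    names and (child, parent) inheritance pairs (illustrative helper)."""
    lines = ["@startuml"]
    for name in classes:
        lines.append(f"class {name}")
    for child, parent in inheritances:
        # PlantUML inheritance arrow: parent <|-- child
        lines.append(f"{parent} <|-- {child}")
    lines.append("@enduml")
    return "\n".join(lines)

print(plantuml_class_diagram(["Recorder", "FileRecorder"],
                             [("FileRecorder", "Recorder")]))
```

      <p>The resulting text can be pasted directly into any PlantUML renderer, which is exactly the workflow an AI tool would follow when asked to emit a diagram description.</p>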
    </sec>
    <sec id="sec-6">
      <title>5. Description of Technologies Stacks Allowing to Integrate ChatGTP to</title>
    </sec>
    <sec id="sec-7">
      <title>Software System for Requirement Analysis</title>
      <p>
        To begin, an API key from OpenAI must be obtained through your account on the OpenAI
website (in the API keys section). Then, a new project should be created using a code editor or an
Integrated Development Environment (IDE) [
        <xref ref-type="bibr" rid="ref17">12</xref>
        ].
      </p>
      <p>
        For ASP.NET Core applications, the OpenAI API client package can be utilized to streamline
communication with the ChatGPT APIs (use the command Install-Package OpenAI from NuGet
packages [
        <xref ref-type="bibr" rid="ref18">13</xref>
        ]). This package offers a collection of classes and methods that simplify interaction
with the APIs.
      </p>
      <p>
        The OpenAIClient class can be wrapped in a service class that implements an interface. It needs
to be configured with your API key, and methods to transmit text prompts to the ChatGPT API and
receive human-like responses. The Completions method of the OpenAIClient class can be used to
send a completion request to the ChatGPT API. This method takes a CompletionRequest object as a
parameter, which should contain the text prompt to be sent to the API. The CompletionRequest
object can be customized with various parameters such as the maximum length of the generated
text, the number of responses to generate, and the value to control the randomness of the generated
text. The Completions method returns a CompletionResponse object, which contains the generated
text as well as other metadata such as the usage statistics and the model used for generation. The
generated text can be extracted from the CompletionResponse object and returned to the user as a
natural language response [
        <xref ref-type="bibr" rid="ref19">14,15</xref>
        ].
      </p>
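      <p>The request parameters described above can be sketched as a payload. This is a hedged illustration in Python rather than the C# CompletionRequest class named in the text; the field names (model, prompt, max_tokens, n, temperature) follow the OpenAI completions API, the default values are assumptions, and no network call is made here:</p>

```python
def build_completion_request(prompt, model="gpt-3.5-turbo-instruct",
                             max_tokens=256, n=1, temperature=0.7):
    """Assemble the body of a completion request: the text prompt plus
    the parameters controlling length, number of responses, and
    randomness of the generated text."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,   # maximum length of the generated text
        "n": n,                     # number of responses to generate
        "temperature": temperature, # randomness of the generated text
    }

payload = build_completion_request(
    "Generate user stories for a speech-to-text module.")
```

      <p>The same payload shape applies whether the request is sent from an ASP.NET Core service class or a Node.js client.</p>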
      <p>This service class can be injected into the controller using dependency injection. The controller
should expose endpoints that take user queries or context as input and return natural language
responses generated by the ChatGPT API.</p>
      <p>
        For Node.js applications, an HTTP client library like Axios can be used to communicate with the
ChatGPT APIs. The API key should be incorporated in the request headers, and the request body
should hold the text prompt to be sent to the API. After receiving the API response, it can be parsed
and processed to extract the generated text. The parsed response can then be returned to the user as
a natural language response [
        <xref ref-type="bibr" rid="ref18">13</xref>
        ].
      </p>
      <p>
        Then the endpoints need to be tested to confirm that they are functioning correctly. This can be
done using a tool such as Postman. User queries or context can be passed to the API, and natural
language responses can be returned for testing purposes [
        <xref ref-type="bibr" rid="ref19">14</xref>
        ].
      </p>
      <p>Finally, the application can be deployed to a web server or a cloud platform such as Azure or AWS. When deploying, it is important that the API key is stored securely in a configuration file or an environment variable, is not hardcoded in the application code, and is not exposed in the deployment configuration or the application logs. The application can be further enhanced by implementing features such as user authentication, input validation, and error handling [15].</p>
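      <p>Reading the key from an environment variable, as recommended above, can be sketched as follows; the variable name OPENAI_API_KEY and the fail-fast behaviour are illustrative assumptions:</p>

```python
import os

def load_api_key(var_name="OPENAI_API_KEY"):
    """Fetch the API key from the environment; fail fast if it is
    absent rather than falling back to a hardcoded value."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"{var_name} is not set; refusing to start.")
    return key
```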
    </sec>
    <sec id="sec-8">
      <title>6. Description of the Proposed Approach</title>
      <p>The proposed approach modifies the classical domain engineering methodology by incorporating the following steps.</p>
      <p>6.1. Domain Analysis Stage</p>
      <p>During domain analysis, the semantic content of source code is extracted and represented as a
set of epics with corresponding user stories. AI techniques are employed to derive these user stories
from the codebase. To facilitate reuse, these extracted user stories are compared semantically with
those generated during requirement analysis. Subsequently, an AI tool produces behavioral UML
diagrams in PlantUML format. Traceability links are established between the user stories and these
UML diagrams.</p>
      <p>This stage leverages Model-Driven Engineering (MDE) principles, specifically Text-to-Text
transformations (source code to user stories) and Text-to-Model transformations (PlantUML
descriptions of UML diagrams).</p>
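      <p>The traceability links between user stories and the generated PlantUML diagrams can be represented as a simple mapping. A minimal sketch follows; the story texts and file names are illustrative assumptions:</p>

```python
def build_traceability_links(stories_to_diagrams):
    """Invert a story -> diagram mapping so that each PlantUML diagram
    lists the user stories it realizes (a minimal traceability
    structure for the domain analysis stage)."""
    links = {}
    for story, diagram in stories_to_diagrams.items():
        links.setdefault(diagram, []).append(story)
    return links

links = build_traceability_links({
    "As a user, I want to convert speech to text.": "speech_to_text.puml",
    "As a user, I want to save the transcript.": "speech_to_text.puml",
})
```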
      <sec id="sec-8-1">
        <title>6.2. Application Engineering Stage</title>
        <p>In the application engineering phase, user stories are generated from requirement specifications
or product vision documents using AI tools. A semantic comparison is then conducted between the
project user stories and those derived from the source code. The resulting analysis identifies
potential reusable software components, along with their associated UML diagrams, which are
presented to developers as candidates for reuse.</p>
        <p>Figure 5 presents a UML sequence diagram illustrating the proposed approach. The
"Transformation Fundamentals" actor highlights the MDD aspects utilized at each step of the
process.</p>
      </sec>
    </sec>
    <sec id="sec-9">
      <title>7. Experiment</title>
      <p>The aim of the experiment is to demonstrate the feasibility of addressing the research questions using simple source code modules. ChatGPT Copilot is selected as the AI technology for this purpose. To investigate language influences, Bulgarian (Cyrillic), Catalan (Romance), and English (Germanic) are chosen as representative languages.</p>
      <sec id="sec-9-1">
        <title>7.1. Domain Analysis Phase</title>
        <p>The experiment employs two Python scripts performing speech-to-text conversion as the source
code. Table 1 outlines the prompts used for domain analysis in the three selected languages.</p>
      </sec>
      <sec id="sec-9-2">
        <title>7.2. Application Engineering Phase</title>
        <p>The product vision document from a customer is obtained (see Figure 6). Explanation of the product vision document: “Please write me a ‘Speech to Text’ module in Python.”</p>
        <sec id="sec-9-2-1">
          <title>Please, generate me user stories and epics for this</title>
          <p>English Catalan</p>
          <p>Hello! I need to write a Hola! Necessito escriure
‘speech to text’ application in una aplicació de ‘conversa a
Python. text’ en Python.</p>
          <p>Bulgarian
Здравей, искам да напиша
приложението което
конвертира «глас в текст» на</p>
          <p>Python.</p>
          <p>Genera històries d’usuari i Моля, генерирай ми истории
èpiques per a l’aplicació за потребтеля и епици за
application
‘conversa a text’, si us plau.</p>
          <p>това приложение</p>
          <p>Result user stories are generated. Their content do not depend upon prompt language.
The number and sense of user stories for all prompts are the same.</p>
          <p>Example of one user story is provided below</p>
        </sec>
        <sec id="sec-9-2-2">
          <title>As a user, I want to be able to convert my spoken words into text so that I can save time on typing.</title>
          <p>Com a usuari, vull poder
convertir les meves paraules
parlades en text per estalviar
temps en escriure.
Като потребител искам да
мога да преобразувам
изговорените си думи в
текст, за да спестя време за
писане.</p>
        </sec>
      </sec>
      <sec id="sec-9-3">
        <title>7.3. Semantic comparison of requirement specification and source code</title>
        <p>The user stories derived from the source code, shown here in English (equivalent Catalan and Bulgarian versions were generated), are:</p>
        <p>As a developer, I want to initialize the microphone module so that I can capture voice commands.</p>
        <p>As a system user, I want to open an audio stream to receive voice input.</p>
        <p>As a voice command system user, I want to read audio data from the microphone module.</p>
        <p>As a user interacting with the voice command system, I want to start and stop recording voice commands.</p>
        <p>As a system administrator, I want to save recorded audio files for analysis or archival purposes.</p>
      </sec>
    </sec>
    <sec id="sec-10">
      <title>8. Conclusion</title>
      <p>The paper proposes an approach to estimate the feasibility of reusing software development artifacts during requirement analysis. This approach is grounded in the semantic analysis of software modules and a subsequent comparison between the semantics of these modules and user requirements. User stories are employed as semantic attributes for both software development artifacts and requirement analysis artifacts.</p>
      <p>Artificial intelligence plays a crucial role in automating routine tasks such as recognizing
software component semantics and comparing user stories derived from both source code and
requirement analysis.</p>
      <p>The proposed approach offers several advantages through the utilization of artificial intelligence:
● It enables the search for software components written in different programming languages.
● It eliminates limitations imposed by the natural language used by developers and requirement engineers.
● It facilitates efficient processing of extensive requirement specifications.
● It reduces human involvement in discovering the semantics of software development artifacts.
● It increases the likelihood of reducing development time and costs.</p>
    </sec>
    <sec id="sec-11">
      <title>9. Acknowledgements</title>
      <p>This paper is performed as a part of a research project “Ingeniería de dominio para los
desarrollos de inteligencia artificial” (Domain engineering for artificial intelligence development) de
Instituto de Investigación en Inteligencia Artificial (IIIA, Catalonia, Spain), Consejo Superior de
Investigaciones Científicas (CSIC, Spain).</p>
      <p>I would like to express my sincere gratitude to Professor Carles Sierra, Research Professor and
Director of the Artificial Intelligence Research Institute (IIIA) at CSIC, for his exceptional leadership
and unwavering support throughout this project, guidance in both technical and organizational
aspects, as well as his encouragement in my Catalan language studies.</p>
      <p>Additionally, I would like to extend my heartfelt thanks to Joan Jené, Head of the Technology
Transfer &amp; Development Unit (UDT) at the IIIA, for generously providing the source code modules
essential for experiments [16], as well as his encouragement in my Catalan language studies.</p>
      <p>"ChatGPT Completions in ASP.NET Core Web API." C# Corner, 2023. URL:
https://www.c-sharpcorner.com/article/chatgpt-completions-in-asp-net-core-web-api/.</p>
      <p>Jené, J “Source code, taken for experiment”
https://drive.google.com/file/d/1WxoRGndDxMEMMgdK3tScqA7CcrQVhG7o/view?usp=dr
ive_link</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] ISO/IEC/IEEE 24765:2017, “Systems and software engineering - Vocabulary.” URL: https://www.iso.org/standard/71952.html.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] Litvin, A., Palagin, O., Kaverinsky, V., Malakhov, K. “Ontology-driven development of dialogue systems.” South African Computer Journal, vol. 35, no. 1, 2023, pp. 37-62. https://doi.org/10.18489/sacj.v35i1.1233.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          https://doi.org/10.18489/sacj.v35i1.
          <fpage>1233</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[3] Palagin, O., Kaverinsky, V., Petrenko, M., Malakhov, K. “Digital Health Systems: Ontology-Based Universal Dialog Service for Hybrid E-Rehabilitation Activities Support.” Proceedings of the 2023 IEEE 12th International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), Dortmund, Germany, vol. 1, 2023, pp. 84-89. https://doi.org/10.1109/IDAACS58523.2023.10348639.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[4] Boiarskyi, O., Popereshnyak, S. “Automated System and Domain-Specific Language for Medical Data Collection and Processing.” In: Babichev, S., Lytvynenko, V. (eds), Lecture Notes in Computational Intelligence and Decision Making. ISDMCI 2021. Lecture Notes on Data Engineering and Communications Technologies, vol. 77, Springer, Cham, 2022. https://doi.org/10.1007/978-3-030-82014-5_25.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          https://doi.org/10.1007/978-3-
          <fpage>030</fpage>
          -82014-5_
          <fpage>25</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [5]
          <string-name><surname>Palagin</surname>, <given-names>O.</given-names></string-name>,
          <string-name><surname>Kaverinskiy</surname>, <given-names>V.</given-names></string-name>,
          <string-name><surname>Petrenko</surname>, <given-names>M. G.</given-names></string-name>,
          <string-name><surname>Malakhov</surname>, <given-names>K.</given-names></string-name>
          "<article-title>Fundamentals of the Integrated Use of Neural Network and Ontolinguistic Paradigms: A Comprehensive Approach</article-title>."
          <source>Cybernetics and Systems Analysis</source>, vol.
          <volume>60</volume>, no.
          <issue>1</issue>,
          <year>2024</year>, pp.
          <fpage>111</fpage>-<lpage>123</lpage>.
          https://doi.org/10.1007/s10559-024-00652-z.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [6]
          <string-name><surname>Masol</surname>, <given-names>V.</given-names></string-name>,
          <string-name><surname>Popereshnyak</surname>, <given-names>S.</given-names></string-name>
          "<article-title>Joint Distribution of Some Statistics of Random Bit Sequences</article-title>."
          <source>Cybernetics and Systems Analysis</source>, vol.
          <volume>57</volume>, no.
          <issue>1</issue>,
          <year>2021</year>, pp.
          <fpage>139</fpage>-<lpage>145</lpage>.
          https://doi.org/10.1007/s10559-021-00337-x.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [7]
          <string-name><surname>Tynymbayev</surname>, <given-names>S.</given-names></string-name>,
          <string-name><surname>Gnatyuk</surname>, <given-names>S.</given-names></string-name>,
          <string-name><surname>Ibraimov</surname>, <given-names>M.</given-names></string-name>,
          <string-name><surname>Namazbayev</surname>, <given-names>T.</given-names></string-name>,
          <string-name><surname>Mukasheva</surname>, <given-names>A.</given-names></string-name>
          "<article-title>Cybersecurity Providing in Information and Telecommunication Systems</article-title>."
          <source>CEUR Workshop Proceedings</source>, vol.
          <volume>3654</volume>, Kyiv, Ukraine,
          <year>2024</year>, pp.
          <fpage>513</fpage>-<lpage>519</lpage>.
        </mixed-citation>
      </ref>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [8]
          <string-name><surname>Popereshnyak</surname>, <given-names>S.</given-names></string-name>
          "<article-title>Technique of the Testing of Pseudorandom Sequences</article-title>."
          <source>International Journal of Computing</source>, vol.
          <volume>19</volume>, no.
          <issue>3</issue>,
          <year>2020</year>, pp.
          <fpage>387</fpage>-<lpage>398</lpage>.
          https://doi.org/10.47839/ijc.19.3.1888.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Baisholan</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Turdalyuly</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gnatyuk</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Baisholanova</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kubayev</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <article-title>"Implementation of Machine Learning Techniques to Detect Fraudulent Credit Card Transactions on a Designed Dataset."</article-title>
          <source>Journal of Theoretical and Applied Information Technology</source>
          , vol.
          <volume>101</volume>
          , no.
          <issue>13</issue>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [10] Figure "model to model transformation" is taken from https://wiki.eclipse.org/images/9/90/OMCW_chapter10_Modelplex-WP6_Training_IntroductionToM2M.pdf
        </mixed-citation>
      </ref>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [11] "<source>PlantUML Language Reference Guide (1.2020.22)</source>"
          https://pdf.plantuml.net/1.2020.22/PlantUML_Language_Reference_Guide_en.pdf
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [12]
          <string-name><surname>Ranasinghe</surname>, <given-names>N.</given-names></string-name>
          "<article-title>Effortlessly Integrate OpenAI ChatGPT APIs in .NET Core 7 WebAPI with Ease</article-title>."
          <source>Medium</source>,
          <year>2023</year>. URL: https://medium.com/@nirajranasinghe/effortlessly-integrate-openai-chatgpt-apis-in-net-core7-web-api-with-ease-7658ab26afc3.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [13]
          <string-name><surname>Friedner</surname>, <given-names>J.</given-names></string-name>
          "<article-title>Make an API Request to Chat GPT-4 with Next.js Using JavaScript</article-title>."
          <source>Medium</source>,
          <year>2023</year>. URL: https://medium.com/@JohanFriedner/make-an-api-request-to-chat-gpt4-with-next-js-using-javascript-c238b47bd88a.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [14]
          <string-name><surname>Tripathy</surname>, <given-names>J.</given-names></string-name>
          "<article-title>ChatGPT Integration in ASP.NET Core Using OpenAI</article-title>."
          <source>Jayant Tripathy</source>,
          <year>2023</year>. URL: https://jayanttripathy.com/chatgpt-integration-in-asp-net-core-usingopenai/.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>