<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Adaptation technology of existing decision support systems using Large Language Models</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yevhenii Tolkachenko</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Artem Samokish</string-name>
          <email>samokisartem@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleksandr Sychov</string-name>
          <email>sychov.for.students@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Serhii Toliupa</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nataliia Gulak</string-name>
          <email>nataliia.hulak@npp.nau.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Elena Dubchak</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrii Maistrenko</string-name>
          <email>andrii.maistrenko@npp.nau.edu.ua</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Cisco Networking Academy</institution>
          ,
          <addr-line>Hlushkova Ave., 4D, Kyiv, 03127</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Ivan Kozhedub Kharkiv National University of Air Forces</institution>
          ,
          <addr-line>Sumska Str., 77/79, Kharkiv, 61023</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Scientific Cyber Security Association of Ukraine</institution>
          ,
          <addr-line>Mykhaila Dontsia Str., 2A, Kyiv, 03161</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Taras Shevchenko National University of Kyiv</institution>
          ,
          <addr-line>Volodymyrska Str., 60, Kyiv, 03022</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The emergence of large language models (LLMs), particularly ChatGPT, has fundamentally transformed user expectations regarding interaction with software, including Decision Support Systems (DSS). While LLMs have introduced intuitive dialogue-based interfaces and the ability to process and generate natural language text, they have also raised concerns over reliability, trust, and stability due to phenomena like "hallucinations." This paper analyzes how these developments have reshaped the requirements for modern DSS and explores strategies for adapting existing systems to meet new expectations. Key requirements now include support for natural language queries and responses, contextual query processing, integration of unstructured data, utilization of open-source information, and adaptive response generation based on user profiles. The paper advocates for a service-oriented architecture (SOA) approach, enabling modular upgrades, such as adding an API layer and interaction services, without altering the DSS's core mathematical or data models. This modular design supports quick adaptation to evolving standards while preserving the DSS's integrity, thus offering a practical pathway to enhance usability, trustworthiness, and relevance in an AI-driven era.</p>
      </abstract>
      <kwd-group>
        <kwd>Decision Support Systems (DSS)</kwd>
        <kwd>Large Language Models (LLM)</kwd>
        <kwd>DSS adaptation</kwd>
        <kwd>unstructured information processing</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The emergence of modern large language models (LLMs) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] based on transformers [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ] opened up new possibilities for analyzing and generating text. But the real breakthrough, the one
that reached every home, was the release of ChatGPT, built on GPT-3 [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Search engines such as Google already made it possible to ask questions in any form, but the
answer was a list of links to resources without any processing of the data: the user had to analyze
the results independently. ChatGPT, by contrast, returned a natural language response compiled from a
number of sources, and questions could be refined in a dialogue mode. This can truly be considered a
new stage in the development of artificial intelligence (AI) systems. Easy access to ChatGPT allowed a
large number of users to test and evaluate its capabilities. Over several years of active use, public
opinion about systems of this type began to form. Over time, this opinion about the capabilities of
LLMs and the risks associated with their use took definite shape and became a factor influencing the
development of the entire AI sector, including decision support systems (DSS).
      </p>
      <p>
        Using AI in DSS has always been quite natural [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ]. Over time, the number of tasks solved by AI in DSS only grows, as the capabilities of AI itself
are constantly increasing [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ]. However, the rapid development of AI, combined with its widespread but often incompetent use,
led people to accumulate mistrust of AI-based systems. ChatGPT had a significant impact on this growth
of mistrust: its strange responses, later called "hallucinations", not only "amused" users but also
increased skepticism towards AI in general.
      </p>
      <p>As a result, ordinary users, on the one hand, appreciated the capabilities of LLMs and expressed a
desire to see similar capabilities in modern DSS; on the other hand, they accumulated distrust of AI,
which must also be taken into account when developing modern DSS.</p>
      <p>However, since a large number of high-quality and useful DSS already exist, in addition to developing
new DSS, the question arises of whether existing DSS can be adapted to the new requirements and whether
such adaptation is feasible.</p>
      <p>This work is devoted to analyzing how the requirements for modern DSS have changed and to developing
a technology that allows existing DSS to be adapted to these requirements.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Natural language style queries</title>
      <sec id="sec-2-1">
        <title>2.1. General information</title>
        <p>Any DSS that does not support natural language queries is perceived by the user as outdated.
Of course, a DSS should have a graphical interface and an intuitive menu, but users are not only
accustomed to communicating with programs in natural language; they also understand that providing this
capability is not very difficult.</p>
        <p>
          Everyone understands that it is no longer necessary to develop your own word2vec
converter [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] or your own transformer implementation. The average developer now has ready-made
libraries from well-known vendors that immediately encode a sentence into a
numerical vector.
        </p>
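        <p>As an illustration of how simple sentence-to-vector encoding has become, the following self-contained Python sketch maps a sentence to a fixed-length numerical vector using a hashed bag of words. It is only a toy stand-in: a real DSS would call a pretrained encoder from one of the ready-made libraries mentioned above.</p>
        <preformat>
```python
# Toy sentence-to-vector encoding (a stand-in for a pretrained encoder).
import hashlib
import math

DIM = 16  # vector dimensionality (tiny, for illustration)

def embed(sentence: str) -> list:
    """Map a sentence to a fixed-length, L2-normalized numerical vector."""
    vec = [0.0] * DIM
    for word in sentence.lower().split():
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % DIM
        vec[bucket] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a, b) -> float:
    """Cosine similarity of two normalized vectors."""
    return sum(x * y for x, y in zip(a, b))

q1 = embed("increase oil cost")
q2 = embed("oil cost increase")
print(round(cosine(q1, q2), 2))  # same word set, so similarity is 1.0
```
        </preformat>
        <p>Queries whose vectors are close can then be routed to the same formalized DSS instruction.</p>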
        <p>It is even possible to do without direct sentence encoding and instead query an LLM in dialogue
mode, where the user's request is sent in a package together with additional information. By asking a
series of such questions, the user's request can be cleanly decomposed into simpler, formalized
instructions for the DSS.</p>
        <p>As we can see, user expectations and developer capabilities coincide here. Therefore, natural
language queries are becoming the norm for DSS.</p>
        <p>It is also necessary to note that most natural language processing systems are designed to work
with English. If the DSS must process queries in Ukrainian, it is advisable to use corresponding LLMs,
since translating text from Ukrainian to English and back may distort the content.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Generate natural language responses</title>
        <p>From a technical standpoint, generating consistent response text is no more difficult than analyzing
the request. The process does not look very complicated, but there are a few issues to consider.</p>
        <p>
          The modern user is used to formulating relatively short queries and still expects the system
to understand them. Of course, without the query context it is impossible to give a correct answer.
LLMs have taught users that they can refine their queries, i.e., add new requirements formulated as
short follow-ups that make no sense without analyzing the preceding conversation. Of course, you can
force the user to work with the DSS "correctly", but for users accustomed to modern technologies this
causes irritation and, accordingly, reduces their motivation to use such a DSS. Therefore, accumulating
the information that forms the query context is a necessary condition for developing a competitive
modern DSS. Technologies such as Retrieval-Augmented Generation (RAG) [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] allow retrieved content and accumulated context to be taken into account when forming the response.
        </p>
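        <p>The accumulation of query context can be sketched as a small buffer of previous turns that is prepended to each new query before it reaches the LLM. The class and its prompt format below are illustrative assumptions, not a specific RAG implementation.</p>
        <preformat>
```python
# Minimal sketch of accumulating dialogue context: previous turns are
# kept and prepended to each new query before it is sent to the LLM.
from collections import deque

class DialogueContext:
    """Keeps the last max_turns exchanges to contextualize short follow-ups."""

    def __init__(self, max_turns: int = 5):
        self.turns = deque(maxlen=max_turns)

    def build_prompt(self, query: str) -> str:
        history = "\n".join(f"user: {q}\nsystem: {a}" for q, a in self.turns)
        return f"{history}\nuser: {query}" if history else f"user: {query}"

    def record(self, query: str, answer: str) -> None:
        self.turns.append((query, answer))

ctx = DialogueContext()
ctx.record("forecast rocket cost for 2026", "Projected cost: 4.2 M USD.")
prompt = ctx.build_prompt("and with a 10% rise in oil?")
print(prompt)  # the follow-up now carries the earlier exchange with it
```
        </preformat>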
        <p>Another source of context is the user's personal data. On the one hand, no one wants to share
personal data or identify themselves; everyone is concerned about privacy. But the modern Internet,
social networks, and ChatGPT all use user information. When an online translator translates a text, it
knows the user's previous requests and the list of sites visited; accordingly, it knows the user's area
of interest and uses this knowledge to make the translation more accurate. And although people do not
want to share information about themselves, they have already become used to the fact that all modern
services adapt to them. This is already a requirement of the modern era, since users expect such
behavior from software. Accordingly, the DSS should meet these expectations where possible.</p>
        <p>Of course, the DSS should not be a full-fledged chatbot able to sustain a conversation on any
topic, but it is still desirable to take into account the context and the user's previous requests. In
addition, different user groups require different levels of information generalization. For example,
inexperienced users need more detailed information, specialists are more focused on facts, and managers
tend to prefer general information with conclusions. The modern user expects the DSS to learn over time
which format of answers is needed.</p>
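        <p>Adapting the level of generalization to the user group can be sketched as follows; the group names, detail levels, and wording are illustrative assumptions only, not part of any specific DSS.</p>
        <preformat>
```python
# Sketch: choosing the level of generalization of a response by user group.
DETAIL_LEVELS = {
    "novice":     "detailed",   # step-by-step facts plus conclusion
    "specialist": "factual",    # raw facts and figures only
    "manager":    "summary",    # conclusions and generalizations only
}

def shape_response(facts: list, conclusion: str, group: str) -> str:
    """Assemble a response at the detail level expected by the user group."""
    level = DETAIL_LEVELS.get(group, "detailed")
    if level == "summary":
        return conclusion
    if level == "factual":
        return "; ".join(facts)
    return "; ".join(facts) + ". Conclusion: " + conclusion

facts = ["oil +10%", "alloy price stable"]
print(shape_response(facts, "Rocket cost rises by about 2%", "manager"))
```
        </preformat>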
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Using open-source data</title>
        <p>Any DSS is based on a set of models, and these models do not necessarily use AI. The value of DSS
models lies in their quality and reliability. However, in recent years users have become accustomed to
being able to ask any question, and user questions often go beyond the mathematical apparatus embedded
in the DSS. Even 10 years ago the established opinion was that requests going beyond the domain of the
DSS models should be answered with a refusal. However, LLMs did not exist at that time. If we already
use an LLM to analyze requests and formulate responses, why not use its other capabilities? The
reliability of such responses will usually be much lower, but the right combination of the DSS
mathematical apparatus and LLM capabilities can provide additional functionality.</p>
        <p>In response to a user query such as "how will a 10% increase in the cost of oil affect the cost
of rockets?" the DSS may refuse to answer, since the rocket cost formula included in its mathematical
model does not include the cost of oil. Alternatively, it can use the LLM to estimate the impact of the
oil cost on each of the factors in the rocket cost formula and thus calculate a new cost. Of course,
this calculation will be approximate, but it is unknown what exactly the user wants and what accuracy
suits them. The task of the DSS is to warn about the quality of the result obtained, for example by
adding the phrase "A model of analogies and data from open sources was used in the calculations" to the
answer. Thus, the DSS does not limit the user, but warns about the reduced reliability of the answer.</p>
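        <p>The oil/rocket example can be sketched as follows. The factor costs and the per-factor oil sensitivities are invented placeholder numbers: in a real system the former would come from the DSS mathematical model and the latter from LLM queries.</p>
        <preformat>
```python
# Sketch: combining the DSS cost formula with LLM-estimated factor
# sensitivities to oil, plus the mandatory reliability warning.
WARNING = "A model of analogies and data from open sources was used in the calculations"

# Baseline factor costs from the DSS mathematical model (placeholder numbers).
factors = {"fuel": 1.2, "materials": 2.5, "logistics": 0.8}  # M USD

# Hypothetical LLM answer: relative change of each factor for +10% oil.
oil_sensitivity = {"fuel": 0.08, "materials": 0.02, "logistics": 0.05}

def adjusted_cost(costs: dict, sensitivity: dict):
    """Approximate the new total cost and attach the reliability warning."""
    new_cost = sum(cost * (1.0 + sensitivity.get(name, 0.0))
                   for name, cost in costs.items())
    return round(new_cost, 3), WARNING

cost, note = adjusted_cost(factors, oil_sensitivity)
print(cost, "-", note)
```
        </preformat>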
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Protecting the DSS from the emergence of "hallucinations"</title>
      <sec id="sec-3-1">
        <title>3.1. General information</title>
        <p>As is known, the appearance of "hallucinations" is a disadvantage not only of ChatGPT but of all
LLMs in general. Even if developers describe this as a temporary difficulty, it is clear that the
probability of "hallucinations" will decrease sharply in the near future but will not disappear
completely. Therefore, the task of the DSS is to prevent "hallucinations" from appearing in responses.
To reduce their number, the load on the LLM can be reduced; it is necessary to explain in detail what
is meant by load.</p>
        <p>In fact, the LLM is a very powerful tool with many capabilities. DSS developers are sometimes
tempted to hand part of their work over to the LLM, for example, asking it to evaluate the quality of a
value or to produce a conclusion or description on its own. This is not very difficult to do: just form
a request and get a quick result. It is exactly this use of the LLM that leads to "hallucinations".
Moreover, during DSS testing there is a high probability of not detecting them. But if a "hallucination"
is detected by the user, trust in the DSS will be destroyed. If there is any probability of a
"hallucination" at the output, then for the user the DSS is no better than ChatGPT. The user will not
investigate where there are more or fewer "hallucinations"; for them, the very possibility of their
appearance is a verdict on the DSS. Therefore, the correct use of the LLM is very important.</p>
        <p>
          However, in order to completely prevent the occurrence of "hallucinations", it is necessary to
post-process and check the responses generated by the LLM. For this, trigrams can be used [
          <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
          ], and the conclusions can be analyzed using the same scheme described above for query analysis.
If significant disagreements arise, the response must be regenerated. As we can see, a lot of effort is
needed to overcome this problem, and there is always a temptation not to spend it, arguing that
"hallucinations" do not occur very often. However, everyone can ask themselves: would they use a
DSS that sometimes lies?
        </p>
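        <p>A minimal sketch of such a trigram check is shown below: word trigrams of the generated response are compared with trigrams of the source facts, and a low overlap flags the response for regeneration. The overlap threshold is an illustrative assumption.</p>
        <preformat>
```python
# Sketch of trigram-based post-processing of LLM responses.
def trigrams(text: str) -> set:
    """Word trigrams of a text, as a set of 3-tuples."""
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def needs_regeneration(response: str, sources: str, threshold: float = 0.3) -> bool:
    """Flag the response when too few of its trigrams appear in the sources."""
    resp = trigrams(response)
    if not resp:
        return True
    overlap = len(resp.intersection(trigrams(sources))) / len(resp)
    return threshold > overlap

src = "the rocket cost rises by two percent under the new oil price"
ok_resp = "the rocket cost rises by two percent"
bad_resp = "rockets are powered by volcanic energy from mars"
print(needs_regeneration(ok_resp, src))   # False: response grounded in sources
print(needs_regeneration(bad_resp, src))  # True: likely hallucination
```
        </preformat>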
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Stability</title>
        <p>Another problem, which occurs when the LLM is updated during response generation, is that the LLM
may give different responses to the same request. This is especially true for LLMs with online access.</p>
        <p>Accordingly, if a user received one answer to a request yesterday and a different answer to the
same request today, this will at the very least cause irritation and may lead to mistrust of the entire
DSS. Any user expects that, in the absence of changes, the DSS will answer the same questions in the
same way. Otherwise, the user feels they do not understand how the DSS works and, accordingly, stops
trusting it.</p>
        <p>Unfortunately, when using dynamically updated systems such as ChatGPT, additional effort is
required to achieve stability.</p>
        <p>To solve this problem, user queries and responses can be cached, and fixed software versions can
be used. One way or another, this problem cannot be avoided.</p>
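        <p>One possible sketch of the caching measure: responses are keyed by the normalized query and a fixed model version, so repeated questions get identical answers until the version is deliberately changed. The <code>fake_llm</code> stub stands in for a real, possibly drifting LLM call.</p>
        <preformat>
```python
# Sketch: a caching layer that makes LLM-backed answers stable.
class StableResponder:
    def __init__(self, generate, model_version: str):
        self.generate = generate          # underlying (possibly drifting) LLM call
        self.model_version = model_version
        self.cache = {}

    def ask(self, query: str) -> str:
        """Return a cached answer for repeated queries under a fixed version."""
        key = (query.strip().lower(), self.model_version)
        if key not in self.cache:
            self.cache[key] = self.generate(query)
        return self.cache[key]

calls = []
def fake_llm(q):
    calls.append(q)
    return f"answer #{len(calls)}"

r = StableResponder(fake_llm, model_version="v1.0")
print(r.ask("Rocket cost?"))    # answer #1
print(r.ask("rocket cost?  "))  # same normalized query, same cached answer
```
        </preformat>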
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Self-improvement of DSS</title>
        <p>Users of Google, ChatGPT, and other quality systems are accustomed to being able to correct an
incorrect answer and to the same error not happening again. It is clear that large companies have
powerful support departments and improve their systems on the basis of such corrections; it is part of
their development strategy. But users have already become used to such capabilities, and therefore the
DSS should also have them. Unfortunately, analyzing an error and correcting the mathematical model
requires human resources and time; it cannot be done quickly. If the DSS uses, for example, neural
networks for modeling, time will be needed for training and testing. The user, in turn, wants their
correction to be taken into account immediately. To resolve this conflict, the following approach is
used: a post-processing module is added to the DSS output, which accumulates and applies user feedback.
On the one hand, this lets the user immediately see the corrected results; on the other hand, it gives
developers the opportunity to accumulate and analyze user corrections over a certain period and then
release a new DSS modification. From a scientific point of view, this approach cannot be called
self-training or self-improvement. But what matters to the user is not what happens behind the scenes
of the interface, but that the system has taken their comments into account.</p>
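        <p>The post-processing module can be sketched as a simple overrides layer over the DSS output. Query matching here is exact string matching for brevity; a real system could use LLM-based matching of similar requests. All names are illustrative.</p>
        <preformat>
```python
# Sketch: a post-processing module that applies user corrections
# immediately while logging them for a later DSS modification.
class CorrectionLayer:
    def __init__(self):
        self.overrides = {}   # query -> corrected answer
        self.log = []         # accumulated feedback for developers

    def submit_correction(self, query: str, corrected: str) -> None:
        """Record a user correction and apply it from now on."""
        self.overrides[query] = corrected
        self.log.append((query, corrected))

    def postprocess(self, query: str, dss_answer: str) -> str:
        """Replace the DSS answer when a user correction exists."""
        return self.overrides.get(query, dss_answer)

layer = CorrectionLayer()
layer.submit_correction("cost of unit A", "1.9 M USD")
print(layer.postprocess("cost of unit A", "1.7 M USD"))  # corrected value
print(layer.postprocess("cost of unit B", "0.9 M USD"))  # unchanged
```
        </preformat>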
        <p>When developing such post-processing, new requests must be matched against the added exceptions.
For this, the LLM can also be used according to the schemes described above. But accumulating a large
number of exceptions significantly increases the time and computing resources needed to process
requests. That is why the DSS should be updated regularly.</p>
      </sec>
      <sec id="sec-3-4">
        <title>3.4. User qualifications accounting</title>
        <p>It can be said that "unfortunately, the average level of professional knowledge of DSS users has
dropped significantly." Or it can be said that "the powerful development of DSS has allowed users to
quickly master new areas of activity." Both statements are true and complement each other. Indeed, the
modern user, on the one hand, masters new areas of activity very easily; on the other hand, they often
lack basic knowledge, and their level of professional education is not very high. Should this be taken
into account when developing a DSS? The answer is "definitely yes." The task of modern DSS developers
is to do everything possible so that such users can master the DSS as quickly as possible. Correct
reference materials, information visualization, calculation of auxiliary quantities, a user-friendly
interface: all of this makes mastering the DSS easier and faster. If the presence of such capabilities
used to be an additional plus, it has now become a necessary condition for use. Of course, developing
them takes a lot of time and resources while adding no scientific novelty. But a modern DSS cannot be
targeted at a small group of specialists; that would make it uncompetitive.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. DSS modification technology</title>
      <p>
        All of the above indicates that a modern DSS should have additional properties, but, as we can see,
these properties require changes neither in the mathematical apparatus of the DSS nor in its data
structure. Moreover, most of the necessary improvements are the same for whole classes of DSS.
Accordingly, it is possible to propose a technology that accelerates the adaptation of existing DSS to
modern requirements. In our opinion, the use of a service-oriented architecture (SOA) [
        <xref ref-type="bibr" rid="ref13 ref14 ref15">13, 14, 15</xref>
        ] makes it possible to adapt existing DSS to new requirements with minimal effort.
      </p>
      <p>In fact, most of the new properties concern the interaction between the DSS and the user.
Accordingly, if an application programming interface (API) is added to the DSS, then the user-facing
interaction services can use the DSS as a service that performs mathematical calculations, modeling,
and other specialized operations. The user interaction service is relatively universal and can be used
to work with a whole class of similar DSS.</p>
      <p>Accordingly, the DSS modification comes down to adding a module to the DSS that provides API
support.</p>
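      <p>Such an API module can be sketched as a thin façade over the legacy DSS; <code>LegacyDSS</code>, its method, the toy cost model, and the instruction format are hypothetical stand-ins, not an actual system.</p>
      <preformat>
```python
# Sketch: a thin API layer that exposes a legacy DSS as a service,
# so interaction services treat it as one more replaceable module.
class LegacyDSS:
    """Stand-in for an existing DSS with its own mathematical model."""
    def compute_cost(self, factors: dict) -> float:
        return round(sum(factors.values()) * 1.15, 2)  # toy cost model

class DSSApi:
    """API module: formalized instructions in, structured results out."""
    def __init__(self, dss: LegacyDSS):
        self.dss = dss

    def handle(self, instruction: dict) -> dict:
        if instruction.get("op") == "cost":
            value = self.dss.compute_cost(instruction["factors"])
            return {"status": "ok", "result": value}
        return {"status": "error", "reason": "unsupported operation"}

api = DSSApi(LegacyDSS())
print(api.handle({"op": "cost", "factors": {"fuel": 1.0, "materials": 2.0}}))
```
      </preformat>
      <p>Because the interaction services only see the instruction format, the DSS behind the API can be replaced without changing the rest of the service structure.</p>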
      <p>The set of services may vary depending on the tasks. In our opinion, based on the above arguments,
the basic set of additional services should include:
• a natural language query service;
• a natural language post-processing and response generation service;
• a query context generation service;
• a service for interaction with external information sources;
• an unstructured information processing service.</p>
      <p>In fact, the DSS also becomes one of the services that can be replaced without changing the structure
of service interaction (Figure 1).</p>
      <p>The user retains the ability to interact with the DSS through its graphical user interface,
developed specifically for that DSS, and gains the ability to interact with the DSS through the
response services.</p>
      <p>This technology allows the DSS to be adapted to modern requirements with minimal effort.</p>
      <p>It requires changing neither the programming language in which the DSS is written nor the
architecture of the DSS itself. Adding an API module should not take much time, although it does
require effort to understand the structure of the DSS. If a set of additional services has already been
created and the interaction interfaces have been agreed upon, then creating and connecting an API
module lets the user interact with the DSS through the corresponding services.</p>
      <p>Compared to modifying the entire DSS, this approach is more expedient and faster.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>The paper analyzes the impact of the emergence of LLMs on the requirements for modern DSS.</p>
      <p>The following requirements for DSS are described, which should be taken into account when
developing new DSS and improving existing ones:
1. Requirements driven by new capabilities:
• natural language queries;
• response generation in natural language;
• processing of unstructured information;
• use of data from open sources;
• taking into account knowledge about the user (request context).
2. Requirements that take into account the shortcomings of AI:
• protection of the DSS against the appearance of "hallucinations";
• stability.
3. Requirements that take into account changing user behavior:
• DSS self-improvement;
• taking user qualifications into account.</p>
      <p>The paper proposes a technology for adapting DSS to modern requirements using a service-oriented
architecture that ensures the DSS can be improved with minimal effort.</p>
      <p>By using an API, minimal changes are made to the DSS structure, and the service-oriented
architecture allows modules that use LLMs to be reused across different DSS.</p>
      <p>As a result, the user can interact with the DSS both through the basic user interface and through
additional services that support the new functionality required by modern expectations.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] N. Fathallah, A. Das, S. De Giorgis, A. Poltronieri, P. Haase, L. Kovriguina, NeOn-GPT: A Large Language Model-Powered Pipeline for Ontology Learning, in: Extended Semantic Web Conference 2024, Hersonissos, Greece, 2024.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, I. Polosukhin, Attention is All You Need, in: Advances in Neural Information Processing Systems, volume 30, Curran Associates, Inc., 2017.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] M. Banko, E. Brill, Scaling to Very Very Large Corpora for Natural Language Disambiguation, in: Proceedings of the 39th Annual Meeting on Association for Computational Linguistics (ACL '01), Association for Computational Linguistics, Morristown, NJ, USA, 2001, pp. 26-33.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] ChatGPT Release Notes, 2024. URL: https://help.openai.com/en/articles/6825453-chatgpt-release-notes, accessed: 2025-07-29.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] I. Ostroumov, et al., Modelling and simulation of DME navigation global service volume, Advances in Space Research 68 (2021) 3495-3507. doi:10.1016/j.asr.2021.06.027.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] O. Solomentsev, et al., Method of optimal threshold calculation in case of radio equipment maintenance, in: Lecture Notes in Networks and Systems, volume 462, Springer, 2022, pp. 69-79. doi:10.1007/978-981-19-2211-4_6.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] O. C. Okoro, et al., Optimization of maintenance task interval of aircraft systems, International Journal of Computer Network and Information Security 14 (2022) 77-89. doi:10.5815/ijcnis.2022.02.07.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] V. Larin, et al., Prediction of the final discharge of the UAV battery based on fuzzy logic estimation of information and influencing parameters, in: IEEE 3rd KhPI Week on Advanced Technology (KhPIWeek), 2022, pp. 1-6. doi:10.1109/KhPIWeek57572.2022.9916490.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] Y. Goldberg, O. Levy, word2vec Explained: Deriving Mikolov et al.'s Negative-Sampling Word-Embedding Method, 2014. URL: https://arxiv.org/abs/1402.3722.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <article-title>What is RAG? - Retrieval-Augmented Generation AI Explained - AWS</article-title>
          ,
          <year>2024</year>
          . URL: https://aws.amazon.com/what-is/rag/, accessed: 2025-07-16.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>D.</given-names>
            <surname>Jurafsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. H.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <source>Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition</source>
          , Pearson Prentice Hall,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <article-title>N-gram</article-title>
          ,
          <year>2025</year>
          . URL: https://en.wikipedia.org/wiki/N-gram, accessed: 2025-07-29.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <article-title>Service-Oriented Architecture Standards - The Open Group</article-title>
          ,
          <year>2025</year>
          . URL: https://www.opengroup.org/soa, accessed: 2025-07-29.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>M.</given-names>
            <surname>Richards</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ford</surname>
          </string-name>
          ,
          <article-title>Fundamentals of Software Architecture: An Engineering Approach</article-title>
          , O'Reilly Media,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M.</given-names>
            <surname>Brandner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Craes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Oellermann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Zimmermann</surname>
          </string-name>
          ,
          <article-title>Web Services-Oriented Architecture in Production in the Finance Industry</article-title>
          ,
          <source>Informatik-Spektrum</source>
          <volume>27</volume>
          (
          <year>2004</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>