<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>AI-based Intelligent System for Applied problems: Model Generation Web Application Architecture</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Oleksandr M. Khimich</string-name>
          <email>khimich505@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleksandr V. Popov</string-name>
          <email>alex50popov@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tamara V. Chistyakova</string-name>
          <email>tamara.chistjakova@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Elena A. Nikolaevskaya</string-name>
          <email>elena_nea@ukr.net</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pavlo S. Yershov</string-name>
          <email>yershov.pavel.wsk@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>V.M. Glushkov Institute of Cybernetics of the NAS of Ukraine</institution>
          ,
          <addr-line>Academician Glushkov Avenue, 40, Kyiv, 03187</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <abstract>
        <p>Mathematical modeling and computer experiments are fundamental approaches for investigating complex processes in science, engineering, and other domains. In our earlier work, we introduced the concept and architecture of the AIMM (Artificial Intelligence for Mathematical Modeling) system, designed to integrate a web-based modeling interface, large language models (LLMs), and high-performance hybrid computing resources on the SKIT supercomputer. That study established the theoretical foundation and modular microservice architecture of AIMM. This paper continues that research by detailing the design and implementation of the Model Generation Web Application (MGWA), a key subsystem within AIMM. The MGWA provides a prompt-based pipeline workflow for the stepwise construction of computational models, guiding users from informal problem statements to solver-ready representations. Furthermore, the developed MGWA prototype has been validated through preliminary testing, demonstrating its applicability to real-world problem settings.</p>
      </abstract>
      <kwd-group>
        <kwd>mathematical modeling</kwd>
        <kwd>Artificial Intelligence</kwd>
        <kwd>parallel algorithms</kwd>
        <kwd>ISCM</kwd>
        <kwd>AIMM</kwd>
        <kwd>MGWA</kwd>
        <kwd>MIMD architecture</kwd>
        <kwd>GPU</kwd>
        <kwd>solution reliability</kwd>
        <kwd>convolutional neural networks</kwd>
        <kwd>modeling automation</kwd>
        <kwd>machine learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Mathematical modeling and the related computer experiment are among the primary means of
studying objects, processes, and phenomena of various natures—in science, engineering,
economics, society, and beyond. Mathematical modeling, in particular, involves substituting a real-world
system or phenomenon with its abstract mathematical representation, thereby enabling
computational experiments to be carried out on a computer. Such an approach allows the exploration
of intricate processes that would otherwise be impractical or prohibitively expensive to study in
real-world settings, leveraging modern computational technologies and numerical methods [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        The vast majority of such studies require solving systems of linear algebraic equations (SLAEs)
with matrices of arbitrary structure and extremely large orders. The properties of a computational
problem may differ from those of the original mathematical problem, since the input data in a
computer are represented approximately [
        <xref ref-type="bibr" rid="ref2 ref3 ref4 ref5 ref6 ref7 ref8 ref9">2-9</xref>
        ]. This increases the need to ensure the reliability of
solutions, especially when modeling complex physical and mechanical processes. Here, a key factor
becomes the use of Artificial Intelligence (AI), capable of automating critically important stages—from
problem formulation to mathematical model construction and the selection of optimal solution
methods and algorithms. Powerful parallel computers and the rapid development of AI make it
possible to automate all stages of model building and analysis [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>In practice, the advancement of high-performance computing is closely tied to the development
of user-oriented software—intelligent systems that facilitate interaction in the language of a
specific domain and automate the entire workflow of obtaining computational solutions. A powerful
framework for this lies in knowledge-based technologies that integrate hybrid artificial intelligence
techniques, including machine learning. Addressing these challenges in the core areas of
computational mathematics (systems of linear algebraic equations, algebraic eigenvalue problems, systems
of ordinary differential equations, and nonlinear equations) can be framed within a paradigm that
unites three pillars: high-performance computing, computer mathematics, and artificial
intelligence.</p>
      <p>
        Publications [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]–[14] represent landmark studies that have shaped the development of artificial
intelligence and GPT models. At present, the field of artificial intelligence is undergoing rapid and
multifaceted development. Among the most widely known advancements are the products of
OpenAI [15], including the GPT family of models. GPT is a natural language processing
technology built on a transformer architecture, trained on extensive collections of textual data. This
enables the generation of coherent and high-quality texts. Through training on massive datasets, such
models acquire the ability to capture both syntactic and semantic structures of language, which
makes them particularly effective tools for constructing semantic networks and domain-specific
models. The ChatGPT system [16] is based on the GPT-3 [17], GPT-3.5 [18], GPT-4 [19] and GPT-5
[20] large language models developed by OpenAI. Its fine-tuning has been achieved using a
combination of supervised learning techniques and reinforcement learning approaches.
      </p>
      <p>The development and implementation of artificial intelligence (AI) methods and technologies
open up new opportunities, while simultaneously giving rise to a range of critically important
challenges. Among the most vulnerable aspects are the risks associated with the unreliable and
unsafe operation of AI systems (AIS). Such threats are caused by the unpredictable behavior of
systems in situations that go beyond the data used for training or testing, limited input sequences, as
well as possible failures due to imperfect hardware and software implementations, physical defects,
or cyberattacks [21]. In addition, an important task is the formulation and enforcement of clear
requirements for key AI characteristics, such as ethics, explainability, trustworthiness, and others,
which have been defined and systematized in [22, 23]. In [24], the concept of guarantee-capable AI
systems is proposed, based on the development of the von Neumann paradigm (VNP), presented
through a set-theoretic description that takes into account various components—AI and AIS quality
characteristics.</p>
      <p>In our earlier work [25], we introduced a new approach to solving applied problems that
involves using AI at all stages—from problem formulation to obtaining a reliable solution. AIMM
(Artificial Intelligence for Mathematical Modelling) is an Intelligent System for research and solving
applied problems, designed for the automatic investigation and solution of mathematical modeling
tasks on multi-core computers with MIMD architecture and graphics processing units (GPUs). The
system is developed on the basis of the Intelligent System of Computer Mathematics (ISCM) [26].
Special attention is paid to applying AI in the process of mathematical model construction,
automatic selection of numerical methods, and implementation on hybrid computing architectures
(CPU+GPU). Such an approach significantly enhances the efficiency and reliability of modeling,
enabling a broader range of users to work with high-performance computing systems without deep
expertise in applied mathematics or programming. Ultimately, this opens up new opportunities for
research in fields where accuracy, adaptability, and speed of obtaining results are crucial—from
engineering and medicine to economics and environmental science.</p>
      <p>The system is aimed at the complete automation and optimization of all stages of solving
complex applied problems—from formulating the problem in the language of the subject area, to
building a mathematical model, adaptively selecting numerical methods, and obtaining reliable
solutions. The proposed system is designed to support automated modeling and numerical solution of
computational problems by integrating a modeling web interface, a large language model (LLM),
and a high-performance computing backend deployed on the SKIT supercomputer [27]. The system
architecture builds on previous research on implementing computational solutions through the
interaction of modules for formalization, processing, generation, and solving of mathematical
problems [28].</p>
      <p>Building upon the theoretical foundations and modular microservice architecture of AIMM, this
study presents the design and implementation of the Model Generation Web Application (MGWA),
a pivotal subsystem within the broader framework. MGWA implements a prompt-based pipeline
modeling workflow, which systematically guides users from informal problem statements to
solver-compatible representations through a series of semantically linked prompt stages. The
current prototype has undergone initial validation, demonstrating its feasibility for practical
modeling tasks. While the system has shown promising results, testing and iterative refinement
remain in progress to ensure robustness and adaptability across a range of problem domains.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Prompt-Based Pipeline Architecture</title>
      <p>In our earlier work [25], we introduced the concept and architecture of the AIMM system,
designed to automate mathematical modeling and the numerical solution of computational
problems by integrating a web-based modeling interface, a large language model (LLM), and a
high-performance computing backend deployed on the SKIT supercomputer. The architecture follows a
modular microservice paradigm, combining three main subsystems: the web application for
problem formalization, the LLM-assisted modeling module, and the SKIT-based computational backend.
The concept of generating problem statements using large language models has already been
successfully implemented in the GrantsForScience project [29]. Each stage of the modeling process is
implemented as a distinct interaction with the LLM, guided by a specifically designed prompt
constructed in line with prompt engineering principles [30]. This design enables flexible model
construction, automatic selection of numerical methods, and efficient execution on hybrid CPU/GPU
architectures.</p>
      <p>This paper focuses on the architectural and implementation aspects of the Model Generation Web
Application (MGWA), which serves as the entry point to the computational workflow within the
AIMM system (Fig. 1).</p>
      <p>The MGWA provides a structured environment for the incremental construction of
computational models from a formalized problem description. The modeling process is organized into
sequential stages, where the user can examine, edit, and validate intermediate outcomes. These stages
cover the formulation of the physical model (identifying processes and assumptions), the
development of the mathematical description (differential equations and boundary conditions),
discretization into a numerical scheme, synthesis of the computational model (choosing algorithms and
methods), and finally, configuration of execution parameters such as solvers and data formats. At
the final stage, the system generates a draft computational solution schema that encapsulates key
properties of the intended numerical implementation—such as data structures, precision, and solver
characteristics. This schema serves as a foundational interface for injecting problem-specific input
data into AIMM and informs the subsequent selection of a suitable algorithmic implementation
from the SKIT backend.</p>
      <p>Consider the Prompt-Based Pipeline Architecture in more detail. Each task proceeds through a
predefined sequence of transformation steps. These steps are selectively enabled for each pipeline,
depending on the nature of the problem area. The default logical structure, based on the concept
proposed in [31], follows four key stages (a configuration sketch in Python follows the list):</p>
      <p>1. Formalized Description. The user provides a natural language description of the
problem. The system uses a structured prompt to extract entities such as inputs, outputs,
goals, knowns, and constraints. The result is a logically organized representation suitable
for further abstraction.</p>
      <p>2. Mathematical Model. The physical model is reformulated into a mathematical
abstraction, typically consisting of systems of equations, variational formulations, or
operator-based representations. This step bridges physical reasoning with computational
methods, ensuring that governing equations, boundary conditions, and assumptions are
explicitly captured in formal mathematical notation. It plays a critical role in enabling the
transition toward discretization, especially in continuum mechanics, field theories, or
control systems. This stage may involve defining PDEs, ODEs, algebraic systems, or integral
formulations.</p>
      <p>3. Discrete Model. This step discretizes the mathematical model into algebraic forms—
often sparse matrices, vectors, and systems of equations. It specifies the problem space in
terms suitable for numerical manipulation. Sparse matrix formulations are especially
prevalent in engineering and physical modeling. The system supports the description of
matrix topology, dimensions, symbolic constraints, and storage formats.</p>
      <p>4. Computer Solution Draft. The final stage generates a preliminary computational
skeleton that will serve as input to downstream modules of ISCM [26]. This includes:
expected data structures (e.g., sparse matrices, load vectors), orientation of data (e.g.,
row-wise vs. column-wise formulation), precision constraints, and accepted numerical methods or
solver classes.</p>
      <p>This representation is not a finalized codebase, but rather a structured blueprint for automated
solution orchestration on HPC infrastructure.</p>
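      <p>For illustration, the four-stage structure above can be captured as a declarative pipeline
configuration. The following Python sketch is ours, not the MGWA codebase: the Stage class and the
abbreviated prompt texts are hypothetical stand-ins for the actual configured prompts.</p>
      <preformat>
from dataclasses import dataclass


@dataclass
class Stage:
    """One prompt-driven transformation step in a modeling pipeline."""
    name: str
    prompt_body: str      # instruction sent to the LLM at this step
    enabled: bool = True  # steps are selectively enabled per pipeline


# Default four-stage structure; prompt bodies are abbreviated placeholders.
DEFAULT_PIPELINE = [
    Stage("formalized_description",
          "Extract inputs, outputs, goals, knowns and constraints "
          "from the user's problem statement."),
    Stage("mathematical_model",
          "Reformulate the formalized description as governing equations, "
          "boundary conditions and explicit assumptions."),
    Stage("discrete_model",
          "Discretize the mathematical model: matrix topology, dimensions, "
          "symbolic constraints and storage formats."),
    Stage("computer_solution_draft",
          "Produce a draft solution schema: data structures, data "
          "orientation, precision constraints and admissible solver classes."),
]
      </preformat>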
      <sec id="sec-2-1">
        <title>2.1. MGWA Prototype</title>
        <p>The MGWA prototype (AISolver) was implemented using Python Flask [32] and Jinja2 [33], with a
lightweight HTML/CSS frontend and minimal UI dependencies (Fig. 2). The architecture is modular
and extensible, comprising several key components (the Prompt Engine and Workflow Engine are
sketched in code after the list):</p>
        <p>Prompt Engine – Responsible for dynamically executing configured prompt instructions
using OpenAI-family large language models (LLMs). Each prompt defines its own model
selection (e.g., GPT-4.1), temperature, token limits, and other parameters. Prompts are
designed to transform input models from the previous stage into structured representations
for the next.</p>
        <p>Workflow Engine – Orchestrates the step-by-step generation of models in accordance with
the structure of a selected pipeline. Each transition between steps is governed by the
pipeline logic and proceeds automatically unless explicitly modified by the user.</p>
        <p>Admin Interface – Enables system developers to define and configure modeling pipelines,
including prompt content, LLM parameters, and activation settings for each step. Prompt
development follows modern prompt engineering practices and is debugged in a dedicated
testing environment before deployment.</p>
        <p>User Interface – Supports the complete modeling workflow from task creation to
results review. It guides users through step-by-step generation of models based on
the pipeline configuration. Each model is displayed and edited on a dedicated page.</p>
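        <p>To make the division of labour between the Prompt Engine and the Workflow Engine concrete,
the following Python sketch shows one plausible reduction of both. The function names and default
parameters are our assumptions rather than the AISolver code; only the chat-completions call is the
actual OpenAI client API, and Stage refers to the hypothetical pipeline sketch in Section 2.</p>
        <preformat>
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def run_prompt(prompt_body: str, previous_model: str, llm: str = "gpt-4.1",
               temperature: float = 0.6, max_tokens: int = 1024,
               top_p: float = 1.0) -> str:
    """Prompt Engine core: transform the previous stage's model into the
    structured representation expected by the next stage."""
    response = client.chat.completions.create(
        model=llm,
        temperature=temperature,
        max_tokens=max_tokens,
        top_p=top_p,
        messages=[
            {"role": "system", "content": prompt_body},
            {"role": "user", "content": previous_model},
        ],
    )
    return response.choices[0].message.content


def run_pipeline(problem_statement: str, pipeline: list) -> dict:
    """Workflow Engine core: walk the enabled stages in order; the output
    of one step serves as the input to the next."""
    models = {}
    current = problem_statement
    for stage in pipeline:
        if not stage.enabled:          # disabled steps are skipped
            continue
        current = run_prompt(stage.prompt_body, current)
        models[stage.name] = current   # keep every intermediate model
    return models
        </preformat>
        <p>Under these assumptions, a task would be processed as run_pipeline(task_description,
DEFAULT_PIPELINE), yielding one stored model per enabled stage.</p>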
        <p>The system’s core data entities, sketched in code after the list, include:</p>
        <p>Task (Problem) – Represents a user-defined modeling problem. A task is associated with a
specific pipeline that defines the modeling stages required for that problem type.</p>
        <p>Pipeline – A sequence of prompts representing distinct model abstraction levels (e.g.,
formalized, physical, mathematical, discrete, computational). The structure is configurable
and may include optional steps depending on the domain.</p>
        <p>Prompt – A configurable wrapper around a call to an LLM. In system terminology, a
prompt is not merely a textual instruction but a complete specification including LLM
selection, prompt body, execution parameters, and its position in the pipeline.</p>
        <p>Model – The output of a single prompt execution. Models are stored as text-based
content, which may include plain text, HTML, or JSON structures. The interface
integrates an interactive CKEditor [34] for user interaction with models, supporting
equations, images, and rich formatting.</p>
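        <p>A minimal sketch of these entities, assuming plain Python dataclasses (the real prototype
persists them in a database, and all field names here are illustrative):</p>
        <preformat>
from dataclasses import dataclass, field


@dataclass
class Prompt:
    """A complete LLM-call specification, not merely a textual instruction."""
    body: str
    llm: str = "gpt-4.1"
    temperature: float = 0.6
    max_tokens: int = 1024
    top_p: float = 1.0
    position: int = 0        # order of the prompt within its pipeline
    enabled: bool = True


@dataclass
class Pipeline:
    """An ordered sequence of prompts, one per model abstraction level."""
    name: str
    prompts: list = field(default_factory=list)


@dataclass
class Model:
    """Output of a single prompt execution, stored as text (HTML/JSON)."""
    stage: str
    content: str


@dataclass
class Task:
    """A user-defined modeling problem bound to a specific pipeline."""
    title: str
    pipeline: Pipeline
    models: list = field(default_factory=list)
        </preformat>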
        <p>System administrators (who are also developers) design and configure pipelines by defining
prompt sequences. Each prompt step is bound to a specific transformation stage in the modeling
process. Prompts can be enabled or disabled per pipeline to suit different types of tasks. The
developer uses prompt engineering techniques [35]. Once developed and tested in a dedicated
testing environment, pipelines are deployed to the main application.</p>
        <p>Domain experts, acting as end-users, create tasks by selecting an available pipeline and
providing an initial formal description of the problem. The system then automatically generates
models step-by-step using the configured prompts, following the chosen pipeline workflow, where
the output of one step serves as the input to the next. Each result is presented on a dedicated page,
with options for regeneration or manual refinement.</p>
        <p>The modeling flow follows a fixed sequence: Formalized Description, Mathematical Model,
Discrete Model, Computer Solution Draft.</p>
        <p>All outputs can be edited and revisited. Users may also return to previous steps, adjust the
input, and re-trigger generation from any point in the pipeline.</p>
        <p>Upon completion, the user obtains a complete chain of model representations, each
reflecting a distinct abstraction level (Fig. 3). A dedicated summary view presents all
generated content across the pipeline steps. Previously created tasks remain editable and
can be iteratively improved.</p>
        <p>By exposing prompt engineering and template control to the administrator, the system supports
the creation of highly customized pipelines tailored to the specific requirements of different
scientific and engineering domains.</p>
        <p>Future extensions of the system are planned to enhance usability and collaboration. These
include PDF export of results, tools for collaborative editing, integrated evaluation metrics, and
activity logging. Following extended testing and debugging of the MGWA prototype, the system is
scheduled for integration with other AIMM components to support end-to-end problem-solving
workflows. This will include automated task execution, input management, and solution
orchestration within the AIMM infrastructure. Additionally, domain-specific pipelines will be
developed to support modeling scenarios in fields such as computational mechanics, process
engineering, and scientific computing.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Prototype approbation</title>
        <p>The AISolver prototype was tested on a real-world engineering problem: “Cantilever beam
subjected to a concentrated load at its free end”. A developer-level user can manage modeling
pipelines (Fig. 4). Prompts are configured within pipelines according to the modeling domain and
the specifics of the application area.</p>
        <p>Figure 5 illustrates the editing of pipeline settings, including the prompt body, LLM type, and
parameters used to configure the generation of a specific model within the pipeline workflow. The
parameter configuration in AISolver was designed to ensure a stable and reproducible modeling
process. For all stages the GPT-4o model was used, providing an optimal balance between accuracy
and flexibility for scientific tasks. The Temperature parameter was set to 0.6, maintaining a balance
between diversity and logical consistency of results. For the physical model, the output size was
limited to Max tokens = 1024, while for the mathematical model it was increased to 2048 to
accommodate more complex systems of equations. The Top-P parameter remained at 1.0, ensuring
full coverage of generation variants. This configuration ensures consistency between stages, clear
structuring of generated outputs, and suitability for further automation of the model-building
process within the AIMM system.</p>
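        <p>Encoded as configuration data, in our own notation, the reported settings amount to the
following per-prompt dictionaries (the names are illustrative; the values are exactly those given
above):</p>
        <preformat>
# Per-prompt LLM settings reported for the cantilever-beam pipeline.
PHYSICAL_MODEL_SETTINGS = {
    "llm": "gpt-4o",
    "temperature": 0.6,   # balance of diversity and logical consistency
    "max_tokens": 1024,   # compact physical-model output
    "top_p": 1.0,         # full coverage of generation variants
}

MATHEMATICAL_MODEL_SETTINGS = {
    **PHYSICAL_MODEL_SETTINGS,
    "max_tokens": 2048,   # room for larger systems of equations
}
        </preformat>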
        <p>Figures 6–9 present the consecutive modeling stages: Formalized Description,
Mathematical Model, Discrete Model, and Computer Solution Draft. Each stage is supported by
LLM-powered transformations and allows interactive refinement of intermediate results.</p>
        <p>A distinctive feature of the MGWA is its high configurability: expert users can define
domain-specific prompt pipelines, while end-users are guided through a transparent and reproducible
modeling process.</p>
        <p>Each model is persistently stored in the database as structured text data (e.g., HTML, JSON),
allowing end users to revisit, review, and refine them at any stage of the workflow. In particular,
the Computer Solution Draft serves a dual role. On one hand, it provides a machine-readable
schema that enables automated generation of the user interface for task-specific input
configuration and integration with the AIMM solution workflow. On the other hand, it acts as a
user-facing guide, informing the domain expert about the expected structure, semantics, and
parameters of the input data required to proceed with problem resolution. The LLM demonstrates
rapid execution (usually within seconds), while its effective performance is directly contingent
upon user actions, such as input formulation and interaction timing.</p>
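        <p>For the cantilever-beam task, a Computer Solution Draft could plausibly take the following
shape. This is a hypothetical example of ours: the field names and values are illustrative, not the
schema actually emitted by AISolver.</p>
        <preformat>
# Illustrative Computer Solution Draft: machine-readable enough to drive
# input-form generation and solver selection, readable enough to guide
# the domain expert.
solution_draft = {
    "problem": "cantilever beam with a concentrated load at the free end",
    "data_structures": {
        "stiffness_matrix": {"kind": "sparse", "storage": "CSR",
                             "symmetric": True},
        "load_vector": {"kind": "dense"},
    },
    "data_orientation": "row-wise",
    "precision": "float64",
    "solver_classes": ["direct_cholesky", "iterative_conjugate_gradient"],
}
        </preformat>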
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Conclusions</title>
      <p>This paper introduces a prompt-driven web application that streamlines the early stages of
scientific problem modeling. It supports modular configuration, domain-specific adaptation, and
LLM-guided transformation of problem formulations. As part of the larger AIMM ecosystem, it
plays a foundational role in transforming verbal task descriptions into computationally tractable
formats.</p>
      <p>A key innovation of MGWA is the use of prompt chaining, which structures the modeling
workflow into a configurable pipeline and enables modular application of language-model-driven
transformations at different levels of abstraction. The design of the system emphasizes flexibility:
expert users or administrators can tailor domain-specific pipelines by adjusting prompts and
parameters of the language model. At the same time, the tool is reusable, since defined pipelines
can be applied to a variety of tasks and fine-tuned to match the specifics of particular problem
categories. For end-users, the application offers a guided process, leading them step by step
through model creation while preserving the possibility to regenerate or refine results at any stage.
Finally, the system is domain-independent, ensuring applicability across a wide spectrum of
scientific and engineering disciplines.</p>
      <p>This work extends the AIMM concept toward practical applicability, bridging the gap between
natural-language task formulation and solver-ready computational structures. By enabling
transparent, adaptive, and domain-agnostic modeling workflows, the MGWA strengthens the
AIMM ecosystem as a foundation for intelligent numerical software.</p>
      <p>The results confirm the feasibility of extending AIMM from conceptual architecture toward a
working intelligent modeling environment. Future research will focus on expanding pipeline
configurability, collaborative modeling features, and applying the system in domains such as
navigation and motion control, where accuracy, adaptability, and real-time performance are
essential.</p>
    </sec>
    <sec id="sec-4">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used ChatGPT-4o for grammar and spelling
checks. After using this service, the authors reviewed and edited the content as needed and take
full responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] A. Khimich, V. Sydoruk, P. Yershov, Intellectualization of Computation Based on Neural Networks for Mathematical Modeling, in: IEEE International Conference on Advanced Trends in Information Theory (ATIT), December 2019, pp. 445-448. doi: 10.1109/ATIT49449.2019.9030444. URL: https://ieeexplore.ieee.org/document/9030444</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] J.H. Wilkinson, Rounding Errors in Algebraic Processes, H.M. Stationery Office, London, 1963.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] J.H. Wilkinson, C. Reinsch, Handbook for Automatic Computation, Springer, Berlin, Heidelberg, 1971.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] V.V. Voevodin, Roundoff errors and stability in direct methods of linear algebra, CC MSU, Moscow, 1969.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] I.N. Molchanov, Machine methods for solving applied problems. Algebra, approximation of functions, Naukova Dumka, Kyiv, 1987.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] A.N. Khimich, Perturbation bounds for the least squares problem, Cybern. Syst. Anal. 32(3) (1996) 434-436. doi: 10.1007/BF02366509</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] J. Forsyth, M. Malcolm, K. Mouler, Machine Methods of Mathematical Computing, Mir, Moscow, 1980.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] J. Golub, C. Van Loan, Matrix Computations, Mir, Moscow, 1999.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] C. Lawson, R. Henson, Numerical solution of problems by the method of least squares, Nauka, Moscow, 1980.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A.N. Gomez, Ł. Kaiser, I. Polosukhin, Attention is All You Need, in: NeurIPS, 2017. doi: 10.48550/arXiv.1706.03762</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] J. Devlin, M. Chang, K. Lee, K. Toutanova, BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, in: NAACL-HLT, 2019. doi: 10.48550/arXiv.1810.04805</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] T.B. Brown, B. Mann, N. Ryder, M. Subbiah, J.D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D.M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, D. Amodei, Language Models are Few-Shot Learners, in: NeurIPS, 2020. doi: 10.48550/arXiv.2005.14165</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever, Language Models are Unsupervised Multitask Learners, OpenAI Blog (2019). URL: https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] A. Radford, K. Narasimhan, T. Salimans, I. Sutskever, Improving Language Understanding by Generative Pre-Training, OpenAI Blog (2018). URL: https://openai.com/research/language-understanding-generative-pre-training</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] OpenAI. URL: https://openai.com/</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] ChatGPT. URL: https://openai.com/chatgpt/</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] OpenAI GPT-3. URL: https://openai.com/index/gpt-3-apps/</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] OpenAI GPT-3.5. URL: https://openai.com/index/gpt-3-5-turbo-fine-tuning-and-api-updates/</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] OpenAI GPT-4. URL: https://openai.com/index/gpt-4/</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[20] OpenAI GPT-5. URL: https://openai.com/index/introducing-gpt-5/</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>[21] W. Ding, M. Abdel-Basset, H. Hawash, A.M. Ali, Explainability of AI methods, applications and challenges: A comprehensive survey, Information Sciences 615 (2022) 238-292. doi: 10.1016/j.ins.2022.10.013</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>[22] V. Kharchenko, H. Fesenko, O. Illiashenko, Basic model of non-functional characteristics for assessment of artificial intelligence quality, Radioelectron. Comput. Syst. 2 (2022) 1-14. doi: 10.32620/reks.2022.2.11</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>[23] V. Kharchenko, H. Fesenko, O. Illiashenko, Quality Models for Artificial Intelligence Systems: Characteristic-Based Approach, Development and Application, Sensors 22(13) (2022) 1-32. doi: 10.3390/s22134865</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>[24] V.S. Kharchenko, Conceptual foundations of warrantable artificial intelligence systems, Reports of the National Academy of Sciences of Ukraine 2 (2025) 11-23. URL: http://jnas.nbuv.gov.ua/article/UJRN-0001571650</mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>[25] A.N. Khimich, O.V. Popov, T.V. Chistyakova, E.A. Nikolaevskaya, P.S. Yershov, AI-based Intelligent System for Applied Problems: Conception and Architecture, in: 2025 IEEE 8th International Conference on Methods and Systems of Navigation and Motion Control (MSNMC), October 21-24, 2025, Kyiv, Ukraine.</mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>[26] A.N. Khimich, T.V. Chistyakova, V.A. Sydoruck, P.S. Yershov, Intellectual computer mathematics system Inparsolver, Artificial Intelligence 25(4) (2020) 60-71. doi: 10.15407/jai2020.04.060</mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>[27] SKIT-5. URL: http://icybcluster.org.ua/index.php?lang_id=2&amp;menu_id=5</mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>[28] V.A. Sydoruk, P.S. Yershov, D.O. Bohurskyi, O.R. Marochkanych, Intellectualization of calculations for mathematical modeling tasks of complex processes and objects, Comp. Math. 1 (2019) 143-150.</mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>[29] O.M. Khimich, S.V. Yershov, E.A. Nikolaevskaya, P.S. Yershov, Development of an Intelligent Search Engine using GPT model for GrantsForScience platform, in: CEUR Workshop Proceedings, volume 3777, September 2024, pp. 111-119. URL: https://ceur-ws.org/Vol-3777/short3.pdf</mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>[30] L. Reynolds, K. McDonell, Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm, 2021. arXiv:2102.07350. URL: https://arxiv.org/abs/2102.07350</mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>[31] A.N. Khimich, I.N. Molchanov, A.V. Popov, T.V. Chistyakova, M.F. Yakovlev, Parallel algorithms for solving problems of computational mathematics, Naukova Dumka, Kyiv, 2008.</mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>[32] Flask - Python Web Framework. URL: https://flask.palletsprojects.com/en/2.0.x/</mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>[33] Jinja2. URL: https://jinja.palletsprojects.com/en/stable/</mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>[34] CKEditor. URL: https://ckeditor.com/ckeditor-5/</mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>[35] Prompt Engineering Guide. URL: https://www.promptingguide.ai/</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>