<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Implementation of the GPT-4 Language Model for Responses in Social-Psychological Services within Digital Communication Processes</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Hryhorii Hnatiienko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Olena Prysiazhniuk</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anna Puzikova</string-name>
          <email>a.v.puzikova@cuspu.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Olena Blyzniukova</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <addr-line>64/13, Kyiv, 01601</addr-line>
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Volodymyr Vynnychenko Central Ukrainian State University</institution>
          <addr-line>Shevchenka str, 1, Kropyvnytskyi, 25006</addr-line>
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The article examines key aspects of using artificial intelligence tools in digital communication processes in the field of social services for prompt social and psychological support of users. The problems of processing non-standard user responses to chatbot inquiries are analyzed. A way to process these responses using the GPT-4 language model and the OpenAI API is proposed. The impact of the ChatGPT-4 temperature parameter settings on the results of processing non-standard user responses with elements of fuzziness was studied. The proposed approach allows developers to integrate GPT-4 into their applications conveniently, automating the processing of user responses and offloading operators of social support centers, which is relevant in the context of the personnel shortage in the Ukrainian labor market.</p>
      </abstract>
      <kwd-group>
        <kwd>Artificial Intelligence (AI)</kwd>
        <kwd>large language models (LLM)</kwd>
        <kwd>ChatGPT</kwd>
        <kwd>digital communication</kwd>
        <kwd>secure data</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Digitalization involves the large-scale introduction of digital tools into communication processes at all levels and across various interaction channels. The practice of implementing AI-based chatbots has proven effective for 24/7 personalized client support and has largely replaced operators in routine scenarios. However, when clients are asked open-ended questions, it often turns out that, even when a client is offered template answers, some people, due to situational circumstances, current emotional state, or experienced stress, may give answers outside the template options. Recognizing such answers has required the intervention of a human operator to clarify the information that the client meant. In most cases, this happens without additional dialogue with the client, thanks to the inherent human ability to recognize the fuzziness and ambiguity inherent in natural language and interpret information within a clearly defined framework [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        The military actions in Ukraine led to an increase in the number of people affected by the war who need psychological and social support in various spectrums of life, namely, emotional, informational, instrumental, financial, legal, providing opportunities for training and self-development, social integration, and other forms of assistance. Existing centers of social and psychological support have a limited resource of operators; therefore, to work with this population category effectively, it is necessary to carry out operational electronic communication using artificial intelligence technologies, in particular, large language models (LLM). Large language models have created opportunities to develop chatbots that can support complex question-and-answer scenarios. But for many practical situations we still lack an understanding of how meaningfully a chatbot can simulate the activity of an operator [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>In the research we analyze the possibilities and effectiveness of using the ChatGPT-4 language model in the processes of digital communication with clients of social support centers to solve the following problems:
• monitoring of customer needs in order to respond promptly to identified problems and provide professional support from specialists;
• reminding clients about timely completion of current tasks (in particular, about the terms of assigned services and coordination of relevant actions);
• providing recommendations regarding further referrals to specialists for the purpose of psychological support and social adaptation.</p>
      <p>In order to determine the possibility and effectiveness of using ChatGPT to recognize and interpret atypical user responses experimentally, the researchers proposed as a working hypothesis to consider the ChatGPT model as a fuzzy system that can capture the uncertainty and ambiguity inherent in natural language. In order to measure the overall effectiveness of ChatGPT in recognizing such responses, expert evaluation methods and statistical data processing techniques are used.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Overview of the capabilities of the ChatGPT language model for use in digital communication processes</title>
      <p>
        Large language models are a field of artificial intelligence at the intersection of linguistics and computer science [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. By learning from vast amounts of text data, language models can, in particular, interpret the text entered by the user and generate human-readable text in response. There are different types of pre-training architectures, including autoencoding models (e.g., BERT), autoregressive models (e.g., GPT), and encoder-decoder models (e.g., T5) [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. GPT (Generative Pre-trained Transformer) is a type of neural network architecture useful in chatbots, which makes such models particularly effective for imitating human conversations [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. A prominent example is OpenAI's artificial intelligence model ChatGPT. The ChatGPT model is designed for natural language communication and is built using sophisticated natural language processing, supervised learning, and reinforcement learning to understand and generate text similar to human-generated text [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], which makes it a suitable candidate for solving the problem described in the research.
      </p>
      <p>
        ChatGPT was developed using a two-step process involving unsupervised pre-training followed by supervised fine-tuning [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. The model was first trained on a massive text corpus (which includes various sources such as books, articles, reviews, online chats, and human-generated data). This in turn gave it the ability to generate quite accurate answers, even in the case of processing complex and ambiguous contexts [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. After the pre-training phase, the model was fine-tuned on such downstream tasks as text completion, answering questions, and conducting a dialogue [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Thus, the main purpose of the model is to generate the correct result with a high degree of probability [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>
        ChatGPT belongs to large language models, general-purpose models that are designed to work with a wide range of tasks. According to the results presented in the research [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], ChatGPT demonstrated a significant level of proficiency in answering the questions of a multimodal 12-item exam. The authors of the research [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] point to the impressive performance of ChatGPT in various language tasks and tests, which has established it as one of the leading language models in the world.
      </p>
      <p>Along with significant positive results, researchers (including the OpenAI company) point to the model's tendency to produce unreliable or nonsensical information [10, 11], as well as its sensitivity to changes in input phrases or retries of the same request. In other words, rephrasing a request can contribute to generating a more accurate and correct answer [10].</p>
      <p>Among other problems of the ChatGPT model, the researchers note verbosity and overuse of repetitive phrases (the reason for this is bias in the training data, since the trainers of the GPT-3.5 language model preferred long comprehensive answers [10, 12]), inaccuracy (current models usually guess the user's intention instead of asking clarifying questions in response to an ambiguous user request [10, 12]), bias, responsibility for the created content, transparency, ethical issues regarding authorship, lack of creativity (the reason is that ChatGPT performs repetitive text generation, which is based on pre-loaded data), etc. To mitigate such problems, OpenAI takes measures to keep the artificial intelligence ecosystem safe [13], which include API moderation to prevent and block dangerous content. As part of these activities, OpenAI engaged more than 50 experts in such areas as AI alignment risks, cyber security, biorisks, trust and safety, and international security to test the model adversarially and to analyze the additional capabilities of the updated GPT-4 model. According to their conclusions, the behavior of the model was tested in high-risk areas, the assessment of which requires expertise. Feedback and data from these experts were used to soften and improve the model [14].</p>
      <p>Other problematic issues listed above, according to the authors, are not essential within the
framework of using the model to achieve the goal of this work, and therefore are not considered in
more detail.</p>
      <p>It should be noted that the GPT neural architecture has been chosen as a basis for the development of other language models, which also show good results. For example, the DialoGPT model, created for generating responses in the process of a dialogue, allows processing multiple inputs and generating highly personalized responses that are more relevant, meaningful, and consistent with the context, compared to other powerful frameworks [15].</p>
      <p>Viewing ChatGPT as a fuzzy system that can capture the fuzziness and ambiguity inherent in
natural language [16] allows this tool to be applied to analyze human texts or texts produced by
generative artificial intelligence and use fuzzy logic to deal with the ambiguity of natural language
and provide more flexible responses.</p>
      <p>A detailed overview of the possibilities, prospects and potential of using ChatGPT in the areas of customer service, business operations and communications within the framework of Industry 4.0 is given in the study [17]. In particular, among promising implementations of ChatGPT, a separate direction of automating customer communication processes stands out.</p>
      <p>The analysis of publicly available sources shows insufficient attention to and a lack of research on this application of large language models, which motivated the present study.</p>
    </sec>
    <sec id="sec-3">
      <title>The research task</title>
      <p>The article discusses the results of using the ChatGPT-4 language model to process responses from users of social services at the Social Support Center (Kropyvnytskyi, Ukraine). To implement the research program effectively, a series of studies was conducted. First, an express survey was conducted with the respondents of the approbation sample, 138 people, to identify persons in need of immediate psychological help. Questions were sent to the respondents. The answers were intended to be open-ended, but typical template responses were offered. Atypical responses that went beyond the recommended template options were filtered from the response array; they accounted for 20.3% of the total number of processed responses. This array of atypical responses served as the source of raw data for the next two studies. In the first study, these responses were processed with ChatGPT-4 by formulating a corresponding request. This processing consisted of recognizing an atypical user response and assigning it to one of the response categories specified by the original template. To adjust the sensitivity of ChatGPT-4 to the recognition of respondents' answers, the adjustable parameter "temperature" was used.</p>
      <p>In the second study, to assess the effectiveness of ChatGPT-4 in recognizing atypical responses given by respondents outside of the instructions, the researchers provided peer review. For this, the question, the original answer of the respondent, and the result of processing (recognition) of the answer by ChatGPT-4 were provided to the expert. The expert's task was to assess whether ChatGPT-4 recognized the user's atypical response correctly and assigned it to the appropriate category of template responses. The research team included an employee of the Social Support Center. Four categories of answers were highlighted: "explicitly positive", "implicitly positive", "explicitly negative", and "implicitly negative". The performance of ChatGPT-4 recognition of atypical responses was evaluated by comparing the processed response with a similar result obtained by an expert evaluation.</p>
      <sec id="sec-3-1">
        <title>ChatGPT settings</title>
        <p>ChatGPT-4 has several parameters, the setting of which in accordance with the given task can
significantly affect the result [18, 19].</p>
        <p>For the problem considered in the research, the temperature parameter is relevant, which is used
to control the degree of randomness or unpredictability of the model's responses in the context of
ChatGPT. It determines how risky or conservative the responses will be depending on the value
provided, which can range from 0 to 1. According to the documentation, values for the temperature
parameter set in the range (0.2-0.5) tune the model to more predictable and conservative responses.
This means that it will choose the most likely options more often, which makes the answers more
accurate and with fewer errors. The value of the temperature parameter in the range (0.7-1.0) adjusts
the model to more diverse and creative responses. In other words, the model will choose less likely
options, which can lead to unexpected and original results. Thus, a temperature value closer to 0
makes the model more predictable, and closer to 1 makes the model more experimental.</p>
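        <p>The effect of this parameter can be illustrated with the standard softmax-with-temperature formula (a minimal sketch of the general mechanism, not of GPT-4's actual internals; the logit values are invented for illustration):</p>

```python
import math

def softmax_with_temperature(logits, temperature):
    """Rescale logits by temperature, then normalize with softmax."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate tokens.
logits = [2.0, 1.0, 0.5]

cold = softmax_with_temperature(logits, 0.2)  # conservative setting
hot = softmax_with_temperature(logits, 1.0)   # creative setting

print(cold[0] > hot[0])  # True: low temperature sharpens the distribution
```

        <p>Dividing the logits by a small temperature concentrates probability on the most likely token, which is why low values yield more predictable, repeatable answers, while values near 1 flatten the distribution and admit less likely options.</p>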
        <p>It should be noted that optimizing the temperature parameter in GPT models is a powerful technique for developers to improve Human-ChatGPT collaboration. By experimenting with different temperature settings, developers can adapt ChatGPT to perform a variety of tasks. Experimentation is a key concept, as the optimal temperature setting may vary depending on the specific use cases of ChatGPT and the task requirements.
Generative models of artificial intelligence, in particular ChatGPT-4, generate answers based on prompts, whose detailed and thoughtful formulation contributes to obtaining accurate and relevant answers [20]. According to the strategies for achieving better results [21], a prompt should include the following elements: context, instructions, input data, and an output indicator. The prompt for ChatGPT-4, formulated in the context of the problem under consideration, is shown in Figure 1; in what follows, it is the value of the variable $condition, which is used in the code section below.</p>
        <sec id="sec-3-1-1">
          <title>Context</title>
          <p>• You are assisting an automatic survey program via SMS.</p>
        </sec>
        <sec id="sec-3-1-2">
          <title>Instructions</title>
          <p>• Your task is to interpret the client’s response to the question.
• Evaluate whether the client’s response to the automatic survey program question can be interpreted as one of the keywords.
• If yes, return one of the provided words. Otherwise, return the original client’s response without any changes.</p>
        </sec>
        <sec id="sec-3-1-3">
          <title>Input data</title>
          <p>• Automatic survey program’s question to the client: “Do you need any psychological support?”.
• The survey program works only with a few pre-programmed keywords: “need, 1 (one), ok (okay), yes, do not need, 2 (two), no”.
• The client’s response: “$sentMessage”.</p>
        </sec>
        <sec id="sec-3-1-4">
          <title>Output indicator</title>
          <p>• Provide only output data that will be understandable to the reminder system.</p>
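          <p>Assembling such a prompt programmatically reduces to substituting the survey question, the keyword list, and the client's message into the template above. A sketch in Python (the function name is ours; the deployed system does the equivalent in PHP via the $condition variable):</p>

```python
def build_survey_prompt(client_response: str) -> str:
    """Assemble the system prompt (the $condition value) from the template parts."""
    keywords = "need, 1 (one), ok (okay), yes, do not need, 2 (two), no"
    return (
        "You are assisting an automatic survey program via SMS. "
        "Automatic survey program's question to the client: "
        "'Do you need any psychological support?'. "
        "Your task is to interpret the client's response to the question. "
        f"The survey program works only with a few pre-programmed keywords: {keywords}. "
        f"The client's response: '{client_response}'. "
        "Evaluate whether the response can be interpreted as one of the keywords. "
        "If yes, return one of the provided words. Otherwise, return the original "
        "response without any changes. Provide only output data that will be "
        "understandable to the reminder system."
    )

prompt = build_survey_prompt("I can try")
print("I can try" in prompt)  # True
```

          <p>Keeping the template in one place makes it easy to adjust the keyword list or the question without touching the request logic.</p>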
          <p>The proposed solution uses the OpenAI API (application programming interface) to recognize the text entered by the user.</p>
          <p>A message to the chatbot user could be as follows:
$sentMessage = 'Do you need any psychological support? Press 1 to confirm or 2 to decline.'</p>
          <p>In the simplest case, to process the received results, you can create two arrays of standard positive and negative responses and handle every unrecognized response in a separate way (for example, by sending a message to a human operator to contact the customer personally):
if (in_array(strtolower($response), ['need', '1', 'ok', 'okay', 'yes'])) {
    $this-&gt;set('confirmation', 'User confirmed.');
}
elseif (in_array(strtolower($response), ['do not need', '2', 'no'])) {
    $this-&gt;set('confirmation', 'User did not confirm.');
}
else {
    $this-&gt;set('confirmation', 'Unable to determine user response.');
}</p>
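          <p>The same first-pass keyword matching can be sketched in a self-contained form (an illustrative Python port of the fragment above; the function name is ours):</p>

```python
POSITIVE = ['need', '1', 'ok', 'okay', 'yes']
NEGATIVE = ['do not need', '2', 'no']

def classify_response(response: str) -> str:
    """First-pass keyword matching before falling back to GPT-4."""
    normalized = response.strip().lower()
    if normalized in POSITIVE:
        return 'User confirmed.'
    if normalized in NEGATIVE:
        return 'User did not confirm.'
    return 'Unable to determine user response.'

print(classify_response('Yes'))        # User confirmed.
print(classify_response('2'))          # User did not confirm.
print(classify_response('I can try'))  # Unable to determine user response.
```

          <p>Only responses that fall through to the last branch need to be forwarded to GPT-4 or to a human operator, which keeps API usage to a minimum.</p>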
          <p>The use of the GPT-4 language model provides the possibility of additional analysis of the entered
text in order to confirm the positive or negative response of the user.</p>
          <p>Contacting OpenAI directly is done using the method getOpenAIResponse():
public function getOpenAIResponse(string $sentMessage, string $message, string $triggerWords): ?string
This method takes the following parameters as input:
$sentMessage – the message that was sent to the user;
$message – user’s response;
$triggerWords – a list of possible user’s responses.</p>
          <p>The method body contains a request to OpenAI, the parameters of which are the name of the
language model 'gpt-4', as well as an array with messages. The first element of the array is a message
in which the context is specified, that is, a brief description of the situation with further instructions.
The second element of the array is a message containing the user's response, which is recognized by
OpenAI. Below is a code snippet that describes these steps:
$response = OpenAI\Completion::create([
    'model' =&gt; 'gpt-4',
    'messages' =&gt; [
        [
            'role' =&gt; MessageRole::SYSTEM,
            'content' =&gt; $condition,
        ],
        [
            'role' =&gt; MessageRole::USER,
            'content' =&gt; $message,
        ],
    ],
    'temperature' =&gt; 0.5,
]);</p>
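          <p>Regardless of the client language, the request reduces to a payload with the model name, a system/user message pair, and the temperature. A Python sketch that only assembles this payload, without performing the network call (the helper name is ours):</p>

```python
def build_chat_payload(condition: str, message: str, temperature: float = 0.5) -> dict:
    """Assemble the request body: model, system/user messages, temperature."""
    return {
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": condition},  # context and instructions
            {"role": "user", "content": message},      # the client's raw response
        ],
        "temperature": temperature,
    }

payload = build_chat_payload("You are assisting an automatic survey program...", "I can try")
print(payload["model"])  # gpt-4
```

          <p>Separating payload construction from transport also makes the request logic easy to unit-test without calling the API.</p>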
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>The results</title>
        <p>The implementation of processing the response and returning its text part, or the null value, can be as follows:
$data = json_decode($response);
if (!empty($data-&gt;error)) {
    throw new \Exception($data-&gt;error-&gt;message ?? 'Unknown error');
}
return $data-&gt;choices[0]-&gt;message-&gt;content ?? null;
As a result of the express survey on whether the user needs urgent psychological help, approximately 23.9% of the answers (33 out of 138) turned out to be atypical. The answers that did not correspond to the template options but contained fuzziness and ambiguity (28 responses, 20.3%) were selected among them and tested for recognition in ChatGPT-4. The answers with an unknown result, such as "Maybe" and "I don't know" (5 answers), were not used in the testing.</p>
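        <p>The reported proportions follow directly from the counts above:</p>

```python
total = 138
atypical = 33   # responses outside the template options
fuzzy = 28      # atypical responses with fuzziness/ambiguity
unknown = 5     # "Maybe", "I don't know" - excluded from testing

print(round(100 * atypical / total, 1))  # 23.9
print(round(100 * fuzzy / total, 1))     # 20.3
print(atypical - fuzzy == unknown)       # True
```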
        <p>The performance evaluation of ChatGPT-4 recognition of atypical user responses took place in two versions, in Ukrainian and English. The impact of the temperature parameter settings on the recognition performance was analyzed in the following modes: 0, 0.3, 0.5, 0.8 and 1. Table 1 shows the atypical user responses, the recognition results from ChatGPT-4, and the expert evaluation of the recognition performance. Responses were processed with the temperature parameter value of 0.</p>
        <p>When the temperature parameter value increased to 0.5, there was a qualitative jump in the ability of ChatGPT-4 to recognize two more atypical responses from users from the "Implicit positive responses" category. However, when the temperature parameter value was further increased to 1, no corresponding improvement was observed for ChatGPT-4. An unexpected result was obtained when the temperature parameter value was changed from 0.8 to 1 during the recognition of the implicitly negative (according to experts) responses. According to the authors, similar results of response recognition require the further involvement of qualified specialists such as psychologists and linguists to provide consultations to the human operator to work with the client in order to clarify his needs.</p>
        <p>Table 1 groups the atypical user responses into categories: implicitly positive (e.g., "Looks like that", "I can try"), explicitly negative (e.g., "Unnecessary", "I appreciate it, but I don't want it", "I definitely don't want it", "I don't have time for that", "I am not one of those people who need help"), and implicitly negative (e.g., "I think I am fine", "I'm afraid I won't be able to see a doctor", "I believe that I can do it myself", "I don't trust psychologists", "I'm not sure it's necessary"). For each response, the table shows the recognition result from ChatGPT-4 (one of the template keywords, such as "Need", "Yes", "Ok", "No" or "Do not need", or the original response returned unchanged when recognition failed) together with the expert evaluation of whether the recognition was correct.</p>
        <p>The corresponding results are compiled according to the temperature parameter and are
presented in Table 2.</p>
        <p>Figure 2 presents a comparison of ChatGPT-4 recognition performance of user responses in the
"Positive responses" category at different chat temperature modes. Similar information for the
"Negative responses" category is presented in Figure 3.</p>
        <p>It can be seen from the graphs that the influence of temperature settings on the performance of ChatGPT-4 is most pronounced when recognizing clearly positive responses. When the temperature increases, there is an unambiguously positive dynamic of its work efficiency, since already at a temperature parameter value of 0.5 all such responses were recognized. On the other hand, when ChatGPT-4 processes implicitly negative responses, changing the temperature settings has less impact on its performance. In this case, as the temperature increased, the number of recognized responses increased, but unrecognized responses also remained (Figure 3).</p>
        <p>Psycholinguistic studies testify that the processing of negative statements is more difficult than that of positive statements [22, 23], regardless of whether the negation is explicit (for example, not)
or implicit (for example, forget) [24, 25]. The authors of the study explain the results by referring to
the cognitive processing of negative statements, namely the increased cognitive load and difficulty
of processing the negation. It is noted that a statement containing an implicit negation can cause an
additional cognitive load which is necessary to understand its meaning. Language-based artificial
intelligence tools such as OpenAI's ChatGPT have limitations in performing complex reasoning tasks
[26, 27]. Although these models can interpret most queries and contexts, they occasionally face
limitations of understanding while dealing with ambiguous or contextually complex queries, which
we believe includes processing of implicit negative responses from users.</p>
        <p>It should be pointed out that the results of testing the effectiveness of ChatGPT-4 processing of
similar user responses "originally" provided in the Ukrainian language are fully consistent with the
results given in Tables 1-2.</p>
      </sec>
      <sec id="sec-3-3">
        <title>Issues of secure data processing</title>
        <p>Discussing the use of chatbots with artificial intelligence in various spheres of public life, we cannot ignore the significant risks associated with the processing of personal data, in particular those that may constitute a violation of human rights and freedoms. Thus, due to alleged privacy violations related to the use of ChatGPT, the Italian Data Protection Supervisory Authority issued a decision on March 31, 2023, restricting the use of the specified chatbot in public administration and business [28]. Such actions of the Italian national body for the protection of personal rights are related to the risks of unlawful processing of users' personal data.</p>
        <p>In order to ensure the proper regulation of social relations related to the use of artificial intelligence, to prevent the use of these technologies for groundless interference in private and family life, and to systematize risks in the specified area of legal regulation, in June 2023 the European Parliament adopted its negotiating position on the Artificial Intelligence Act [29]. The specified act of the European Union proposes to establish obligations for the owners of information systems based on generative artificial intelligence, in particular regarding the disclosure of generated content and the prevention of illegal collection and subsequent profiling of information about individuals.</p>
        <p>Among the main cyber problems associated with chatbots such as ChatGPT, work [30] notes the
reduction of barriers for cybercriminals, which include social engineering attacks (inclining victims
to disclose confidential information), phishing attacks (sending malicious links or messages), identity
theft, data leakage, etc.</p>
        <p>The OpenAI company constantly takes measures to keep the artificial intelligence ecosystem safe, which include API moderation to prevent and block dangerous content. The developers of the ChatGPT Enterprise version state that customer prompts and company data are not used to train OpenAI models, AES-256 is used for data encryption at rest, and the TLS 1.2+ protocol is used during transmission. ChatGPT Enterprise is also compliant with the SOC 2 standard [31].</p>
        <p>Therefore, developers of chatbots using ChatGPT need to provide security measures and access controls to prevent unauthorized access to the system. Regarding the fulfillment of the requirements to prevent illegal collection and subsequent profiling of information about individuals, the authors suggest using ChatGPT only for the narrow task of interpreting ambiguous customer responses. At the same time, the protection of users' personal data is carried out using the necessary technologies to ensure the security of personal data. This includes encryption to protect sensitive data; in particular, hashing is applied to user IDs and passwords stored in the database, making them hard to compromise in the event of a data breach. Access to the ChatGPT API occurs through a secure HTTPS connection, which provides the necessary level of data security during transmission.</p>
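        <p>The hashing mentioned above can be sketched as follows. For passwords, a salted key-derivation function such as PBKDF2 is a standard choice; the parameters below are illustrative, not the Center's actual configuration:</p>

```python
import hashlib
import os

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Derive a storable hash from a password with a per-user random salt."""
    if salt is None:
        salt = os.urandom(16)  # a unique salt per user defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest.hex()

def verify_password(password: str, salt: bytes, stored_hex: str) -> bool:
    """Recompute the derivation and compare against the stored hash."""
    return hash_password(password, salt)[1] == stored_hex

salt, stored = hash_password("client-secret")
print(verify_password("client-secret", salt, stored))  # True
print(verify_password("wrong-guess", salt, stored))    # False
```

        <p>Storing the salt alongside the derived hash lets the check be repeated at login, while the original password never reaches the database.</p>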
      </sec>
      <sec id="sec-3-4">
        <title>Conclusions and discussion</title>
        <p>In the article the authors have analyzed the possibilities and performance of the GPT-4 language model for the task of automating the processing of responses of social and psychological services users through digital means of communication. The conclusions of the conducted research show that the results of processing non-standard responses correlate well with expert assessments regarding the content and correctness of recognition. It was found that the impact of GPT-4 temperature settings on the processing performance of implicitly negative responses remains uncertain and needs further research in collaboration with linguists and psychologists. It should be noted that, according to the research results, for the most accurate recognition of non-standard responses it is recommended to use a temperature parameter value in the range of (0.5-0.8). But we should also not ignore the results in which there is a qualitative jump from one plane (for example, negative) to another (for example, positive). The proposed approach allows developers to integrate GPT-4 into their applications conveniently, using the capabilities of this model effectively. This makes it possible to relieve the operators of social service centers, which is relevant in the conditions of personnel shortage in the Ukrainian labor market. However, it is important to note that these tools do not completely replace the involvement of qualified professionals; instead, they serve as additional tools in the field of providing social and psychological services to users and contribute to the overall digitalization of the process. An alternative is creating a neural network and training it from scratch [32], which requires significant costs, both financial (paying for the work of programmers) and time (it is necessary to spend a certain amount of time on training the network). Using the already trained GPT-4 language model allows developers to reduce or eliminate some of these costs significantly.</p>
        <p>The obtained research results can be used by the developers of the GPT-4 model in order to
optimize and improve the quality of text recognition during weighted learning to adjust the weight
of errors in rare or important cases [33], as well as to improve the techniques used to analyze the
emotional state, mood and intonation in the text.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgements</title>
      <p>The work was carried out on an initiative basis. We express our gratitude to the Social Support
Center (Kropyvnytskyi, Ukraine) for the opportunity to conduct the research and providing data for
processing and analysis.</p>
      <p>We wish to thank the anonymous reviewers for their comments as they improved the quality of
this paper.</p>
    </sec>
    <sec id="sec-5">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
    <sec id="sec-6">
      <title>References</title>
      <p>[10] Introducing ChatGPT, 2022. URL: https://openai.com/index/chatgpt/.
[11] Z. Ji, T. Yu, Y. Xu, N. Lee, E. Ishii, P. Fung, Towards Mitigating Hallucination
in Large Language Models via Self-Reflection, in: Findings of the Association for Computational
Linguistics: EMNLP, Singapore, 2023, pp. 1827-1843. doi: 10.18653/v1/2023.findings-emnlp.123.
[12] L. Gao, J. Schulman, J. Hilton, Scaling Laws for Reward Model Overoptimization, in:
Proceedings of the 40th International Conference on Machine Learning, Honolulu Hawaii USA,
2023, pp. 10835-10866. URL: https://dl.acm.org/doi/10.5555/3618408.3618845.
[13] OpenAI, OpenAI Charter, 2023. URL: https://openai.com/charter/.
[14] OpenAI, GPT-4, 2023. URL: https://openai.com/index/gpt-4-research.
[15] Y. Zhang, S. Sun, M. Galley, Y.-C. Chen, C. Brockett, X. Gao, J. Gao, J. Liu, B. Dolan, DIALOGPT:
Large-Scale Generative Pre-training for Conversational Response generation, in: Proceedings of
the 58th Annual Meeting of the Association for Computational Linguistics: System
Demonstrations, 2020, pp.270-278. doi: 10.18653/v1/2020.acl-demos.30.
[16] A. Mukherjee, ChatGPT: a Fuzzy System that talks Like a Human, J. of Mathematical Sciences
&amp; Computational Mathematics 251 (2024). doi: 10.15864/jmscm.5303.
[17] M. Javaid, A. Haleem, R. P. Singh, A study on ChatGPT for Industry 4.0: Background, potentials,
challenges, and eventualities, Journal of Economy and Technology 1 (2023) 127-143. doi:
10.1016/j.ject.2023.08.001.
[18] Setting Parameters in OpenAI, URL:
https://www.codecademy.com/article/setting-parametersin-open-ai.
[19] GPT-4 Technical Report, 2023. URL: https://arxiv.org/pdf/2303.08774.
[20] N. Gouws-Stewart, The ultimate guide to prompt engineering your GPT-3.5-Turbo model, 2024.</p>
      <p>URL: https://masterofcode.com/blog/the-ultimate-guide-to-gpt-prompt-engineering.
[21] Six strategies for getting better results, 2024. URL:
https://platform.openai.com/docs/guides/prompt-engineering/six-strategies-for-getting-betterresults.
[22] H. Hnatiienko, V. Snytyuk, N. Tmienova, O. Voloshyn, Application of expert decision-making
technologies for fair evaluation in testing problems, in: Proceedings of the 20th. annual
workshop on Information Technologies and Security, ITS 20, Kyiv, Ukraine, 2020, pp. 46 60.</p>
      <p>URL: https://ceur-ws.org/Vol-2859/paper5.pdf.
[23] H. Hnatiienko, Choice Manipulation in Multicriteria Optimization Problems, in: Proceedings
of the 19th.</p>
      <p>2019, pp. 234 245. URL: https://ceur-ws.org/Vol-2577/paper19.pdf.
[24] J. Maciuszek, M. Polak, M. Sekulak, There is no item vs. I wish there were an item: Implicit
negation causes false recall just as well as explicit negation, PLoS ONE 14 (2019). doi:
10.1371/journal.pone.0215283.
[25] M. Xiang, J. Grove, A. Giannakidou, Semantic and pragmatic processes in the comprehension of
negation: An event related potential study of negative polarity sensitivity, Journal of
Neurolinguistics 38 (2016) 71 88. doi: 10.1016/j.jneuroling.2015.11.001.
[26] Y. Liu, T. Han, S. Ma, J. Zhang, Y. Yang, J. Tian, H. He, A. Li, M. He, Z. Liu, Z. Wu, L. Zhao, D.</p>
      <p>Zhu, X. Li, N. Qiang, D. Shen, T. Liu, B. Ge, Summary of ChatGPT-Related research and
perspective towards the future of large language models, J. Meta-Radiology 1 (2023). doi:
10.1016/j.metrad.2023.100017.
[27] S. Finch, J. Choi, ConvoSense: Overcoming Monotonous Commonsense Inferences for
Conversational AI, J. Transactions of the Association for Computational Linguistics, 12 (2024),
484-506. doi: 10.1162/tacl_a_00659.
[28] OpenAI's ChatGPT Chatbot Blocked in Italy Over Privacy Concerns, 2023. URL:
https://www.euronews.com/next/2023/03/31/openais-chatgpt-chatbot-banned-in-italy-bywatchdog-over-privacy-concerns.
[29] Artificial intelligence regulation in the EU, 2023. URL:
https://multimedia.europarl.europa.eu/en/audio/-ai_EPBL2107202301_EN.
[30] G. Sebastian, Do ChatGPT and Other AI Chatbots Pose a Cybersecurity Risk?: An Exploratory
Study, International Journal of Security and Privacy in Pervasive Computing 15 (2023). doi:
10.4018/IJSPPC.320225.
[31] OpenAI, Introducing ChatGPT Enterprise, 2023. URL:
https://openai.com/index/introducingchatgpt-enterprise/.
[32] Z. Wu, Q. She, C. Zhou, Intelligent Customer Service System Optimization Based on Artificial
Intelligence, Journal of Organizational and End User Computing, 36 (2024). doi:
10.4018/JOEUC.336923.
[33] A. Voloshin, G. Gnatienko, E. Drobot, A Method of Indirect Determination of Intervals of
Weight Coefficients of Parameters for Metricized Relations Between Objects, Journal of
Automation and Information Sciences, 35 (2003), 25-30. doi: 10.1615/JAutomatInfScien.v35.i3.30.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] O. Prysiazhniuk, O. Blyzniukova, Application of Fuzzy Approach in Modeling of Psychodiagnostic Decision Support Systems for One Class of Tasks, in: Proceedings of the 2nd symposium on Intelligent Solutions, IntSol 2021, Kyiv - Uzhhorod, Ukraine, 2021, pp. 11-20. URL: http://ceur-ws.org/Vol-3106/Paper_2.pdf.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] V. Goar, N. Yadav, P. Yadav, Conversational AI for Natural Language Processing: A Review of ChatGPT, International Journal on Recent and Innovation Trends in Computing and Communication 11 (2023) 109-117. doi: 10.17762/ijritcc.v11i3s.6161.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] F.C. Kitamura, ChatGPT is Shaping the Future of Medical Writing but Still Requires Human Judgment, Radiology 307 (2023). doi: 10.1148/radiol.230171.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] Z. Du, Y. Qian, X. Liu, M. Ding, J. Qiu, Z. Yang, J. Tang, General Language Model Pretraining with Autoregressive Blank Infilling, in: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland, 2022, pp. 320-335. doi: 10.18653/v1/2022.acl-long.26.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] J. Gravel, M. D'Amours-Gravel, E. Osmanlliu, Learning to Fake It: Limited Responses and Fabricated References Provided by ChatGPT for Medical Questions, Mayo Clinic Proceedings: Digital Health 1 (2023) 226-234. doi: 10.1016/j.mcpdig.2023.05.004.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] K.I. Roumeliotis, N.D. Tselikas, ChatGPT and Open-AI Models: A Preliminary Review, Future Internet 15 (2023) 192. doi: 10.3390/fi15060192.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever, Language Models Are Unsupervised Multitask Learners, 2019. URL: https://hayate-lab.com/wp-content/uploads/2023/05/61b1321d512410607235e9a7457a715c.pdf.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] T. Susnjak, ChatGPT: The end of online exam integrity?, Education Sciences 14 (2024) 656. doi: 10.3390/educsci14060656.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] M. Chen, J. Tworek, H. Jun, Q. Yuan, H.P. de Oliveira Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, et al., Evaluating large language models trained on code, 2021. doi: 10.48550/arXiv.2107.03374.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>