<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>ProfIT AI</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Development of an Intelligent Search Engine using GPT model for GrantsForScience platform</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Oleksandr M. Khimich</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Serhii V. Yershov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Elena A. Nikolaevskaya</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pavlo S. Yershov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>V.M. Glushkov Institute of Cybernetics of NAS of Ukraine</institution>
          ,
          <addr-line>Academician Glushkov Avenue, 40, Kyiv, 03187</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>4</volume>
      <fpage>25</fpage>
      <lpage>27</lpage>
      <abstract>
        <p>Grants serve as a primary source of funding for many scientific research projects. The idea of developing a GrantsForScience platform using advanced AI technologies is proposed to simplify these processes and increase their efficiency. The paper delves into the concept, architecture, and implementation of a microservice designed for the intelligent search of scientific grants. It highlights the limitations of traditional manual grant search methods and elucidates the benefits of an automated approach. The technical facets of the implementation, particularly the use of GPT for analysing scientific publications, are thoroughly discussed.</p>
      </abstract>
      <kwd-group>
        <kwd>Intelligent search</kwd>
        <kwd>scientific grants</kwd>
        <kwd>microservice</kwd>
        <kwd>GPT</kwd>
        <kwd>automation</kwd>
        <kwd>parsing</kwd>
        <kwd>API (Application Programming Interface)</kwd>
        <kwd>scientific publications</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>In the modern world, research and innovation projects play a key role in developing new
technologies, improving the quality of life and solving global problems. At the same time, researchers
often face significant difficulties in finding funding and partners to implement their projects. On the
other hand, companies and investors are looking for opportunities to collaborate with scientific
institutions to develop innovations and implement new technologies.</p>
      <p>Grants serve as a primary source of funding for many scientific research projects. They cover
expenses for equipment, materials, researchers' salaries, and other costs associated with conducting
research. Timely and efficient grant searching is critical for the successful execution of scientific
projects. Securing grants ensures that researchers have the necessary resources to pursue innovative
and impactful studies. Manual grant search involves browsing numerous websites, databases, and
other sources of information. This process is time-consuming and often ineffective, as researchers
may miss important opportunities due to the sheer volume of information and limited time for
processing it. Additionally, manual search methods lack the ability to comprehensively analyze and
cross-reference data from multiple sources, leading to potential oversights. Therefore, the authors
came up with the idea of developing the GrantsForScience platform using advanced artificial
intelligence technologies to simplify these processes and increase their efficiency.</p>
      <p>
        The publications [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]-[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] can safely be called some of the most important publications in the field
of artificial intelligence and GPT models. They played a key role in the development of natural
language processing technologies and led to the creation of powerful language models. The
development of large language models has revolutionized natural language processing [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
These models have shown great potential in solving various natural language processing (NLP) tasks,
from natural language understanding (NLU) to generation, even paving the way toward artificial
general intelligence (AGI). Research and practical implementations of NLP technologies based on
artificial intelligence, generative AI, and the concept of complex networks aimed at creating
semantic networks are presented in the monograph
[
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>
        There are currently many developments in the field of artificial intelligence. The most famous
are, of course, the products of the OpenAI company [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], such as GPT [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. GPT is a natural language
processing technology based on a transformer architecture that learns from a large amount of text
data and is capable of generating high quality texts. These models are trained on huge data sets and
learn to understand the syntax and semantics of the language, which, in particular, makes them
powerful tools for building semantic networks and domain models. The ChatGPT model [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] is built
on top of the OpenAI GPT-3 [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], GPT-3.5 [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] and GPT-4 [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] family of large language models. The
fine-tuning of the chatbot was performed using both supervised learning and reinforcement
learning. Other notable projects using GPT include: GitHub Copilot [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] (using the
OpenAI Codex model, a descendant of GPT-3, configured for code generation); - Copy.ai and
Jasper.ai [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] (content generation for marketing purposes); - Algolia [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] (improving search engine
capabilities), SearchGPT [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] (a prototype of new AI search features).
      </p>
      <p>Meta [19] recently opened access to its new model Llama 3.1 405B [20], which according to many
benchmarks surpasses such giants as GPT-4, Claude 3.5 Sonnet [21], and Google Gemini Pro [22]. The
authors plan to research this new model and compare it with the GPT model.</p>
      <p>Let us now consider in more detail the concept, architecture, implementation, and technical facets
(particularly the use of GPT for analyzing scientific publications) of a microservice designed for the
intelligent search of scientific grants.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Intelligent search model</title>
      <p>Intelligent search employs artificial intelligence (AI) and machine learning (ML) techniques to
analyze large volumes of data and find relevant information. In the context of scientific grants,
intelligent search can automatically process information about researchers, their publications, and
relevant scientific fields to identify the most suitable grants. This approach not only saves time but
also enhances the accuracy of the search results by considering a wide range of parameters and data
sources.</p>
      <p>Intelligent grant search offers several key advantages:</p>
      <p>Automation: Reduces the time and effort required for grant searching, allowing researchers
to focus on their core activities.</p>
      <p>Intelligence: Utilizes AI to find the most relevant grants based on the analysis of scientific
publications, thereby increasing the precision of search results.</p>
      <p>Result Quality: Provides accurate and relevant search results by considering the specific
research areas and interests of the scientist.</p>
      <p>Scalability: The system can be expanded to search a large number of sources, accommodating
the growing needs of the research community.</p>
      <p>Continuous Updates: Keeps researchers informed about new funding opportunities as they
become available, ensuring that they do not miss out on potential grants.</p>
      <sec id="sec-2-1">
        <title>2.1. Architecture of the Microservice for Intelligent Search</title>
        <p>Microservice architecture involves developing individual services that perform specific tasks and
can interact with each other via APIs. This modular approach allows for easy scaling of the system,
adding new features, and maintaining high availability and fault tolerance. Microservices can be
independently developed, deployed, and managed, which enhances the flexibility and resilience of
the overall system.</p>
        <p>Grant searching is performed based on input data such as ScopusID [23], ORCID [24], first name,
and last name of the researcher. These identifiers allow the system to gather comprehensive
information about the researcher’s publications and scientific contributions, which are critical for
matching the researcher with relevant grants.</p>
        <p>The search results (output data) are provided in a JSON response with a parameterized list of
found grants. Each grant entry includes the title, description, link, source name, and metadata. This
structured format facilitates easy integration with other systems and applications that the
researchers might be using.</p>
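        <p>The grant entry structure described above can be sketched as a simple data model; the field names here are illustrative assumptions based on the description, not the service's actual schema:</p>
        <preformat>
```python
# Illustrative data model for one grant entry in the JSON response
# (field names are assumptions based on the description above).
from dataclasses import dataclass, field

@dataclass
class GrantEntry:
    title: str
    description: str
    link: str
    source_name: str
    meta: dict = field(default_factory=dict)
```
        </preformat>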
        <p>The architecture of the microservice includes the following components (Figure 1):</p>
        <sec id="sec-2-1-1">
          <title>API Endpoints:</title>
          <p>a. GET /status: Health check of the service to ensure it is operational.
b. POST /run_search: Initiates the grant search task based on the provided input data.
c. GET /job_result/{job_uuid}: Retrieves the results of the grant search task using a unique
job identifier.</p>
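          <p>The asynchronous contract behind these endpoints can be illustrated with a minimal in-memory sketch; the function names and status strings are assumptions, and in the real service a worker completes the job asynchronously:</p>
          <preformat>
```python
# Minimal sketch of the run_search / job_result contract: POST /run_search
# registers a job under a fresh UUID, GET /job_result/{job_uuid} looks it up.
import uuid

jobs = {}  # job_uuid: {"status": ..., "result": ...}

def run_search(input_data):
    job_uuid = str(uuid.uuid4())
    jobs[job_uuid] = {"status": "Pending", "result": None}
    # in the real service a worker completes the job asynchronously;
    # here it is completed inline for illustration
    jobs[job_uuid] = {"status": "Completed successfully",
                      "result": {"grants": {}, "input": input_data}}
    return {"async": True, "job_uuid": job_uuid}

def job_result(job_uuid):
    return jobs.get(job_uuid, {"status": "Unknown job"})
```
          </preformat>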
          <p>Database: Stores information about users, their requests, and search results, ensuring data
persistence and reliability.</p>
          <p>Grant Search Mechanism: Comprises a parser, an analyzer, and a searcher, each responsible
for specific tasks in the grant search process.</p>
          <p>Parser. The parser searches for titles and abstracts of the scientist's publications in various
sources, such as Scopus. The result of the parser's work is a list of found publication titles.
This step is crucial for gathering the necessary data to analyse the researcher's areas of
expertise and interests.</p>
          <p>Analyzer. The analyzer determines the parameters of the publications found in the previous
step. It classifies the publications into three lists: research subjects, scientific fields, and
research directions. This classification is essential for accurately matching the researcher
with relevant grants.</p>
          <p>Searcher. The searcher conducts the grant search in open sources, such as the EU Fundings
&amp; Tenders Portal [25]. It uses the parameters of the publications, determined by the analyzer,
to find grants that align with the researcher’s work. This component ensures that the search
results are highly relevant and tailored to the researcher’s needs.</p>
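          <p>The three components form a simple pipeline; it can be sketched as follows, with function signatures assumed for illustration:</p>
          <preformat>
```python
# Sketch of the Parser, Analyzer, Searcher pipeline: the parser yields
# publication titles, the analyzer maps them to keyword lists, and the
# searcher queries grant sources for each keyword.
def search_grants(researcher_id, parser, analyzer, searcher):
    titles = parser(researcher_id)      # e.g. publication titles from Scopus
    keywords = analyzer(titles)         # {"science_branches": [...], ...}
    grants = {kw: searcher(kw)
              for kw_list in keywords.values()
              for kw in kw_list}
    return {"articles": titles, "articles_keywords": keywords, "grants": grants}
```
          </preformat>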
        </sec>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Prototype of GPT model for GrantsForScience</title>
        <p>The prototype of the intelligent search microservice for scientific grants consists of limited key
components that work together to facilitate the search and retrieval of relevant grant opportunities.
Figure 2 represents key elements of intelligent search microservice, implemented within a prototype
(highlighted by green and yellow colors).</p>
        <sec id="sec-2-2-1">
          <title>3 API endpoints described above. Parser using Scopus Search API [26]. Analyzer. Searcher using EU Fundings and Tenders Portal.</title>
        </sec>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Usage of GPT model</title>
        <p>The analyzer uses GPT-3.5-turbo to generate a JSON with search parameters based on the titles
of scientific articles. GPT-3.5-turbo was chosen for its optimal balance of cost and capabilities.
Interaction with GPT occurs via HTTP API, with a temperature setting of 0.5, determined
experimentally. This setting ensures a balance between creativity and coherence in the generated
responses. The model has been trained on a diverse range of internet text, which allows it to handle
various tasks, including text summarization, translation, and content generation. The temperature
parameter in GPT controls the randomness of the output. A lower temperature (close to 0) makes
the model's output more deterministic and focused, while a higher temperature (closer to 1) allows
for more randomness and creativity. For the grant search analyzer, a temperature of 0.5 was chosen
to maintain a balance between generating diverse responses and ensuring relevance to the input
data.</p>
        <p>The analyzer prompts GPT with a request of the form: Fill in {"science_branches": [],
"research_areas": [], "research_subjects": []} JSON based on a list of scientific articles names given:
{list_of_names}, where {list_of_names} is the list of article titles found by the parser. GPT returns the
JSON filled with values (Figure 3). Part of the program code used to analyze article titles with GPT is
below:</p>
        <p>list_of_names = ", ".join(names)
message = f'Fill in {{"science_branches": [], "research_areas": [],
"research_subjects": []}} '\</p>
        <p>f'JSON based on a list of scientific articles names given: {list_of_names}'
result = self._ask_gpt(message)
def _ask_gpt(self, message, temperature=0.5):
logger.info(f"Asking GPT-3.5:\n {message}")
try:
gpt_result = self.client.chat.completions.create(
model="gpt-3.5-turbo-0125",
temperature=temperature,
response_format={"type": "json_object"},
messages=[</p>
        <p>{"role": "system", "content": "You are a helpful assistant designed to
output JSON list of strings"},</p>
        <p>{"role": "user", "content": message}
)
except Exception as e:</p>
        <p>raise GPTAPIError(f"GPT API error: \n{e}")</p>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Software Implementation</title>
        <p>The microservice is implemented as a Docker Compose application [27] with the following
containers:</p>
        <p>web: Python 3.10, Flask [28], marshmallow, requests - API engine, service orchestration.</p>
        <p>worker: Celery [29], Redis - asynchronous execution of grant search tasks.</p>
        <p>redis: Redis [30] - database for storing intermediate results and task queues.</p>
        <p>dashboard: Celery, Flower - task monitoring and debugging.</p>
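        <p>The worker/queue pattern used here can be illustrated with standard-library primitives; this toy sketch only shows the shape of the interaction, not the actual Celery and Redis configuration:</p>
        <preformat>
```python
# Toy illustration of the worker pattern: tasks go onto a queue, a worker
# thread executes them and stores results by job id (the real service uses
# Celery with a Redis broker instead).
import queue
import threading

tasks = queue.Queue()
results = {}

def worker():
    while True:
        job_uuid, payload = tasks.get()
        results[job_uuid] = {"status": "Completed successfully", "payload": payload}
        tasks.task_done()

threading.Thread(target=worker, daemon=True).start()
tasks.put(("job-1", {"orcid": "0000-0002-5145-0189"}))
tasks.join()  # wait until the worker has processed the task
```
        </preformat>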
        <p>The application is hosted on DigitalOcean [31] and is IP-restricted. The Postman [32] HTTP client
is used to test the API endpoints.</p>
      </sec>
      <sec id="sec-2-5">
        <title>2.5. Prototype approbation</title>
        <p>The prototype was tested using real researcher profile data. In the example provided in this article,
we used the ORCID and ScopusID of an author (Figure 3).</p>
        <p>For the given researcher, the prototype returned 90 relevant grants found on the EU Funding and
Tenders Portal: 10 for each of the 9 keywords identified by the Analyzer on the basis of 12 articles
found in Scopus.</p>
        <p>The job result response contains the results of the Searcher: a list of grants (“grants”) that match
the keywords defined by the Analyzer for the articles found by the Parser. Outputs of the intermediate
steps are included as well (“articles”, “articles_keywords”).</p>
        <p>The response JSON is shown below; repeating parts are shortened with “…”.</p>
        <preformat>
{
  "async": true,
  "job_uuid": "90d2bda8-fb2e-4787-a934-f55b44fcda82",
  "result": {
    "articles": {
      "ScopusParserV1": [
        {
          "authors": ["Nikolaevskaya E.A."],
          "date": "2009-11-01",
          "id": "2-s2.0-72449169485",
          "source": "ScopusParserV1",
          "summary": null,
          "title": "Program-algorithmic methods to improve the accuracy of computer solutions"
        },
        …
      ]
    },
    "articles_keywords": {
      "research_areas": [
        "Numerical Methods",
        "High Performance Computing",
        "Algorithms"
      ],
      "research_subjects": [
        "Parallel Computing",
        "Numerical Linear Algebra",
        "Computational Mathematics"
      ],
      "science_branches": [
        "Numerical Analysis",
        "Computer Science",
        …
      ]
    },
    "career": { … },
    "errors": {},
    "grants": {
      "EUCommission": {
        "Algorithms": [
          {
            "amount": [null],
            "currency": null,
            "end_date": null,
            "identifier": "HOP_ON_PROJECT101080142",
            "match_by_key": "Algorithms",
            "meta": {
              "apiVersion": "2.120",
              "database": "SEDIA",
              "language": "en",
              "programmePeriod": null
            },
            "source": "EuropeanCommissionGrantSearcher",
            "start_date": "2022-11-01T01:00:00.000+0100",
            "summary": "Efficient QUantum ALgorithms for IndusTrY",
            "title": "Efficient QUantum ALgorithms for IndusTrY",
            "url": "https://ec.europa.eu/info/fundingtenders/opportunities/horizon/hop-on/101080142",
            "uuid": "ca8e2e5a-8120-4abc-b264-3e0a84d62897"
          },
          …
        ]
      }
    },
    "input_keywords": [],
    "person": {
      "first_name": "Олена",
      "last_name": "Ніколаєвська",
      "middle_name": "Анатоліївна",
      "orcid": "0000-0002-5145-0189",
      "scopus_id": "6503942582"
    }
  },
  "status": "Completed successfully"
}
        </preformat>
      </sec>
      <sec id="sec-2-6">
        <title>2.6. Applied Usage</title>
        <p>The microservice is intended to be used as part of the infrastructure for a system that searches
for grants from numerous open sources on the Internet. The system will:</p>
        <p>Allow organizations and scientists to create accounts and fill out profiles.</p>
        <p>Enable on-demand searches for grants based on the profiles.</p>
        <p>Allow companies (organizations) to post tasks and search for performers among registered users
(scientists and organizations).</p>
        <p>Send daily notifications about new grants in the sources, ensuring that researchers are always
up-to-date with the latest opportunities.</p>
        <p>The system will feature a user-friendly interface where researchers can input their profile
information and receive personalized grant recommendations. It will also provide dashboards for
monitoring search results and managing profiles.</p>
        <p>The microservice can be integrated with other research management systems, allowing seamless
data exchange and enhancing the overall efficiency of research administration.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Conclusions</title>
      <p>The principles, architecture, and technologies for creating microservices for intelligent search using
artificial intelligence technologies such as GPT are proposed, which allows creating a scalable,
reliable, and efficient GrantsForScience search system. Intelligent search for scientific grants
significantly improves the efficiency and accuracy of research funding discovery. This approach not
only saves researchers time and effort, but also ensures that they do not miss valuable funding
opportunities. The next steps toward a comprehensive grant search system are: expanding the
functionality to cover more sources and provide more detailed search results; scaling the search by
adding more parsers and search engines to process more data and improve search accuracy;
implementing caching mechanisms to reduce the number of queries and speed up the search process;
and unit testing and technical updates, continuously improving the system through thorough testing
and regular updates.</p>
      <p>[19] Meta, URL: https://about.meta.com/company-info/
[20] Llama 3.1 405B, URL: https://ai.meta.com/blog/meta-llama-3-1/
[21] Claude 3.5 Sonnet, URL: https://www.anthropic.com/news/claude-3-5-sonnet
[22] Google Gemini Pro, URL: https://deepmind.google/technologies/gemini/
[23] Scopus Database, URL: https://www.scopus.com/
[24] ORCID, URL: https://info.orcid.org/what-is-orcid/
[25] EU Fundings &amp; Tenders Portal, URL: https://ec.europa.eu/info/fundingtenders/opportunities/portal
[26] Scopus Search API, URL: https://dev.elsevier.com/documentation/ScopusSearchAPI.wadl
[27] Docker Compose Documentation, URL: https://docs.docker.com/compose/
[28] Flask - Python Web Framework, URL: https://flask.palletsprojects.com/en/2.0.x/
[29] Celery - Distributed Task Queue, URL: https://docs.celeryproject.org/en/stable/
[30] Redis Documentation, URL: https://redis.io/about/
[31] DigitalOcean Hosting, URL: https://www.digitalocean.com/
[32] Postman HTTP Client, URL: https://www.postman.com/</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, I. Polosukhin, Attention is All You Need, NeurIPS (2017). doi:10.48550/arXiv.1706.03762.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] J. Devlin, M. Chang, K. Lee, K. Toutanova, BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, NAACL-HLT (2019). doi:10.48550/arXiv.1810.04805.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, D. Amodei, Language Models are Few-Shot Learners, NeurIPS (2020). doi:10.48550/arXiv.2005.14165.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever, Language Models are Unsupervised Multitask Learners, OpenAI Blog (2019). URL: https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] A. Radford, K. Narasimhan, T. Salimans, I. Sutskever, Improving Language Understanding by Generative Pre-Training, OpenAI Blog (2018). URL: https://openai.com/research/language-understanding-generative-pre-training.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] B. J. Jansen, S. Jung, J. Salminen, Employing large language models in survey research, Natural Language Processing Journal, Volume 4 (September 2023). doi:10.1016/j.nlp.2023.100020.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] W. X. Zhao, K. Zhou, J. Li, T. Tang, X. Wang, Y. Hou, Y. Min, B. Zhang, J. Zhang, Z. Dong, et al., A survey of large language models, arXiv preprint arXiv:2303.18223 (2023).</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Ce</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Qian</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Chen</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Jun</given-names>
            <surname>Yu</surname>
          </string-name>
          , Yixin Liu, Guangzhou Wang, Kai Zhang, Cheng Ji, Qiben Yan,
          <string-name>
            <given-names>Lifang</given-names>
            <surname>He</surname>
          </string-name>
          , et al.
          <article-title>A comprehensive survey on pretrained foundation models: A history from BERT to ChatGPT</article-title>
          .
          <source>arXiv preprint arXiv:2302.09419</source>
          ,
          <year>2023</year>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Dmytro</given-names>
            <surname>Lande</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Leonard</given-names>
            <surname>Strashnoy</surname>
          </string-name>
          .
          <article-title>GPT Semantic Networking: A Dream of the Semantic Web - The Time is Now</article-title>
          . Kyiv: Engineering,
          <year>2023</year>
          . 168 p. ISBN 978-966-2344-94-3
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] OpenAI, URL: https://openai.com/</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Aymen</given-names>
            <surname>El Amri</surname>
          </string-name>
          .
          <article-title>The art and science of developing intelligent apps with OpenAI GPT-3, DALL·E 2, CLIP, and Whisper - Suitable for learners of all levels</article-title>
          . Kindle Edition,
          <year>2023</year>
          . 378 p.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] ChatGPT, URL: https://openai.com/chatgpt/</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] OpenAI GPT-3, URL: https://openai.com/index/gpt-3-apps/</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] OpenAI GPT-3.5, URL: https://openai.com/index/gpt-3-5-turbo-fine-tuning-and-api-updates/</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] OpenAI GPT-4, URL: https://openai.com/index/gpt-4/</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] GitHub Copilot, URL: https://github.com/features/copilot</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] Jasper.ai, URL: https://www.jasper.ai/comparison/jasper-vs-chatgpt</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] Algolia, URL: https://www.algolia.com/doc/</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>