<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Towards AI-Supported Research: a Vision of the TIB AIssistant</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sören Auer</string-name>
          <email>auer@tib.eu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Allard Oelen</string-name>
          <email>allard.oelen@tib.eu</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mohamad Yaser Jaradeh</string-name>
          <email>jaradeh@l3s.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mutahira Khalid</string-name>
          <email>mutahira.khalid@tib.eu</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Farhana Keya</string-name>
          <email>farhana.keya@tib.eu</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sasi Kiran Gaddipati</string-name>
          <email>sasi.gaddipati@tib.eu</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jennifer D'Souza</string-name>
          <email>jennifer.dsouza@tib.eu</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lorenz Schlüter</string-name>
          <email>lorenz.schlueter@stud.uni-hannover.de</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Amirreza Alasti</string-name>
          <email>amirreza.alasti@stud.uni-hannover.de</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gollam Rabby</string-name>
          <email>gollam.rabby@l3s.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Azanzi Jiomekong</string-name>
          <email>jiomekong@tib.eu</email>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oliver Karras</string-name>
          <email>oliver.karras@tib.eu</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>L3S Research Center, Leibniz University of Hannover</institution>
          ,
          <addr-line>Hannover</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Leibniz University of Hannover</institution>
          ,
          <addr-line>Hannover</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>TIB - Leibniz Information Centre for Science and Technology</institution>
          ,
          <addr-line>Hannover</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>University of Yaounde 1</institution>
          ,
          <addr-line>Yaounde</addr-line>
          ,
          <country country="CM">Cameroon</country>
        </aff>
      </contrib-group>
      <abstract>
<p>The rapid advancements in Generative AI and Large Language Models promise to transform the way research is conducted, potentially offering unprecedented opportunities to augment scholarly workflows. However, effectively integrating AI into research remains a challenge due to varying domain requirements, limited AI literacy, the complexity of coordinating tools and agents, and the unclear accuracy of Generative AI in research. We present the vision of the TIB AIssistant, a domain-agnostic human-machine collaborative platform designed to support researchers across disciplines in scientific discovery, with AI assistants supporting tasks across the research life cycle. The platform offers modular components - including prompt and tool libraries, a shared data store, and a flexible orchestration framework - that collectively facilitate ideation, literature analysis, methodology development, data analysis, and scholarly writing. We describe the conceptual framework, system architecture, and implementation of an early prototype that demonstrates the feasibility and potential impact of our approach.</p>
      </abstract>
      <kwd-group>
        <kwd>AI-Supported Research</kwd>
        <kwd>LLMs for Science</kwd>
        <kwd>Scholarly AI Platform</kwd>
        <kwd>Scholarly Research Assistant</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The developments of Generative AI, and specifically Large Language Models (LLMs), have already had a
significant impact on many areas of society [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. In the scientific domain, LLMs
are increasingly utilized, for example, to assist researchers with academic writing [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. LLMs are used
across a wide variety of scholarly domains, such as medicine in the life sciences [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], the social sciences [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ],
chemistry [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], law [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], and coding in computer science [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        While many approaches have been proposed and demonstrated, it remains challenging for researchers
to get started with LLMs in their field. The ability of individual researchers to optimally leverage AI
for their research strongly depends on their AI literacy, i.e., their ability to evaluate, communicate
with, and collaborate using AI [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. AI can be used to support domain-independent tasks, such as
finding related work, assisting with paper authoring, and proofreading, as well as domain-specific
tasks, including supporting methodologies, implementations, or evaluations. While the possibilities
are virtually unlimited, the benefits that a single researcher gains from AI-assisted research heavily
depend on the user and the specifics of her research work, which determine the required prompts and
the tasks that can be outsourced to the LLM. Prompt engineering is a crucial aspect of effective LLM
usage [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], but can be a bottleneck for non-AI experts [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Even if researchers possess the necessary
skills to operate LLMs, research work relies on diverse tools designed to support specific tasks, such
as data analysis in computing environments like R Studio or digital libraries that support knowledge
discovery. The effective integration of Generative AI with the diverse tools used in research remains a
challenge. For example, the integration of an LLM-based assistant with digital libraries can provide
additional context via Retrieval-Augmented Generation [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] or by calling external services using Tool
Calling [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>[Figure 1 (caption fragment): ... Protocol) servers consisting of collections of tools, core system modules, and the system’s design principles.]</p>
      <p>Based on these considerations, we identified the following challenges for AI-assisted research:
• Challenge 1: lacking domain-specific AI literacy to leverage AI for research tasks effectively.
• Challenge 2: the skill to effectively engineer prompts and context injection.
• Challenge 3: leveraging existing tools and services into AI workflows and providing appropriate user interfaces for them.
• Challenge 4: the technical capability to organize and orchestrate different AI agents to accomplish a single task.
In this work, we present our vision for an AI-supported, domain-agnostic research platform, named
TIB AIssistant. Figure 1 depicts the conceptual framework of our approach. The platform serves
as a central repository for scholarly AI agents and their corresponding prompts. Additionally, the
platform provides the scholarly tools necessary to accomplish research tasks. To serve researchers
across various domains, we argue that the flexibility of the system is crucial for accommodating the
diverse requirements and use cases arising from diverse research work. Figure 2 illustrates an example
research life cycle within the TIB AIssistant. The user begins by using the TIB AIssistant to generate
ideas, proceeds through the different phases of the life cycle (supported by various agents), and utilizes
external tools as needed. The data store stores results from different agents and makes them available
to other agents.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Vision of the TIB AIssistant</title>
      <p>Our vision for the TIB AIssistant is to empower researchers through an AI-supported, human-centered,
and domain-agnostic platform that redefines the conduct of scholarly research. We envision a
collaborative research environment where humans and machines co-create knowledge. Rather than aiming
for full automation, the TIB AIssistant centers on human-machine collaboration, enabling researchers
to retain control, orchestrate processes, and critically evaluate AI-generated results throughout the
research life cycle.</p>
      <p>At the core of this vision is a flexible, modular, and transparent infrastructure that facilitates AI
integration without imposing rigid workflows. The TIB AIssistant is conceived as a lightweight yet
powerful research hub where customizable AI agents, scholarly tools, and curated prompts work
together seamlessly. Each element of the system — ranging from prompt libraries to external tool
integrations — is designed to be interoperable, extensible, and openly accessible.</p>
      <p>We aim to lower the barrier to AI adoption in academia by addressing the four key challenges faced by
researchers: i) understanding the scope of AI in domain-specific tasks, ii) developing effective prompts
and contextual inputs, iii) integrating external scholarly tools into AI workflows, and iv) coordinating
diverse AI agents to perform complex research processes.</p>
      <p>Inspired by principles from Integrated Development Environment (IDE) interfaces, the TIB AIssistant
provides a research-friendly environment where users can initiate ideation and literature exploration,
formulate research questions, iterate on methodologies, analyze and synthesize results, and ultimately
author and refine scholarly publications. This vision is grounded in a set of foundational design
principles: Personalization, Customizability, Trustworthiness and Transparency, Error-tolerance, and Open
science and community engagement. Ultimately, the TIB AIssistant aspires to become a central AI hub
for scholarly research, not just a tool, but an evolving community-driven platform that transforms how
research is conceptualized, conducted, and communicated in the age of generative AI.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Related Work</title>
      <p>The use of LLMs to support scholarly activities is a growing field. Existing approaches can be broadly
categorized into single-task assistants and those that offer a more integrated, multi-task framework.</p>
      <sec id="sec-3-1">
        <title>3.1. Single Task Assistants</title>
        <p>
          A significant body of work focuses on leveraging LLMs to streamline specific, often labor-intensive,
components of the research process. For instance, the STORM approach [13] provides systematic
support for the research and pre-writing stages, which form the core of a literature review. Similarly,
ResearchAgent [14] is an LLM-powered system focused on research idea generation by defining
problems, proposing methods, and designing experiments. The system utilizes collaborative LLM-powered
reviewing agents to refine these ideas iteratively based on feedback. Another specialized application of
LLM agents is simulating complex scholarly interactions. The AgentReview framework [15], for
example, utilizes LLM-based agents to simulate the entire peer-review process, allowing for the study of its
dynamics, including reviewer bias and the influence of author identity. Also, there are domain-specific
tools such as Name2SMILES (for converting molecule names to SMILES), ReactionPlanner (for multi-step
synthesis planning), PatentCheck (for checking compound patent status), and SafetySummary (for
retrieving safety information) from the ChemCrow platform [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ], for research-related tasks in chemistry.
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Multi-Task Assistants</title>
        <p>Moving beyond single-task applications, a growing number of initiatives aim to create more
comprehensive, multi-faceted research assistants. These systems often integrate several capabilities to support
researchers throughout their workflow. Paper Copilot [16], for example, functions as a personalized
academic assistant that maintains a real-time updated database of research papers. It can derive a user’s
research profile, analyze the latest trending topics, and provide advisory services, thereby combining
multiple support functions into a single system. Furthermore, there is The AI Scientist [17], a framework
designed for fully automated, open-ended scientific discovery. This system represents a significant step
towards end-to-end automation by autonomously performing a sequence of research tasks: starting
with ideation based on existing literature, followed by experimentation, and finally paper authoring.
The result is a complete manuscript in LaTeX, which LLM agents also review.</p>
        <p>While these approaches demonstrate the potential of full automation, their rigid, pipeline-driven
nature highlights several challenges that our vision for the TIB AIssistant aims to address. The emphasis
on a fully autonomous process limits the role of the researcher, contrasting with our core principle of
human-machine collaboration, where the human expert orchestrates and validates each step. The fixed
workflow within the AI assistant also falls short of our goal of a customizable and flexible platform, where
users can modify prompts, select different LLMs, and integrate their tools across various disciplines.</p>
        <p>[Figure 2 (caption fragment): ... concrete examples of the three core system modules, as shown in Figure 1. External tools are listed, along with items stored within the data store.]</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Framework for AI-Assisted Research</title>
      <p>We now describe the main components that form the foundational framework of our envisioned
approach toward AI-supported research. The framework’s concepts are discussed next and summarized
in Figure 1. The platform provides an integrated environment for researchers, much like an IDE
for software developers. It consists of a Graphical User Interface (GUI) allowing users to interact
with various agents. We consider this platform a lightweight wrapper that integrates the different
components listed below. Where existing approaches or tools are available, the platform should integrate
these services instead of attempting to replicate their functionality.</p>
      <sec id="sec-4-1">
        <title>4.1. Core System Modules</title>
        <sec id="sec-4-1-1">
          <title>Prompt Library</title>
          <p>The Prompt Library is a collection of system prompts tailored toward specific tasks
of the research life cycle. A curated list of prompts minimizes the need for researchers to create their
own through time-consuming trial and error. The prompt library also enables researchers to
share their approaches with others easily. In addition, metadata must be assigned to each
prompt to indicate which task it addresses (e.g., research question formulation, finding related literature,
etc.). Multiple variants of a prompt can exist for the same task, providing alternatives in case a
prompt does not produce the expected result. Finally, users should be able to see and modify prompts
when using them within the platform.</p>
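          <p>The entry structure described above can be sketched as follows. This is an illustrative sketch only: the class and field names (PromptEntry, task, variant, tags) are assumptions, not the platform’s actual schema.</p>

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a prompt-library entry; all field names are
# illustrative assumptions, not the platform's actual schema.
@dataclass
class PromptEntry:
    task: str                 # research task addressed, e.g. "related_work"
    text: str                 # the system prompt itself
    variant: int = 1          # multiple variants may exist for the same task
    tags: list = field(default_factory=list)

class PromptLibrary:
    def __init__(self):
        self._entries = []

    def add(self, entry: PromptEntry):
        self._entries.append(entry)

    def for_task(self, task: str):
        """Return all prompt variants registered for a task."""
        return [e for e in self._entries if e.task == task]

library = PromptLibrary()
library.add(PromptEntry(
    task="related_work",
    text="You are a scholarly assistant. Given the research idea below, "
         "suggest search queries for finding related literature.",
    tags=["literature", "domain-agnostic"],
))
library.add(PromptEntry(
    task="related_work",
    text="List the five most relevant subfields for the following idea, "
         "then propose one survey-paper query per subfield.",
    variant=2,
))

print(len(library.for_task("related_work")))  # two variants for the same task
```

          <p>Registering several variants under the same task key directly supports the fallback behavior described above: if one prompt fails to produce the expected result, an alternative can be tried.</p>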
        </sec>
        <sec id="sec-4-1-2">
          <title>Tool Library</title>
          <p>The Tool Library integrates external services into the platform. This makes it possible
to connect the platform to external tools, for example, to fetch additional publication data from Crossref
and ORCID, or to fetch related work via Semantic Scholar [18], ORKG [19], or ORKG Ask [20]. Tools are
called automatically, where the LLM decides, based on the description of the tool and the user’s input,
whether a tool should be called. To ensure tools can be added dynamically, the Model Context
Protocol (MCP) [21] can be used. MCP provides a standardized way to give LLMs access to external
resources; in our case, we are specifically interested in tool calling. This enables users to
integrate existing scholarly MCP servers, allowing them to access a range of scholarly tools quickly
and easily. For developers, it is possible to set up an MCP server to make their tools available to the
platform. To facilitate external tool integration via MCP servers, we plan to develop an MCP GUI tool,
enabling the easy setup of an MCP server for tools that utilize REST endpoints.</p>
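          <p>The description-based tool selection described above can be sketched as follows. This is a minimal illustration, not the MCP wire protocol: the tool names, parameter schemas, and the keyword-matching stand-in for the LLM’s decision are all assumptions.</p>

```python
# Minimal sketch of description-based tool selection. The schema shape
# mirrors the JSON tool definitions used by common LLM APIs; the tool
# names and endpoints are illustrative assumptions.
TOOLS = [
    {
        "name": "fetch_crossref_metadata",
        "description": "Fetch publication metadata for a DOI from Crossref.",
        "parameters": {"doi": "string"},
    },
    {
        "name": "search_related_work",
        "description": "Find related papers for a topic via a scholarly search service.",
        "parameters": {"query": "string"},
    },
]

def select_tool(user_input: str):
    """Stand-in for the LLM's decision: match the request against the tool
    descriptions. In practice, the model receives the schemas and returns
    a structured tool call (or none)."""
    text = user_input.lower()
    if "doi" in text:
        return "fetch_crossref_metadata"
    if "related" in text or "literature" in text:
        return "search_related_work"
    return None  # answer directly, no tool call

print(select_tool("Find related literature on knowledge graphs"))
```

          <p>An MCP server would expose such tool descriptions dynamically, so new scholarly services become selectable without changing the platform itself.</p>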
          <p>Data Store The ability of different agents to communicate with each other can be accomplished
via a centralized data store. Compared to keeping all generated content in the context of the LLM,
this approach has several benefits: the constraint of the context window size is less problematic since the
data is stored in a separate store, and the agents are self-contained. They can be used in isolation,
making it easier to reason about what is happening within each agent. The data store can be a relational
database, storing specific data under a predefined key (e.g., research questions, bibliography, etc.). When
necessary, the database can be accessed, and the respective content is added to the context. A more
advanced approach could provide the LLM with a tool that allows it to access the database, enabling it
to read from and write to the database automatically.</p>
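          <p>A minimal sketch of such a keyed data store, assuming a relational backend (here SQLite) and illustrative key names:</p>

```python
import sqlite3
import json

# Sketch of the centralized data store: agents write results under
# predefined keys, and later agents read them back instead of carrying
# everything in the LLM context window. Key names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE store (key TEXT PRIMARY KEY, value TEXT)")

def put(key, value):
    """Store any JSON-serializable agent output under a predefined key."""
    conn.execute("INSERT OR REPLACE INTO store VALUES (?, ?)",
                 (key, json.dumps(value)))

def get(key):
    """Fetch a stored value, or None if no agent has produced it yet."""
    row = conn.execute("SELECT value FROM store WHERE key = ?", (key,)).fetchone()
    return json.loads(row[0]) if row else None

# An ideation agent stores research questions; a writing agent reads them.
put("research_questions", ["How can MCP servers expose scholarly tools?"])
print(get("research_questions")[0])
```

          <p>Because each agent only reads and writes named keys, agents stay self-contained and can be run or inspected in isolation, as argued above.</p>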
        </sec>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. System Design Principles</title>
        <p>Human-Machine Collaboration In the spectrum between full automation of research and humans
doing most of the required tasks, a hybrid approach where humans and machines collaborate offers
the best of both worlds. In such a hybrid approach, the human researcher primarily orchestrates,
directs, and reviews AI-supported processes. In this model, researchers have control at all times and
review and evaluate the output created by the AI at each step. This means that user interfaces must be
designed to go beyond the conventional conversational prompt-response style, allowing users to modify
intermediary results and decide when to proceed to the next step. At the same time, the conversational
setup supports processes such as iterative refinement, enabling users to further refine AI-generated
responses in a human-in-the-loop manner. We argue that this type of control is essential for an AI-supported
research assistant to be both useful and adopted by researchers. Therefore, the interface should not aim
to provide an automated research pipeline, but instead offer a highly customizable environment that
researchers can use to integrate AI support into their workflows.</p>
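        <p>The step-gated collaboration described above can be sketched as a loop in which every AI output passes through a human review before the next phase runs. The phase names and the callback-based review are illustrative assumptions; in the platform this role is played by the user interface.</p>

```python
# Sketch of the step-gated, human-in-the-loop flow: the researcher reviews
# and may edit each AI-generated draft before the next phase runs. `review`
# stands in for the user interface; here it is a simple callback.
def run_pipeline(phases, review):
    context = {}
    for name, agent in phases:
        draft = agent(context)          # agent produces an intermediary result
        approved = review(name, draft)  # user edits, accepts, or discards it
        if approved is None:            # discarded: skip this step's output
            continue
        context[name] = approved        # approved result feeds later phases
    return context

# Two illustrative phases; later phases read earlier approved outputs.
phases = [
    ("ideas", lambda ctx: "LLM-generated idea draft"),
    ("questions", lambda ctx: f"questions based on: {ctx.get('ideas', '')}"),
]
# A toy "review" that edits every draft (here: uppercases it).
result = run_pipeline(phases, review=lambda name, draft: draft.upper())
print(result["ideas"])
```

        <p>The key design point is that the loop never advances on the raw draft: only the human-approved version enters the shared context, which is what keeps the researcher in control at every step.</p>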
        <p>Personalization A sophisticated memory system is crucial for enhancing the platform’s effectiveness
through deep personalization. This feature enables dynamic context engineering [22], a process in which
only the most relevant information for a given task is selectively retrieved and then provided to
the LLMs, optimizing both relevance and computational efficiency. Beyond concrete tasks, the memory
system constructs a user profile that learns about domain expertise, stylistic preferences, formatting
conventions, and preferred terminology over time. Moreover, the memory component maintains a
comprehensive record of the user’s research history. This historical knowledge would empower the
assistant to guide and coordinate various (sub-)agents, ensuring their outputs are consistent and aligned
with the overarching user preferences.</p>
        <p>Customizability The platform should be domain-agnostic and sufficiently flexible to support different
workflows. Users should be able to customize the platform to support their use cases. This begins with
the prompts, where users can try out different variants and edit them as needed. Additionally, the list of
tools that the LLM can execute during a chat session should be modifiable to limit the scope of available
tools and better direct the LLM in selecting the appropriate tool. As previously mentioned, the platform
should serve as a lightweight tool connecting diferent services. The ability to customize the interface
is therefore crucial to support a variety of use cases.</p>
        <p>Transparency and Trustworthiness For transparency and reproducibility, provenance data has to be
recorded for all generated artifacts, capturing the creators, model name and version, system and user
prompts, invoked tools, etc. To provide complete transparency, this provenance data should be published
alongside the research paper as research data. We envision this data being published in a
machine-readable format, for example, via RO-Crates [23], to facilitate machine actionability. These
aspects will also help users verify the accuracy and originality of the generated content.</p>
        <p>Error-Tolerance With an error-tolerant interface, we ensure that users remain in control and can
modify any data generated by the AI. This means that all messages, including system and user messages,
as well as the generated data, should be modifiable by the user. Additionally, the user should be able to
see which data is provided as input when the LLM calls a tool. This helps determine whether the tool
was called as expected. In line with the aforementioned principle of Human-Machine Collaboration,
the user can decide which data to use and which to discard. Regarding external tool calls, since we
do not control these services ourselves, we should assume that they may respond differently
than expected (e.g., because of a temporary issue, rate limiting, or changes in API specifications). In
such cases, the AIssistant should gracefully handle such errors.</p>
        <p>Flexibility We envision a centrally hosted platform. Users of the platform should have the flexibility
to choose the models and LLM providers they prefer. This also serves the purpose of selecting providers
based on geographical location, legal requirements, and privacy concerns, among other factors. Smaller
models can handle simple tasks, while larger models can execute more complex tasks. A limited number
of free tokens can be provided to each user per day, managed through an authentication
system. A Bring Your Own Key (BYOK) approach can be used to allow users to use as many tokens as
necessary. As an alternative, users should also be able to run the platform locally on their computers,
as the source code is publicly available.</p>
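        <p>As an illustration of the provenance record described under Transparency and Trustworthiness, the fields below sketch what could be captured for one generated artifact before packaging it, e.g., into an RO-Crate. All field names and values are illustrative assumptions, not a fixed schema.</p>

```python
import json

# Illustrative provenance record for one AI-generated artifact, covering the
# aspects named in the text: creators, model name and version, system and
# user prompts, and invoked tools. Field names are assumptions.
provenance = {
    "artifact": "related-work-section-draft",
    "creator": {
        "human": "orcid:0000-0000-0000-0000",   # placeholder ORCID
        "agent": "related-work-assistant",
    },
    "model": {"name": "example-llm", "version": "2025-01"},
    "system_prompt": "You are a scholarly writing assistant...",
    "user_prompt": "Summarize the retrieved papers into a related-work section.",
    "invoked_tools": ["search_related_work"],
}

# Serializing to JSON keeps the record machine-readable, so it can be
# published alongside the paper as research data.
record = json.dumps(provenance, indent=2)
print(record.splitlines()[0])
```

        <p>Publishing such records alongside each artifact lets readers trace exactly which model, prompts, and tools produced a given piece of text.</p>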
        <p>Community Driven We aim to develop the platform in collaboration with academics from various
domains. As there are virtually unlimited use cases for AI-supported research, the platform relies on
community contributions to create prompts and add tools. An additional way to contribute is by adding
functionalities to the platform itself.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion and Outlook</title>
      <p>In this work, we laid the foundation for an AI-supported research platform. To address challenge 1, we
propose a library of prompts that showcases to researchers the types of tasks that can be performed by
LLMs and the tools required for these tasks. Challenge 2 is addressed by reducing the need to engineer
prompts and by automatically providing the required context for a prompt through the data store.
Challenge 3 is tackled by integrating external scholarly tools via tool calls and external MCP servers.
Finally, challenge 4 is addressed by providing predefined research life cycle workflows, where different
agents can interact with each other by means of sharing data via the data store.</p>
      <p>A key aspect of our approach is the community effort to create and curate prompts and tools in such
a manner that they are helpful for other users in the community. We aim to provide the technical means
to make this possible. This includes the addition of community features, such as voting for helpful
prompts and sharing custom workflows with other users. In the end, we envision that the prompt
library will contain different versions of prompts aiming to accomplish the same research task. This
follows the assumption that there is no one-size-fits-all approach, but that different prompt variants are
helpful for diferent use cases.</p>
      <p>An initial prototype has been implemented, integrating the proposed framework concepts into a workable
research life cycle. A demonstration of this approach has been published [24]. Figure 2 depicts parts of the
prototype implementation. Only domain-agnostic assistants are implemented in the prototype (i.e.,
ideation, research questions, state of the art, paper writing). Outputs from, for example, ideation are
utilized by other assistants, such as when formulating research questions and during paper writing.
Furthermore, the tools and data store items listed in the figure are also implemented. The prototype
implementation shows the feasibility of our approach and demonstrates how the core system modules
are integrated. The source code is available online (https://gitlab.com/TIBHannover/orkg/tib-aissistant/web-app). We plan to further develop the prototype into a
publicly available online service, where researchers can get started with AI-assisted research.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>We thank our colleague Markus Stocker for his valuable comments in reviewing this paper. This work
was co-funded by NFDI4DataScience (ID: 460234259) and by the TIB Leibniz Information Centre for
Science and Technology.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors utilized ChatGPT, Gemini, and Grammarly to draft
content, enhance content, paraphrase and reword, improve writing style, and perform grammar and
spelling checks. After using these services, the authors reviewed and edited the content as needed and
take full responsibility for the publication’s content.</p>
      <p>[13] Y. Shao, Y. Jiang, T. A. Kanell, P. Xu, O. Khattab, M. S. Lam, Assisting in writing
Wikipedia-like articles from scratch with large language models, 2024. doi:10.48550/arXiv.2402.14207.
[14] J. Baek, S. K. Jauhar, S. Cucerzan, S. J. Hwang, ResearchAgent: Iterative research idea generation
over scientific literature with large language models, 2025. doi:10.48550/arXiv.2404.07738.
[15] Y. Jin, Q. Zhao, Y. Wang, H. Chen, K. Zhu, Y. Xiao, J. Wang, AgentReview: Exploring peer review
dynamics with LLM agents, 2024. doi:10.48550/arXiv.2406.12708.
[16] G. Lin, T. Feng, P. Han, G. Liu, J. You, Paper Copilot: A self-evolving and efficient LLM system for
personalized academic assistance, 2024. doi:10.48550/arXiv.2409.04593.
[17] C. Lu, C. Lu, R. T. Lange, J. Foerster, J. Clune, D. Ha, The AI Scientist: Towards fully automated
open-ended scientific discovery, 2024. doi:10.48550/arXiv.2408.06292.
[18] R. Kinney, C. Anastasiades, R. Authur, I. Beltagy, J. Bragg, A. Buraczynski, I. Cachola, S. Candra,
Y. Chandrasekhar, A. Cohan, et al., The Semantic Scholar open data platform, 2025. doi:10.48550/arXiv.2301.10140.
[19] S. Auer, et al., Open Research Knowledge Graph: A Large-Scale Neuro-Symbolic Knowledge
Organization System, in: Handbook on Neurosymbolic AI and Knowledge Graphs, IOS Press,
2025. doi:10.3233/FAIA250216.
[20] A. Oelen, M. Y. Jaradeh, S. Auer, ORKG Ask: A neuro-symbolic scholarly search and exploration
system, arXiv preprint arXiv:2412.04977 (2024). doi:10.48550/arXiv.2412.04977.
[21] Introduction - Model Context Protocol, https://modelcontextprotocol.io/introduction, 2024. [Accessed 23-07-2025].
[22] L. Mei, J. Yao, Y. Ge, Y. Wang, B. Bi, Y. Cai, J. Liu, M. Li, Z.-Z. Li, D. Zhang, C. Zhou, J. Mao, T. Xia,
J. Guo, S. Liu, A survey of context engineering for large language models, 2025. doi:10.48550/arXiv.2507.13334.
[23] S. Soiland-Reyes, P. Sefton, M. Crosas, L. J. Castro, F. Coppens, J. M. Fernández, D. Garijo, B. Grüning,
M. L. Rosa, S. Leo, E. Carragáin, M. Portier, A. Trisovic, R.-C. Community, P. Groth, C. Goble,
Packaging research artefacts with RO-Crate, Data Science 5 (2022) 97–138. doi:10.3233/DS-210053.
[24] A. Oelen, S. Auer, TIB AIssistant: A platform for AI-supported research across research life cycles,
ISWC 2025 Companion Volume, November 2–6, 2025, Nara, Japan (2025).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Haque</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Exploring chatgpt and its impact on society</article-title>
          ,
          <source>AI and Ethics</source>
          <volume>5</volume>
          (
          <year>2025</year>
          )
          <fpage>791</fpage>
          -
          <lpage>803</lpage>
          . doi:10.1007/s43681-024-00435-4.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>W.</given-names>
            <surname>Liang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Lepp</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Ji</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Huang</surname>
          </string-name>
          , et al.,
          <article-title>Mapping the increasing use of LLMs in scientific papers</article-title>
          ,
          <source>arXiv preprint arXiv:2404.01268</source>
          (
          <year>2024</year>
          ). doi:10.48550/arXiv.2404.01268.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Thirunavukarasu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. S. J.</given-names>
            <surname>Ting</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Elangovan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Gutierrez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. F.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. S. W.</given-names>
            <surname>Ting</surname>
          </string-name>
          ,
          <article-title>Large language models in medicine</article-title>
          ,
          <source>Nature Medicine</source>
          <volume>29</volume>
          (
          <year>2023</year>
          )
          <fpage>1930</fpage>
          -
          <lpage>1940</lpage>
          . doi:10.1038/s41591-023-02448-8.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>I.</given-names>
            <surname>Grossmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Feinberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. C.</given-names>
            <surname>Parker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. A.</given-names>
            <surname>Christakis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. E.</given-names>
            <surname>Tetlock</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. A.</given-names>
            <surname>Cunningham</surname>
          </string-name>
          ,
          <article-title>AI and the transformation of social science research</article-title>
          ,
          <source>Science</source>
          <volume>380</volume>
          (
          <year>2023</year>
          )
          <fpage>1108</fpage>
          -
          <lpage>1109</lpage>
          . doi:10.1126/science.adi1778.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Bran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Cox</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Schilter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Baldassari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. D.</given-names>
            <surname>White</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Schwaller</surname>
          </string-name>
          ,
          <article-title>Augmenting large language models with chemistry tools</article-title>
          ,
          <source>Nature Machine Intelligence</source>
          <volume>6</volume>
          (
          <year>2024</year>
          )
          <fpage>525</fpage>
          -
          <lpage>535</lpage>
          . doi:10.1038/s42256-024-00832-8.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Siino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Falco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Croce</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Rosso</surname>
          </string-name>
          ,
          <article-title>Exploring LLMs applications in law: a literature review on current legal NLP approaches</article-title>
          ,
          <source>IEEE Access</source>
          (
          <year>2025</year>
          ). doi:10.1109/ACCESS.2025.3533217.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kazemitabaar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ye</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Z.</given-names>
            <surname>Henley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Denny</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Craig</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Grossman</surname>
          </string-name>
          ,
          <article-title>CodeAid: Evaluating a classroom deployment of an LLM-based programming assistant that balances student and educator needs</article-title>
          ,
          <source>in: Proceedings of the 2024 chi conference on human factors in computing systems</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>20</lpage>
          . doi:10.1145/3613904.3642773.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>D.</given-names>
            <surname>Long</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Magerko</surname>
          </string-name>
          ,
          <article-title>What is AI literacy? Competencies and design considerations</article-title>
          ,
          <source>in: Proceedings of the 2020 CHI conference on human factors in computing systems</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          . doi:10.1145/3313831.3376727.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>N.</given-names>
            <surname>Knoth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tolzin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Janson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Leimeister</surname>
          </string-name>
          ,
          <article-title>AI literacy and its implications for prompt engineering strategies</article-title>
          ,
          <source>Computers and Education: Artificial Intelligence</source>
          <volume>6</volume>
          (
          <year>2024</year>
          )
          <elocation-id>100225</elocation-id>
          . doi:10.1016/j.caeai.2024.100225.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Zamfirescu-Pereira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. Y.</given-names>
            <surname>Wong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Hartmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <article-title>Why Johnny can't prompt: how non-AI experts try (and fail) to design LLM prompts</article-title>
          ,
          <source>in: Proceedings of the 2023 CHI conference on human factors in computing systems</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>21</lpage>
          . doi:10.1145/3544548.3581388.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>P.</given-names>
            <surname>Lewis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Perez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Piktus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Petroni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Karpukhin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Goyal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Küttler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lewis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.-t.</given-names>
            <surname>Yih</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Rocktäschel</surname>
          </string-name>
          , et al.,
          <article-title>Retrieval-augmented generation for knowledge-intensive NLP tasks</article-title>
          ,
          <source>Advances in neural information processing systems</source>
          <volume>33</volume>
          (
          <year>2020</year>
          )
          <fpage>9459</fpage>
          -
          <lpage>9474</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>T.</given-names>
            <surname>Schick</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Dwivedi-Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Dessi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Raileanu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lomeli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Hambro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zettlemoyer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Cancedda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Scialom</surname>
          </string-name>
          ,
          <article-title>Toolformer: Language models can teach themselves to use tools</article-title>
          , in:
          <string-name>
            <given-names>A.</given-names>
            <surname>Oh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Naumann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Globerson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Saenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hardt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Levine</surname>
          </string-name>
          (Eds.),
          <source>Advances in Neural Information Processing Systems</source>
          , volume
          <volume>36</volume>
          ,
          Curran Associates, Inc.,
          <year>2023</year>
          , pp.
          <fpage>68539</fpage>
          -
          <lpage>68551</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>