<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>AWACopilot: A Secure On-Premise Large Language Model-Based Solution for Enhanced Patent Drafting</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mohamad Homam Mawaldi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Zenun Kastrati</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alexander Gustafsson</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>AWA Sweden AB</institution>
          ,
          <addr-line>Matrosgatan 1 211 18 Malmö</addr-line>
          ,
          <country country="SE">Sweden</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Linnaeus University</institution>
          ,
          <addr-line>Universitetsplatsen 1, 352 52 Växjö</addr-line>
        </aff>
      </contrib-group>
      <abstract>
        <p>Patent drafting is a complex and high-stakes process for securing intellectual property rights. During the patent prosecution phase, maintaining confidentiality is crucial, making cloud-based third-party services inadequate for patent drafting assistance due to data security concerns. This study proposes AWACopilot, a secure, on-premise solution comprising a web service that leverages open-source large language models (LLMs) to assist patent attorneys in the intricate patent application drafting process. AWACopilot generates key patent sections, such as the background, abstract, and detailed description, from human-crafted claims, addressing the data security risks posed by cloud-based AI services. Its modular architecture enables customization and adaptability to different patent tasks. Although challenges remain, including reliance on LLM capabilities and the need for rigorous content verification, this study demonstrates the potential for secure, AI-driven solutions to enhance patent drafting workflows.</p>
      </abstract>
      <kwd-group>
        <kwd>Intellectual Property</kwd>
        <kwd>Patent Drafting</kwd>
        <kwd>LLM</kwd>
        <kwd>Prompt Engineering</kwd>
        <kwd>Privacy</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        A nation’s investment in research and development (R&amp;D) with its research workforce is a key driver
of innovation, leading to a robust patent portfolio [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. This portfolio can subsequently stimulate GDP
growth and improve access to international markets. Thus, intellectual property law is not just a legal
formality but a vital mechanism for converting R&amp;D into tangible benefits. Notably, a study by the
European Patent Office (EPO) [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] revealed that early-stage patent filings enhance a startup’s chances of
obtaining venture capital funding by a factor of 6.4, while also significantly increasing the likelihood of
a successful exit for its investors. This underscores the advantages of patents for enterprises of all sizes.
      </p>
      <p>
        Patent prosecution, the legal and administrative process of obtaining a patent, begins with drafting an
application that defines the invention’s scope, as later additions of new subject matter are prohibited [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
Typically handled by patent attorneys because of its complexity, drafting demands both technical
and legal expertise for successful prosecution. A patent application’s key sections, including the background (contextualizing the
invention), claims (defining legal boundaries), description (detailing the invention), drawings (visual
support), and abstract (summary), must be consistent. Strategic claim drafting seeks broad protection
while ensuring novelty and non-obviousness, and incorporates narrower claims for litigation defense.
The remaining sections of the application are drafted with language consistent with the claims to fully
enable the invention without limiting it. This interplay between the terms introduced and the protection sought
across the description sections, together with the several stakeholders involved, makes patent drafting time-consuming and
intellectually demanding. Recognizing these challenges, practitioners and researchers have increasingly
turned to software tools and natural language processing (NLP) techniques to assist in the patent
prosecution process.
      </p>
      <p>
        Recent advancements in deep learning (DL), particularly the development of the transformer
architecture [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], have given rise to powerful large language models (LLMs) like ChatGPT [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] and led to a
new era of technological innovation. Despite the challenges posed by LLMs like hallucinations [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], their
impact on various industries is profound, such as healthcare [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], tourism [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], and the legal sector [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
The legal field, particularly in intellectual property (IP), stands to gain substantially from the intrinsic
NLP capabilities of LLMs. This is due to the sector’s reliance on extensive and complex documents
that require a deep comprehension of the specific meaning and implications of each term [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. LLMs,
equipped with word embeddings and attention mechanisms, could provide state-of-the-art performance
compared to classical machine learning &amp; DL approaches [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ].
      </p>
      <p>
        Patent claims, the heart and defining element of patents, receive significant attention in research
on utilizing LLMs for patent generation [
        <xref ref-type="bibr" rid="ref10 ref13">10, 13</xref>
        ]. In contrast, products in the commercial sector
aimed at end users adopt a more holistic approach, providing services that extend beyond mere LLM
patent claim generation. Software companies like ClaimMaster Software LLC [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], PowerPatent Inc.
[
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], and Rowan TELS Corp. [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] provide services ranging from patent proofreading and drafting to the
generation of office action responses enhanced by LLMs. Although some solutions may initially suggest
local processing, the computationally intensive nature of LLM inference typically necessitates routing
all AI-driven tasks to cloud-based infrastructure. This reliance on cloud services raises serious concerns
about data security and privacy. The possibility that major LLM providers like OpenAI, Google, and
Microsoft might use interaction data to improve their models poses a threat to the confidentiality of
patent applications during the crucial prosecution phase. Once the details of an invention become public
knowledge, the novelty can be destroyed. To mitigate these risks, this study proposes deploying a secure,
on-premise patent drafting solution backed by local LLMs that supports patent attorneys in drafting while
minimizing reliance on AI cloud services and ensuring greater control over sensitive data. This
approach leverages state-of-the-art open-source LLMs and prompt engineering techniques to generate
patent sections, such as the background, abstract, and detailed description, directly from human-crafted
patent claims, without requiring prior fine-tuning of LLMs. This proposed solution will be implemented
and deployed internally at AWA1, an international intellectual property firm offering a comprehensive
range of IP services.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Early adoption of LLMs for patent-related tasks can be traced back to the release of GPT-2. In [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ],
researchers demonstrated GPT-2’s ability to generate patent claims, observing its "unreasonably fast"
adaptation, as the model learned to produce text resembling the structure and style of patent claims
after fine-tuning on a dataset of half a million examples. The recommendations for future research from
[
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] were addressed in [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], where they fine-tuned GPT-2 on a considerably larger dataset of 11 million
patents, focusing on generating patent titles, abstracts, and claims. However, both studies concluded
that future research should concentrate on improving LLM generation, as it currently falls short of
achieving human-level performance in these tasks.
      </p>
      <p>
        Continuing to focus on patent claims, the study by [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] introduced a distinctive approach by employing
a GPT-based model, PatentGPT-J, which was pretrained on patent text rather than fine-tuned, in contrast
to the previously mentioned papers. The objective was to assess PatentGPT-J’s effectiveness in aiding
patent drafting, rather than generating complete texts. Specifically, the study aimed to evaluate the
reduction in keystrokes for typists with the next-word suggestions provided by the model. Interestingly,
larger parameter variants of the model did not outperform those with fewer parameters for this task.
InstructPatentGPT [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ], which was also based on PatentGPT-J, introduced a groundbreaking approach
in the field that transcends traditional fine-tuning. It enhanced its model using Reinforcement Learning
from Human Feedback (RLHF) to generate patent claims, expecting that this training method would
elevate the quality of the produced claims. The model learns to refine the language of the claims it
generates, adjusting aspects such as length and terminology by incorporating feedback related to the
"granted" and "pre-grant" statuses of patent applications during the prosecution process. Consistent
with previous studies, [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] noted that the claims produced still require significant improvements
to align with the standards required by patent offices. A more recent study, [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ], pre-trained and
fine-tuned four models—LLaMA-2-7B and 13B, as well as Mistral-7B and 8x7B—specifically for the
generation of biomedical patent claims. The research concludes that although the claims generated
by the LLMs exhibited potential, human oversight is essential to ensure that these claims satisfy the
critical patentability requirements of quality, novelty, and non-obviousness.
      </p>
      <p>
        In addition to patent claims, patent specifications have received considerable attention, as
demonstrated by the study conducted by [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. In their research, the authors utilized claims and drawings as
the foundation for generating the remainder of a patent application. They employed two pre-trained
models: a decoder-only model, GPT-J, which contains 6 billion parameters, and an encoder-decoder
model, T5, with 11 billion parameters. Both models were fine-tuned on a dataset of patents
specifically related to computing arrangements based on biological models. The evaluation of these models
revealed improvements in the specifications of the generated patents. However, the authors stressed
the importance of including patent attorneys in carefully reviewing the generated specifications to
ensure both quality and accuracy, as models hallucinate descriptions of claim features and drawing
descriptions. The necessity for including a patent attorney in the process is emphasized in the work of
[
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], which raises concerns about privacy and the interpretability of LLMs’ outputs. The conclusion
drawn is that LLMs should serve to augment patent attorneys’ expertise rather than replace them. The
authors suggest a human-in-the-loop workflow as a method to mitigate hallucinations and maintain
high-quality output for the designated tasks.
      </p>
      <p>
        Fine-tuning LLMs is a resource-intensive process, requiring substantial GPU resources, a suitable
fine-tuning dataset, and considerable time investment. This can hinder the rapid adaptation of LLMs
to emerging tasks and newly released foundation models within organizations. To mitigate these
challenges, techniques like Low-Rank Adaptation (LoRA) [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] have emerged as effective solutions.
LoRA optimizes model fine-tuning by adjusting only a small subset of parameters, known as low-rank
adapters. This method involves freezing the pre-trained weights and incorporating trainable rank
decomposition matrices, significantly reducing the number of trainable parameters while maintaining
or even improving performance on downstream tasks. Another line of research focused on prompt
engineering, where techniques like chain-of-thought prompting improve complex reasoning without
fine-tuning [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ]. Prompting has also been shown to outperform domain-specific fine-tuned models. For
example, GPT-4, guided by carefully designed prompts, surpassed Flan-PaLM 540B and Med-PaLM 2
on the MultiMedQA benchmark [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ]. These results suggest that prompt engineering could serve as a
powerful and efficient alternative to fine-tuning.
      </p>
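      <p>The parameter savings behind LoRA can be made concrete: for a frozen weight matrix of size d by d, the trainable rank-r decomposition matrices B (d by r) and A (r by d) replace d-squared trainable parameters with 2dr. The NumPy sketch below illustrates this arithmetic; the dimensions and rank are arbitrary illustrative choices, not values from any cited study.</p>
      <preformat>
```python
import numpy as np

d, r = 512, 8                       # hidden size and LoRA rank (illustrative values)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))     # frozen pre-trained weight (never updated)
A = rng.standard_normal((r, d)) * 0.01  # trainable rank-decomposition matrix
B = np.zeros((d, r))                # B starts at zero, so training begins from W itself

W_eff = W + B @ A                   # effective weight used during the forward pass

full_params = d * d                 # parameters a full fine-tune would update
lora_params = 2 * d * r             # parameters LoRA actually trains
print(full_params, lora_params)     # LoRA trains a small fraction of the weights
```
      </preformat>
      <p>Here the adapter trains 8,192 parameters instead of 262,144, a 32-fold reduction that grows with the hidden size, which is why the technique scales to billion-parameter models.</p>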
      <p>
        The quality of a patent application, rooted in the strength of its claims, leads to faster approval
timelines [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ]. The language in patent applications, particularly in claims, differs from everyday English
and does not match the training data used for LLMs. This complexity positions patent attorneys as
the most qualified for the task, consistent with prior research. This study utilizes prompt engineering
to enhance LLM outputs in generating patent sections, such as summaries, abstracts, and detailed
descriptions, directly from human-crafted claims while incorporating a human-in-the-loop method, as
suggested by earlier studies. This approach avoids the lengthy process of fine-tuning. It combines these
elements into a user-friendly, on-premise deployed solution, aiming to improve the workflow of patent
attorneys at AWA during the drafting process, all while upholding the highest security standards.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Proposed Solution</title>
      <p>
        Security and privacy were key priorities during the implementation of AWACopilot. To that end, LLM
inference is performed on a dedicated server within the AWA internal network, isolated from external
access and accessible only to authorized AWA devices. The solution was deployed using Docker [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ]
on the server, with all outbound internet connections restricted for added security. Furthermore, the
solution frontend was implemented as a web application instead of a native application, which eliminates
the need for user-side installation. This design choice facilitated rapid prototyping and iterative
development, streamlining modification, observability, and updates. Access to the web application was
restricted to authorized users, requiring individual credentials for interface access. Figure 1 presents a
screenshot of the frontend of the proposed solution.
      </p>
      <p>AWACopilot, as illustrated in Figure 2, processes user requests via a multi-stage architecture. User
instructions, combined with the task obtained from the AWA-SystemPrompt API, are passed from the
frontend to the backend. Crucially, attached documents submitted by the user are handled based on the
user’s selection: either they are used to retrieve relevant information through a Retrieval-Augmented
Generation (RAG) process backed by a vector database, or the entire document is directly passed to the
LLM. The backend then forwards the assembled instructions and context to both the LLM Server and
the observability server, concurrently notifying the frontend about the ongoing processing. As the LLM
Server generates tokens, the frontend receives and displays them in real-time. Upon completion, the
observability server records the full interaction in the database.</p>
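      <p>The attachment-handling choice described above can be sketched as a single routing function. The sketch below is illustrative only; the function and retriever names are hypothetical and do not reflect AWACopilot's actual code.</p>
      <preformat>
```python
def route_context(document, use_rag, retrieve_chunks, top_k=4):
    """Return the context passed to the LLM alongside the user instructions."""
    if use_rag:
        # RAG mode: a vector-database lookup keeps only the most relevant chunks.
        return "\n\n".join(retrieve_chunks(document, top_k))
    # Full-document mode: the entire attachment goes into the context window.
    return document

def toy_retriever(document, top_k):
    # Stand-in for the vector database: naive paragraph split, no embeddings.
    return document.split("\n\n")[:top_k]

doc = "Claim 1 ...\n\nClaim 2 ...\n\nBackground ..."
print(route_context(doc, use_rag=True, retrieve_chunks=toy_retriever, top_k=2))
print(route_context(doc, use_rag=False, retrieve_chunks=toy_retriever))
```
      </preformat>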
      <p>The solution is built upon the open-source platform Open-WebUI [29]. Open-WebUI, developed
using Svelte2 with JavaScript for the frontend and FastAPI [30] with Python for the backend, provides a
foundation with an active community and a rich feature set. The backend provides support for user
management, file handling, and RAG functionality [31]. The capabilities of Open-WebUI can be easily
expanded through the use of functions on the frontend and pipelines on the backend. The following
paragraphs will detail how these two features were utilized. Such functionalities made this platform
ideal for agile development and rapid prototyping compared to building a platform from scratch.</p>
      <p>To allow users to assign tasks to LLMs using specific system prompts, a crucial requirement for
AWACopilot was to decouple these prompts from the underlying LLMs. In contrast, Open-WebUI tightly
couples system prompts and inference parameters to a predefined LLM, leading to an inflexible
system that fails to meet the needs of this study. This limitation prompted the development of
AWA-SystemPrompt, which addresses the missing functionality in Open-WebUI. AWA-SystemPrompt is
a REST API created with FastAPI and supported by a straightforward document database utilizing
JSON storage. It provides system prompts specifically designed for patent-related tasks, like claim
analysis, abstract generation, technical field generation, background generation, and detailed description
generation. Since Open-WebUI could not be extended to accommodate such features, a branch
was created from the main repository. Subsequently, the frontend was updated to utilize this API
to pass the instruction to the LLMs through the backend, offering users a dropdown menu to select
from the available system prompts. Furthermore, users can create, edit, delete, and share prompts,
extending beyond the default system prompts. Given their specialized knowledge and expertise in
various scientific and technical fields, patent attorneys are ideally positioned to create effective prompts
for patent-related tasks. Their deep understanding of both patent law and complex technical language
enables them to craft precise instructions that yield the desired results for each specific case. These
custom prompts can be made accessible to other AWACopilot users or kept private. This modular
approach, implementing system prompt management as a separate API, minimized modifications to the
core Open-WebUI codebase, thereby simplifying future updates and integration with new Open-WebUI
releases. As illustrated in Figure 3, a system prompt is a JSON object that contains instructions for the
LLM and parameters for sampling. These parameters govern various aspects of the LLM’s behavior,
including the context window (number of tokens the model considers) and the generation length
(number of tokens the model produces). Moreover, parameters like temperature and Mirostat affect
the randomness and coherence of the output, influencing the model’s "creativity" and adherence to the
specified task.</p>
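      <p>As a rough illustration of such an object, the snippet below sketches what a stored system prompt might look like. The exact schema of AWA-SystemPrompt is not published, so the field names here are assumptions modeled on Ollama's standard option names (num_ctx, num_predict, temperature, mirostat).</p>
      <preformat>
```python
import json

# Hypothetical system-prompt object; field names are assumptions,
# not AWA-SystemPrompt's actual schema.
abstract_prompt = {
    "name": "Abstract generation",
    "instruction": (
        "You are a patent drafting assistant. From the claims provided, "
        "write a patent abstract using terminology consistent with the claims."
    ),
    "options": {
        "num_ctx": 8192,      # context window: tokens the model considers
        "num_predict": 512,   # generation length: tokens the model produces
        "temperature": 0.3,   # lower values reduce randomness
        "mirostat": 2,        # perplexity-controlled sampling variant
    },
    "shared": False,          # kept private vs. shared with other users
}

# Round-trip through JSON storage, as in the document database described above.
restored = json.loads(json.dumps(abstract_prompt))
print(restored["options"]["num_ctx"])
```
      </preformat>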
      <p>For the LLM server, Ollama [32] was chosen due to its ease of deployment and management. Ollama
supports the serving of LLMs through a REST API, whether the models are sourced from Ollama.com
or other repositories like Hugging Face3. In both scenarios, the selected model is downloaded, stored
on the server, and served through the Ollama API, which can then relay LLM requests. Upon receiving
a request, Ollama loads the model into memory, a process whose duration depends on the number of
the model’s weights. Once the model is loaded, Ollama performs inference and subsequently offloads
the model either upon receiving a new request for a different model or after five minutes of inactivity.
Alternative frameworks, including vLLM [33] and Max from Modular 4, were also assessed, especially for
their superior inference performance compared to Ollama. However, their less efficient dynamic model
loading and offloading capabilities based on API requests rendered them suboptimal for AWACopilot’s
needs. The strategy of local deployment influenced the selection of LLMs, emphasizing models suitable
for on-premise execution. As a result, the focus was placed on open-source models with permissive
licenses, including the Llama3 [34], Gemma [35], and Deepseek [36] families.
2https://svelte.dev/
3https://huggingface.co/
4https://www.modular.com/max</p>
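      <p>A request to Ollama's /api/generate endpoint carries the model name, the assembled prompt, and the sampling options, and its keep_alive field controls the offloading window described above. The sketch below builds such a request body; the model name and prompt text are placeholders, and no server is contacted.</p>
      <preformat>
```python
import json

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_ollama_request(model, system, prompt, options):
    """Assemble the JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": model,        # a locally pulled model (placeholder name below)
        "system": system,      # system prompt supplied by AWA-SystemPrompt
        "prompt": prompt,      # user instructions plus any retrieved context
        "options": options,    # sampling parameters such as num_ctx, temperature
        "stream": True,        # tokens are streamed back as they are generated
        "keep_alive": "5m",    # matches the five-minute offload described above
    }

body = build_ollama_request(
    model="llama3.1:8b",
    system="You are a patent drafting assistant.",
    prompt="Draft a background section from the following claims: ...",
    options={"num_ctx": 8192, "temperature": 0.3},
)
print(json.dumps(body, indent=2))
```
      </preformat>
      <p>POSTing this body to the endpoint yields a stream of newline-delimited JSON chunks, each carrying part of the response, until a final chunk marks completion.</p>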
      <p>To ensure comprehensive monitoring, debugging, and user support, the open-source observability
framework Langfuse [37] was deployed in a dedicated Docker container. Langfuse, which implements
OpenTelemetry5 and integrates with Open-WebUI via its backend pipeline feature through a REST API,
captures detailed information about each LLM interaction. Serving as a model- and framework-agnostic
LLM engineering platform, Langfuse enables debugging, analysis, and iteration on LLM applications.
The captured data includes input tokens, output tokens, and the generated output itself, which facilitates
the creation of statistics on LLM performance and offers valuable insights for debugging and improving
system prompts. Additionally,
through the functions feature of Open-WebUI, users could rate LLM responses on a scale from 0 to 10
and provide textual feedback. This feedback gets collected and linked to the recorded trace in Langfuse,
enabling continuous improvement and iterative refinement of both LLMs and system prompts.</p>
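      <p>The 0 to 10 rating collected through the frontend must be attached to the recorded trace before submission. The helper below sketches that normalization step; the payload shape is an assumption for illustration, not Langfuse's actual ingestion schema, which the Langfuse SDK would handle in practice.</p>
      <preformat>
```python
def feedback_to_score(trace_id, rating, comment=""):
    """Map the 0-10 frontend rating to a score record tied to a trace.

    Illustrative only: the payload shape is an assumption, not
    Langfuse's actual ingestion schema.
    """
    if rating not in range(11):   # the UI allows whole-number ratings 0..10
        raise ValueError("rating must be between 0 and 10")
    return {
        "traceId": trace_id,      # links the feedback to the recorded interaction
        "name": "user-feedback",
        "value": rating / 10,     # normalized to the range [0, 1]
        "comment": comment,       # optional textual feedback
    }

score = feedback_to_score("trace-123", 8, "Good abstract; claim wording drifted.")
print(score["value"])
```
      </preformat>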
    </sec>
    <sec id="sec-4">
      <title>4. Discussion</title>
      <p>Although this study has an exploratory nature and does not present results, several preliminary
observations emerged from the exploratory process. One key insight is the crucial role of LLMs’ context
window size: patent attorneys often work with lengthy materials such as invention disclosures, patent
claims, and prior art documents, making it essential for LLMs to process large inputs effectively. While a
RAG approach can be effective, in certain situations, providing the LLM with the entire document within
its context window enables a more thorough analysis, allowing the model to capture the intricate details
and nuances necessary for complex tasks that demand a holistic understanding. Another observation
is that even a well-performing AI tool might not achieve adoption if it does not integrate naturally
into the workflows already established in patent drafting; simply offering a tool is not enough without
facilitating habitual use. Additionally, LLMs sometimes fail to generate sufficiently detailed or complete
outputs for long and complex tasks, suggesting that agentic or multi-step workflows may be necessary.
Overall, these exploratory insights highlight that launching a product involves much more than technical
performance — it requires a deep focus on usability, integration, and real-world practicality to truly
serve end users.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Limitations</title>
      <p>While this study demonstrates the deployment of a secure, on-premise solution for patent drafting
assistance, several limitations should be taken into consideration. The infrastructure requirements for
local deployment present a scalability challenge. Operating LLMs on-premise necessitates substantial
and costly GPU resources, not only for initial model loading but also to reserve sufficient memory for
the model’s context window. This constraint may limit the number of model parameters that can be
loaded, which can potentially lead to performance degradation. It may also reduce throughput when
serving multiple concurrent users. As a result, this could create a perception of unreliability for the end
user. Consequently, broader adoption and responsiveness may be hindered compared to cloud-based
alternatives.</p>
      <p>Furthermore, workflow integration poses a challenge. AWACopilot is accessible via a web application,
which may introduce friction for attorneys accustomed to established drafting practices. Additionally,
recent advancements in agentic workflows, leveraging reasoning and the collaborative capabilities
of diverse LLMs, offer promising potential for enhanced patent task performance; however, their
integration was not feasible within the scope and timeframe of this study. Future research could
investigate the incorporation of such complex AI workflows. Finally, the evaluation of AWACopilot’s
effectiveness is currently based on early-stage feedback and limited-scale testing. A comprehensive
assessment of its impact on attorney productivity and drafting quality in a real-world production
environment remains an essential area for future investigation.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion and Future Work</title>
      <p>This study explored deploying AWACopilot, a secure on-premise solution utilizing open-source LLMs
to assist AWA patent attorneys in drafting patent application sections from human-crafted claims. It
addressed critical data security concerns during sensitive prosecution phases while demonstrating a
modular and adaptable system design. Future work could focus on developing a quantifiable
methodology to evaluate the effectiveness of different LLM-system prompt pairs in patent generation tasks. This
includes investigating whether larger parameter models consistently yield better results and assessing
the tool’s impact on attorney workflows through expanded user studies, surveys, and feedback sessions.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>While preparing this study, the authors utilized API calls to gpt-4o-mini, gemini-2.0-flash, and o3-mini
in order to paraphrase and reword text, improve writing style, and check grammar and spelling. Following
the use of these services, the authors reviewed and edited the content as necessary and take full
responsibility for the publication’s content.</p>
      <p>[28] D. Merkel, Docker: Lightweight Linux containers for consistent development and deployment, Linux J. 2014 (2014) 2:2.
[29] T. J. Baek, Open-WebUI: User-friendly AI interface (supports Ollama, OpenAI API, and more), https://github.com/open-webui/open-webui, 2023. BSD-3-Clause License.
[30] S. Ramírez, FastAPI, 2018. URL: https://github.com/fastapi/fastapi. FastAPI framework, high performance, easy to learn, fast to code, ready for production.
[31] P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W.-t. Yih, T. Rocktäschel, S. Riedel, D. Kiela, Retrieval-augmented generation for knowledge-intensive NLP tasks, in: Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS ’20, Curran Associates Inc., Red Hook, NY, USA, 2020, pp. 9459–9474.
[32] Ollama contributors, Ollama, https://github.com/ollama/ollama, 2025.
[33] W. Kwon, Z. Li, S. Zhuang, Y. Sheng, L. Zheng, C. H. Yu, J. Gonzalez, H. Zhang, I. Stoica, Efficient Memory Management for Large Language Model Serving with PagedAttention, in: Proceedings of the 29th Symposium on Operating Systems Principles, SOSP ’23, Association for Computing Machinery, New York, NY, USA, 2023, pp. 611–626. doi:10.1145/3600006.3613165.
[34] A. Grattafiori, et al., The Llama 3 Herd of Models, 2024. doi:10.48550/ARXIV.2407.21783.
[35] A. Kamath, et al., Gemma 3 Technical Report, 2025. doi:10.48550/ARXIV.2503.19786.
[36] D. Guo, et al., DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning, 2025. doi:10.48550/ARXIV.2501.12948.
[37] Langfuse contributors, Langfuse: Open source LLM engineering platform, https://github.com/langfuse/langfuse, 2025. MIT License.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R.</given-names>
            <surname>Rubilar-Torrealba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Chahuán-Jiménez</surname>
            , H. De La Fuente-Mella
          </string-name>
          ,
          <article-title>Analysis of the Growth in the Number of Patents Granted and Its Effect over the Level of Growth of the Countries: An Econometric Estimation of the Mixed Model Approach</article-title>
          , Sustainability
          <volume>14</volume>
          (
          <year>2022</year>
          )
          2384. doi:10.3390/su14042384.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>EUIPO</surname>
          </string-name>
          ,
          <string-name>
            <surname>EPO</surname>
          </string-name>
          , Patents, Trade Marks and Startup Finance, Study, EUIPO, EPO,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>World</given-names>
            <surname>Intellectual Property Organization</surname>
          </string-name>
          , WIPO Patent Drafting Manual, second ed., World Intellectual Property Organization, Geneva, Switzerland,
          <year>2023</year>
          . doi:10.34667/TIND.44657.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4] <string-name><given-names>A.</given-names> <surname>Vaswani</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Shazeer</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Parmar</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Uszkoreit</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Jones</surname></string-name>, <string-name><given-names>A. N.</given-names> <surname>Gomez</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Kaiser</surname></string-name>, <string-name><given-names>I.</given-names> <surname>Polosukhin</surname></string-name>, <article-title>Attention Is All You Need</article-title>, <year>2017</year>. doi:10.48550/ARXIV.1706.03762.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5] OpenAI, Introducing ChatGPT, https://openai.com/index/chatgpt/,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6] <string-name><given-names>L.</given-names> <surname>Huang</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Yu</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Ma</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Zhong</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Feng</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>Q.</given-names> <surname>Chen</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Peng</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Feng</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Qin</surname></string-name>, <string-name><given-names>T.</given-names> <surname>Liu</surname></string-name>, <article-title>A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions</article-title>, <source>ACM Trans. Inf. Syst.</source> <volume>43</volume> (<year>2025</year>) <fpage>42:1</fpage>-<lpage>42:55</lpage>. doi:10.1145/3703155.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7] <string-name><given-names>A.</given-names> <surname>Fatima</surname></string-name>, <string-name><given-names>M. A.</given-names> <surname>Shafique</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Alam</surname></string-name>, <string-name><given-names>T. K.</given-names> <surname>Fadlalla Ahmed</surname></string-name>, <string-name><given-names>M. S.</given-names> <surname>Mustafa</surname></string-name>, <article-title>ChatGPT in medicine: A cross-disciplinary systematic review of ChatGPT's (artificial intelligence) role in research, clinical practice, education, and patient interaction</article-title>, <source>Medicine</source> <volume>103</volume> (<year>2024</year>) e39250. doi:10.1097/MD.0000000000039250.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8] <string-name><given-names>D.</given-names> <surname>Gursoy</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Song</surname></string-name>, <article-title>ChatGPT and the hospitality and tourism industry: An overview of current trends and future research directions</article-title>, <source>Journal of Hospitality Marketing &amp; Management</source> <volume>32</volume> (<year>2023</year>) <fpage>579</fpage>-<lpage>592</lpage>. doi:10.1080/19368623.2023.2211993.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9] <string-name><given-names>J.</given-names> <surname>Lai</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Gan</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Wu</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Qi</surname></string-name>, <string-name><given-names>P. S.</given-names> <surname>Yu</surname></string-name>, <article-title>Large language models in law: A survey</article-title>, <source>AI Open</source> <volume>5</volume> (<year>2024</year>) <fpage>181</fpage>-<lpage>196</lpage>. doi:10.1016/j.aiopen.2024.09.002.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10] <string-name><given-names>L.</given-names> <surname>Jiang</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Goetz</surname></string-name>, <article-title>Natural Language Processing in Patents: A Survey</article-title>, <year>2024</year>. doi:10.48550/arXiv.2403.04105. arXiv:2403.04105.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11] <string-name><given-names>F.</given-names> <surname>Ariai</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Demartini</surname></string-name>, <article-title>Natural Language Processing for the Legal Domain: A Survey of Tasks, Datasets, Models, and Challenges</article-title>, <year>2024</year>. doi:10.48550/ARXIV.2410.21306.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12] <string-name><given-names>C. M.</given-names> <surname>Greco</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Tagarelli</surname></string-name>, <article-title>Bringing order into the realm of Transformer-based language models for artificial intelligence and law</article-title>, <source>Artificial Intelligence and Law</source> <volume>32</volume> (<year>2024</year>) <fpage>863</fpage>-<lpage>1010</lpage>. doi:10.1007/s10506-023-09374-7.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13] <string-name><given-names>S.</given-names> <surname>Casola</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Lavelli</surname></string-name>, <article-title>Summarization, simplification, and generation: The case of patents</article-title>, <source>Expert Systems with Applications</source> <volume>205</volume> (<year>2022</year>) 117627. doi:10.1016/j.eswa.2022.117627.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14] ClaimMaster Software LLC, Patent Claim Master, https://www.patentclaimmaster.com, <year>2025</year>.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15] PowerPatent Inc., PowerPatent: Patent prosecution software, https://powerpatent.com, <year>2025</year>.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16] Rowan TELS Corp., Effective patent drafting with Rowan Patents, https://rowanpatents.com, <year>2025</year>.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17] <string-name><given-names>J.-S.</given-names> <surname>Lee</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Hsiang</surname></string-name>, <article-title>Patent claim generation by fine-tuning OpenAI GPT-2</article-title>, <source>World Patent Information</source> <volume>62</volume> (<year>2020</year>) 101983. doi:10.1016/j.wpi.2020.101983.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18] <string-name><given-names>D.</given-names> <surname>Christofidellis</surname></string-name>, <string-name><given-names>A. B.</given-names> <surname>Torres</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Dave</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Roveri</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Schmidt</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Swaminathan</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Vandierendonck</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Zubarev</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Manica</surname></string-name>, <article-title>PGT: A prompt based generative transformer for the patent domain</article-title>, in: <source>ICML 2022 Workshop on Knowledge Retrieval and Language Models</source>, <year>2022</year>, pp. <fpage>1</fpage>-<lpage>7</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19] <string-name><given-names>J.-S.</given-names> <surname>Lee</surname></string-name>, <article-title>Evaluating generative patent language models</article-title>, <source>World Patent Information</source> <volume>72</volume> (<year>2023</year>) 102173. doi:10.1016/j.wpi.2023.102173.
        </mixed-citation>
      </ref>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>J.-S.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>InstructPatentGPT: Training patent language models to follow instructions with human feedback</article-title>
          ,
          <source>Artificial Intelligence and Law</source>
          (
          <year>2024</year>
          ).
          <source>doi:10.1007/s10506-024-09401-1.</source>
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21] <string-name><given-names>F.-C.</given-names> <surname>Chen</surname></string-name>, <string-name><given-names>C.-L.</given-names> <surname>Pan</surname></string-name>, <article-title>Evaluating application of large language models to biomedical patent claim generation</article-title>, <source>World Patent Information</source> <volume>80</volume> (<year>2025</year>) 102339. doi:10.1016/j.wpi.2025.102339.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22] <string-name><given-names>J.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>S. K. R.</given-names> <surname>Mudhiganti</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Sharma</surname></string-name>, <article-title>Patentformer: A Novel Method to Automate the Generation of Patent Applications</article-title>, in: <string-name><given-names>F.</given-names> <surname>Dernoncourt</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Preoţiuc-Pietro</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Shimorina</surname></string-name> (Eds.), <source>Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track</source>, Association for Computational Linguistics, Miami, Florida, US, <year>2024</year>, pp. <fpage>1361</fpage>-<lpage>1380</lpage>. doi:10.18653/v1/2024.emnlp-industry.101.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23] <string-name><given-names>L. V.</given-names> <surname>Bui</surname></string-name>, <article-title>Advancing patent law with generative AI: Human-in-the-loop systems for AI-assisted drafting, prior art search, and multimodal IP protection</article-title>, <source>World Patent Information</source> <volume>80</volume> (<year>2025</year>) 102341. doi:10.1016/j.wpi.2025.102341.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24] <string-name><given-names>E. J.</given-names> <surname>Hu</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Shen</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Wallis</surname></string-name>, <string-name><given-names>Z.</given-names> <surname>Allen-Zhu</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>L.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Chen</surname></string-name>, <article-title>LoRA: Low-Rank Adaptation of Large Language Models</article-title>, in: <source>International Conference on Learning Representations</source>, <year>2021</year>, pp. <fpage>1</fpage>-<lpage>13</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25] <string-name><given-names>J.</given-names> <surname>Wei</surname></string-name>, <string-name><given-names>X.</given-names> <surname>Wang</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Schuurmans</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Bosma</surname></string-name>, <string-name><given-names>B.</given-names> <surname>Ichter</surname></string-name>, <string-name><given-names>F.</given-names> <surname>Xia</surname></string-name>, <string-name><given-names>E. H.</given-names> <surname>Chi</surname></string-name>, <string-name><given-names>Q. V.</given-names> <surname>Le</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Zhou</surname></string-name>, <article-title>Chain-of-thought prompting elicits reasoning in large language models</article-title>, in: <source>Proceedings of the 36th International Conference on Neural Information Processing Systems</source>, NIPS '22, Curran Associates Inc., Red Hook, NY, USA, <year>2022</year>, pp. <fpage>24824</fpage>-<lpage>24837</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26] <string-name><given-names>H.</given-names> <surname>Nori</surname></string-name>, <string-name><given-names>Y. T.</given-names> <surname>Lee</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Carignan</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Edgar</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Fusi</surname></string-name>, <string-name><given-names>N.</given-names> <surname>King</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Larson</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Li</surname></string-name>, <string-name><given-names>W.</given-names> <surname>Liu</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Luo</surname></string-name>, <string-name><given-names>S. M.</given-names> <surname>McKinney</surname></string-name>, <string-name><given-names>R. O.</given-names> <surname>Ness</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Poon</surname></string-name>, <string-name><given-names>T.</given-names> <surname>Qin</surname></string-name>, <string-name><given-names>N.</given-names> <surname>Usuyama</surname></string-name>, <string-name><given-names>C.</given-names> <surname>White</surname></string-name>, <string-name><given-names>E.</given-names> <surname>Horvitz</surname></string-name>, <article-title>Can Generalist Foundation Models Outcompete Special-Purpose Tuning? Case Study in Medicine</article-title>, <year>2023</year>. doi:10.48550/arXiv.2311.16452. arXiv:2311.16452.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27] <string-name><given-names>D.</given-names> <surname>Harhoff</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Wagner</surname></string-name>, <article-title>The Duration of Patent Examination at the European Patent Office</article-title>, <source>Management Science</source> <volume>55</volume> (<year>2009</year>) <fpage>1969</fpage>-<lpage>1984</lpage>. doi:10.1287/mnsc.1090.1069.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>D.</given-names>
            <surname>Merkel</surname>
          </string-name>
          ,
          <article-title>Docker: Lightweight Linux containers for consistent development and deployment,</article-title>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>