<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Methods for integrating large language models into requirements management in agile methodologies</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Oleh Zaiats</string-name>
          <email>ozaiats@tntu.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dmytro Mykhalyk</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vasyl Yatsyshyn</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleh Pastukh</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Ternopil Ivan Puluj National Technical University</institution>
          ,
          <addr-line>Ruska, 56, Ternopil, 46001</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Yuriy Fedkovych Chernivtsi National University</institution>
          ,
          <addr-line>Kotsiubynskoho, 2, Chernivtsi, 58002</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <volume>000</volume>
      <fpage>0</fpage>
      <lpage>0003</lpage>
      <abstract>
        <p>Agile software development methodologies rely on effective requirements management to maintain alignment between evolving business needs and delivered functionality. This article investigates the integration of Large Language Models (LLMs) into five widely used Agile frameworks – Scrum, Kanban, Extreme Programming (XP), the Scaled Agile Framework (SAFe), and Lean Software Development – with a focus on enhancing requirement capture, refinement, prioritization, traceability, and feedback processing. For each methodology, the study maps specific LLM capabilities to native workflow stages, illustrating opportunities for automation and augmentation without undermining core Agile principles. The findings demonstrate that LLMs can reduce ambiguity, accelerate refinement cycles, and strengthen stakeholder-developer collaboration, while also highlighting risks such as hallucinations, bias, and data privacy concerns. The article concludes that effective integration requires tailored strategies, human oversight, and alignment with organizational priorities, and identifies future research directions including domain-specific fine-tuning and long-term impact evaluation.</p>
      </abstract>
      <kwd-group>
        <kwd>Large Language Models</kwd>
        <kwd>Agile</kwd>
        <kwd>requirements engineering</kwd>
        <kwd>Scrum</kwd>
        <kwd>Kanban</kwd>
        <kwd>SAFe</kwd>
        <kwd>XP</kwd>
        <kwd>Lean</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Effective requirements management is one of the most critical success factors in software
development [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. In Agile environments, where change is expected and feedback loops are short,
requirements must be captured, refined, prioritized, and validated in a way that preserves clarity
while enabling rapid delivery. However, Agile teams often face recurring challenges: requirements
arrive in inconsistent formats, stakeholder input may be incomplete or ambiguous, and the pace of
change can lead to gaps in traceability or overlooked dependencies. These challenges are
particularly pronounced in complex software systems where quality management and systematic
evaluation approaches are essential [
        <xref ref-type="bibr" rid="ref3 ref19">3,19</xref>
        ].
      </p>
      <p>
        Recent advances in Large Language Models (LLMs), such as GPT-4, Claude, and other
transformer-based systems, have created new opportunities to address these challenges. Trained on
vast amounts of text data, LLMs excel at understanding, classifying, summarizing, and generating
natural language. These capabilities position them as promising assistants in the requirements
engineering process – from transforming raw stakeholder input into structured backlog items, to
maintaining requirement quality across the entire development lifecycle. The integration of
artificial intelligence and data science approaches in software development [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] demonstrates the
potential for automated assistance in traditionally manual processes.
      </p>
      <p>
        This article explores methods for integrating LLMs into requirements management processes
within five Agile methodologies: Scrum, Kanban, Extreme Programming (XP), the Scaled Agile
Framework (SAFe), and Lean Software Development. For each methodology, we map LLM
capabilities to native workflow stages, propose concrete enhancement points, and illustrate the
changes using process diagrams. The goal is to identify integration strategies that deliver
measurable value – such as reduced ambiguity, faster backlog preparation, improved prioritization,
and better traceability – while respecting the core principles of each methodology and established
software quality frameworks [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        While the potential of LLMs in Agile requirements management is significant, their adoption
requires careful consideration of risks, including hallucinated outputs, embedded bias,
over-reliance on automation, and data confidentiality concerns. As such, this study emphasizes
human-in-the-loop approaches, where LLMs augment but do not replace human judgment in
quality-critical decisions [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>The findings contribute both to the theoretical understanding of AI-assisted requirements
engineering and to its practical application in Agile settings. They provide a foundation for future
research into domain-specific fine-tuning, integration into project management tools, and empirical
studies on productivity and quality outcomes.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Overview of Agile Methodologies and Approaches to Requirements Management</title>
      <p>Agile software development encompasses a variety of frameworks, each with its own
philosophy, workflow structure, and approach to managing requirements. Although the Agile
Manifesto emphasizes “working software over comprehensive documentation” and “responding to
change over following a plan,” requirements remain central to ensuring product success.
Understanding how requirements are gathered, refined, and maintained across different Agile
methods is a prerequisite for identifying effective Large Language Model (LLM) integration points.
</p>
      <sec id="sec-3-1">
        <title>Kanban</title>
        <p>Kanban’s visual and continuous approach allows for rapid adaptation to change but relies
heavily on the quality of requirement intake and ongoing communication to ensure clarity.</p>
      </sec>
      <sec id="sec-3-2">
        <title>Extreme Programming (XP)</title>
        <p>
          XP emphasizes technical excellence, continuous integration, and frequent releases.
Requirements are often represented as story cards discussed directly with an on-site customer. This
method integrates requirement specification tightly with automated testing, using practices such as
test-driven development (TDD) [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ].
        </p>
        <p>The stages of the Extreme Programming pipeline are shown in Figure 3.</p>
        <p>The Extreme Programming pipeline includes the following stages:</p>
        <p>Stage 1 - Planning Game: Customer and team collaborate to define user stories and release
planning.</p>
        <p>Stage 2 - Design Simply: Create the simplest design that works, using metaphors and CRC
cards.</p>
        <p>Stage 3 - Write Tests First: Create unit tests before coding.</p>
        <p>Stage 4 - Code in Pairs: Implement code with pair programming.</p>
        <p>Stage 5 - Continuous Integration: Integrate and test frequently.</p>
        <p>Stage 6 - Listen and Adapt: Gather feedback from running software and adjust stories
accordingly.</p>
        <p>XP’s lightweight, test-first approach to requirements promotes rapid responsiveness to
customer needs while maintaining high-quality standards through continuous testing.
</p>
      </sec>
      <sec id="sec-3-3">
        <title>Scaled Agile Framework (SAFe)</title>
        <p>
          SAFe (Scaled Agile Framework) is the world's leading framework for implementing Agile
practices at enterprise scale, created by Dean Leffingwell in 2011 to address the challenge of
coordinating multiple Agile teams across large organizations while maintaining the flexibility and
responsiveness of traditional Agile methods [
          <xref ref-type="bibr" rid="ref10 ref11">10,11</xref>
          ]. The framework features a multi-level structure
spanning Portfolio, Large Solution, Program, and Team levels, with Program Increments (PIs)
providing 8-12 week planning cycles that synchronize multiple teams, Agile Release Trains (ARTs)
organizing collections of 5-12 Agile teams working together, hierarchical requirements flowing
from Portfolio Epics through Capabilities and Features down to Stories, and continuous delivery
mechanisms with built-in feedback loops. SAFe combines Lean, Agile, and DevOps principles into a
comprehensive system that helps organizations deliver value faster while maintaining quality and
alignment across all levels of the enterprise.
        </p>
        <p>The SAFe pipeline stages are shown in Figure 4.</p>
        <p>The stages of SAFe are:</p>
        <p>Stage 1 – Portfolio Strategy - define portfolio epics and strategic themes at the portfolio
level.</p>
        <p>Stage 2 – Solution Planning - break down portfolio epics into capabilities and program
epics.</p>
        <p>Stage 3 – Program Planning - decompose capabilities/epics into features during PI planning.</p>
        <p>Stage 4 – Team Execution - break features into stories and execute in iterations within Program
Increments.</p>
        <p>Stage 5 – Continuous Delivery - deliver value incrementally across ARTs with regular inspect
and adapt cycles.</p>
        <p>SAFe provides a structured yet flexible approach to scaling Agile beyond individual teams. By
organizing work hierarchically from strategic portfolio epics down to implementable user stories,
and coordinating delivery through Program Increments and Agile Release Trains, SAFe enables
large organizations to maintain Agile's benefits – rapid response to change, continuous
improvement, and customer focus – while providing the coordination and governance needed at
enterprise scale.</p>
        <p>The framework's emphasis on alignment, transparency, and continuous delivery makes it
particularly valuable for organizations building complex products that require coordination across
multiple teams, departments, and even business units. With over 20,000 organizations worldwide
using SAFe, it has proven its effectiveness in helping enterprises achieve business agility in today's
rapidly changing market conditions.
</p>
      </sec>
      <sec id="sec-3-4">
        <title>Lean Software Development</title>
        <p>
          Lean emphasizes eliminating waste and delivering maximum value to the customer as quickly
as possible. Requirements are kept minimal and often validated by quick prototyping.
Decision-making is decentralized, and documentation is limited to essential artifacts [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. Figure 5 shows the
pipeline of Lean software development.
        </p>
        <p>Pipeline stages:</p>
        <p>Stage 1 - Define what customers truly value and eliminate unnecessary work.</p>
        <p>Stage 2 - Visualize workflow from concept to delivery, identifying waste and bottlenecks.</p>
        <p>Stage 3 - Establish continuous workflow by removing delays and optimizing process steps.</p>
        <p>Stage 4 - Implement demand-driven delivery where work starts only when customers need it.</p>
        <p>Stage 5 - Continuously improve processes through regular feedback and waste elimination.</p>
        <p>Lean’s streamlined approach enables fast delivery and adaptability but depends on clear
prioritization and disciplined validation to avoid pursuing low-value work.</p>
        <p>A comparative analysis of requirement management characteristics in Agile frameworks is
displayed in Table 1.</p>
        <p>3. Types of Large Language Models and Their Potential in Requirements Management</p>
        <p>Large Language Models (LLMs) are advanced AI systems trained on extensive corpora of text to
perform a wide range of language-related tasks, including comprehension, summarization,
classification, and generation of natural language. Their transformer-based architectures enable
them to capture contextual relationships in text with high accuracy, making them a potentially
transformative tool in requirements engineering (RE) for Agile environments. This section outlines
the classification of LLMs, their capabilities relevant to requirements management, and the risks
associated with their application.
</p>
      </sec>
      <sec id="sec-3-5">
        <title>Classification of LLMs</title>
        <p>
          LLMs can be categorized along several dimensions that determine their suitability for Agile
requirements workflows [
          <xref ref-type="bibr" rid="ref13 ref14">13,14</xref>
          ]:
        </p>
        <p>1. By scope of training. General-purpose models – trained on diverse datasets from multiple
domains (e.g., OpenAI GPT-4, Anthropic Claude, Google Gemini); they are adaptable to various
contexts but may require prompt engineering for domain-specific accuracy. Domain-specific models –
fine-tuned on specialized corpora, such as requirements documentation, industry standards, or
regulatory guidelines; these models (e.g., legal-specific LLMs, medical domain LLMs) provide higher
accuracy in their respective domains. Research has demonstrated the effectiveness of automated
processing and analysis of domain-specific texts, such as medical documentation, using machine
learning and deep learning approaches [18], which parallels the potential for specialized
requirements engineering models.</p>
        <p>2. By accessibility. Proprietary models – commercial offerings with API-based access and high
performance but limited transparency in training data (e.g., GPT-4, Claude, Gemini). Open-source
models (e.g., LLaMA 2, Falcon, MPT) – available for self-hosting and customization, offering greater
control over data privacy and compliance.</p>
        <p>3. By deployment model. Cloud-hosted LLMs – operated by the provider, enabling scalable use
but raising concerns over confidentiality and compliance. On-premises/self-hosted LLMs – deployed
within organizational infrastructure, ensuring data control but requiring more computational
resources and maintenance.</p>
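        <p>The accessibility and deployment trade-offs above can be expressed as a simple filter over
candidate model classes. The following Python sketch is purely illustrative: the catalogue entries
and constraint names are assumptions of this example, not recommendations from the article.</p>
        <preformat>
```python
# Illustrative sketch: narrow down LLM classes using the classification
# above (scope of training, deployment model). Catalogue entries are
# hypothetical examples, not product recommendations.
CATALOGUE = [
    {"name": "general-cloud", "domain_specific": False, "self_hosted": False},
    {"name": "general-self-hosted", "domain_specific": False, "self_hosted": True},
    {"name": "domain-self-hosted", "domain_specific": True, "self_hosted": True},
]

def select_models(needs_domain_accuracy, data_must_stay_on_premises):
    """Filter candidate model classes by two hard constraints."""
    result = []
    for model in CATALOGUE:
        if data_must_stay_on_premises and not model["self_hosted"]:
            continue  # cloud-hosted models send data outside the network
        if needs_domain_accuracy and not model["domain_specific"]:
            continue  # general-purpose models may need extra prompt engineering
        result.append(model["name"])
    return result

# A regulated project that also needs domain accuracy is left with one class.
print(select_models(needs_domain_accuracy=True, data_must_stay_on_premises=True))
```
        </preformat>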
        <p>Understanding the classification of LLMs is crucial for selecting an appropriate model that
aligns with the organization’s domain, security requirements, and technical capabilities.
</p>
      </sec>
      <sec id="sec-3-6">
        <title>Capabilities Relevant to Agile Requirements Management</title>
        <p>
          LLMs can enhance different stages of the requirements engineering process in Agile contexts,
from initial backlog creation to continuous refinement and traceability. The following capabilities
are the most relevant for practical integration [
          <xref ref-type="bibr" rid="ref16 ref17">16,17</xref>
          ]:
        </p>
        <p>1. Automated requirements classification: sorting and tagging requirements into categories
such as functional, non-functional, constraints, and quality attributes; and detecting duplicates or
contradictions that may otherwise be overlooked during manual backlog review, particularly in
large, multi-team environments.</p>
        <p>2. Transformation of user stories into technical specifications: expanding a short,
business-oriented user story into a full set of acceptance criteria; suggesting relevant
architectural considerations or component-level implications, which can be reviewed by technical
leads before development begins; and, in some cases, even generating draft interface definitions or
pseudo-code that illustrate the intended functionality.</p>
        <p>3. Requirements traceability: maintaining bidirectional links between requirements, design
elements, test cases, and deployment tasks; and automatically updating traceability matrices when
backlog items are modified, merged, or split, helping teams meet compliance or audit requirements
without significant manual overhead.</p>
        <p>4. Validation and completeness checks: detecting vague or ambiguous terms (e.g., “fast,”
“user-friendly”) and prompting clarification; highlighting missing elements such as acceptance
tests, performance thresholds, or security considerations; and comparing related backlog items to
ensure consistent terminology and avoid subtle conflicts.</p>
        <p>5. Integration with project management tools: embedding AI assistance directly in platforms
like Jira, Azure DevOps, or Trello, enabling in-context refinement of requirements; automating
prioritization based on changing business metrics, dependencies, or risk assessments; and providing
analytics on backlog quality over time, such as the ratio of well-formed to incomplete user
stories.</p>
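        <p>As a minimal illustration of the validation capability, a lightweight pre-check can flag
vague terms before a backlog item reaches an LLM or a human reviewer. The term list below is an
illustrative assumption; a production check would be configurable per team.</p>
        <preformat>
```python
# Sketch of a "validation and completeness" pre-check: flag vague terms
# that need quantified acceptance criteria. The term list is illustrative.
VAGUE_TERMS = {"fast", "user-friendly", "easy", "intuitive", "scalable", "soon"}

def flag_ambiguity(requirement):
    """Return the vague terms found in a requirement, sorted and lowercased."""
    words = {w.strip(".,;:()").lower() for w in requirement.split()}
    return sorted(words.intersection(VAGUE_TERMS))

issues = flag_ambiguity("The search page must be fast and user-friendly.")
print(issues)  # each flagged term prompts a clarifying question
```
        </preformat>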
        <p>LLMs bring multi-faceted benefits to Agile requirements management – not only by
accelerating repetitive tasks like classification and traceability but also by improving requirement
clarity, consistency, and adaptability to changing business priorities.
</p>
      </sec>
      <sec id="sec-3-7">
        <title>Risks and Limitations</title>
        <p>
          While the promise of LLMs in Agile requirements management is considerable, their adoption
comes with notable caveats that teams must address before large-scale deployment [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]:
        </p>
        <p>Hallucinations and factual inaccuracies – LLMs occasionally produce content that sounds
plausible but is factually wrong or entirely fabricated. This is particularly dangerous in
requirements engineering, where such errors could propagate through design and testing before being
detected.</p>
        <p>Data privacy and confidentiality – cloud-hosted LLMs often require sending prompts and
context outside the organization’s network. For projects involving sensitive or regulated
information, this raises compliance risks and may require either anonymization of inputs or
on-premises deployment.</p>
        <p>Bias in generated content – pre-trained models can inadvertently introduce bias in
requirement phrasing, prioritization, or suggested solutions. While often subtle, such bias may
influence product decisions in ways that conflict with inclusivity or fairness goals.</p>
        <p>Over-reliance on automation – there is a danger that teams start trusting AI-generated
backlog items, acceptance criteria, or dependencies without sufficient human review. Agile thrives
on collaboration, and over-delegating decisions to an algorithm may reduce team engagement and
shared ownership.</p>
        <p>Operational considerations – frequent API calls to commercial LLMs can significantly
increase project costs, while self-hosting large models may require specialized hardware, higher
energy consumption, and ongoing model maintenance.</p>
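        <p>The human-in-the-loop safeguard against over-reliance can be made concrete as a review gate
in which AI-generated drafts never enter the backlog without an explicit human decision. A minimal
sketch, with hypothetical field names:</p>
        <preformat>
```python
# Sketch of a human-in-the-loop gate: AI-generated drafts carry provenance
# metadata and require explicit approval. Field names are hypothetical.
from dataclasses import dataclass

@dataclass
class DraftItem:
    text: str
    source: str           # "llm" or "human", recorded for provenance
    approved: bool = False

class ReviewQueue:
    def __init__(self):
        self.items = []

    def submit(self, item):
        self.items.append(item)

    def approve(self, index, reviewer):
        """Only an explicit human decision promotes an item to the backlog."""
        self.items[index].approved = True

    def backlog(self):
        return [i.text for i in self.items if i.approved]

queue = ReviewQueue()
queue.submit(DraftItem("As a user, I can reset my password.", source="llm"))
print(queue.backlog())   # empty until a reviewer approves
queue.approve(0, reviewer="product-owner")
print(queue.backlog())
```
        </preformat>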
        <p>To harness the benefits of LLMs without undermining Agile principles, organizations must
combine AI assistance with robust governance – including human-in-the-loop review, data
handling policies, and continuous monitoring of output quality. Large Language Models can be
classified by their scope, accessibility, and deployment approach, each offering distinct trade-offs
between flexibility, control, and resource requirements. Their capabilities extend well beyond
simple text generation – supporting classification, refinement, traceability, validation, and tool
integration – making them highly relevant to Agile requirements management.</p>
        <p>At the same time, effective adoption demands awareness of potential risks: inaccuracies, bias,
data security concerns, over-reliance on automation, and operational costs. These challenges
highlight the importance of tailored integration strategies that account for the unique workflows of
different Agile methodologies. The following section builds on these insights by exploring how
LLMs can be systematically embedded into Scrum, Kanban, XP, SAFe, and Lean pipelines, including
practical examples and visual process models.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Methods and Potential Integration Paths of LLMs into Agile Methodologies</title>
      <p>The application of LLMs in Agile environments should be adapted to the specific workflow and
artifacts of each methodology. This section proposes practical integration strategies for five widely
used frameworks – Scrum, Kanban, Extreme Programming (XP), SAFe, and Lean – and illustrates
their use through diagrams and a detailed case study for Scrum.
</p>
      <sec id="sec-5-1">
        <title>LLM-Enhanced Scrum Pipeline</title>
        <p>Stage 7.1 – Sprint review</p>
        <p>Stage 7.2 (LLM) - LLM can summarize stakeholder feedback, map it to new/updated stories,
note terminology drift, and produce a short change log with traceability suggestions back to
epics/tests.</p>
        <p>Stage 7.3 – Validate and update requirements based on Product Owner input and using LLM
summary. Continue with stage 3.</p>
        <p>Figure 6 presents a visualization of the modified Scrum pipeline with integrated LLM stages.</p>
        <p>LLMs can improve Scrum requirements management by clarifying stakeholder inputs,
converting them into well-structured user stories with acceptance criteria, and assisting in backlog
refinement through dependency detection, risk highlighting, and story adjustments. They support
sprint planning with prioritization hints and capacity checks, aid development by suggesting
acceptance tests and edge cases, and post-sprint by analyzing feedback to update the backlog. This
reduces ambiguity, speeds refinement, and strengthens traceability while keeping human oversight
central.
</p>
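        <p>One way to implement the refinement step described above is to wrap raw stakeholder input in
a prompt that requests a structured user story with acceptance criteria. The prompt wording below is
an illustrative assumption; no particular provider API is implied.</p>
        <preformat>
```python
# Sketch of a refinement prompt builder: raw stakeholder input is wrapped
# in instructions asking for a structured user story. The wording is an
# illustrative assumption, not a prescribed template.
def build_story_prompt(raw_input):
    return (
        "Rewrite the following stakeholder input as a user story in the form "
        "'As a [role], I want [goal], so that [benefit]', followed by three "
        "testable acceptance criteria. Mark anything ambiguous as [CLARIFY].\n\n"
        "Input: " + raw_input
    )

prompt = build_story_prompt("Customers keep asking for a faster checkout.")
print(prompt)  # this text would be sent to the team's chosen LLM
```
        </preformat>
        <p>Keeping the prompt as a reviewed, version-controlled template helps the team apply the
human oversight emphasized throughout this article.</p>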
      </sec>
      <sec id="sec-5-2">
        <title>LLM-enhanced Kanban pipeline</title>
        <p>While Kanban excels at transparency and adaptability, its minimal documentation approach can
lead to vague or incomplete cards if the intake process is not well structured. Large Language
Models can improve Kanban’s requirement handling by transforming unstructured stakeholder
inputs into well-formed cards, enriching them with acceptance criteria, and maintaining clarity as
work progresses. They can also assist in prioritization, detect bottlenecks, and generate meaningful
release summaries without breaking Kanban’s lightweight workflow. Figure 7 displays the
modified Kanban pipeline with integrated LLM stages.</p>
        <sec id="sec-5-2-1">
          <title>LLM-Enhanced Kanban pipeline</title>
          <p>Stage 1.1 – Gather raw requirements</p>
          <p>Stage 1.2 (LLM) - Before anything appears on the Kanban board, LLMs can process stakeholder
inputs – from meeting notes, emails, or chat logs – to extract the core requirement, remove noise,
and rewrite it as a concise, well-structured card candidate. They can also propose initial tags and
detect missing details, generating clarifying questions for the product owner.</p>
          <p>Stage 2.1 - Add structured requirement as Kanban card.</p>
          <p>Stage 2.2 (LLM) - At this stage, the LLM can further clean wording, group similar cards, and
ensure the descriptions follow the team’s standards and the format is consistent.</p>
          <p>Stage 3.1 - Prioritize according to business needs.</p>
          <p>Stage 3.2 (LLM) - An LLM can review deadlines, business objectives, and dependency chains to
propose a priority order. It might also highlight quick wins or warn about items with high
complexity or risk.</p>
          <p>Stage 4.1 - To Do.</p>
          <p>Stage 4.2 (LLM) - Before work begins, the LLM can suggest acceptance criteria, generate
example scenarios, and identify unclear requirements.</p>
          <p>Stage 5 - In Progress.</p>
          <p>Stage 6 - Done.</p>
          <p>Stage 7.1 - Continuous or batch release.</p>
          <p>Stage 7.2 (LLM) - When features are delivered, the LLM can summarize changes for release
notes and trace delivered items back to their original requests or epics. This supports transparency,
traceability, and compliance documentation.</p>
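          <p>The traceability described in Stage 7.2 can be supported by a simple data model in which
each card retains a link to its originating request. The card fields below are hypothetical
examples:</p>
          <preformat>
```python
# Sketch of release-time traceability: delivered cards keep a link to the
# request that produced them, so release notes preserve the trace.
# The data model is an illustrative assumption.
cards = [
    {"id": "K-101", "title": "Add CSV export", "origin": "email-2024-03-01"},
    {"id": "K-102", "title": "Fix login timeout", "origin": "support-ticket-774"},
]

def release_notes(delivered_ids):
    """Build release-note lines that keep the card-to-request trace."""
    lines = []
    for card in cards:
        if card["id"] in delivered_ids:
            lines.append(card["title"] + " (from " + card["origin"] + ")")
    return lines

for line in release_notes({"K-101"}):
    print("- " + line)
```
          </preformat>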
          <p>Integrating LLMs into Kanban supports requirement gathering, card creation, prioritization, and
continuous improvement. LLMs can extract and format stakeholder requests into structured
Kanban cards, enrich them with tags and acceptance criteria, and assist in priority ordering based
on business objectives and dependencies. During development, they help maintain clarity as work
items progress and, after release, generate summaries and trace items back to original requests.
This results in cleaner intake, faster prioritization, and improved traceability without disrupting
Kanban’s continuous flow.
</p>
        </sec>
      </sec>
      <sec id="sec-5-3">
        <title>LLM-enhanced Extreme Programming (XP) pipeline</title>
        <p>While XP’s lightweight artifacts encourage flexibility, they can be challenging to keep
synchronized with evolving feedback and tests. By integrating Large Language Models, XP teams
can accelerate story creation from customer conversations, expand acceptance tests with
comprehensive scenarios, and maintain an up-to-date repository of requirements and tests. This
helps preserve XP’s agility while reducing the risk of overlooked or outdated requirements.</p>
        <p>LLM-Enhanced XP pipeline:</p>
        <p>Stage 1.1 – Planning Game: Customer presents requirements and priorities</p>
        <p>Stage 1.2 (LLM) - LLM can normalize customer requirements, extract clear user stories following
XP format, generate story estimates based on complexity patterns, and suggest release planning
options. Create metaphors and system vocabulary to ensure shared understanding.</p>
        <p>Stage 2.1 – Design Simply: Team creates simplest design</p>
        <p>Stage 2.2 (LLM) – LLM can suggest simple design patterns, generate CRC cards automatically
from user stories, identify potential over-engineering risks, and propose refactoring opportunities
to maintain simplicity.</p>
        <p>Stage 3.1 – Write Tests First: Create unit tests</p>
        <p>Stage 3.2 (LLM) – LLM can auto-generate test cases from acceptance criteria, suggest edge cases
and boundary conditions, create test data sets, and ensure comprehensive test coverage following
TDD principles.</p>
        <p>Stage 4.1 – Code in Pairs: Implement functionality</p>
        <p>Stage 4.2 (LLM) – LLM can act as coding assistant during pair programming, suggest code
improvements, detect code smells in real-time, and provide refactoring suggestions while
maintaining collective code ownership.</p>
        <p>Stage 5.1 – Continuous Integration: Integrate code frequently</p>
        <p>Stage 5.2 (LLM) – LLM can analyze integration conflicts, suggest merge strategies, optimize
build processes, and predict integration risks based on code changes.</p>
        <p>Stage 6.1 – Listen and Adapt: Collect feedback from running software</p>
        <p>Stage 6.2 (LLM) – LLM can analyze user feedback patterns, correlate feedback with specific user
stories, suggest story adjustments, and identify emerging requirements for the next planning cycle.</p>
        <p>In Extreme Programming, LLMs can streamline the translation of customer conversations into
structured user stories, propose thorough acceptance tests including edge cases, and keep these
aligned through iterative feedback cycles. They help detect when tests or requirements become
outdated, maintain traceability, and generate clear changelogs. The result is faster feedback
incorporation, higher test coverage, and a continuously accurate repository of requirements and
tests, while preserving XP’s emphasis on close customer collaboration.
</p>
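        <p>As an illustration of XP’s test-first discipline (Stage 3), an acceptance criterion can be
encoded as unit tests before the implementation exists. The discount rule and function name below
are hypothetical examples, not drawn from the article:</p>
        <preformat>
```python
# Test-first sketch (XP Stage 3): unit tests are written from an acceptance
# criterion before coding. The criterion here is a hypothetical example:
# "orders over 100 receive a 10% discount; other orders receive none."
import unittest

def order_discount(total):
    """Implementation written only after the tests below existed."""
    return round(total * 0.10, 2) if total > 100 else 0.0

class DiscountAcceptanceTests(unittest.TestCase):
    def test_discount_applies_above_threshold(self):
        self.assertEqual(order_discount(200.0), 20.0)

    def test_no_discount_at_or_below_threshold(self):
        self.assertEqual(order_discount(100.0), 0.0)

unittest.main(argv=["ignored"], exit=False)
```
        </preformat>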
      </sec>
      <sec id="sec-5-4">
        <title>LLM-Enhanced SAFe Pipeline</title>
        <p>While SAFe offers robust planning and traceability mechanisms, its complexity can introduce
overhead in managing large volumes of requirements and dependencies.
Integrating Large Language Models into SAFe can streamline decomposition, improve consistency
in requirement descriptions, and enhance dependency management during Program Increment (PI)
planning. LLMs can also strengthen traceability across all levels of the hierarchy, ensuring that
business value is preserved from epic definition to feature delivery.</p>
        <p>Stage 1.1 – Portfolio Strategy: Define strategic themes and business objectives</p>
        <p>Stage 1.2 (LLM) - LLM can analyze market data, competitive intelligence, and business metrics
to suggest strategic themes. Generate portfolio epic proposals based on business objectives,
perform risk/benefit analysis, and map themes to value streams automatically.</p>
        <p>Stage 2.1 – Solution Planning: Break down portfolio epics</p>
        <p>Stage 2.2 (LLM) - LLM can decompose large portfolio epics into appropriately sized capabilities
and program epics. Suggest capability boundaries, identify cross-ART dependencies, generate
benefit hypotheses, and create traceability matrices between portfolio and program levels.</p>
        <p>Stage 3.1 – Program Planning: PI Planning preparation and feature decomposition</p>
        <p>Stage 3.2 (LLM) - LLM can assist in PI Planning by breaking capabilities into right-sized
features, generating feature acceptance criteria, identifying program dependencies and risks,
suggesting team capacity allocation, and creating draft PI objectives.</p>
        <p>Stage 4.1 – Team Execution: Story creation and sprint execution</p>
        <p>Stage 4.2 (LLM) - LLM can decompose features into implementable user stories following
INVEST criteria, generate acceptance criteria and test cases, suggest story splitting techniques,
provide estimation support, and identify blockers or dependencies between stories.</p>
        <p>Stage 5.1 – Continuous Delivery: Release and feedback collection</p>
        <p>Stage 5.2 (LLM) - LLM can analyze delivery metrics across ARTs, synthesize stakeholder
feedback from inspect and adapt events, suggest process improvements, track value delivery
against PI objectives, and recommend adjustments for future PI planning cycles.</p>
        <p>Figure 9 displays LLM-Enhanced SAFe Pipeline.</p>
        <p>In SAFe, LLMs can assist at every level of requirement decomposition, from epics to user stories,
ensuring alignment with business value and portfolio priorities. They help maintain consistency in
acceptance criteria, visualize dependencies during PI planning, and preserve traceability across the
entire hierarchy. The result is improved clarity, stronger cross-team coordination, and easier
compliance reporting, all while supporting SAFe’s structured, multi-level planning approach.</p>
      </sec>
      <sec id="sec-5-5">
        <title>LLM-Enhanced Lean Software Development Pipeline</title>
        <p>The minimalism that Lean Software Development provides makes this methodology highly
adaptable, but it also risks omitting important details or traceability if not carefully managed. LLMs
can enhance Lean by ensuring even lightweight requirements are clear, consistent, and traceable,
without adding unnecessary bureaucracy.</p>
        <p>Figure 10 displays the modified Lean Software Development Pipeline with integrated LLM stages.</p>
        <sec id="sec-5-5-1">
          <title>LLM-Enhanced Lean Software Development Pipeline</title>
          <p>Stage 1.1 - Define what customers truly value</p>
          <p>Stage 1.2 (LLM) - LLM can analyze customer feedback, surveys, and usage data to identify true
value drivers. Generate customer personas, prioritize value propositions, and suggest features that
eliminate non-value-adding work automatically.</p>
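          <p>Identifying value drivers can begin with simple theme counting over raw feedback before any LLM summarization; the theme keywords and feedback strings in this Python sketch are illustrative.</p>

```python
# Count theme keywords across raw feedback to rank candidate value
# drivers before LLM summarization (themes and texts are illustrative).
from collections import Counter

THEMES = {
    "speed": ["slow", "fast", "latency"],
    "pricing": ["price", "cost", "expensive"],
}

def rank_value_drivers(feedback):
    counts = Counter()
    for text in feedback:
        lowered = text.lower()
        for theme, words in THEMES.items():
            counts[theme] += sum(lowered.count(w) for w in words)
    return counts.most_common()  # most frequent theme first

drivers = rank_value_drivers([
    "Checkout is slow and the latency is painful",
    "Too expensive for what it does",
])
```

          <p>The ranked themes then serve as grounded input for persona and value-proposition generation, keeping the model's output anchored to observed feedback.</p>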
          <p>Stage 2.1 - Visualize workflow from concept to delivery</p>
          <p>Stage 2.2 (LLM) - LLM can auto-generate value stream maps from process documentation,
identify waste patterns across similar workflows, suggest bottleneck removal strategies, and predict
where delays are likely to occur.</p>
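          <p>The bottleneck analysis above rests on two standard Lean numbers: flow efficiency (value-adding time over total lead time) and the stage with the longest wait. The stage timings in this sketch are hypothetical.</p>

```python
# Flow efficiency = value-adding (work) time / total lead time; the
# stage with the most waiting is the first bottleneck candidate.
def analyze_stream(stages):
    """stages: list of (name, work_hours, wait_hours) tuples."""
    total = sum(work + wait for _, work, wait in stages)
    efficiency = sum(work for _, work, _ in stages) / total
    bottleneck = max(stages, key=lambda s: s[2])[0]
    return efficiency, bottleneck

eff, bottleneck = analyze_stream([
    ("design", 4, 8), ("build", 16, 4), ("review", 2, 30), ("deploy", 1, 3),
])
```

          <p>With these figures the review stage dominates waiting time, so waste-removal effort would start there rather than at the stages with the most work.</p>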
          <p>Stage 3.1 - Establish continuous workflow</p>
          <p>Stage 3.2 (LLM) - LLM can optimize process sequences, suggest workflow automation
opportunities, identify resource allocation improvements, monitor flow metrics in real-time, and
recommend process standardization approaches.</p>
          <p>Stage 4.1 - Implement demand-driven delivery</p>
          <p>Stage 4.2 (LLM) - LLM can analyze demand patterns to predict customer needs, optimize pull
system triggers, suggest capacity adjustments based on demand forecasting, and automate work
initiation based on customer signals.</p>
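          <p>A pull trigger of the kind described can be sketched as a reorder-point check: new work starts only when the ready queue can no longer cover forecast demand over the replenishment horizon. The function name and thresholds here are illustrative.</p>

```python
# Reorder-point pull trigger: start new work only when the ready queue
# can no longer cover forecast demand over the replenishment horizon.
def should_pull(ready_items, daily_demand_forecast, replenish_days=3):
    reorder_point = daily_demand_forecast * replenish_days
    return ready_items < reorder_point
```

          <p>For example, with four ready items and a forecast of two items per day over a three-day horizon, should_pull(4, 2) signals replenishment, while a queue of ten does not.</p>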
          <p>Stage 5.1 - Continuously improving processes</p>
          <p>Stage 5.2 (LLM) - LLM can analyze performance metrics to identify improvement opportunities,
suggest kaizen initiatives, synthesize feedback from multiple sources, track improvement impact,
and recommend next optimization cycles.</p>
          <p>This approach shows how AI enhances each Lean principle while maintaining the focus on
waste elimination and continuous flow that defines Lean methodology.</p>
          <p>LLMs can summarize user reactions, identify common improvement themes, and propose
next-step requirements. This helps ensure the next iteration is informed by clear, structured insights
instead of raw, unfiltered feedback.</p>
          <p>This chapter examined how Large Language Models can be embedded into the requirements
management processes of five Agile methodologies: Scrum, Kanban, Extreme Programming (XP),
the Scaled Agile Framework (SAFe), and Lean Software Development. For each methodology, we
mapped LLM capabilities directly to its native workflow stages, introduced specific enhancement
opportunities, and illustrated them with annotated process diagrams.</p>
          <p>Across all frameworks, LLMs were shown to add value at key points such as requirement
intake, backlog or board preparation, prioritization, acceptance criteria generation, and
post-delivery feedback analysis. The depth of integration varied according to the methodology: Scrum
and SAFe benefit from structured decomposition and traceability support; Kanban gains from
improved card clarity and prioritization; XP from enriched acceptance testing and synchronization
with evolving feedback; and Lean from structured yet lightweight documentation that preserves
agility while reducing waste.</p>
          <p>A consistent pattern emerged – LLMs can reduce ambiguity, accelerate refinement, and
strengthen traceability without disrupting the core principles of each methodology. However, the
chapter also reinforced that these benefits depend on maintaining human oversight, tailoring
integration to the workflow, and ensuring alignment with organizational priorities and compliance
requirements.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>5. Conclusion</title>
      <p>The integration of Large Language Models into Agile requirements management offers a
promising avenue for enhancing efficiency, consistency, and traceability across diverse
development contexts. By aligning LLM capabilities with the distinct workflows of Scrum, Kanban,
Extreme Programming, SAFe, and Lean Software Development, this study has shown that AI can
effectively support both structured and lightweight Agile approaches.</p>
      <p>LLMs excel in transforming unstructured stakeholder input into well-formed requirements,
enriching backlog items or work cards with acceptance criteria, identifying dependencies, and
maintaining traceability across the development lifecycle. They also facilitate faster refinement
cycles, more informed prioritization, and clearer post-delivery feedback analysis. While the level
and style of integration vary by methodology, the underlying benefits are consistent: reduced
ambiguity, improved requirement quality, and more efficient collaboration between business and
technical stakeholders.</p>
      <p>However, the deployment of LLMs in requirements engineering is not without risks. Challenges
such as hallucinations, bias, over-reliance on automation, and data privacy concerns underscore the
need for robust governance and human-in-the-loop oversight. Moreover, integration strategies
must be tailored to each methodology’s principles to avoid undermining the very agility they aim
to support.</p>
      <p>Future research should explore domain-specific LLM fine-tuning, evaluate real-world
productivity impacts over extended projects, and investigate hybrid human–AI collaboration
models that optimize both speed and accuracy. As technology matures, LLM-assisted requirements
management has the potential to become an integral part of Agile practice, bridging the gap
between human creativity and machine efficiency.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <sec id="sec-7-1">
        <title>The author(s) have not employed any Generative AI tools.</title>
        <p>[18] Semchyshyn, V., &amp; Mykhalyk, D. (2023). Automated processing and analysis of medical texts.</p>
        <p>ITTAP, 300-305.
[19] Lypak, O.H., Lytvyn, V., Lozynska, O., Rzheuskyi, A., Dosyn, D. (2019). Formation of Efficient
Pipeline Operation Procedures Based on Ontological Approach. Advances in Intelligent
Systems and Computing, 871, 571–581.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Kharchenko</surname>
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yatsyshyn</surname>
            <given-names>V</given-names>
          </string-name>
          .
          <article-title>Rozrobka ta keruvannya vymohamy do prohramnoho zabezpechennya na osnovi modeli yakosti</article-title>
          .
          <source>Visnyk TDTU</source>
          .
          <year>2009</year>
          . Tom
          <volume>14</volume>
          . №1. S.
          <fpage>201</fpage>
          -
          <lpage>207</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Pastukh</surname>
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yatsyshyn</surname>
            <given-names>V</given-names>
          </string-name>
          .
          <article-title>Development of software for neuromarketing based on artificial intelligence and data science using high-performance computing and parallel programming technologies</article-title>
          .
          <source>Scientific Journal of TNTU. Tern.: TNTU</source>
          ,
          <year>2024</year>
          . Vol
          <volume>113</volume>
          . No 1. P.
          <fpage>143</fpage>
          -
          <lpage>149</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Yatsyshyn</surname>
            <given-names>V</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kharchenko</surname>
            <given-names>А</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Galay</surname>
            <given-names>І</given-names>
          </string-name>
          .
          <article-title>The Method of Quality Management Software</article-title>
          .
          <source>Proceedings of the VIIth International Conference "Perspective technologies and methods in MEMS design"</source>
          , 11-14 May 2011, Polyana, Ukraine: Publishing House Vezha&amp;Co.,
          <year>2011</year>
          , p.
          <fpage>228</fpage>
          -
          <lpage>230</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <source>ISO/IEC 14598: Information Technology - Software product evaluation. Parts 1 to 6</source>
          , International Organization for Standardization, Geneva,
          <year>1999</year>
          -
          2001
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Schwaber</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Sutherland</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>The Scrum Guide. The Definitive Guide to Scrum: The Rules of the Game</article-title>
          . Scrum.org.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Rubin</surname>
            ,
            <given-names>K. S.</given-names>
          </string-name>
          (
          <year>2012</year>
          ).
          <article-title>Essential Scrum: A Practical Guide to the Most Popular Agile Process</article-title>
          .
          Addison-Wesley Professional
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>D. J.</given-names>
          </string-name>
          (
          <year>2010</year>
          ).
          <article-title>Kanban: Successful Evolutionary Change for Your Technology Business</article-title>
          . Blue Hole Press.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Ahmad</surname>
            ,
            <given-names>M. O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Markkula</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Oivo</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2013</year>
          ).
          <article-title>Kanban in software development: A systematic literature review</article-title>
          .
          <source>39th Euromicro Conference on Software Engineering and Advanced Applications</source>
          ,
          <fpage>9</fpage>
          -
          <lpage>16</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Wells</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>2013</year>
          ).
          <article-title>Extreme Programming: A gentle introduction</article-title>
          .
          <source>ExtremeProgramming.org.</source>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Leffingwell</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>2018</year>
          ).
          <article-title>SAFe 4.5 Reference Guide: Scaled Agile Framework for Lean Enterprises</article-title>
          .
          Addison-Wesley Professional
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Knaster</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Leffingwell</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>SAFe 4.0 Distilled: Applying the Scaled Agile Framework for Lean Software and Systems Engineering</article-title>
          . Addison-Wesley Professional.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Poppendieck</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Poppendieck</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          (
          <year>2003</year>
          ).
          <source>Lean Software Development: An Agile Toolkit. Addison-Wesley Professional.</source>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Brown</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mann</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ryder</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          , et al. (
          <year>2020</year>
          ).
          <article-title>Language models are few-shot learners</article-title>
          .
          <source>Advances in Neural Information Processing Systems</source>
          ,
          <volume>33</volume>
          ,
          <fpage>1877</fpage>
          -
          <lpage>1901</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Vaswani</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shazeer</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Parmar</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          , et al. (
          <year>2017</year>
          ).
          <article-title>Attention is all you need</article-title>
          .
          <source>Advances in Neural Information Processing Systems</source>
          ,
          <volume>30</volume>
          ,
          <fpage>5998</fpage>
          -
          <lpage>6008</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Ji</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Frieske</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , et al. (
          <year>2023</year>
          ).
          <article-title>Survey of hallucination in natural language generation</article-title>
          .
          <source>ACM Computing Surveys</source>
          ,
          <volume>55</volume>
          (
          <issue>12</issue>
          ),
          <fpage>1</fpage>
          -
          <lpage>38</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tworek</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jun</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          , et al. (
          <year>2021</year>
          ).
          <article-title>Evaluating large language models trained on code</article-title>
          .
          <source>arXiv preprint arXiv:2107.03374</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Churchill</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Maes</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , et al. (
          <year>2020</year>
          ).
          <article-title>From human-human collaboration to human-AI collaboration: Designing AI systems that can work with people</article-title>
          .
          <source>Proceedings of the 2020 CHI Conference Extended Abstracts</source>
          ,
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>