<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>HH4AI: A Methodological Framework for AI Human Rights Impact Assessment under the EU AI Act</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Deloitte Financial Advisory S.r.l. S.B.</institution>
          ,
          <addr-line>Milan</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Università degli Studi di Milano</institution>
          ,
          <addr-line>Milan</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This paper introduces the HH4AI Methodology, a structured approach to assessing the impact of AI systems on human rights, focusing on compliance with the EU AI Act and addressing technical, ethical and regulatory challenges. The paper highlights AI's transformative nature, driven by autonomy, data and goal-oriented design, and how the EU AI Act promotes transparency, accountability and safety. A key challenge is defining and assessing "high-risk" AI systems across industries, complicated by the lack of universally accepted standards and AI's rapid evolution. To address these challenges, the paper explores the relevance of ISO/IEC and IEEE standards, focusing on risk management, data quality, bias mitigation and governance. It proposes a Fundamental Rights Impact Assessment (FRIA) methodology, a gate-based framework designed to isolate and assess risks through phases including an AI system overview, a human rights checklist, an impact assessment and a final output phase. A filtering mechanism tailors the assessment to the system's characteristics, targeting specific areas like accountability, AI literacy, data governance and transparency. The structured approach enables systematic filtering, comprehensive risk assessment and mitigation planning, effectively prioritizing critical risks and providing clear remediation strategies. This promotes better alignment with human rights principles and enhances regulatory compliance.</p>
      </abstract>
      <kwd-group>
        <kwd>Artificial Intelligence</kwd>
        <kwd>Fundamental Rights</kwd>
        <kwd>Impact Assessment</kwd>
        <kwd>EU AI Act</kwd>
        <kwd>AI Governance</kwd>
        <kwd>AI Ethics</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Artificial Intelligence (AI) encompasses technologies performing tasks such as reasoning, learning,
decision-making and perception. The EU AI Act, in Article 3(1), defines AI systems as
technologies operating autonomously to process inputs and generate outputs that impact various
environments. This definition emphasizes autonomy, data-driven learning and adaptability.</p>
      <p>The Act’s broad scope encompasses methodologies like machine learning and symbolic reasoning,
reflecting AI’s evolving nature. Assessing high-risk systems involves analyzing technological and
contextual factors, but compliance remains challenging due to AI’s rapid evolution, methodological
diversity and the absence of universally accepted standards.</p>
      <p>AI assessment complexity arises from the interdependence of models, data and external variables that
create unpredictable interactions. Continuous updates can alter system behavior without transparency,
while inconsistent frameworks and differing regulatory priorities across jurisdictions hinder alignment.
Ensuring fairness, transparency and accountability is particularly challenging for opaque models.
Effective global governance requires harmonizing EU regulations with international frameworks to
avoid trade barriers and encourage innovation.</p>
      <p>Resource constraints, especially affecting SMEs, complicate compliance efforts. A structured
methodology is essential for effective risk assessment, compliance and promoting trustworthy,
human-rights-aligned AI systems.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Legal and Regulatory Background</title>
      <sec id="sec-2-1">
        <title>2.1. The Challenges of AI Assessment</title>
        <p>
          The EU AI Act [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] establishes a comprehensive framework for regulating AI systems within the EU,
emphasizing their autonomy, data-driven nature and adaptive capabilities (Article 3). It encompasses
diverse methodologies such as planning, reasoning, knowledge representation and learning, as noted in
Recital 12. Risk management procedures (Article 9) are required for high-risk systems, involving risk
identification, assessment and mitigation, while low-risk systems may implement these voluntarily.
Data governance and reporting requirements emphasize GDPR compliance (Article 10), cybersecurity
(Article 15) and data quality (Articles 10 and 15) [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. Systems must include quality control mechanisms
(Article 17), maintain technical documentation (Article 11), log activities (Article 12) and high-risk
systems must be registered in a public EU database (Articles 49 and 71) to ensure transparency and accountability.
Additional provisions require transparency and human oversight (Articles 13 and 14).
        </p>
        <p>
          The Act outlines conformity assessment processes, distinguishing between internal conformity
assessments (Articles 16 and 43) and independent evaluations for biometric systems (Article 43). Compliance
with harmonized standards published by the European Commission presumes alignment with the Act
(Article 40) [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ].
        </p>
        <p>
          Compliance requirements include procedural frameworks for risk management, documentation,
control and conformity assessment, as well as technical adherence to harmonized standards, codes of
conduct and best practices. While harmonized standards, codes of practice [
          <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
          ] and codes of conduct
[
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] are crucial for structured risk management, data governance and transparency, their expected release
in 2-3 years leaves organizations without definitive benchmarks for compliance.
        </p>
        <p>The lack of harmonized standards creates ambiguity in interpreting requirements, requiring
organizations to rely on existing frameworks such as ISO/IEC 23894, which aligns with the Act’s objectives.
Industry guidelines and research institutions also offer practical compliance references. However,
organizations must remain adaptable, ensuring that current strategies can align with forthcoming
standards and codes of practice. Cross-industry collaboration is essential to share insights and prepare
for standardized frameworks.</p>
        <p>Implementation challenges persist due to the absence of a universally accepted reference framework,
making compliance efforts inconsistent and context-dependent. Assessments vary based on application
rather than technology, complicating replication and consistency. Additionally, evolving standards
driven by technological advancements create a shifting compliance target and the appropriate detail level
for assessments remains unclear, especially when balancing self-assessment with empirical validation.</p>
        <p>These challenges highlight the need for a structured, flexible approach to AI risk management that
aligns with evolving standards and best practices while fostering transparency, accountability and
compliance.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Human Rights and Ethical Considerations</title>
        <p>AI systems impact human rights across national, European and international levels, raising ethical
and legal concerns. Achieving a balance between technological innovation and fundamental rights
protection requires navigating a multi-level legal framework involving constitutions, judicial rulings,
rights charters and other regulatory sources.</p>
        <p>
          The EU AI Act aims to establish a uniform framework prioritizing human-centric AI aligned with
fundamental rights as outlined in the Charter of Fundamental Rights of the European Union [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. It seeks
to promote trustworthy AI that safeguards health, safety, democracy, rule of law and environmental
protection while fostering innovation.
        </p>
        <p>From a human rights perspective, key concerns include equality, privacy, transparency and
environmental protection. Biases introduced during AI training and testing can perpetuate discrimination,
reflecting societal inequities. Ensuring fairness requires eliminating biases at the design stage. Privacy
concerns arise from AI systems processing personal or biometric data, enabling extensive surveillance
that threatens personal safety and state security. Transparency is essential for fairness, bias detection
and privacy protection, requiring users to understand AI processes, data sources and decision-making
logic. Additionally, while AI can enhance sustainability efforts, its energy consumption can adversely
impact the environment, conflicting with sustainable development principles.</p>
        <p>Ethical considerations intersect with human rights through transparency, accountability and
continuous monitoring of AI systems to prevent inequalities. Establishing a clear regulatory framework
that addresses liability for harm caused by AI systems while promoting human-centered AI governance
remains crucial. Balancing safety, innovation and human rights protection requires prioritizing
transparency, accountability and education to ensure AI systems enhance rather than undermine fundamental
rights.</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. International Frameworks</title>
        <p>
          The European Union’s AI Act represents a stringent regulatory model categorizing AI systems by
risk level, with strict obligations on high-risk applications and prohibitions on unacceptable ones.
Its extraterritorial reach ensures compliance with standards of transparency, human oversight and
accountability for systems impacting the EU market [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ].
        </p>
        <p>In contrast, international soft-law frameworks like the OECD AI Principles, UNESCO’s
Recommendation on the Ethics of AI and the Council of Europe’s Framework Convention on AI emphasize voluntary
principles of fairness, accountability and responsible governance. While influential in shaping global AI
policy, these frameworks lack direct enforcement mechanisms.</p>
        <p>
          The United States follows a decentralized, sector-specific approach, lacking a comprehensive federal
AI law. Instead, it relies on existing statutes, agency guidance and state-level regulations. Notably, the
National Institute of Standards and Technology (NIST) has issued a non-binding AI Risk Management
Framework, promoting voluntary risk assessment principles for AI systems [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. This fragmented
landscape leads to inconsistencies and debates about the need for a cohesive federal strategy.
        </p>
        <p>The divergence between the EU’s legally binding approach and the U.S.’s market-driven,
self-regulatory model reflects broader tensions in global AI governance. While international bodies push for
regulatory alignment through high-level principles, differing enforcement strategies and legal traditions
hinder cross-border interoperability.</p>
        <p>The EU AI Act’s influence is evident in regulatory discussions in Canada, Japan and Brazil, which
are exploring risk-based models. However, global harmonization remains elusive due to differences in
enforcement mechanisms and legal frameworks. As AI technologies advance, the interplay between
binding regulations, voluntary principles and sector-specific guidelines will shape future governance,
emphasizing the need for continued international cooperation to address AI’s risks and benefits
effectively.</p>
        <p>This analysis highlights the fragmented nature of current AI governance and underscores the need
for a comprehensive, interdisciplinary approach to AI impact assessment centered on fundamental
human rights.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Standards and Guidelines</title>
      <sec id="sec-3-1">
        <title>3.1. Standards for AI Assessment</title>
        <p>
          The assessment of AI systems relies on established standards and frameworks providing guidance on
risk management, transparency and accountability. Key standards include ISO/IEC [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], IEEE [
          <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
          ] and
frameworks developed by the National Institute of Standards and Technology (NIST) [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ].
        </p>
        <p>
          ISO/IEC 23894 [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] addresses risk management, aligning closely with the AI Act’s regulatory
requirements by providing structured methods for risk identification, assessment and mitigation. ISO/IEC
25012 [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ] focuses on data quality, emphasizing accuracy, completeness and consistency, essential for
high-quality datasets used in AI training and operation. ISO/IEC TR 24027 [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] targets bias identification
and mitigation to ensure fairness. Governance frameworks such as ISO/IEC 38507 [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ] provide guidance
on integrating AI into organizational structures to enhance accountability and oversight.
        </p>
        <p>
          Further complementing these standards, ISO/IEC 42001 [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ] and ISO/IEC 42005 [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ] offer frameworks
for managing AI systems throughout their lifecycle. ISO/IEC 42001 defines requirements for AI
management systems, supporting continual monitoring, evaluation and ethical alignment. ISO/IEC 42005,
still under development, aims to standardize AI impact assessments across social, environmental and
economic dimensions, with guidance for integrating these assessments into risk management processes
and maintaining transparency and accountability.
        </p>
        <p>
          NIST frameworks also play a critical role. The AI Risk Management Framework (AI RMF) [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] serves
as a flexible, voluntary guide for managing AI-related risks through a comprehensive and iterative
process. The AI 600-1 standard [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] focuses specifically on the risks associated with generative AI
technologies, including harmful content creation, bias and misuse of generated data. Additionally, the
NIST Privacy Framework offers insights into managing privacy risks, a critical concern for AI systems
handling sensitive or personal data.
        </p>
        <p>
          IEEE standards, part of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems,
emphasize ethical AI development. IEEE 7002-2022 [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] provides guidelines for accountability,
transparency, fairness and safety in AI systems, promoting responsible decision-making. IEEE 7010-2020
[
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] offers a framework for assessing the impact of AI systems on human well-being, particularly in
sensitive domains like healthcare and data privacy.
        </p>
        <p>
          Collectively, these standards and frameworks address key aspects of AI assessment such as risk
management, data quality, bias mitigation, governance and ethical considerations. However, they lack
specificity for assessing compliance with the AI Act [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], requiring organizations to adapt and combine
these guidelines to their unique use cases. Consequently, a tailored approach integrating multiple
standards is essential to bridge the gaps and ensure comprehensive compliance with technical and
regulatory requirements.
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Guidelines from Research and Industry</title>
        <p>
          Recent advancements in AI have led various institutions and stakeholders to establish frameworks for
AI assessment and evaluation. The Alan Turing Institute proposes a robust framework prioritizing
transparency, accountability and robustness, advocating for rigorous testing against adversarial scenarios
and utilizing explainability tools such as SHAP (SHapley Additive exPlanations) and LIME (Local
Interpretable Model-agnostic Explanations) for interpretability [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ].
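        <p>To make these tools concrete, the fragment below shows a typical use of the shap library on a scikit-learn classifier. It is a minimal illustrative sketch: the dataset and model are placeholders chosen for the example, not artifacts of the cited framework.</p>
        <preformat>
# Illustrative post-hoc interpretability with SHAP (assumes the
# `shap` and `scikit-learn` packages are installed).
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Model-agnostic explainer; X doubles as the background dataset.
explainer = shap.Explainer(model.predict, X)
shap_values = explainer(X.iloc[:100])

# Global view of which features drive the model's predictions.
shap.plots.bar(shap_values)
        </preformat>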
        </p>
        <p>
          The European Union Agency for Cybersecurity (ENISA) focuses on security and resilience,
recommending continuous monitoring, risk assessment and standardized metrics to assess AI system
performance under various conditions [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ].
        </p>
        <p>
          The Partnership on AI emphasizes fairness and bias mitigation, advocating for fairness-aware
algorithms and diverse datasets to minimize discriminatory outcomes [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ].
        </p>
        <p>These guidelines highlight essential aspects of AI assessment, including explainability, robustness,
security and fairness, providing valuable insights that complement formal standards and regulatory
frameworks.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Overview of Tools</title>
        <p>As AI systems increasingly permeate critical domains, the potential for human rights violations arising
from misuse or ethical misalignment grows. To address these challenges, various organizations have
developed tools aimed at assessing and mitigating AI-related risks. Below, we compare some of the
most prominent tools, highlighting their strengths and limitations.</p>
        <p>Microsoft’s Responsible AI Impact Assessment (RAIIA) provides a structured framework for ensuring
responsible AI development and deployment. It offers templates and guidance for evaluating AI systems
against principles such as fairness, reliability, transparency, privacy and inclusiveness. However, its
reliance on qualitative assessments can limit consistency.</p>
        <p>Google’s AI Toolkit encompasses a suite of tools for responsible AI development, including Explainable
AI (XAI), fairness indicators and model cards. While comprehensive, its effectiveness depends heavily
on the technical expertise of users and is not designed for regulatory compliance.</p>
        <p>IBM’s AI Fairness 360 (AIF360) is an open-source toolkit that offers metrics, algorithms and visualization
tools to detect and mitigate bias in machine learning models. Its strength lies in its transparency
and accessibility; however, its focus is mainly on fairness, lacking broader governance and ethical
considerations.</p>
        <p>OpenAI’s Guidelines emphasize responsible use of large language models and generative AI systems.
While offering valuable best practices, these guidelines remain high-level and are not directly applicable
to compliance with specific regulatory frameworks.</p>
        <p>Ethical AI Toolkit by the Montreal AI Ethics Institute focuses on societal impact, providing worksheets
for ethical impact assessments. While promoting a holistic approach, it lacks technical depth and
automation, making it less practical for large-scale AI deployment.</p>
        <p>Hugging Face’s Model Evaluation Tools offer insights into performance and fairness for pre-trained
NLP models. Although effective in enhancing explainability, their applicability is limited to specific
model types and lacks comprehensive governance features.</p>
        <p>These tools highlight diverse approaches to AI assessment, from fairness-focused toolkits to broader
ethical frameworks. However, many lack integration with formal regulatory requirements, underscoring
the need for more comprehensive and adaptable assessment methodologies.</p>
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Comparison and Insights</title>
        <p>Comparing existing standards, guidelines and tools reveals varying strengths and limitations in AI
impact assessment. While standards like ISO/IEC 23894 and 42001 provide structured risk management
frameworks, they often lack concrete metrics for ethical assessment and broader societal impacts.
ISO/IEC 25012 focuses on data quality but is not tailored for comprehensive AI assessment. NIST
frameworks (AI RMF, AI 600-1) offer robust technical guidance but may be resource-intensive for
smaller organizations and insufficient for addressing ethical and human rights concerns. IEEE standards,
particularly IEEE 7002-2022 and 7010-2020, emphasize ethics and societal impacts but remain high-level
and lack practical implementation steps.</p>
        <p>Key insights from Table 1:
• Transparency and Accountability: standards like ISO/IEC 42001 and Microsoft’s RAIIA
emphasize structured governance and accountability mechanisms.
• Technical Guidance: NIST frameworks provide comprehensive guidance for managing
AI-related risks, though with a strong focus on technical implementation.
• Ethics and Human-Centricity: IEEE 7002-2022 and 7010-2020 highlight ethical considerations
but lack practical guidelines for real-world deployment.
• Bias Mitigation: IBM’s AI Fairness 360 offers concrete tools for addressing fairness, but with
limited scope for broader AI governance.
• Scalability Issues: tools like Microsoft’s RAIIA require substantial resources and expertise,
making them difficult to implement for smaller organizations.</p>
        <p>To address the limitations identified in Table 1, it is essential to integrate multiple frameworks
and tools, leveraging their strengths while mitigating their weaknesses. Future efforts should focus
on enhancing interdisciplinary collaboration, improving accessibility and developing comprehensive
assessment methodologies that align with both ethical and regulatory standards.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Proposed Methodology for AI Assessment</title>
      <sec id="sec-4-1">
        <title>4.1. Overview of the Methodology</title>
        <p>This chapter introduces the Fundamental Rights Impact Assessment (FRIA) methodology by HH4AI,
specifically developed to assess and mitigate the potential impacts of AI systems on fundamental rights. The
current methodology is designed for organizations seeking compliance with the AI Act while ensuring
that their systems adhere to fundamental human rights principles. By employing a gate-based structure
with three main phases plus a concluding output stage (see Figure 1), the methodology streamlines the
analysis process and ensures that only the most relevant impacts progress to detailed evaluation.</p>
        <p>At the core of the methodology is a structured assessment framework based on well-defined impact
domains and guiding criteria. The impact domains cover key dimensions of AI-related impacts,
including Data Governance, Human Oversight and Control, and Fairness and Non-Discrimination. These
guiding criteria serve as reference points for assessing AI systems’ alignment with fundamental rights
and regulatory requirements.</p>
        <p>To ensure relevance and efficiency, the methodology employs a filtering mechanism driven by key
factors, referred to as "drivers", such as the type of system, its life cycle stage and its domain of application.
This structured filtering ensures that only applicable impacts and evaluation criteria are considered,
avoiding unnecessary assessments. The Human Rights Checklist in Phase 1 serves as the primary tool
for this evaluation, presenting targeted questions that assess whether an AI system’s functionalities
pose impacts warranting deeper analysis. Based on the results of this phase, the methodology identifies
which impacts need further examination through defined impact scenarios.</p>
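        <p>To illustrate, this filtering mechanism can be sketched in a few lines of Python. The sketch below is non-normative and rests on our own assumptions: the driver names, the Question structure and the matching rule are illustrative, not part of the methodology’s specification.</p>
        <preformat>
from dataclasses import dataclass

@dataclass(frozen=True)
class Drivers:
    """Key factors steering the filtering mechanism (illustrative)."""
    system_type: str         # e.g. "recommender", "biometric identification"
    life_cycle_stage: str    # e.g. "development", "deployment"
    domain: str              # e.g. "healthcare", "finance"

@dataclass(frozen=True)
class Question:
    """A Human Rights Checklist item tagged with applicability filters."""
    text: str
    guiding_criterion: str            # e.g. "Data Governance"
    stages: frozenset = frozenset()   # empty set means "always applies"
    domains: frozenset = frozenset()

def filter_checklist(questions, drivers):
    """Keep only the checklist items applicable to this system's drivers."""
    return [
        q for q in questions
        if (not q.stages or drivers.life_cycle_stage in q.stages)
        and (not q.domains or drivers.domain in q.domains)
    ]
        </preformat>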
        <p>Impact scenarios play a crucial role in the methodology, illustrating concrete situations where an AI
system could compromise fundamental rights. Each scenario undergoes a structured self-evaluation,
assessing its relevance, severity and the effectiveness of existing impact mitigation measures. This
evaluation considers multiple dimensions, including the impact on individuals and society, the difficulty
of reversing potential harm and the duration of the consequences. Scenarios classified as relevant
trigger specific remediation actions to mitigate impacts.</p>
        <p>Building on this structured foundation, the methodology advances through three progressive phases,
introduced at a high level earlier, which are described in detail in Section 4.2. Upon completion of the
assessment, the methodology generates a comprehensive final output, as explained in Section 4.3.
This output consolidates the assessment findings in both graphical and tabular form, summarizing
identified impacts, the effectiveness of existing controls and recommended mitigation actions. In doing
so, it provides decision-makers with a clear, actionable overview of the AI system’s impact, thereby
facilitating effective impact management and regulatory compliance.</p>
        <p>A key differentiator of this methodology is its gate-based approach, ensuring efficiency by
progressively refining the analysis and focusing only on the most relevant impacts. This stepwise refinement
prevents unnecessary assessments, optimizes resource allocation and enhances the clarity of impact
evaluation. The methodology’s structured yet flexible design allows it to adapt to various AI applications
while maintaining a rigorous human rights framework. The benefits of this approach extend beyond
compliance; by embedding ethical considerations and proactive impact management into the AI life
cycle, it enhances transparency, accountability and trust in AI systems. These aspects, along with other
key advantages, are explored in Section 4.4, where the methodology’s innovations and benefits are
analyzed in detail.</p>
        <p>Finally, Section 4.5 presents concluding reflections on the methodology’s strengths, particularly its
structured adaptability and role in reinforcing human rights protections throughout the AI system’s
life cycle. This final discussion underscores how the methodology ensures a systematic and effective
approach to human rights impact assessment, supporting both regulatory compliance and ethical AI
governance.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Phases of the Methodology</title>
        <p>We present here a detailed explanation of each phase of the methodology, describing the key elements
that compose each phase, their interactions, the specific outputs they produce and their connection to
the subsequent phase.</p>
        <sec id="sec-4-2-1">
          <title>4.2.1. Phase 0 - AI System Overview</title>
          <p>Phase 0 establishes the foundation for the impact assessment process by gathering essential information
about the AI system. It defines the system’s purpose, identifies key stakeholders and outlines the
operational context. Additionally, it includes domain applicability questions to determine whether the
system operates in sensitive areas, such as biometric data processing or critical decision-making, which
influence the selection of checklist questions in Phase 1. Similarly, it defines the system’s life cycle
stage (e.g., development, deployment or post-deployment), ensuring that the subsequent assessment is
tailored to its current state.</p>
          <p>Another crucial aspect of this phase is establishing a dedicated process for maintaining and updating
the AI System Overview, including clear accountability for the individuals responsible. This ensures
that the assessment remains accurate and reflects any changes to the system over time. By setting
out these responsibilities and procedures from the outset, the output of Phase 0 provides a clear and
well-defined scope for the assessment, laying the groundwork for identifying potential impacts in the
following phase.</p>
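          <p>A minimal sketch of the Phase 0 record, under our own naming assumptions, could look as follows; none of the field names are prescribed by the methodology.</p>
          <preformat>
from dataclasses import dataclass, field

@dataclass
class AISystemOverview:
    """Phase 0 output: scope-setting record for the assessment (illustrative)."""
    purpose: str
    stakeholders: list       # key stakeholders and their roles
    operational_context: str
    life_cycle_stage: str    # "development" | "deployment" | "post-deployment"
    sensitive_domains: list = field(default_factory=list)  # e.g. ["biometric data processing"]
    overview_owner: str = "" # person accountable for keeping this record current
          </preformat>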
          <p>As shown in Figure 2, the transition from Phase 0 to Phase 1 follows a structured filtering process.
This ensures that only the most relevant requirements proceed for further evaluation, optimizing the
efficiency of the assessment.</p>
        </sec>
        <sec id="sec-4-2-2">
          <title>4.2.2. Phase 1 - Human Rights Checklist</title>
          <p>Phase 1 systematically identifies potential human rights impacts through a structured Human Rights
Checklist. This checklist is designed to assess the AI system’s impact by linking each evaluation question
to guiding criteria, which are directly mapped to fundamental rights.</p>
          <p>To ensure contextual relevance, the checklist questions are dynamically filtered based on two key
factors: the system’s life cycle stage and its domain applicability. This tailored approach ensures that
only questions relevant to the specific AI system under evaluation are considered. Each checklist item
is also assigned to specific internal stakeholders, ensuring that subject-matter experts evaluate the areas
where they have direct oversight and expertise.</p>
          <p>The relevance of each criterion is determined through the responses to the checklist. If a criterion
receives a high relevance score, indicating a potentially significant impact on fundamental rights in the
context of the specific AI system under evaluation, then the assessment proceeds to Phase 2, where
a more detailed analysis is conducted. This transition from Phase 1 to Phase 2 follows a structured
filtering process, as illustrated in Figure 3, ensuring that only the most critical impacts advance to
deeper evaluation while optimizing efficiency.</p>
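          <p>One possible encoding of this gate, reusing the Question structure sketched in Section 4.1, is shown below. The answer weights and the threshold are our own assumptions; the methodology leaves the concrete scoring scale open.</p>
          <preformat>
from collections import defaultdict

# Hypothetical relevance weights for checklist answers.
ANSWER_WEIGHTS = {"yes": 2, "partially": 1, "no": 0}

def criteria_for_phase2(answers, threshold=2):
    """Aggregate answers per guiding criterion; criteria whose total
    relevance reaches the threshold pass the gate to Phase 2.
    `answers` maps each filtered Question to its answer string."""
    totals = defaultdict(int)
    for question, answer in answers.items():
        totals[question.guiding_criterion] += ANSWER_WEIGHTS[answer]
    return {criterion for criterion, score in totals.items() if score >= threshold}
          </preformat>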
        </sec>
        <sec id="sec-4-2-3">
          <title>4.2.3. Phase 2 - Impact Assessment</title>
          <p>Phase 2 involves a detailed evaluation of the impacts identified in Phase 1, focusing on multiple
impact scenarios for each guiding criterion. These scenarios are designed to assess a wide range of
potential impacts to fundamental rights, including ethical, legal and social implications. The internal
stakeholder responsible for each criterion conducts this assessment, determining whether effective
controls exist within the organization to mitigate the identified impacts. Stakeholders are required to
provide documentation or other evidence demonstrating the effectiveness of these controls, as well as
to specify the individual or department responsible for maintaining and overseeing them.</p>
          <p>The impact assessment considers multiple evaluation dimensions to ensure a comprehensive
understanding of each impact scenario. Stakeholders assess:
• The effect on individuals, analyzing the potential impact on individual rights (e.g., privacy
violations, discrimination).
• The effect on society, considering broader societal implications (e.g., increased inequality, biases
in decision-making).
• The effort required to mitigate or reverse the impact, evaluating how difficult it would be to
address the issue once it has occurred.
• The duration of the effect, estimating whether the impact is short-term, long-term or potentially
irreversible.</p>
          <p>The evaluation process is structured around a three-level self-evaluation scale, where each impact
scenario is classified as:
• Relevant: the scenario has a significant impact on fundamental rights and requires immediate
action.
• Partially Relevant: the scenario presents moderate impacts that may require intervention but
are not immediately critical.</p>
          <p>• Irrelevant: the scenario does not apply or has no meaningful impact on fundamental rights.</p>
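          <p>The evaluation dimensions and the three-level scale can be captured in a simple record, as sketched below; the field names are illustrative assumptions, not prescribed by the methodology.</p>
          <preformat>
from dataclasses import dataclass
from enum import Enum

class Relevance(Enum):
    """Three-level self-evaluation scale of Phase 2."""
    RELEVANT = "relevant"
    PARTIALLY_RELEVANT = "partially relevant"
    IRRELEVANT = "irrelevant"

@dataclass
class ScenarioAssessment:
    """One impact scenario as evaluated by the responsible stakeholder."""
    scenario: str
    guiding_criterion: str
    effect_on_individuals: str    # e.g. privacy violations, discrimination
    effect_on_society: str        # e.g. increased inequality, decision biases
    mitigation_effort: str        # how hard the impact is to reverse
    duration: str                 # "short-term" | "long-term" | "irreversible"
    controls_effective: bool      # evidence-backed controls in place?
    relevance: Relevance = Relevance.IRRELEVANT
          </preformat>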
          <p>For each scenario assessed as Relevant or Partially Relevant, a remedial action is proposed to mitigate
the identified impact. The remediation process includes:
• Action Type: the category of intervention (e.g., policy revision, additional control
implementation, training or awareness programs).
• Action Description: a detailed explanation of the corrective measure and how it will mitigate
the identified impact.</p>
          <p>• Action Owner: the responsible individual, team or department ensuring the implementation
and efectiveness of the corrective action.</p>
          <p>Once all impact scenarios have been evaluated and appropriate remedial actions suggested, the final
classification of the impact on fundamental rights is determined for each guiding criterion. If multiple
relevant impact scenarios are identified, additional mitigation strategies may be necessary to ensure
compliance and impact reduction. However, if most scenarios are classified as Irrelevant, no further
action or in-depth analysis is required for that specific criterion.</p>
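          <p>One way to encode this closing rule, reusing the Relevance scale from the sketch above, is given below. The decision thresholds are our own reading of the preceding paragraph, not values prescribed by the methodology.</p>
          <preformat>
def classify_criterion(assessments):
    """Derive a criterion-level outcome from its scenario assessments."""
    relevant = sum(1 for a in assessments if a.relevance is Relevance.RELEVANT)
    irrelevant = sum(1 for a in assessments if a.relevance is Relevance.IRRELEVANT)
    if irrelevant * 2 > len(assessments):   # most scenarios irrelevant
        return "no further action required"
    if relevant >= 2:                       # multiple relevant scenarios
        return "additional mitigation strategies required"
    return "standard remediation"
          </preformat>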
          <p>This structured, multi-dimensional approach ensures that AI-related impacts to fundamental rights
are systematically identified, assessed and mitigated, while maintaining accountability and transparency
throughout the process.</p>
          <p>As illustrated in Figure 4, the transition from Phase 2 to the Output stage ensures that only scenarios
classified as relevant and having a significant impact require corrective actions. If a scenario is deemed
relevant but without a significant impact, no further action is required. Scenarios classified as not
relevant are excluded from the final output. This structured filtering approach ensures that remediation
efforts are targeted, efficient and aligned with the identified impacts, maintaining an effective and
accountable impact assessment process.</p>
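          <p>This final filter can be expressed as a small function; the significance flag is an assumption we introduce to mirror the distinction drawn in the paragraph above.</p>
          <preformat>
def output_gate(assessment, significant_impact):
    """Decide how a Phase 2 scenario flows into the Output stage (illustrative)."""
    if assessment.relevance is Relevance.IRRELEVANT:
        return "excluded from final output"
    if significant_impact:
        return "corrective action required"
    return "reported, no further action required"
          </preformat>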
        </sec>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Final Output</title>
        <p>The final output of the Fundamental Rights Impact Assessment provides a comprehensive summary of
the assessment results across all phases. This output consists of both graphical and tabular
representations to facilitate a clear and structured interpretation of the evaluation process.</p>
        <p>The tabular overview presents a structured breakdown of the assessments and evaluations conducted
in Phase 1 and Phase 2, detailing relevance scores, stakeholder responses and identified impacts. The
graphical overview complements this by offering a visual representation of key insights, ensuring an
intuitive and easily digestible format for decision-makers.</p>
        <p>The final output is structured into two primary components:
• An overview of results, which includes both the graphical and tabular representations of the
assessment conducted in Phase 1 (requirements analysis) and Phase 2 (impact scenario evaluation).
• A remediation actions section, detailing the list of required actions, their types and the responsible
stakeholders for implementation.</p>
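        <p>A minimal sketch of how these two components could be assembled, reusing the scenario records from Section 4.2.3, is given below; the record layout is our own assumption, chosen to mirror the structure just described.</p>
        <preformat>
def build_final_output(checklist_results, assessments, actions):
    """Assemble the two components of the final output (illustrative).
    `actions` maps a scenario name to its remediation record."""
    overview = [
        {
            "criterion": a.guiding_criterion,
            "scenario": a.scenario,
            "relevance": a.relevance.value,
            "controls_effective": a.controls_effective,
        }
        for a in assessments
    ]
    remediation = [
        {
            "scenario": scenario,
            "action_type": action["type"],    # e.g. policy revision
            "description": action["description"],
            "owner": action["owner"],         # accountable stakeholder
        }
        for scenario, action in actions.items()
    ]
    return {
        "phase1_results": checklist_results,  # requirements analysis
        "phase2_overview": overview,          # impact scenario evaluation
        "remediation_actions": remediation,
    }
        </preformat>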
        <p>The final output ensures that all identified impacts and corresponding remediation actions are
documented in a structured manner. The graphical and tabular overviews provide a clear impact profile,
while the remediation section ensures accountability by assigning ownership to corrective actions. This
comprehensive output enables decision-makers to track, evaluate and implement impact mitigation
strategies effectively, ensuring that fundamental rights considerations are addressed throughout the AI
system’s life cycle.</p>
      </sec>
      <sec id="sec-4-4">
        <title>4.4. Innovation and Benefits</title>
        <p>The FRIA methodology introduces several key innovations and benefits, enhancing the effectiveness
and applicability of AI impact assessments while ensuring a structured and actionable approach to
impact mitigation.</p>
        <p>• Detailed Impact Scenario Analysis: by defining multiple scenarios for each guiding criterion,
the methodology enables a comprehensive evaluation of potential impacts. This granular approach
ensures a thorough understanding of how an AI system may impact fundamental rights and
allows for the development of precise, targeted mitigation strategies.
• Stakeholder-Driven Evaluation: the assessment process integrates the expertise of internal
stakeholders, leveraging their real-world insights into system design, deployment and governance.
This ensures that impact identification and mitigation strategies are based on practical knowledge
of existing controls and operational impacts.
• Self-Evaluation Scale: a standardized three-level scale (Relevant, Partially Relevant or Irrelevant)
quantifies the significance of each identified impact. This structured approach facilitates clear
decision-making and ensures that only substantial impacts advance to deeper analysis and
remediation.
• Human Rights Mapping: impacts and scenarios are systematically categorized based on guiding
criteria linked to fundamental rights. This structured alignment provides organizations with
a transparent, legally grounded understanding of how AI functionalities may affect individual
rights.
• Flexibility and Context-Specific Adaptation: the methodology adapts to different AI use
cases by tailoring the assessment based on the system’s domain and life cycle stage. This ensures
that organizations focus on relevant impacts without performing unnecessary evaluations.
• Proactive Impact Mitigation: beyond identifying impacts, the methodology prescribes concrete
remedial actions for scenarios deemed Relevant or Partially Relevant. These interventions, ranging
from policy revisions to technical controls and training programs, ensure that the assessment
process is solution-oriented, actively supporting organizations in enhancing compliance and
minimizing potential harm.</p>
      </sec>
      <sec id="sec-4-5">
        <title>4.5. Final Remarks</title>
        <p>The FRIA methodology provides a structured, systematic and scalable framework for assessing and
mitigating the impact of AI systems on fundamental rights. By following a gate-based approach, it
ensures that only the most relevant impacts undergo detailed evaluation, optimizing resources while
maintaining a high level of scrutiny. This structured assessment process enables organizations to
integrate ethical considerations, regulatory compliance and impact management into AI development
and deployment strategies.</p>
        <p>The methodology not only identifies and evaluates impacts but also assesses the effectiveness of
existing safeguards and establishes accountability for their continuous monitoring. The final output
offers a comprehensive overview of impact levels and required remediation actions, ensuring that
decision-makers have a clear understanding of potential impacts and the necessary steps to mitigate
them. This structured approach enhances transparency in AI governance, making impact assessment
results both accessible and actionable.</p>
        <p>Beyond regulatory compliance, the methodology fosters a proactive approach to responsible AI
development by embedding fundamental rights considerations throughout the AI system life cycle. This
allows organizations to move beyond a reactive compliance mindset toward continuous improvement
in AI ethics and governance. The structured remediation process ensures that identified impacts are
not only acknowledged but also addressed through concrete actions, reinforcing accountability and
fostering trust in AI systems.</p>
        <p>By systematically aligning AI impact assessment with human rights principles and governance
best practices, the HH4AI FRIA methodology supports organizations in achieving AI accountability,
regulatory alignment and ethical governance. It provides a robust framework for mitigating AI-related
impacts while promoting sustainable and responsible AI development, ensuring that fundamental rights
remain a priority in the design, deployment and operation of AI systems.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>The proposed gate-based framework offers a structured and scalable approach to assessing AI systems’
impacts on fundamental rights. Through its phased structure and filtering mechanism, it prioritizes
critical risks, enhancing compliance with emerging regulations while promoting transparency,
accountability and ethical governance across AI life cycles.</p>
      <p>The methodology achieves a balance between flexibility and rigor by adapting to diverse AI
applications while ensuring that accountability, literacy and data governance are systematically addressed.
Its scenario-based approach allows targeted scrutiny of high-risk functionalities, optimizing resource
allocation and providing clear remediation processes.</p>
      <p>However, challenges persist, particularly in adapting the framework to various regulatory
environments and rapidly evolving AI technologies. Effective implementation relies on organizational maturity,
access to specialized personnel and robust governance structures. Furthermore, the framework’s
applicability across sectors may require tailored adaptations to accommodate specific regulatory or ethical
requirements.</p>
      <p>Future work aims to enhance the methodology by integrating quantitative metrics within Phase 2,
particularly for evaluating fairness, reliability and transparency. Incorporating numerical indicators will
sharpen risk estimation, facilitate benchmarking across AI systems and provide a more comprehensive
basis for evidence-based remediation. Continued refinement of assessment techniques, coupled with
broader stakeholder engagement, will further improve the framework’s adaptability, rigor and relevance.</p>
      <p>Ultimately, the methodology offers a practical tool for aligning technical measures with ethical
principles and regulatory requirements. By promoting transparency, accountability and trust, it supports
responsible AI development and deployment that prioritizes fundamental rights. Its structured approach
provides a foundation for future enhancements, ensuring that AI systems remain compliant, ethical
and beneficial in diverse application domains.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>The work reported in this paper has been partly funded by the European Union - NextGenerationEU,
under the National Recovery and Resilience Plan (NRRP) Mission 4 Component 2 Investment Line 1.5:
Strengthening of research structures and creation of R&amp;D “innovation ecosystems”, set up of “territorial
leaders in R&amp;D”, within the project “MUSA - Multilayered Urban Sustainability Action” (contract n.
ECS 00000037).</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools in the preparation of this paper.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <collab>European Union</collab>
          ,
          <source>Regulation (EU) 2024/1689 of the European Parliament and of the Council</source>
          ,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <collab>European Union</collab>
          ,
          <source>Regulation (EU) 2016/679 of the European Parliament and of the Council</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>I.</given-names>
            <surname>Bartle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Vass</surname>
          </string-name>
          ,
          <article-title>Self-regulation and the regulatory state: A survey of policy and practice</article-title>
          ,
          <source>Citeseer</source>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Fichter</surname>
          </string-name>
          ,
          <article-title>Voluntary regulation: codes of practice and framework agreements, in: Comparative Employment Relations in the Global Economy</article-title>
          , Routledge,
          <year>2013</year>
          , pp.
          <fpage>414</fpage>
          -
          <lpage>430</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M. C.</given-names>
            <surname>Antonucci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Scocchi</surname>
          </string-name>
          ,
          <article-title>Codes of conduct and practical recommendations as tools for self-regulation and soft regulation in EU public affairs</article-title>
          ,
          <source>Journal of Public Affairs</source>
          <volume>18</volume>
          (
          <year>2018</year>
          )
          <article-title>e1850</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <collab>NIST</collab>
          ,
          <source>Artificial Intelligence Risk Management Framework - Technical Report NIST AI 100-1</source>
          ,
          <year>2023</year>
          . URL: https://www.nist.gov/news-events/news/2023/01/nist-releases-draft-artificial-intelligence-risk-management-framework.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>Oviedo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rodriguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Trenta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Cannas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Natale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Piattini</surname>
          </string-name>
          ,
          <article-title>ISO/IEC quality standards for AI engineering</article-title>
          ,
          <source>Computer Science Review</source>
          <volume>54</volume>
          (
          <year>2024</year>
          )
          <fpage>100681</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>D.</given-names>
            <surname>Schiff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ayesh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Musikanski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. C.</given-names>
            <surname>Havens</surname>
          </string-name>
          , IEEE 7010:
          <article-title>A new standard for assessing the well-being implications of artificial intelligence</article-title>
          ,
          <source>in: 2020 IEEE international conference on systems, man, and cybernetics</source>
          (SMC), IEEE,
          <year>2020</year>
          , pp.
          <fpage>2746</fpage>
          -
          <lpage>2753</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A. F.</given-names>
            <surname>Winfield</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Booth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. A.</given-names>
            <surname>Dennis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Egawa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Hastie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Jacobs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. I.</given-names>
            <surname>Muttram</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. I.</given-names>
            <surname>Olszewska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Rajabiyazdi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Theodorou</surname>
          </string-name>
          , et al.,
          <article-title>IEEE P7001: A proposed standard on transparency</article-title>
          ,
          <source>Frontiers in Robotics and AI</source>
          <volume>8</volume>
          (
          <year>2021</year>
          )
          <fpage>665729</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J.</given-names>
            <surname>Dunietz</surname>
          </string-name>
          , E. Tabassi,
          <string-name>
            <given-names>M.</given-names>
            <surname>Latonero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Roberts</surname>
          </string-name>
          ,
          <article-title>A plan for global engagement on AI standards</article-title>
          ,
          <year>2024</year>
          . URL: https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=958389. doi:10.6028/NIST.AI.100-5.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>L.</given-names>
            <surname>Floridi</surname>
          </string-name>
          ,
          <article-title>On the Brussels-Washington consensus about the legal definition of artificial intelligence</article-title>
          ,
          <source>Philosophy &amp; Technology</source>
          <volume>36</volume>
          (
          <year>2023</year>
          )
          <fpage>87</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>F.</given-names>
            <surname>Gualo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rodríguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Verdugo</surname>
          </string-name>
          , I. Caballero,
          <string-name>
            <given-names>M.</given-names>
            <surname>Piattini</surname>
          </string-name>
          ,
          <article-title>Data quality certification using ISO/IEC 25012: Industrial experiences</article-title>
          ,
          <source>Journal of Systems and Software</source>
          <volume>176</volume>
          (
          <year>2021</year>
          )
          <fpage>110938</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <collab>ISO/IEC</collab>
          ,
          <source>ISO/IEC TR 24027:2021 Information technology - Artificial intelligence (AI) - Bias in AI systems and AI aided decision making</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <collab>ISO/IEC</collab>
          ,
          <source>ISO/IEC 38507:2022 Information technology - Governance of IT - Governance implications of the use of artificial intelligence by organizations</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <collab>ISO/IEC</collab>
          ,
          <source>ISO/IEC 42001:2023 Information technology - Artificial intelligence - Management system</source>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <collab>ISO/IEC</collab>
          ,
          <source>ISO/IEC FDIS 42005: Information technology - Artificial intelligence - AI system impact assessment - Draft</source>
          ,
          <year>2025</year>
          . URL: https://www.iso.org/standard/44545.html.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>C.</given-names>
            <surname>Autio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Schwartz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Dunietz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Jain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Stanley</surname>
          </string-name>
          , E. Tabassi,
          <string-name>
            <given-names>P.</given-names>
            <surname>Hall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Roberts</surname>
          </string-name>
          ,
          <source>NIST Trustworthy and Responsible AI - 600-1. Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile</source>
          ,
          <year>2024</year>
          . URL: https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=958388. doi:10.6028/NIST.AI.600-1.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>J. I.</given-names>
            <surname>Olszewska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. C. S.</given-names>
            <surname>Committee</surname>
          </string-name>
          , et al.,
          <article-title>IEEE standard for data privacy process</article-title>
          ,
          <source>IEEE Standard 7002-2022</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>D.</given-names>
            <surname>Leslie</surname>
          </string-name>
          ,
          <article-title>Understanding artificial intelligence ethics and safety</article-title>
          , arXiv preprint arXiv:1906.05684 (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ntalampiras</surname>
          </string-name>
          , G. Misuraca,
          <string-name>
            <given-names>P.</given-names>
            <surname>Rossel</surname>
          </string-name>
          ,
          <article-title>Artificial intelligence and cybersecurity research</article-title>
          ,
          <source>ENISA</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>J.</given-names>
            <surname>Heer</surname>
          </string-name>
          ,
          <article-title>The partnership on ai</article-title>
          ,
          <source>AI Matters</source>
          <volume>4</volume>
          (
          <year>2018</year>
          )
          <fpage>25</fpage>
          -
          <lpage>26</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>