<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Workshops at the Third International Conference on Hybrid Human-Artificial Intelligence (HHAI), June</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Global Perspectives on AI Governance: A Comparative Overview</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Nimrod Mike</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Corvinus University of Budapest</institution>
          ,
          <addr-line>Fővám tér 8, 1093, Budapest</addr-line>
          ,
          <country country="HU">Hungary</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>1</volume>
      <fpage>0</fpage>
      <lpage>14</lpage>
      <abstract>
        <p>The spread of artificial intelligence (AI) technologies has raised pressing questions about how to regulate them safely while supporting their effective development and use. This research examines the AI regulatory landscapes of the United States, China, and the EU, focusing on the principles of transparency, fairness, and accountability. The EU takes the most comprehensive approach: its Artificial Intelligence Act (AIA) targets applications deemed high-risk and aims at AI that is trustworthy and aligned with ethical and legal norms. In the US, by contrast, regulation consists of a patchwork of federal and state laws and regulatory plans, and industry self-regulation predominates. China, meanwhile, stresses a pragmatic standpoint, finding AI technologies useful for speeding up administration, without ignoring strategic aims and societal questions. Although governments differ in the approaches they select, shared values constitute the key principles of AI regulation worldwide: transparency, impartiality, and accountability are put in place, although their levels of implementation are not uniform. Cross-country interaction, for example through the Global Partnership on AI (GPAI), is vital in facilitating regulatory alignment and the exchange of best practices. Among the most important ways policymakers can improve AI governance are coordination, transparency, and research. Working across regions and with stakeholders can ensure that the development of AI ethics is consistent with the values of society, which will in turn promote innovation and protect people's privacy.</p>
      </abstract>
      <kwd-group>
        <kwd>Artificial intelligence (AI)</kwd>
        <kwd>Regulation</kwd>
        <kwd>European Union (EU)</kwd>
        <kwd>United States (US)</kwd>
        <kwd>China</kwd>
        <kwd>Artificial Intelligence Act (AIA)</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The advancement of AI technology has underscored the critical need for effective regulation to ensure its responsible development and deployment. While the European Union (EU) has been at the forefront of drawing up a multifaceted regulatory framework emphasizing explainability, fairness, and transparency, it is too early to say whether the regulation will be widely accepted throughout Europe or the world. This environment gives rise to the study of AI regulation, in which the positions of the US, China, and the EU are intensively examined and compared. By studying the EU's legislative structure, with its primary focus on explainability, fairness, and disclosure, we seek a deeper understanding of how effectively these regulations foster ethically responsible AI practices.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Background</title>
      <p>In the last few years, the development of AI technologies across various sectors has underscored the critical need for robust regulation. AI finds a wide range of applications, from algorithms used for automated decision-making to machine learning models that shape our society and people's lives [1]. Its reach extends from employment opportunities to health care access and much more. While the inherent complexity and opacity of AI algorithms have raised worries about accountability, transparency, and possible bias, the opportunities they offer in certain areas must still be explored. Without adequate regulation, AI systems may well end up creating or worsening socio-economic inequalities, infringing on individuals' rights, and violating ethical standards [2].</p>
      <p>Furthermore, the rapid advancement of AI technology has outpaced lawmaking mechanisms, requiring regulators to address the need for accountability in AI systems. It is therefore essential to create modern AI regulation that positions new-age technologies ethically and transparently, fully in line with societal values.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Research objective</title>
      <p>How do the regulatory frameworks for AI in the US, China, and the EU align with ethical
principles for AI governance?</p>
      <p>Throughout this paper, the first objective is to analyze the EU's AI regulation program and take it as a reference point for comparing similar solutions. The second objective is to evaluate the
key concepts and frameworks behind the US and China's regulation of AI. The third
objective is to conduct a comparative analysis to uncover patterns, divergences, and
potential implications for AI governance.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Methodology</title>
      <p>In this paper, a qualitative methodology is employed to delve into the complex and nuanced landscape of AI regulation across the US, China, and the EU. Qualitative research offers the flexibility needed to investigate complex regulations and policy approaches; it also surfaces deep-rooted patterns and allows for sensitive, mindful interpretation [3]. The qualitative approach makes it possible to question both how existing regulations work and how they are understood in their broader societal, political, and economic contexts.</p>
      <p>The research makes use of in-depth analysis to draw strategic conclusions about the principles, purposes, and risk implications of AI in each region, as well as the intricacies around innovation, management, and values.</p>
      <p>The qualitative methodology adopted in this study involves a systematic review of the European Union’s Artificial Intelligence Act (AIA) and a higher-level analysis of relevant policy documents, legislative texts, and official statements pertaining to AI regulation in the United States (US) and China. Drawing on a variety of qualitative data sources, the main task of the study is to bring these viewpoints together to address the challenges that arise in AI governance and the different political routes taken [3].</p>
    </sec>
    <sec id="sec-4a">
      <title>5. The EU’s Framework for AI Regulation</title>
      <sec id="sec-4-1">
        <title>5.1. Analysis of the EU’s AI regulatory framework</title>
        <p>The EU has emerged as a global leader in crafting a comprehensive regulatory framework to govern the development and deployment of AI systems. The centerpiece of this strategy is the AIA, which advances legal harmonization by establishing a uniform regime of norms and legal standards across all member states [4].</p>
        <p>The AIA is built around a process of risk assessment: AI applications in critical fields such as healthcare and transportation are classed as high-risk and heavily regulated, with requirements for accuracy of information, accountability, and human intervention. The EU's treatment of high-risk AI (i.e., areas in which ethical principles and law fundamentally drive AI operations) rests on the idea that technological developments require moral and legal safeguards and that AI systems in Europe must operate in accordance with ethical principles and legal norms.</p>
        <p>The EU's AI regulatory framework emphasizes the principle of 'trustworthy AI,' advocating for systems that are lawful, ethical, and robust from both a technical and a societal perspective. The AIA lays down a set of conditions for developers and users of AI to ensure that AI systems are transparent, employ unbiased algorithms, and remain subject to human oversight [5]. Moreover, the EU honors scientific progress in the field of AI: the focus is on developing AI technologies while deploying them in accordance with human rights, democratic values, and the rule of law. Strict regulation of AI is at the heart of the EU's goal of supporting AI innovation, competitiveness, and trust, which are fundamental to the emergence of an AI-empowered future. In doing so, the EU is also building the ethical basis for AI, which it is pioneering at a global level.</p>
        <sec id="sec-4-1-1">
          <title>5.2. Principles of explainability, fairness, and transparency in the EU</title>
          <p>Within the EU regulatory framework for AI, the principles of explainability, fairness, and transparency serve as fundamental pillars guiding the development and deployment of AI systems. Explainability refers to the comprehensibility of AI systems: they should offer rational explanations for their decisions and actions, enabling users and stakeholders to understand the reasoning behind AI-based results [6]. Whether an algorithm is fair, transparent, and accountable, and whether that same level of transparency and accountability extends to major applications such as healthcare, finance, and criminal justice, determines the credibility of the algorithm and of AI technology as a whole. By emphasizing explainability, the EU aims to ensure that AI systems operate transparently and interpretably, which is crucial for enabling people to appeal and object to algorithmic decisions.</p>
          <p>Fairness is another key principle shaping EU AI regulations, prioritizing the fight against bias and discrimination within AI systems. AI software is equitable when it does not produce biased or unequal outputs driven by sensitive characteristics such as gender, race, and socio-economic status [7]. An AI model should therefore be trained and tested on multiple, diverse data sets, audited for bias before approval, and corrected when the algorithm fails; a concrete audit of this kind is sketched below. AI technology is a new tool of great potential, and learning to handle this new power quickly is a difficult task. Consequently, the principle of equality must frame any activity involving AI in order to enable social integration, defend people's rights, and sustain trust in AI technologies.</p>
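          <p>To make the fairness audit concrete, the sketch below tests a toy classifier for unequal outputs across a sensitive attribute, using the open-source Fairlearn toolkit cited in [22]. The data set, model, and attribute are hypothetical placeholders; the AIA does not prescribe any particular tooling.</p>
          <preformat>
# Illustrative fairness audit (assumes the Fairlearn toolkit from [22]).
# All data below is synthetic and for demonstration only.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

data = pd.DataFrame({
    "feature_1": [0.2, 0.8, 0.5, 0.9, 0.1, 0.7, 0.4, 0.6],
    "feature_2": [1.0, 0.3, 0.6, 0.2, 0.9, 0.4, 0.8, 0.5],
    "label":     [0, 1, 0, 1, 0, 1, 0, 1],
    "gender":    ["F", "M", "F", "M", "F", "M", "F", "M"],  # sensitive attribute
})

model = LogisticRegression().fit(data[["feature_1", "feature_2"]], data["label"])
predictions = model.predict(data[["feature_1", "feature_2"]])

# Accuracy broken down by group: large gaps signal the unequal outputs
# the EU framework treats as unfair.
audit = MetricFrame(metrics=accuracy_score,
                    y_true=data["label"],
                    y_pred=predictions,
                    sensitive_features=data["gender"])
print(audit.by_group)

# Demographic parity difference: 0.0 means both groups receive positive
# predictions at the same rate; values near 1.0 indicate strong disparity.
print(demographic_parity_difference(data["label"], predictions,
                                    sensitive_features=data["gender"]))
          </preformat>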
        </sec>
        <sec id="sec-4-1-2">
          <title>5.3. Definition of an AI system. Prohibited AI practices.</title>
          <p>Article 3(1) of the EU AI Act defines an AI system as “a machine-based system designed to
operate with varying levels of autonomy and that may exhibit adaptiveness after deployment
and that, for explicit or implicit objectives, infers, from the input it receives, how to generate
outputs such as predictions, content, recommendations, or decisions that can influence
physical or virtual environments.” This definition is generic and broad to the extent that it
permits the classification of all machine-based systems that operate with autonomy based
on input data to generate outputs in the form of content, decision, prediction, or
recommendation.</p>
          <p>While Article 1 of the EU AI Act provides that the purpose of the Act is to regulate the use
of AI, Article 5 of the Act expressly prohibits certain AI practices. First, the Act prohibits AI systems that use manipulative and deceptive techniques that are intended
to distort the outcome of a decision. Secondly, the Act prohibits the use of AI systems that
exploit the vulnerability of a person on the grounds of age, disability, or economic class.
Thirdly, the Act prohibits the use of AI systems to filter people based on their race, political
opinion, religion, trade unions, or any other grounds that may be used to discriminate
against a person. Fourthly, the Act prohibits social scoring, which includes categorization of
people based on their social status, behaviour, and characteristics. Additionally, the Act prohibits, subject to narrow exceptions, the use of real-time remote biometric identification in publicly accessible spaces for law enforcement purposes.</p>
          <p>In all, it is evident that Article 5 of the EU AI Act seeks to address the unfairness challenge
in AI practices. According to [22], unfairness in AI systems is a sociotechnical challenge. This
is because, for societal and technical reasons, AI systems tend to produce unfair results by
relying on biased data sets. Recognizing this reality, Article 5 of the Act prohibits AI
practices that profile people based on their socioeconomic characteristics, such as race,
disability, age, and economic class.</p>
        </sec>
        <sec id="sec-4-1-3">
          <title>5.4. High-risk AI systems.</title>
          <p>
            Notably, the EU AI Act creates rules for the application of high-risk AI
systems. A reading of Article 6(2a) of the Act demonstrates that a high-risk AI system is one
that poses significant risk to the “health, safety, and fundamental rights” of a person.
            According to Articles 6(1) and (2) of the Act, there are two categories of high-risk AI
systems. The first category is that of AI systems that are used on products that are regulated
by the EU product safety regulation (Directive 2001/95/EC of the European Parliament and
of the Council of 3 December 2001 on general product safety). Article 2 of Directive
2001/95/EC provides that it applies to products (goods and services) that are intended to
be supplied to consumers for commercial activity. These include aviation services,
transport, health, and lifts used in buildings (Annex II to the EU AI Act). Article 6(1) of the
EU AI Act classifies AI used to provide or in relation to the provision of these products as
high-risk.
          </p>
          <p>The second category of high-risk AI systems is AI used in specific areas that must be
registered with the EU database. These specific areas are listed under Annex III to the Act.
These include biometrics, critical infrastructure, education and vocational training,
employment and management of employees, access to essential services, law enforcement
and administration of justice, migration, border control and asylum, and administration of
democratic processes. Article 7 of the Act allows amendments to Annex III to introduce or
remove classes of high-risk systems that fall under this second category. The amendment
should be preceded by an assessment of the risk that the subject category poses to natural
persons. In the following subsections the compliance requirements of high-risk AI systems
are briefly discussed.</p>
        <sec id="sec-4-1-4">
          <title>5.4.1. Risk management system.</title>
          <p>
            On the understanding that high-risk AI systems pose significant risks to people, Article 9(1) of the Act requires that the use of high-risk systems be based on the establishment, implementation, documentation, and maintenance of a risk management system. A risk management system assists in identifying potential risks that an AI system poses [23]. Thus, if established before an AI system is placed on the market, the provider will be able to introduce the necessary infrastructure to monitor, mitigate, and handle the identified risks [24].
          </p>
          <p>
            According to [25], the establishment of a risk management system at its inception assists
in risk assessment, evaluation, and mitigation and is thus an important ingredient for the
success of risk management. What follows the establishment of the risk management
system is its implementation, which means applying the risk management strategy
established at its inception [23]. The risk management system should be regularly
monitored, tested, and improved [23]. This is what Article 9(1) of the EU AI Act refers to as documentation and maintenance.
          </p>
          <p>Article 9(2) of the EU AI Act provides that the risk management system should be
continuous and run throughout the lifetime of a high-risk AI system. In addition, this article
requires regular review and updating of the risk management system. The systematic
review includes the identification of risks, the evaluation of how they may arise, and the
adoption of targeted measures to address the identified risks. Article 9(3) of the EU AI Act
provides that the measures adopted must be commensurate to the estimated effects of the
identified risks. The aim of this is to ensure that the effects are eliminated or minimized.</p>
        </sec>
        <sec id="sec-4-1-5">
          <title>5.4.2. Data governance.</title>
          <p>
            Article 10(1) and (2) of the EU AI Act provide for training, validation, and testing of data
sets used in high-risk AI systems. The training, validation, and testing should be done as per
the appropriate data governance practices. Data governance focuses on the type of data
intended to be collected, the collection process, its management, use, storage, and disposal
[26]. According to [26], data governance in relation to AI involves an organizational
approach that focuses on the planning and control of data collection, the implementation of
data protection principles, the evaluation of the approach, and the improvement of the
approach to address any identified gaps. Article 10(2) of the EU AI Act sets out the practices
that should inform appropriate data governance and management for AI systems. These
include (a) relevant design, (b) collection of data based on a purpose, (c) relevance in data
processing, and (d) identification of data gaps.
          </p>
          <p>Article 10(3) of the EU AI Act requires that the training, validation, and testing of data
sets be relevant, representative, and free of errors. Appropriate statistical infrastructure
should be employed to achieve this objective. In addition, Article 10(4) of the EU AI Act
provides that the data sets should be limited to the intended purpose.</p>
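          <p>As an illustration only, the sketch below shows the kind of automated checks a provider might run to support the Article 10(3) expectation that data sets be representative and free of errors. The column names, threshold, and checks are hypothetical; the Act does not mandate specific tooling.</p>
          <preformat>
# Hypothetical data-governance checks for a training set (illustrative only).
import pandas as pd

def audit_dataset(df, group_column, min_group_share=0.2):
    """Return findings on errors and representativeness in a data set."""
    findings = []
    # "Free of errors": flag missing values and exact duplicate records.
    if df.isna().any().any():
        findings.append("missing values present")
    if df.duplicated().any():
        findings.append("duplicate records present")
    # "Representative": flag groups whose share falls below a chosen floor.
    for group, share in df[group_column].value_counts(normalize=True).items():
        if not share >= min_group_share:
            findings.append("group '%s' underrepresented (%.0f%%)" % (group, share * 100))
    return findings

training_set = pd.DataFrame({
    "age":     [25, 31, 47, 52, 29, 33, 41, 38, 30, 45],
    "outcome": [1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    "gender":  ["F", "M", "M", "M", "M", "M", "M", "M", "M", "M"],
})
print(audit_dataset(training_set, group_column="gender"))
# -> ["group 'F' underrepresented (10%)"]
          </preformat>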
        </sec>
        <sec id="sec-4-1-6">
          <title>5.4.3. Technical documentation.</title>
          <p>
            Article 11(1) of the EU AI Act provides that a provider of a high-risk AI system must draw up technical documentation of the AI system before placing it on the market. The technical
documentation assists in assessing and evaluating the potential risks that the AI system
poses [27]. According to [27], technical documentation of an AI system involves a
description of the AI system, labeling, and instructions for use.
          </p>
          <p>
            Article 11(1) of the EU AI Act provides that the technical documentation should be able
to demonstrate that the high-risk AI system complies with the provisions of the Act. The
minimum elements of technical documentation are set out under Annex IV of the EU AI Act.
First, the technical documentation should contain a description of the AI system, which
should include its name, the provider, the intended purpose, the relevant software, the
version and previous version, if any, and instructions on how to use it. Secondly, there shall
be a detailed description of the elements of the AI system, including the methods used to
develop it, its design specifications, architecture and software components, data
requirements, required human oversight, validation and testing procedures used, and
cybersecurity protection measures employed. Thirdly, the technical documentation should
describe how to monitor, use, and control the AI, which should include information about
the AI system’s limitations and abilities. Fourthly, the technical documentation should set
out the performance metrics used, the risk management system employed, an EU
declaration of conformity, and the post-marketing monitoring plan.
          </p>
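          <p>The Annex IV minimum elements can be pictured as a structured record maintained alongside the system. The skeleton below is a hypothetical illustration of the four groups of elements described above; every name and value is a placeholder, not a field prescribed by the Act.</p>
          <preformat>
# Hypothetical Annex IV documentation skeleton (placeholders throughout).
technical_documentation = {
    "general_description": {
        "name": "ExampleTriage",
        "provider": "Example Health GmbH",
        "intended_purpose": "triage support in emergency care",
        "software_version": "2.1.0",
        "previous_versions": ["2.0.0", "1.4.2"],
        "instructions_for_use": "docs/ifu-2.1.0.pdf",
    },
    "detailed_elements": {
        "development_methods": "supervised learning on curated clinical data",
        "design_specifications": "docs/design-spec.md",
        "architecture_and_software": "gradient-boosted trees behind a REST service",
        "data_requirements": "de-identified vitals and laboratory values",
        "human_oversight": "a clinician confirms every recommendation",
        "validation_and_testing": "docs/validation-report.pdf",
        "cybersecurity_measures": "TLS in transit, encrypted storage, access logs",
    },
    "monitoring_use_and_control": {
        "capabilities_and_limitations": "not validated for pediatric patients",
    },
    "conformity_and_monitoring": {
        "performance_metrics": ["sensitivity", "specificity"],
        "risk_management_system": "docs/rms.md",
        "eu_declaration_of_conformity": "docs/declaration-of-conformity.pdf",
        "post_market_monitoring_plan": "docs/pmm-plan.md",
    },
}
          </preformat>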
          <p>
            The technical documentation aims at achieving the long-desired principle of
explainability of AI systems [28]. The authors in [28] argue that explainability seeks to
provide information relating to an AI system with the aim of achieving transparency and
traceability. With respect to transparency, the broad information required under Article
11(1) of the EU AI Act and Annex IV before a high-risk AI system is placed on the market is intended to ensure that users whose rights are at risk have sufficient information about the AI systems they interact with. In addition, the requirement for an EU declaration of conformity sets
out a higher standard for providers to ensure that they disclose as much information as may
assist users in understanding high-risk AI systems. On traceability, the technical
documentation obligates providers to give information on how the AI system was
developed, the previous versions, if any, and its performance metrics. This is aimed at
solving the long-standing problem of a lack of sufficient information on the traceability of
AI systems on the market.
          </p>
        </sec>
        <sec id="sec-4-1-7">
          <title>5.4.4. Record keeping.</title>
          <p>
            Article 12(1) of the EU AI Act provides that a high-risk AI system shall be designed in a way
that it allows recording of its activities during its lifetime. The recording is required to be in
the form of logs. This also seeks to address the issue of traceability, as discussed by [28]. It
ensures that there are traces of information relating to actions undertaken by the AI system.
This is for accountability purposes, as there can be traces of information relating to the act
that can explain it.
          </p>
          <p>Indeed, Article 12(2) of the EU AI Act expresses that the purpose of recording is to ensure
traceability of the functioning of an AI system. Under Article 12(2a) of the Act, tracing is
necessary for monitoring the risks that an AI system poses, the functioning of the AI system,
and post-market monitoring as required under Article 61 of the Act. At the very least, Article
12(4) of the Act requires the logs to indicate the time, database, input data, and natural
persons involved in any use of the system.</p>
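          <p>A minimal sketch of such a log record, assuming an append-only JSON-lines audit trail and using the minimum fields named in Article 12(4), might look as follows. The field names and example values are our own illustration, not a format prescribed by the Act.</p>
          <preformat>
# Illustrative Article 12(4) log record (field names are our own).
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class UseLogRecord:
    timestamp: str           # time of each use of the system
    reference_database: str  # database against which input data was checked
    input_data: str          # input for which the use led to a result
    verifying_persons: list  # natural persons involved in verifying the result

def log_use(database, input_data, persons):
    """Serialize one use of a high-risk AI system as one audit-ready JSON line."""
    record = UseLogRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        reference_database=database,
        input_data=input_data,
        verifying_persons=persons,
    )
    return json.dumps(asdict(record))

# One line per use, retained over the system's lifetime per Article 12(1)-(2).
print(log_use("reference-db-v3", "input-record-7f2a", ["operator_142"]))
          </preformat>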
        </sec>
        <sec id="sec-4-1-8">
          <title>5.4.5. Transparency.</title>
          <p>
            Further to the technical documentation that requires the provision of information, Article
13(1) of the EU AI Act requires that AI systems be designed in a “sufficiently transparent”
manner to enable deployers to understand their output and functioning. This enjoins
designers to provide sufficient information relating to a high-risk AI system. To wit, Article
13(2) of the Act requires that a high-risk AI system be accompanied by instructions on its
use. The instructions should be in an appropriate digital format. The digital instructions
should be comprehensive, concise, clear, comprehensible, and correct.
          </p>
          <p>Article 13(3) of the EU AI Act sets out the minimum information that should be included
in the instructions. First, the instructions should provide detailed information about the
provider and/or the provider’s duly appointed representative, if any. Secondly, the
instructions should provide a detailed description of the AI system, including its features,
abilities, and limitations on its use. In addition, the instructions should provide information
about its human oversight measures, the hardware needed to run it, and the techniques
used to collect, store, and interpret data.</p>
          <p>According to [29], the use of high-risk AI systems comes with the risk of subjecting users
to deceptive results, especially where the intended use of the AI system is to predict
outcomes. Therefore, [29] calls for the demystification of AI systems by opening the
algorithmic box. He argues that this can be achieved through transparency in AI. Similarly,
the authors of [30] argue that transparency is an AI governance matter. Although AI transparency is still in its infancy, they contend it poses both an ethical and a legal challenge that calls for a well-thought-out solution. The challenge emerges from the complexity of the concept of AI transparency, which stems from the multiplicity of aspects that call for transparency: the data, software, hardware, and algorithms involved.
          <p>In April 2019, the EU Commission’s High-Level Expert Group for AI (AI HLEG) published
the ethics guidelines for trustworthy AI. One of the seven requirements of a trustworthy AI,
as per the AI HLEG, is transparency. To AI HLEG, transparency includes traceability,
communication, and explainability. This means that providers should provide sufficient
information about AI systems, and the same, including their output, should be explainable
to humans. Ideally, these are the objectives that Articles 11, 12, and 13 of the EU AI Act seek
to achieve by requiring technical documentation, record-keeping, and the provision of
information by providers.</p>
        </sec>
        <sec id="sec-4-1-9">
          <title>5.4.6. Accountability.</title>
          <p>
            Article 17(1) of the EU AI Act requires providers of high-risk AI systems to establish and maintain a quality management system to assist them in complying with the provisions of the Act. The quality management system is tasked with ensuring the provider's regulatory compliance. Article 17(1)(m) of the Act provides that the quality management system should set out the accountability responsibilities of the provider by clearly designating the responsibilities of management and staff.
          </p>
          <p>
            According to [31], the implementation of an effective quality management system assists
in ensuring command, communication, and control of processes in an organization, which
is necessary for ensuring accountability for any decision made. According to AI HLEG,
accountability is one of the seven requirements of a trustworthy AI. AI HLEG notes that
accountability connotes auditability, reduction or mitigation of risks, and the presence of
adequate, simple, and accessible redress. Notably, the requirement for recording logs under
Article 12(1) of the EU AI Act aims at ensuring that the logs are available for auditing. In
addition, Article 9 of the EU AI Act provides for risk management. This is aimed at ensuring
that providers identify, mitigate, minimize, and address risks associated with the use of
high-risk AI systems. Further, Article 14 of the EU AI Act provides for human oversight of AI
systems to address interface problems that users may encounter. Therefore, it is evident
that the Act is committed to ensuring AI practices are accountable in line with the AI HLEG
2019 guidelines.
          </p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>6. Brief overview of AI regulation in the US</title>
      <p>The current landscape of AI regulation in the United States is a composite of diverse laws,
guidelines, and initiatives that span federal and state levels. A significant federal action is
Executive Order 14110, titled "Safe, Secure, and Trustworthy Development and Use of
Artificial Intelligence," issued by President Biden on October 30, 2023 [8]. This
comprehensive order outlines a multi-faceted approach to ensure that AI development
aligns with national values of safety, security, and trustworthiness. It mandates the creation
of standardized testing procedures for AI systems to assess their safety and reliability
comprehensively. The order also emphasizes the importance of international collaboration
in developing norms and standards for AI, aiming to foster a global environment where AI
technologies are developed responsibly. Executive Order 14110 (“AI EO”) is a key
component of a broader, albeit fragmented, regulatory framework that involves multiple
governmental branches.</p>
      <p>Specific measures outlined in AI EO include the establishment of guidelines for the
ethical use of AI by federal agencies, the development of tools to detect and mitigate biases
in AI applications, and the strengthening of privacy protections in AI operations. It also calls
for the Department of Commerce to collaborate with private sector and academic leaders
to advance the technology’s safety protocols and to ensure public transparency of AI
systems’ functionalities and limitations. Furthermore, the order directs federal agencies to
prioritize funding for AI research that focuses on enhancing human-AI collaboration and
understanding AI’s societal impacts.</p>
      <p>In Section 2 of the AI EO, the administration establishes a comprehensive policy
framework along with eight guiding principles aimed at shaping the development and
governance of AI technologies. The principles begin with a strong emphasis on the safety
and security of AI, highlighting the priority for AI systems to be developed as safe, reliable,
and secure through standardized testing and risk mitigation strategies. This includes
addressing significant security risks in areas such as biotechnology and critical
infrastructure, ensuring that AI systems are resilient and ethically operated. Furthermore,
the order introduces the development of labeling and content provenance mechanisms to
enable users to distinguish between AI-generated and human-generated content.</p>
      <p>Continuing with the theme of fostering a conducive environment for AI innovation, the
AI EO promotes responsible technological development and robust competition. This
includes bolstering AI-related education, addressing intellectual property challenges, and
ensuring a marketplace that supports small developers and fosters innovation, maintaining
a fair and open market landscape that nurtures American technological leadership.</p>
      <p>The executive order also recognizes the transformative impact of AI on the labor market,
underscoring the necessity of including workers in this transition. This involves updating
training programs and ensuring that all workers, through mechanisms like collective
bargaining, can benefit from the opportunities AI presents. The administration seeks to
ensure that AI implementations in workplaces enhance job quality without infringing on
workers' rights or safety. Additionally, the order outlines strategies to enhance the federal
government's ability to govern and utilize AI effectively. This includes training for
government employees to understand AI’s implications fully and upgrading governmental
IT infrastructure to support ethical AI uses.</p>
      <p>Moreover, the policy asserts a commitment to using AI in ways that advance equity and
civil rights rather than perpetuating discrimination. Building on initiatives like the
Blueprint for an AI Bill of Rights, the order mandates that AI deployments comply with
federal laws designed to eliminate bias and ensure broad-based benefits. This commitment
extends to maintaining consumer protections in the era of AI, as the administration
emphasizes the importance of enforcing laws that protect against fraud, bias, privacy
infringements, and other harms, particularly in sensitive sectors such as healthcare and
finance. This principle advocates for AI uses that elevate service quality and consumer
safety.</p>
      <p>Lastly, the executive order positions the US as a leader in shaping the global discourse
on responsible AI usage. It seeks to collaborate with international partners to develop a
framework that addresses AI's global risks and potentials, promoting a unified approach to
AI governance. This international collaboration is pivotal as it underpins the
administration's vision of leading global societal, economic, and technological progress in
the era of AI.</p>
      <sec id="sec-5-1">
        <title>6.1. US Federal and state-level initiatives</title>
        <p>In addition to federal efforts, individual states like California, Illinois, and New York have
enacted their own AI-specific regulations. California’s Consumer Privacy Act (CCPA),
officially known as AB-375, stands as one of the most stringent data privacy laws globally.
It specifically includes provisions that affect AI companies by mandating increased
transparency and granting consumers substantial rights regarding the use of their data [9].
Illinois and New York are actively working on amendments to further regulate AI, with a
primary focus on reducing bias, enhancing transparency, and ensuring accountability in AI
applications. These state-level initiatives underscore the growing demand for tailored legal responses and the protection of individual rights, illustrating the complexity of regulating AI solely at the federal level.</p>
      </sec>
      <sec id="sec-5-2">
        <title>6.2. Comparison of the US with EU principles</title>
        <p>While there are significant differences in regulatory philosophies between the US and the
EU, both regions address similar ethical concerns within AI technology. The AIA, for
example, mandates robust frameworks for high-risk AI applications, requiring
comprehensive risk assessments, sustained human oversight, and explicit transparency
[10]. In contrast, the US approach, as exemplified by AI EO, tends to emphasize innovation
and technological leadership while also incorporating ethical considerations like
transparency and accountability. This difference in approach reflects the varied roles of
government and regulatory priorities in the US and EU, highlighting diverse strategies in
the global governance of AI.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>7. Brief overview of AI Regulation in China</title>
      <p>China's AI regulatory environment is characterized by a mix of government oversight,
industry self-regulation, and emerging regulatory frameworks. The Chinese government
has become cognizant that AI may serve strategic purposes and has therefore produced policies that encourage the development of AI while managing its dilemmas [11]. The government has created initiatives such as the New Generation Artificial Intelligence Development Plan, which sets ambitious goals for AI research, technology, and industrial application across sectors. Saalman further notes that China invests heavily in AI technology and infrastructure, including the "AI + X" initiative, which seeks to fuse AI with the transformation of traditional industries to build a fast-growing economy and a deep innovation model [12].</p>
      <sec id="sec-6-1">
        <title>7.1. The Chinese government’s approach to regulating AI</title>
        <p>The Chinese government has adopted a pragmatic approach regarding automated decision-making (ADM), leveraging AI technologies to enhance administrative efficiency, improve
public services, and optimize decision-making processes. For example, strategies to balance
the economic and social consequences of automation can include reducing working hours
or promoting lifelong learning and training [13].</p>
        <p>A law titled “Interim Measures for the Management of Generative Artificial Intelligence Services” was released by the Cyberspace Administration of China (CAC) and six other central government regulators, taking effect on August 15, 2023, to regulate the provision of generative AI services [14]. The regulation is framed under existing laws: the "Cybersecurity Law of the PRC" (2017), which establishes protections for network and information security; the "Data Security Law of the PRC" (2021), which regulates data processing activities; the "Personal Information Protection Law of the PRC" (2021), which focuses on individual data rights in a manner similar to the GDPR; and the "Law on the Scientific and Technological Progress of the PRC" (revised in 2022), which encourages the integration of scientific and technological innovations into national development. It aims to steer the healthy and regulated use of generative AI while safeguarding national security and the public interest. Its primary goals are the protection of the legitimate rights and interests of individuals, legal entities, and other organizations; the promotion of safe and consistent use of generative AI; and the preservation of social public interests and national security. Though these steps are likely to improve the quality of governance and service delivery, they also raise concerns about privacy, oversight, and accountability, especially where personally identifiable information is collected and algorithms make decisions in major areas of people's lives.</p>
        <p>The new legislation establishes several key requirements for all generative AI services
in the country. These services must uphold national sovereignty and social stability, prevent
discrimination in their applications, respect intellectual property rights and commercial
ethics, ensure the physical and psychological well-being of individuals, and enhance the
transparency and reliability of AI-generated content. These stipulations aim to integrate
ethical considerations into the technological development and deployment of AI.</p>
        <p>Chapter II of the legislation advocates for the innovative application of generative AI
across various fields. It encourages the coordination of innovation, risk prevention, and the
establishment of public data resources, while also promoting independent innovation in
core AI technologies and international cooperation. This reflects the nation's intention to be
a leader in AI technology globally while managing potential risks.</p>
        <p>Chapter III requires providers of generative AI services to take responsibility for the
security of online information and the protection of personal data. Providers must
transparently disclose service details to users, prevent misuse, and particularly protect
minors. Furthermore, they are obligated to accurately label AI-generated content and
ensure the continuity and safety of their services, promoting a secure and reliable
environment.</p>
        <p>Chapter IV outlines the roles of various government departments in enforcing these
measures. This includes improving regulatory methods and conducting security
assessments, particularly for services with significant public influence. Violations of these
provisions may lead to administrative sanctions or criminal charges, underscoring the
importance of strict compliance.</p>
        <p>The final section clarifies key terms and specifies conditions for administrative permits
for providing generative AI services. It also outlines the requirements for foreign
investment in generative AI, ensuring that all engagements align with national regulations
and interests.</p>
        <p>These regulatory measures underscore China’s cautious yet proactive approach to
harnessing the potential benefits of generative AI while addressing the associated risks and
ethical concerns. By establishing a robust legal framework that emphasizes both innovation
and regulation, the government aims to foster a responsible AI ecosystem that aligns with
its broader socio-economic goals and security interests, while balancing the rapid
advancement of AI technologies with necessary safeguards to protect citizens' rights and
maintain social stability. This regulatory approach could influence global AI practices and
promote a safe and equitable development of AI technologies.</p>
        <sec id="sec-6-1-1">
          <title>7.2. Analysis of Chinese principles and their alignment with the EU</title>
          <p>In analyzing the principles guiding China's AI regulation and their alignment with the EU, there are notable differences in approach and emphasis. China and the EU are similar in that each attaches great importance to the core principles of transparency, fairness, and accountability [15]. However, their AI governance regimes diverge in how these principles are implemented and enforced.</p>
          <p>Chinese AI regulation emphasizes government programs and industry self-regulation rather than strong oversight by institutions with well-defined, independent checks and balances. Even though rules and provisions for AI exist in China, questions remain about transparency and about the influence of industry stakeholders on policymaking. The AIA, by contrast, aims to put a sound framework in place, with consistently implemented rules and standards for high-risk AI applications, and demands transparency, human supervision, and risk assessment before new technologies can be used reliably and ethically [16].</p>
        </sec>
      </sec>
    <sec id="sec-7">
      <title>8. Comparative Analysis and Discussion</title>
      <p>In comparing AI regulation across the US, China, and the EU, several common elements emerge despite differing regulatory approaches. All three jurisdictions commit, at least nominally, to responsible AI development, and the shared principles of transparency, fairness, and accountability underpin the claim that AI governance rests on globally accepted foundations [17]. Each of these leading actors seeks to balance innovation with risk awareness, anticipating how AI can transform the world while remaining alert to possible biases, discrimination, privacy infringements, and other harms.</p>
      <p>Despite these commonalities, key differences exist in the regulatory approaches of the US, China, and the EU, with significant implications for AI governance. The US relies mainly on industry self-regulation and soft-law recommendations, whereas the European Union imposes more rigorous statutory standards oriented toward the safety of its consumers [18]. The US focus on developing and deploying novel technologies, however, can leave regulatory gaps and an ad hoc selection of which ethical issues to address. China's model contrasts with the American one: it is state-led and cross-sectoral, shaped heavily by national strategic goals and industrial policy [19]. This approach could propel rapid scientific development, but it raises concerns about government control, surveillance, and censorship. The EU, for its part, has established a regime of principle-based regulation merged with practical assessment of high-risk AI applications. These factors both unite and separate the three jurisdictions. Nevertheless, the demanding EU rules may become an obstacle to doing business and may affect competitiveness in the global AI market.</p>
      <p>Given the cross-border nature of AI technologies and their potential impact, international cooperation and standards play a crucial role in shaping AI regulation and governance. Cooperation between states and regions helps to establish consistent policy and information sharing, promotes the implementation of best AI practices, and presents a unified standard for the growth and application of AI [20]. Member countries can make use of initiatives like the Global Partnership on Artificial Intelligence (GPAI), whose platforms provide a venue for members to collaborate and to express positions on general AI issues at a regional level [21]. Such partnerships build trust, sustain innovation, and help ensure that the creation and use of AI technology is ethical, grounded in human rights, and directed at the well-being of society.</p>
      <p>In interpreting the findings of this comparative study on AI regulation across the US, China, and the EU, several insights emerge. Primarily, each regulatory framework directly reflects the interplay of technological advancement, societal values, and the geopolitical landscape. Where the US prefers to license industry to self-regulate and innovate quickly, China pursues a centrally planned, state-led approach in service of national strategic goals [21]. The EU embraces principles-based regulation designed to foster innovation while giving ethical considerations due attention. These findings underscore the need to formulate AI regulation in culture- and region-specific terms, taking into account the political, economic, and social factors that shape each country's or region's regulatory landscape for AI practices.</p>
      <p>The delicate balance that must be maintained between innovation and regulation in the context of AI governance is another recurring topic. Innovation is key to fueling economic growth, technical advancement, and societal development; yet an unchecked path of innovation can be destructive when it disregards ethical principles, deepens social disparities, or risks breaching human rights. Legislative care is an important consideration here, as it can reduce the likelihood of these hazards while ensuring the proper utilization of AI technologies [13]. Excessive or restrictive regulation, however, may delay progress, insight, and innovation in AI, reduce competitiveness, and limit the benefits that society as a whole may derive from AI. Ethical and sustainable AI governance therefore depends on striking a balance between fostering innovation and implementing prudential protections.</p>
      <p>Looking ahead, the future outlook for AI governance is multifaceted and dynamic. As AI technologies continue to evolve and permeate various aspects of society, there is growing recognition of the need for coordinated international efforts to address common challenges and promote responsible AI development. International collaboration can enable the creation of well-planned and aligned regulatory frameworks, the sharing of experiences, and the setting of common standards for AI governance. Initiatives such as the GPAI create platforms for cooperative arrangements and shape the future of AI regulation worldwide [20]. Further, continuing dialogue with stakeholders beyond the technology sphere, including governments, industry, academia, and civil society, is critical for ensuring that AI technologies are developed and used in ways that respect human rights, build people's trust, and achieve a common good in an AI-driven world.</p>
    </sec>
    <sec id="sec-8">
      <title>9. Conclusion</title>
      <p>This comparative study has provided valuable insights into the current state of AI regulation across the US, China, and the EU. Among the main outcomes of the research, we note that the respective regions apply diverse regulatory policies, ranging from industry self-regulation in America to a state-led approach in China, while the European Union relies on a principles-based framework. The shared principles of transparency, impartiality, and accountability nevertheless coexist with regional differences in implementation and enforcement that reflect unique political, economic, and social contexts. The discussion also accentuated the uncomfortable equilibrium in AI governance between encouraging innovation and instituting regulatory measures, and it highlighted the significance of context-specific approaches that prioritize ethical AI development while fostering innovation and competitiveness.</p>
      <p>Based on these insights, several recommendations can be made for policymakers to enhance AI regulation and governance. First, policymakers should focus on coordination and information sharing between regions and countries to ensure consistent legal regulation and the exchange of best practices, so that common standards for AI development can be established. Second, policymakers must embed transparency, accountability, and fairness in AI systems by ensuring that regulations and guidelines for high-risk AI applications are followed, which would build public trust and confidence in AI technologies. Moreover, policymakers should promote research and development to address the ethical and societal issues that AI systems may raise, such as discrimination, bias, and loss of privacy, while ensuring that a well-established monitoring system keeps AI technologies aligned with societal rights and values. Finally, leaders should work with stakeholders, including governments, industry, academia, and civil society, to draw in diverse perspectives and interests and to promote a co-creation approach to AI governance.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Sarker</surname>
            ,
            <given-names>I. H.</given-names>
          </string-name>
          (
          <year>2022</year>
          ).
          <article-title>AI-based modeling: techniques, applications and research issues towards automation, intelligent and smart systems</article-title>
          .
          <source>SN Computer Science</source>
          ,
          <volume>3</volume>
          (
          <issue>2</issue>
          ), 158. https://doi.org/10.1007/s42979-022-01043-x
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] Fukuda-Parr, S., &amp; Gibbons, E. (2021). Emerging consensus on ‘ethical AI’: Human rights critique of stakeholder guidelines. Global Policy, 12, 32-44. https://doi.org/10.1111/1758-5899.12965</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] Vu, M. C., &amp; Burton, N. (2023). Beyond the inclusion–exclusion binary: Right mindfulness and its implications for perceived inclusion and exclusion in the workplace. Journal of Business Ethics, 1-19. https://doi.org/10.1007/s10551-023-05457-2</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] Feldstein, S. (2023). Evaluating Europe's push to enact AI regulations: How will this influence global norms? Democratization, 1-18. https://doi.org/10.1080/13510347.2023.2196068</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] de Almeida, P. G. R., dos Santos, C. D., &amp; Farias, J. S. (2021). Artificial intelligence regulation: A framework for governance. Ethics and Information Technology, 23(3), 505-525. https://doi.org/10.1007/s10676-021-09593-z</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] de Bruijn, H., Warnier, M., &amp; Janssen, M. (2022). The perils and pitfalls of explainable AI: Strategies for explaining algorithmic decision-making. Government Information Quarterly, 39(2), 101666. https://doi.org/10.1016/j.giq.2021.101666</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] Fletcher, R. R., Nakeshimana, A., &amp; Olubeko, O. (2021). Addressing fairness, bias, and appropriate use of artificial intelligence and machine learning in global health. Frontiers in Artificial Intelligence, 3, 561802. https://doi.org/10.3389/frai.2020.561802</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] whitehouse.gov. (2024, January 29). Executive Order (E.O.) 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] Canayaz, M., Kantorovitch, I., &amp; Mihet, R. (2022). Consumer privacy and value of consumer data. Swiss Finance Institute Research Paper, (22-68). https://dx.doi.org/10.2139/ssrn.3986562</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] Shaelou, S. L., &amp; Razmetaeva, Y. (2024, January). Challenges to fundamental human rights in the age of artificial intelligence systems: Shaping the digital legal order while upholding rule of law principles and European values. In ERA Forum (pp. 1-21). Berlin/Heidelberg: Springer Berlin Heidelberg. https://doi.org/10.1007/s12027-023-00777-2</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] Roberts, H., Cowls, J., Morley, J., Taddeo, M., Wang, V., &amp; Floridi, L. (2021). The Chinese approach to artificial intelligence: An analysis of policy, ethics, and regulation. Ethics, Governance, and Policies in Artificial Intelligence, 47-79. https://doi.org/10.1007/978-3-030-81907-1_5</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] Saalman, L. (2020). China and India: Two models for AI military acquisition and integration. In Routledge Handbook of China–India Relations (pp. 266-288). Routledge. https://doi.org/10.4324/9781351001564</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] Alhosani, K., &amp; Alhashmi, S. M. (2024). Opportunities, challenges, and benefits of AI innovation in government services: A review. Discover Artificial Intelligence, 4(1), 18. https://doi.org/10.1007/s44163-024-00111-w</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] Liu, I. (2023, August 18). China passes law to regulate generative AI. Conventus Law. https://conventuslaw.com/report/china-passes-law-to-regulate-generative-ai/</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] Castets-Renard, C., &amp; Besse, P. (2022). Ex ante accountability of the AI Act: Between certification and standardization, in pursuit of fundamental rights in the country of compliance. In C. Castets-Renard &amp; J. Eynard (Eds.), Artificial Intelligence Law: Between Sectoral Rules and Comprehensive Regime. Comparative Law Perspectives. Bruylant, forthcoming. https://ssrn.com/abstract=4203925</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] Pagallo, U., Ciani Sciolla, J., &amp; Durante, M. (2022). The environmental challenges of AI in EU law: Lessons learned from the Artificial Intelligence Act (AIA) with its drawbacks. Transforming Government: People, Process and Policy, 16(3), 359-376. https://doi.org/10.1108/TG-07-2021-0121</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] Robinson, S. C. (2020). Trust, transparency, and openness: How inclusion of cultural values shapes Nordic national public policy strategies for artificial intelligence (AI). Technology in Society, 63, 101421. https://doi.org/10.1016/j.techsoc.2020.101421</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] Ford, J. (2021). Business, peace and human rights: The regulatory significance of pop culture products. In Music, Business and Peacebuilding (pp. 275-321). Routledge. https://www.taylorfrancis.com/chapters/edit/10.4324/9781003017882-19/business-peace-human-rights-jolyon-ford</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] Ding, H., &amp; Kong, Y. (2024). Theorizing knowledgescape as a transnational mediating force: Artificial intelligence and global flows. Global Media and Communication, 17427665241236331. https://doi.org/10.1177/17427665241236331</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[20] Ala-Pietilä, P., &amp; Smuha, N. A. (2021). A framework for global cooperation on artificial intelligence and its governance. Reflections on Artificial Intelligence for Humanity, 237-265. https://link.springer.com/chapter/10.1007/978-3-030-69128-8_15</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>[21] Vanberghen, C., &amp; Vanberghen, A. (2021). AI governance as a patchwork: The regulatory and geopolitical approach of AI at international and European level. In EU Internet Law in the Digital Single Market (pp. 233-246). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-69583-5_9</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>[22] Bird, S., Dudík, M., Edgar, R., Horn, B., Lutz, R., Milan, V., ... &amp; Walker, K. (2020). Fairlearn: A toolkit for assessing and improving fairness in AI. Microsoft Tech. Rep. MSR-TR-2020-32.</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>[23] Tupa, J., Simota, J., &amp; Steiner, F. (2017). Aspects of risk management implementation for Industry 4.0. Procedia Manufacturing, 11, 1223-1230. https://doi.org/10.1016/j.promfg.2017.07.248</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>[24] Dallat, C., Salmon, P. M., &amp; Goode, N. (2018). Identifying risks and emergent risks across sociotechnical systems: The NETworked hazard analysis and risk management system (NET-HARMS). Theoretical Issues in Ergonomics Science, 19(4), 456-482. https://doi.org/10.1080/1463922x.2017.1381197</mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>[25] Hubbard, D. W. (2020). The Failure of Risk Management: Why It's Broken and How to Fix It. John Wiley &amp; Sons.</mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>[26] Janssen, M., Brous, P., Estevez, E., Barbosa, L. S., &amp; Janowski, T. (2020). Data governance: Organizing data for trustworthy artificial intelligence. Government Information Quarterly, 37(3), 101493. https://doi.org/10.1016/j.giq.2020.101493</mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>[27] Beckers, R., Kwade, Z., &amp; Zanca, F. (2021). The EU medical device regulation: Implications for artificial intelligence-based medical device software in medical physics. Physica Medica, 83, 1-8. https://doi.org/10.1016/j.ejmp.2021.02.011</mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>[28] Holzinger, A., Langs, G., Denk, H., Zatloukal, K., &amp; Müller, H. (2019). Causability and explainability of artificial intelligence in medicine. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 9(4), e1312. https://doi.org/10.1002/widm.1312</mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>[29] Hollanek, T. (2023). AI transparency: A matter of reconciling design with critique. AI &amp; Society, 38(5), 2071-2079.</mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>[30] Larsson, S., &amp; Heintz, F. (2020). Transparency in artificial intelligence. Internet Policy Review, 9(2). https://doi.org/10.14763/2020.2.1469</mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>[31] Arifin, S., Darmawan, D., Hartanto, C. F. B., &amp; Rahman, A. (2022). Human resources based on total quality management. Journal of Social Science Studies (JOS3), 2(1), 17-20. https://doi.org/10.56348/jos3.v2i1.22</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>