<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Artificial Intelligence as a peacebuilding tool: what is missing?</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sveva Ianese</string-name>
          <email>sveva.ianese@studenti.unipd.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Economics and Management, University of Padova</institution>
        </aff>
      </contrib-group>
      <abstract>
<p>This paper explores the role of Artificial Intelligence (AI) in conflict management and peacebuilding. This emergent research field is examined firstly through a bibliometric analysis of the current literature, based on a database of 158 documents collected from Scopus and published between 1985 and 2024. The analysis highlights the historical evolution of the research field while pinpointing some research gaps. Secondly, we offer a broad overview of the most recent regulations on this topic (the European AI Act, the U.S. Executive Order, and Chinese laws and political documents). A new perspective on the impact of AI in reducing conflicts emerges, although its driving role in promoting world peace still needs to be strongly reinforced.</p>
      </abstract>
      <kwd-group>
        <kwd>artificial intelligence</kwd>
        <kwd>peace</kwd>
        <kwd>bibliometric analysis</kwd>
        <kwd>literature review</kwd>
        <kwd>regulation</kwd>
        <kwd>AI Act</kwd>
        <kwd>Executive Order</kwd>
        <kwd>China</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Artificial intelligence (AI) represents one of the most transformative technological innovations of our
time [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Its ability to analyze massive amounts of information, learn from it and provide data-driven
outputs offers potential benefits across many economic sectors [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. One of the most promising, but also
complex, areas in which AI has a significant impact is peacebuilding.
      </p>
      <p>
        This concept differs from conflict management. While the latter involves diplomatic measures to
keep intrastate or interstate disputes from escalating into armed conflicts [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], the former aims at
reducing the risk of (re)lapsing into conflict by strengthening national capacities at all levels and at
laying the foundation for sustainable peace and development [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Examples of AI applications in
conflict management are detection of cyber attacks on critical infrastructures [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ], logistics, troops and
equipment transportation [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], support for military decision-making processes [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and the control over
autonomous weapon systems [
        <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
        ,
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ]. Examples of AI applications in peacebuilding are instead
the delivery of humanitarian aid by drones [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], conflict prevention through sentiment analysis tools
[
        <xref ref-type="bibr" rid="ref14 ref15">14, 15</xref>
        ], support for peacekeeping operations through NLP models to facilitate real-time dialogues [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]
and negotiations [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
      </p>
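      <p>The sentiment-analysis route to conflict prevention mentioned above can be illustrated with a minimal sketch. The lexicon, messages and threshold below are invented placeholders; real early-warning systems rely on trained NLP models over large social-media streams rather than a hand-written word list.</p>

```python
# Toy early-warning sketch: score short messages against a small tension
# lexicon and flag a region when the mean score crosses a threshold.
# Lexicon weights and the 0.5 threshold are illustrative assumptions.
TENSION_LEXICON = {"attack": 1.0, "riot": 0.9, "protest": 0.6,
                   "peace": -0.8, "agreement": -0.7, "dialogue": -0.6}

def tension_score(text: str) -> float:
    """Average lexicon weight over the words of a message (0 if no match)."""
    hits = [TENSION_LEXICON[w] for w in text.lower().split() if w in TENSION_LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def flag_region(messages: list[str], threshold: float = 0.5) -> bool:
    """Raise an early-warning flag when average tension exceeds the threshold."""
    scores = [tension_score(m) for m in messages]
    return sum(scores) / len(scores) > threshold

calm = ["the parties signed an agreement", "dialogue continues in peace"]
tense = ["protest turned into riot", "attack reported near the border"]
```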
      <p>
        Conflict theorization also embraces milder forms of tension, such as ethnic, sexual or age
discrimination, inequalities and other forms of social friction that do not reach armed fighting [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] (art. 1.2).
Such situations may or may not escalate into a conflict, depending on their characteristics (duration,
spatial extension, degree of intensity) and/or the use of weapons [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. For the purpose of this work we define conflict as any form of social or political tension, whether
armed or unarmed, at the national or international level.
AI systems can exacerbate the detrimental effects of these phenomena by causing unfairness or
breaching fundamental rights [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. In order to consider Artificial Intelligence as an effective peacebuilding
tool, we analyze its impact on both conflict management and peace promotion and the way in
which lawmakers around the world have tried to deploy a set of rules for governing it. Many attempts have
been made so far but, in this study, we limit our comparative analysis to the most recent AI regulations,
specifically the European AI Act, the U.S. Executive Order and Chinese laws (both hard and soft ones).</p>
      <p>The paper proceeds as follows: Section 2 illustrates the research questions. Section 3 presents the
stages of the bibliometric analysis and its results. Section 4 provides a comparative analysis of the
targeted regulations focusing on the role of AI in conflict reduction and peace promotion. Section 5
includes a discussion and some concluding remarks.</p>
    </sec>
    <sec id="sec-2">
      <title>2. The problematization of AI as a peacebuilding tool</title>
      <p>The role of artificial intelligence in promoting peace worldwide is one of the most interesting, yet
under-implemented, perspectives from which to examine the ethics of AI nowadays. Our objective is to highlight
how this technology could increase international prosperity and what role current regulations play
in enabling it. We address the following research questions:</p>
      <p>What is the state-of-the-art of empirical research on AI as an instrument of peace-making? Is it
comprehensive or are there some gaps? Can we look at current legislations to derive suggestions on how to
conceive of AI as a tool for improving peace worldwide?</p>
    </sec>
    <sec id="sec-3">
      <title>3. Literature analysis</title>
      <p>As the academic debate on AI and peace is still emergent, it is useful to develop a structured and explanatory
review of the topic. We aim to clarify the evolution of research on AI as a peacebuilding tool over time
and the intellectual structure of this rising field of study.</p>
      <p>The examination is conducted through a bibliometric analysis of 158 documents indexed in the Scopus
database, which is one of the most important instruments for collecting systematic information on
global scientific literature [ 21, 22, 23]. It is especially useful for mapping an emergent field of research,
as it is not limited to ISI (International Scientific Indexing) journals. As Borgman and Furner (2002) [ 24]
explain, bibliometrics ofers a powerful set of techniques and measures for studying the structure of
scholarly communication [25].</p>
      <p>The sample contains 66 articles and 92 conference papers at any publication stage, so as to include
even the most recent works on this topic. Only English-language documents are considered and we do not set
a specific time span. The final dataset includes 128 sources and covers a time window of approximately 40
years. At this point, bibliometric data are analyzed using the bibliometrix software, a flexible tool for
conducting comprehensive science mapping analysis [26].</p>
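      <p>The first descriptive step of such an analysis, the annual scientific production later shown in Fig. 1, amounts to counting documents per publication year. Since bibliometrix is an R tool, the following is only a minimal Python sketch of that counting step; the records are invented placeholders standing in for the exported Scopus data frame, not the actual 158-document sample.</p>

```python
from collections import Counter

# Sketch of the "annual scientific production" step of a bibliometric
# analysis: count sampled documents per publication year. These records
# are illustrative placeholders, not the real Scopus export.
records = [
    {"title": "AI and early warning", "year": 2018, "type": "article"},
    {"title": "NLP for peacekeeping dialogue", "year": 2021, "type": "conference paper"},
    {"title": "Autonomous weapons debate", "year": 2021, "type": "article"},
    {"title": "The AI Act and conflict", "year": 2023, "type": "article"},
]

def annual_production(docs: list[dict]) -> Counter:
    """Return a {year: document count} mapping, as visualized in Fig. 1."""
    return Counter(d["year"] for d in docs)

counts = annual_production(records)
```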
      <p>Table 1 displays the principal information regarding the bibliographic data frame.</p>
      <p>Referring to the time window, most of the works on AI and peace have been published from 2016
onwards. The highest levels of scientific production were registered in 2018, 2021 and 2023
respectively (Fig. 1). Those peaks coincide with phases of intensified armed conflicts and social tensions
worldwide (e.g. Syria in 2018, India in 2021, Ukraine in 2023) [27]. Moreover, many of these events
were characterized by the adoption of AI in military settings [28, 29, 30].</p>
      <p>The geography of the scientific production reflects international dynamics too. The most prolific
countries are not only those driving global technological progress but also those where AI has been
applied in conflictual contexts (both armed and unarmed), such as China and the U.S. [29] (Fig. 2).</p>
      <p>Scholars’ attention to the relationship between AI and armed conflicts thus appears to be largely
driven by international dynamics.</p>
      <p>Moving to the conceptual structure of the targeted knowledge, various thematic clusters emerge
through a co-occurrence analysis (Fig. 3).
• A first (purple) cluster relates to the military applications of AI, with associated keywords like
disaster and military application.
• A second (orange) cluster explores the topic of human-machine interaction and its implications
for human life, as it relates to words like machine learning and human.
• A third (green) cluster highlights the relation between AI - specifically machine learning - and
international relations, given its association with keywords like forecasting and international
relations.
• Finally, the fourth (red) cluster links the use of AI to purposes unrelated to armed conflicts. This
area includes technological applications for improving the quality of citizens’ life, as suggested
by social aspects, e-learning and education keywords.</p>
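      <p>The co-occurrence analysis behind these clusters can be sketched as pair counting over per-document keyword sets. The keyword sets below are illustrative placeholders, not the sampled papers, and bibliometrix additionally normalizes and clusters the resulting matrix before drawing Fig. 3.</p>

```python
from collections import Counter
from itertools import combinations

# Sketch of a keyword co-occurrence count: two keywords co-occur when
# they are listed on the same document. Keyword sets are illustrative.
papers_keywords = [
    {"military application", "disaster"},
    {"machine learning", "human"},
    {"machine learning", "forecasting", "international relations"},
    {"machine learning", "human"},
    {"e-learning", "education", "social aspects"},
]

def cooccurrence(keyword_sets: list[set]) -> Counter:
    """Count how often each unordered keyword pair appears together."""
    pairs = Counter()
    for kws in keyword_sets:
        for a, b in combinations(sorted(kws), 2):
            pairs[(a, b)] += 1
    return pairs

matrix = cooccurrence(papers_keywords)
```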
      <p>A keyword analysis concludes our bibliometric scrutiny. Figure 4 is developed using the
wordclouds.com software starting from the keywords listed by the authors in the sampled papers. Alongside
terms associated with armed conflicts (e.g. war, weapons, conflict), the cloud provides other terms
related to unarmed tensions (e.g. surveillance, dispute resolution) and individual prosperity (education,
learning, SDGs). The word peace itself emerges explicitly in the cloud.</p>
    </sec>
    <sec id="sec-4">
      <title>4. AI and peacebuilding. A comparative legal perspective.</title>
      <p>An in-depth analysis of current regulations serves us to verify how lawmakers are addressing the
potential of AI in conflictual environments. In recent years, many legal initiatives have tried to untangle
the topic of artificial intelligence but in this work we focus on the most developed projects, meaning
the European Regulation on Artificial Intelligence, the U.S. Executive Order on the Safe, Secure, and
Trustworthy Development and Use of Artificial Intelligence and a set of provisions adopted by the
People’s Republic of China (PRC). These acts differ significantly from each other in their approach
and main purposes but, despite all the differences, they constitute the most mature attempts to rule
the development and commercialization of AI systems globally. The following sections offer a brief
analysis of each regulation to highlight whether, and if so how, the role of AI as a peacebuilding
instrument is assessed by lawmakers.</p>
      <sec id="sec-4-1">
        <title>4.1. The European Regulation on Artificial Intelligence</title>
        <p>The European Regulation on Artificial Intelligence (or “Artificial Intelligence Act”) resulted from the
effort of the European Union to join the technological race [31]. It offers a regulatory framework for
the development, the placing on the market, the putting into service and the use of AI systems by
adopting a horizontal approach that aims to cover their applications across all economic sectors (Recital
1). The Regulation is strongly inspired by European foundational values and promotes anthropocentric
artificial intelligence [32, 33].</p>
        <p>These provisions are built on the assumption that AI systems may cause diferent risks and impact
on fundamental rights in various ways [34, 35]. Therefore, depending on the expected level of risk
(unacceptable, high, limited or minimal), the lawmaker established a set of rules that developers,
producers and distributors must comply with. For example, the use of unacceptable risk systems (e.g.
biometric categorization systems based on sensitive characteristics, emotion recognition systems used in
the workplace and social scoring systems) is banned. High risk systems, such as those whose adoption
could undermine safety or fundamental rights, are permitted although their use is subject to strict
obligations [36].</p>
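        <p>The four-tier logic described above can be sketched as a simple lookup from intended use to regulatory consequence. The category lists below are a simplified illustrative subset; the authoritative scoping rules are those of art. 5-6 and Annex III of the Regulation.</p>

```python
# Simplified sketch of the AI Act's risk-tier logic: an AI system's
# intended use determines whether it is banned, strictly regulated, or
# subject only to transparency duties. Example use cases are illustrative.
UNACCEPTABLE = {"social scoring", "workplace emotion recognition",
                "biometric categorization on sensitive characteristics"}
HIGH_RISK = {"border control", "employment management",
             "access to essential public services"}

def risk_tier(use_case: str) -> str:
    """Map an intended use to its (simplified) regulatory consequence."""
    if use_case in UNACCEPTABLE:
        return "unacceptable risk: banned"
    if use_case in HIGH_RISK:
        return "high risk: permitted under strict obligations"
    return "limited/minimal risk: transparency duties at most"
```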
        <p>Transparency duties are set for general-purpose AI systems and for those solutions that imply an
interaction with users (e.g. chatbots), since they pose specific, albeit lower, risks (e.g. disinformation).</p>
        <p>Looking at the relationship between artificial intelligence and conflicts, the Regulation expressly
excludes from its application AI systems used solely for military purposes. Those are not subject to the
rules set out in the AI Act [31] (Recital 24).</p>
        <p>However, there are “mixed” solutions that can be developed for both military and non-military
purposes, such as drones or biometric recognition systems. Additionally, if an AI system is developed
or placed on the market exclusively for warfare but is subsequently used for other purposes (e.g. for
civilian or humanitarian aims), such a system still falls within the scope of the AI Act (Recital 24). In
these cases the Regulation plays a key role in promoting the adoption of AI systems aligned
with human rights and EU democratic values by establishing a set of strict obligations for their usage.
These limitations may reduce the risk of technological abuses in not-solely-military contexts and the
occurrence of armed conflicts by encouraging human-focused AI development [37, 38].</p>
        <p>A first example of these guarantees concerns the use of AI for migration management, asylum and
border control, access to essential public services and employment management, which have always
been critical sectors for the emergence of social and political tensions. Since those are classified as
high-risk systems (art. 6; Annex III), the Regulation imposes a set of additional guarantees for their
development and use - like the preliminary drafting of a fundamental rights impact assessment, logs
recording, human oversight against algorithmic drift and transparency and accuracy obligations. These
constraints aim to avoid AI-driven discrimination against weak or underrepresented citizens [39] and
promote equal access to essential services or jobs. Compliance with these obligations could dramatically
reduce socio-political conflicts caused by technological unfairness.</p>
        <p>The second example concerns deepfakes, for which the AI Act establishes transparency and labeling
obligations (art. 50) [40, 41, 42]. In fact, AI-generated contents are linked to serious political tensions
[43, 44, 45], so a trustworthy use of these technologies may foster more transparent political communication
and a subsequent reduction in political frictions.</p>
        <p>In conclusion, the European Regulation lays relevant foundations for a human-centered usage of
intelligent systems in unarmed conflicts while not addressing warfare issues. It seeks to defuse social
and political frictions by encouraging the development of AI in accordance with democratic values.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Artificial intelligence - the U.S. framework</title>
        <p>The United States is leading the AI race in many economic sectors but, unlike Europe, it lacks a
homogeneous legislative framework at the federal level, while individual states have assumed stricter or more
tolerant regulatory positions on this topic [46]. The most significant initiative at the federal level is
represented by the White House Executive Order on the Safe, Secure, and Trustworthy Development
and Use of Artificial Intelligence [47, 48]. It presents eight guiding principles and priorities to be
followed in the governance, development and use of AI systems.</p>
        <p>The application of such technology in military and intelligence sectors is explicitly addressed in the
document. It asks for a national memorandum to explore the role of AI as a key component of the U.S.
intelligence and defense strategy, analyzing its impact on citizens’ (and exceptionally foreigners’) rights
(Sec. 4.8). Regarding security and cybersecurity threats, the Executive Order identifies a set of actions to
be adopted in order to (i) mitigate the risk that AI is used improperly for developing biological weapons
or other chemical perils (Sec. 4.4); (ii) encourage the use of AI for discovering and fixing national IT
vulnerabilities (Sec. 4.3); (iii) protect critical infrastructures (Sec. 4.2).</p>
        <p>The document also covers the topic of deepfakes. It calls for guidelines and tools for authenticating,
detecting, labeling and auditing AI-generated or manipulated contents (Sec. 4.5). Its objective is to
facilitate the detection of those contents in order to increase communications transparency. These
provisions aim to reduce political frictions by countering misinformation [49, 50].</p>
        <p>Furthermore, the Executive Order highlights the importance of a non-discriminatory use of AI in
contexts that could generate social and political tensions, such as the workplace (Sec. 6), healthcare and
justice (Sec. 7-8), and data protection (Sec. 9). It sets out a roadmap for achieving these objectives in the
coming year while diminishing the risk of algorithmic abuse and biased decisions.</p>
        <p>To conclude, the Executive Order covers both warfare and nonmilitary matters. It recognizes the key
role of AI in restraining armed and unarmed conflict, algorithmic discrimination and disinformation.
Unlike the European Regulation, the document emphasizes the centrality of AI in the U.S. military and
intelligence sectors for promoting defence and protecting its citizens from armed attacks (Sec. 4.8).</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. The Chinese rules for artificial intelligence</title>
        <p>From 2021 onwards, the People’s Republic of China has issued a series of sectoral regulations and
political documents dedicated to AI. These provisions set out new requirements for algorithms
development and application, disclosure obligations and technical performance standards [51, 52]. The
Chinese approach is “vertical” as it focuses on the main characteristics or applications of AI systems
for designing its discipline. However, these provisions have common features that allow for some
generalizations [53].</p>
        <p>Regarding the role of AI in the military context, since 2019 China has been promoting the
“intelligentization” of armed conflict based on the integration of artificial intelligence, quantum computing,
big data and other cutting-edge technologies with human tactics [54, 55, 56]. Substantive legislation
on AI in warfare is still lacking, but non-mandatory provisions have been adopted in order to provide
international guidelines on this topic. We refer to the Position Paper of the People’s Republic of China
on the regulation of military applications of artificial intelligence [57]. The document stresses the
importance of preventing the escalation of conflicts and instability at the global level while urging
governments toward the responsible development and application of AI. It calls for strengthening mutual efforts to
regulate warfare applications of such technology but admits internal policies allowing the development
of weapon systems for countries’ defense.</p>
        <p>China recently adopted another political document about AI ethics. The Position Paper on
Strengthening Ethical Governance of Artificial Intelligence explains the Chinese commitment to advocating
a human-centered approach to AI and the principle of AI for good [58]. It calls on governments to
prioritize ethics and improve accountability mechanisms for protecting the rights of all civic groups.
Additionally, the document invites foreign countries to (i) prohibit uses of AI technologies that contravene
laws, regulations, policies and international standards; and (ii) identify potential ethical risks implicit
in AI.</p>
        <p>On the other hand, China disentangled the role of AI systems in unarmed conflicts by adopting a
specific regulation on “deep synthesis”. The Regulation on the Management of Deep Synthesis of Internet
Information Services [59] applies to AI-based technology that enables content synthesis provided
within the PRC. It aims both to strengthen the management of those systems by promoting their
reasonable and effective use in accordance with the law and to preserve a good ecology in cyberspace. In
order to address specific issues related to deepfakes, the Regulation bans the dissemination of fake
news (art. 6) and the alteration of people’s biometric characteristics without their consent (art. 14).
It requires service providers to authenticate their users before providing them any data or information
(art. 9), as this technology can be used to produce, copy and disseminate illegal or false information
or assume other people’s identities. Finally, it imposes a set of technical obligations on content creators
(e.g. a security assessment when these contents “might involve national security”) (art. 15) and mandates
watermarking for AI-generated contents (art. 17). Consequently, the Regulation sets a “red line” for deep
synthesis services in order to protect communication transparency and reduce fraud.</p>
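        <p>The labeling duty in art. 17 can be illustrated with a toy provenance marker. The plain-text marker format is invented for illustration; production schemes embed robust, hard-to-remove watermarks rather than a visible prefix.</p>

```python
# Toy sketch of a watermarking obligation: tag AI-generated content with
# a machine-readable provenance marker before distribution. The MARKER
# string is an illustrative stand-in for a robust embedded watermark.
MARKER = "[AI-GENERATED]"

def watermark(content: str) -> str:
    """Prefix synthetic content with a provenance label (idempotent)."""
    return content if content.startswith(MARKER) else f"{MARKER} {content}"

def is_labeled(content: str) -> bool:
    """Check whether a piece of content carries the provenance label."""
    return content.startswith(MARKER)
```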
        <p>To sum up: Chinese provisions point out the central role of AI in governing armed conflict. They
allow the use of AI for defense purposes but reject military applications of intelligent systems for
obtaining hegemony in warfare. With respect to unarmed conflicts, a sectoral Regulation aspires to
reduce the circulation of misleading or sensitive content, which may lead to socio-political tensions (art.
4).</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Discussion and final remarks</title>
      <p>The relationship between AI and conflicts (in particular armed ones) appears well established in the
literature. Our analysis reveals a rising attention of the scientific community to the relationships
between AI, conflict management and human wellbeing in recent years (Fig. 1). In particular, the
co-occurrence analysis pinpoints a deep focus on the role of artificial intelligence either in conducting
conflicts or in improving the quality of people’s life (Fig. 3). The keyword peace appears in bold in our
wordcloud, which means that its linkage with AI systems has been largely explored by scholars (Fig. 4).</p>
      <p>But while our analysis detects a deep relationship between AI and human prosperity, the concept of
peacebuilding seems to be flattened onto its individual dimension. In other words, the literature privileges
framing “peace” as individual well-being rather than as a collective good. This reading is supported
by the fact that terms like education, health, caregiver and SDGs are associated with AI in our analysis
(Fig. 3, 4). In conclusion, an account of the actual role of artificial intelligence in advancing world peace
seems to be lacking in the literature.</p>
      <p>This gap offers a significant starting point for investigating how lawmakers have tried to disentangle
the concrete involvement of AI technologies in promoting peace globally. A comparative analysis
of the targeted regulations clarifies the main differences among the three legal ecosystems. Unlike the
Artificial Intelligence Act, the U.S. Executive Order and the Chinese provisions explicitly handle the
use of intelligent systems for military purposes, establishing a set of guiding principles for conflict
management at the national and international level. AI is considered a risky weapon, so both frameworks
admit its use only for national defense while recalling the importance of ethics in this sector [47, 57].</p>
      <p>The three legislations also take into account the role of AI in reducing unarmed conflicts. Despite
ideological differences, they condemn the irresponsible use of these systems in generating social
inequalities and disinformation. They all stress the central role of AI in promoting transparent communications
and non-discrimination among individuals, considering it a key factor for political and social stability.
However, only the Artificial Intelligence Act includes binding countermeasures for minimizing broad
additional sources of social tensions, like migration flows and border control, the delivery of essential
public services and the workplace. For each field, it provides targeted duties in order to reduce the risk of
socio-political frictions (art. 6 seq.).</p>
      <p>Although these provisions represent a fundamental step in promoting AI ethics worldwide, none of
them seems to perceive such technology as an effective peacebuilding tool. Its adoption is governed by
the same provisions that set the rules for AI development in traditional or not-solely-military
contexts. None of the targeted regulations includes an additional set of rules specifically dedicated to the
development and use of AI for peacebuilding purposes. This equivalence may slow down the adoption
of intelligent systems in this sector, as developers and deployers are subject to a very strict set of
obligations. This circumstance may interfere with the technology-driven advancement of peacebuilding
techniques and make the objective of sustainable world prosperity more difficult to achieve.</p>
      <p>This conclusion provides a useful point of departure for designing the trajectory of future works
aimed at reinforcing the role of artificial intelligence in the peacebuilding sector. We invite lawmakers,
governments and international activists to put this topic at the center of their efforts and initiatives and
to develop a dedicated set of rules, through mandatory or political documents, that might facilitate the
development of cutting-edge solutions for promoting and preserving world peace.</p>
      <p>This study has some limitations that should be acknowledged. These include the choice of the initial
search keywords, which inevitably affects the results of our work. Also, the comparative legal analysis
targets three very different countries that adopt politically-driven approaches regarding the use of
intelligent systems. The results are largely affected by the socio-political background of these countries
and this aspect might weaken our conclusions. Additionally, our study could be complemented by
further research on other regulations and international laws and documents dedicated to examining the
role of AI in promoting global peace.</p>
      <p>This work initiates an international discussion on the future trajectories of AI as a peacebuilding
tool, both as a theoretical concept and a diplomatic and political issue. Through preventing conflict
and supporting peace operations, AI can become a powerful ally in creating a nonviolent and
human-centered world.</p>
      <p>B. Filar, et al., The malicious use of artificial intelligence: Forecasting, prevention, and mitigation,
arXiv preprint arXiv:1802.07228 (2018).
[21] P. Mongeon, A. Paul-Hus, The journal coverage of web of science and scopus: a comparative
analysis, Scientometrics 106 (2016) 213–228.
[22] S. A. S. AlRyalat, L. W. Malkawi, S. M. Momani, Comparing bibliometric analysis using pubmed,
scopus, and web of science databases, JoVE (Journal of Visualized Experiments) (2019) e58494.
[23] J. Li, J. F. Burnham, T. Lemley, R. M. Britton, Citation analysis: Comparison of web of science®,
scopus™, scifinder®, and google scholar, Journal of electronic resources in medical libraries 7
(2010) 196–217.
[24] C. L. Borgman, J. Furner, Scholarly communication and bibliometrics, Annual review of information
science and technology 36 (2002) 1–53.
[25] B. Cronin, H. B. Atkins, The web of knowledge: A festschrift in honor of Eugene Garfield,
Information Today, 2000.
[26] M. Aria, C. Cuccurullo, bibliometrix: An r-tool for comprehensive science mapping analysis,
Journal of informetrics 11 (2017) 959–975.
[27] G. C. D. Lab, Number of armed conflicts, World. URL: https://ourworldindata.org/grapher/
number-of-armed-conflicts.
[28] S. Penati, L. Pistarini Teixeira Nunes, On the Use of Artificial Intelligence in the framework of the
Syrian War, Technical Report, Budapest Centre for Mass Atrocities Prevention, Budapest, 2021.
URL: https://www.genocideprevention.eu/files/On_the_Use_of_Artificial_Intelligence_in_the_
framework_of_the_Syrian_War.pdf.
[29] T. Khurshid, The impact of artificial intelligence militarization on south asian deterrence dynamics,
BTTN Journal 2 (2023) 134–150.
[30] R. Lindelauf, H. Meerveld, M. Postma, Leveraging decision support in the russo-ukrainian war,
Atlantisch Perspectief 47 (2023) 36–41.
[31] European Union, Regulation 2024/1689 of the European Parliament and of the Council of 13 June
2024 laying down harmonized rules on artificial intelligence and amending Regulations (EC) No
300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144
and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828, 2024. URL: https://eur-lex.europa.
eu/legal-content/EN/TXT/PDF/?uri=OJ:L_202401689.
[32] F. Vainionpää, K. Väyrynen, A. Lanamaki, A. Bhandari, A review of challenges and critiques of
the european artificial intelligence act (aia) (2023).
[33] M. Comunale, A. Manera, The economic impacts and the regulation of ai: A review of the academic
literature and policy actions (2024).
[34] C. Novelli, F. Casolari, A. Rotolo, M. Taddeo, L. Floridi, Ai risk assessment: A scenario-based,
proportional methodology for the ai act, Digital Society 3 (2024) 13.
[35] D. Casaburo, I. Marsh, Ensuring fundamental rights compliance and trustworthiness of law
enforcement ai systems: the aligner fundamental rights impact assessment, AI and Ethics (2024)
1–14.
[36] M. Jacobs, J. Simon, Assigning obligations in ai regulation: A discussion of two frameworks
proposed by the european commission, Digital Society 1 (2022) 6.
[37] J. J. Bryson, A. Theodorou, How society can maintain human-centric artificial intelligence,
Human-centered digitalization and services (2019) 305–323.
[38] M. Estévez Almenzar, D. Fernández Llorca, E. Gómez, F. Martinez Plumed, Glossary of
human-centric artificial intelligence, Sevilla: Joint Research Centre (Seville Site) (2022).
[39] B. Xavier, Biases within ai: challenging the illusion of neutrality, AI &amp; SOCIETY (2024) 1–2.
[40] M. Łabuz, Deep fakes and the artificial intelligence act—an important signal or a missed
opportunity?, Policy &amp; Internet (2024).
[41] F. Romero Moreno, Generative ai and deepfakes: a human rights approach to tackling harmful
content, International Review of Law, Computers &amp; Technology (2024) 1–30.
[42] M. Westerlund, The emergence of deepfake technology: A review, Technology innovation
management review 9 (2019).
[43] C. Whyte, Deepfake news: Ai-enabled disinformation as a multi-level public policy challenge,
Journal of cyber policy 5 (2020) 199–217.
[44] C. Vaccari, A. Chadwick, Deepfakes and disinformation: Exploring the impact of synthetic
political video on deception, uncertainty, and trust in news, Social media+ society 6 (2020)
2056305120903408.
[45] M. Groh, A. Sankaranarayanan, N. Singh, D. Y. Kim, A. Lippman, R. Picard, Human detection of
political speech deepfakes across transcripts, audio, and video, Nature Communications 15 (2024)
7629.
[46] N. Maslej, L. Fattorini, E. Brynjolfsson, J. Etchemendy, K. Ligett, T. Lyons, J. Manyika, H. Ngo,
J. C. Niebles, V. Parli, et al., The ai index 2023 annual report, AI Index Steering Committee, Institute
for Human-Centered AI, Stanford University, Stanford, CA (2023).
[47] J. R. Biden, Executive order on the safe, secure, and trustworthy development and use of artificial
intelligence (2023).
[48] M. Wörsdörfer, Biden’s executive order on ai and the eu’s ai act: A comparative computer-ethical
analysis, Philosophy &amp; Technology 37 (2024) 74.
[49] B. Chesney, D. Citron, Deep fakes: A looming challenge for privacy, democracy, and national
security, Calif. L. Rev. 107 (2019) 1753.
[50] M. Pawelec, Deepfakes and democracy (theory): How synthetic audio-visual media for
disinformation and hate speech threaten core democratic functions, Digital Society 1 (2022) 19.
[51] H. Roberts, J. Cowls, J. Morley, M. Taddeo, V. Wang, L. Floridi, The Chinese approach to artificial
intelligence: an analysis of policy, ethics, and regulation, AI &amp; Society 36 (2021) 59–77.
[52] A. H. Zhang, The promise and perils of china's regulation of artificial intelligence, Available at
SSRN (2024).
[53] I. A. Filipova, Legal regulation of artificial intelligence: Experience of china, Journal of Digital
Technologies and Law 2 (2024) 46–73.
[54] State Council Information Office of China, China's National Defense in the New Era, 2019. URL:
https://www.gov.cn/zhengce/2019-07/24/content_5414325.htm.
[55] P. Paszak, The security strategy of the people’s republic of china in light of the 2019 defence white
paper, The Bellona Quarterly 700 (2020) 49–64.
[56] J. Wuthnow, M. T. Fravel, China’s military strategy for a ‘new era’: Some change, more continuity,
and tantalizing hints, Journal of Strategic Studies 46 (2023) 1149–1184.
[57] People's Republic of China, Position Paper of the People's Republic of China on Regulating
Military Applications of Artificial Intelligence (AI), 2021. URL:
http://geneva.china-mission.gov.cn/eng/dbdt/202112/t20211213_10467517.htm.
[58] People's Republic of China, Position Paper of the People's Republic of China on Strengthening
Ethical Governance of Artificial Intelligence (AI), 2022. URL:
https://www.fmprc.gov.cn/eng/zy/wjzc/202405/t20240531_11367525.html.
[59] Cyberspace Administration of China, Regulation on the Management of Deep Synthesis of Internet
Information Services, 2022. URL: https://www.cac.gov.cn/2022-12/11/c_1672221949354811.htm.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>O. Ozmen</given-names>
            <surname>Garibay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Winslow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Andolina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Antona</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bodenschatz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Coursaris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Falco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Fiore</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Garibay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Grieman</surname>
          </string-name>
          , et al.,
          <article-title>Six human-centered artificial intelligence grand challenges</article-title>
          ,
          <source>International Journal of Human-Computer Interaction</source>
          <volume>39</volume>
          (
          <year>2023</year>
          )
          <fpage>391</fpage>
          -
          <lpage>437</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Calo</surname>
          </string-name>
          ,
          <article-title>Artificial intelligence policy: a primer and roadmap</article-title>
          ,
          <source>U.C. Davis L. Rev.</source>
          <volume>51</volume>
          (
          <year>2017</year>
          )
          <fpage>399</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>F.</given-names>
            <surname>Tanner</surname>
          </string-name>
          ,
          <article-title>Conflict prevention and conflict resolution: limits of multilateralism</article-title>
          ,
          <source>International review of the Red Cross</source>
          <volume>82</volume>
          (
          <year>2000</year>
          )
          <fpage>541</fpage>
          -
          <lpage>559</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <collab>International Peace Institute (IPI)</collab>
          ,
          <article-title>Peacebuilding</article-title>
          ,
          <source>Task Forces on Strengthening Multilateral Security Capacity</source>
          ,
          <source>Technical Report 10</source>
          , International Peace Institute (IPI), New York,
          <year>2009</year>
          . URL: https://www.ipinst.org/wp-content/uploads/publications/peacebuilding_1.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>B.</given-names>
            <surname>Guembe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Azeta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Misra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. C.</given-names>
            <surname>Osamor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Fernandez-Sanz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Pospelova</surname>
          </string-name>
          ,
          <article-title>The emerging threat of ai-driven cyber attacks: A review</article-title>
          ,
          <source>Applied Artificial Intelligence</source>
          <volume>36</volume>
          (
          <year>2022</year>
          )
          <fpage>2037254</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Rege</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. B. K.</given-names>
            <surname>Mbah</surname>
          </string-name>
          ,
          <article-title>Machine learning for cyber defense and attack</article-title>
          ,
          <source>Data Analytics</source>
          <year>2018</year>
          (
          <year>2018</year>
          )
          <fpage>83</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>B. A.</given-names>
            <surname>de Castro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. G. C.</given-names>
            <surname>Pochmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. B.</given-names>
            <surname>Neves</surname>
          </string-name>
          ,
          <article-title>Artificial intelligence applications in military logistics operations</article-title>
          , in: Multidisciplinary International Conference of Research Applied to Defense and Security, Springer,
          <year>2023</year>
          , pp.
          <fpage>89</fpage>
          -
          <lpage>100</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Basuchoudhary</surname>
          </string-name>
          ,
          <article-title>Ai and warfare: A rational choice approach</article-title>
          ,
          <source>Eastern Economic Journal</source>
          (
          <year>2024</year>
          )
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <collab>ICRC</collab>
          ,
          <article-title>Artificial intelligence and machine learning in armed conflict: A human-centred approach</article-title>
          ,
          <source>ICRC Geneva</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Yamin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ullah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ullah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Katt</surname>
          </string-name>
          ,
          <article-title>Weaponized ai for cyber attacks</article-title>
          ,
          <source>Journal of Information Security and Applications</source>
          <volume>57</volume>
          (
          <year>2021</year>
          )
          <fpage>102722</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A. B.</given-names>
            <surname>Rashid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Kausik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Al Hassan Sunny</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. H.</given-names>
            <surname>Bappy</surname>
          </string-name>
          ,
          <article-title>Artificial intelligence in the military: An overview of the capabilities, applications, and challenges</article-title>
          ,
          <source>International Journal of Intelligent Systems</source>
          <year>2023</year>
          (
          <year>2023</year>
          )
          <fpage>8676366</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>G. U.</given-names>
            <surname>Osimen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O. M.</given-names>
            <surname>Fulani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Chidozie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. O.</given-names>
            <surname>Dada</surname>
          </string-name>
          ,
          <article-title>The weaponisation of artificial intelligence in modern warfare: Implications for global peace and security</article-title>
          ,
          <source>Research Journal in Advanced Humanities</source>
          <volume>5</volume>
          (
          <year>2024</year>
          )
          <fpage>24</fpage>
          -
          <lpage>36</lpage>
          . URL: https://royalliteglobal.com/advanced-humanities/article/view/1654/771. doi:10.58256/g2p9tf63.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Rejeb</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Rejeb</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Simske</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Treiblmaier</surname>
          </string-name>
          ,
          <article-title>Humanitarian drones: A review and research agenda</article-title>
          ,
          <source>Internet of Things</source>
          <volume>16</volume>
          (
          <year>2021</year>
          )
          <fpage>100434</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>E.</given-names>
            <surname>Albrecht</surname>
          </string-name>
          ,
          <article-title>Predictive Technologies in Conflict Prevention: Practical and Policy Considerations for the Multilateral System</article-title>
          ,
          <year>2023</year>
          . URL: https://unu.edu/sites/default/files/2023-09/predictive_technologies_conflict_prevention_.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>U.</given-names>
            <surname>Sasikumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Zaman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.-R.</given-names>
            <surname>Mawlood-Yunis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Chatterjee</surname>
          </string-name>
          ,
          <article-title>Sentiment analysis of twitter posts on global conflicts</article-title>
          ,
          <source>arXiv preprint arXiv:2312.03715</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>D.</given-names>
            <surname>Masood Alavi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wählisch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Irwin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Konya</surname>
          </string-name>
          ,
          <article-title>Using artificial intelligence for peacebuilding</article-title>
          ,
          <source>Journal of Peacebuilding &amp; Development</source>
          <volume>17</volume>
          (
          <year>2022</year>
          )
          <fpage>239</fpage>
          -
          <lpage>243</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>D. J.</given-names>
            <surname>Olsher</surname>
          </string-name>
          ,
          <article-title>New artificial intelligence tools for deep conflict resolution and humanitarian response</article-title>
          ,
          <source>Procedia Engineering</source>
          <volume>107</volume>
          (
          <year>2015</year>
          )
          <fpage>282</fpage>
          -
          <lpage>292</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <collab>United Nations</collab>
          ,
          <article-title>Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of Non-International Armed Conflicts</article-title>
          ,
          <year>1977</year>
          . URL: https://www.ohchr.org/sites/default/files/protocol2.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>S.</given-names>
            <surname>Vité</surname>
          </string-name>
          ,
          <article-title>Typology of armed conflicts in international humanitarian law: legal concepts and actual situations</article-title>
          ,
          <source>International review of the Red Cross</source>
          <volume>91</volume>
          (
          <year>2009</year>
          )
          <fpage>69</fpage>
          -
          <lpage>94</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>M.</given-names>
            <surname>Brundage</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Avin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Clark</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Toner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Eckersley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Garfinkel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Dafoe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Scharre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Zeitzoff</surname>
          </string-name>
          , et al.,
          <article-title>The malicious use of artificial intelligence: Forecasting, prevention, and mitigation</article-title>
          ,
          <source>arXiv preprint arXiv:1802.07228</source>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>