<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Vienna, Austria. * Corresponding author. golpayes@tcd.ie (D. Golpayegani); me@harshp.com (H. J. Pandit); delewis@tcd.ie (D. Lewis)</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Semantic Patterns of Prohibited AI Systems in the EU AI Act</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Delaram Golpayegani</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Harshvardhan J. Pandit</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dave Lewis</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>ADAPT Centre, Trinity College Dublin</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>AI Accountability Lab, Trinity College Dublin</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>The EU AI Act is a landmark piece of legislation that governs the deployment and use of AI systems. Within its risk-based regulatory regime, prohibited AI practices face the strictest requirements, being entirely banned from deployment or use within the Union. The provisions for prohibited systems have applied since 2 February 2025. While authoritative guidelines have been published for prohibited systems, there is still no systematic approach that facilitates determination of such systems in a simplified and automated manner. To fill this gap, we specify the prohibited AI conditions articulated in Art. 5 using combinations of a minimal set of semantic concepts. We further show how these conditions can be described in a machine-readable format using semantic constraint and rule languages, such as SHACL and N3. This approach to representing prohibition rules supports a more open, interoperable, and transparent implementation of the AI Act, while also enabling partial automation of enforcement processes.</p>
      </abstract>
      <kwd-group>
        <kwd>EU AI Act</kwd>
        <kwd>prohibited AI</kwd>
        <kwd>semantic rules</kwd>
        <kwd>SHACL</kwd>
        <kwd>N3</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        systems might be unlikely, the rapid pace of changes in AI systems requires adaptable approaches
that enable ongoing assessment of the system’s risk level to avoid any non-compliance. Motivated by
the EU’s initiatives for regulatory simplification [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], in this paper we aim to facilitate identification of
prohibited AI systems by determining the minimal set of concepts that enable specifying prohibited
AI systems in a way that allows them to be sufficiently distinguished. After conceptualisation of prohibited
conditions, we demonstrate how these can be translated into codified machine-readable rules using
Semantic Web technologies, particularly the Shapes Constraint Language (SHACL) [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and Notation 3
(N3) [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. By leveraging Semantic Web technologies, we develop a standards-based, transparent, and
interoperable framework for determining prohibited AI conditions, and thereby supporting automation
of compliance-related tasks. As will be discussed later, this work builds upon our previous research on
determining high-risk AI systems [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], which has gained considerable traction within the community.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Existing studies on the AI Act’s prohibited AI practices (Art. 5) are primarily focused on interpreting
the prohibited conditions. Some notable analyses were published prior to the publication of the
AI Act in the Official Journal of the EU, including Neuwirth’s analysis of prohibited categories stated
in the Commission’s proposal [11], Bermúdez et al.’s effort to provide a definition for subliminal
techniques [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], Franklin et al.’s proposed definitions for subliminal, purposefully manipulative, and
deceptive techniques [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], Bulgakova’s analysis of the prohibition on the use of subliminal techniques [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ],
and Leiser’s comparative analysis of prohibited uses that deploy manipulative techniques in different
mandates of the Act [12]. However, the recent publication of the Commission’s guidelines on prohibited
AI systems [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] has addressed several issues previously highlighted in these studies. Since the official
publication of the AI Act and the Commission guidelines on prohibited AI, only a few studies have been
published, including Barkane and Buka’s critical analysis of the prohibitions on surveillance and predictive
policing [13]. In general, the body of work on the criteria for prohibited systems mainly focuses on
clarifying the wording of the Act’s text; neither the aforementioned studies nor the Commission’s
guidelines establish a holistic view of the prohibited categories, nor do they identify the
set of concepts that make AI use cases prohibited.
      </p>
      <p>
        Regarding the codification of rules for the AI Act’s risk categorisation, the Decision-Tree-based
framework [14] is a static framework that aims to assist in classification of AI systems based on the AI Act.
The framework is based on a decision tree comprising 20 questions for determining the risk category
associated with an AI system. Our previous work [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] identifies 5 concepts to facilitate identification of
high-risk AI systems according to Annex III, which are: domain, purpose, AI capability, deployer, AI
subject. We further codified the rules using SHACL to enable automated determination of such systems.
Given the interest our work on high-risk AI has attracted, in this paper we follow the same approach
for prohibited practices.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <p>
        Identification of classification rules for the AI Act’s prohibited AI practices is guided by our contributions
in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. In this previous work, through manual annotation of Annex III of the AI Act, we identified the
minimum set of information elements (the 5 aforementioned concepts) required to determine high-risk
applications of AI. Building upon these identified information elements, we take the following steps to
create a framework for determining prohibited AI practices (see section 4):
1. Identify the 5 concepts (domain, purpose, AI capability, deployer, AI subject) from each prohibited
condition described in Art. 5(1),
2. Determine whether the 5 concepts are sufficient to describe prohibited AI practices in a unique
way that sufficiently distinguishes them from each other,
3. Where the 5 concepts are not sufficient, identify the minimal set of additional concepts needed
for describing the prohibited AI condition.
      </p>
      <p>To be able to provide open data specifications for prohibited systems, we add the identified additional
concepts (step 3) to the AI Risk Ontology (AIRO) [15]1 and further populate the Vocabulary of AI Risks
(VAIR)2 with the instances identified from the annotation process.</p>
      <p>
        To demonstrate how prohibited AI rule-checking can be automated to support compliance
tasks, we utilise existing Semantic Web languages and standards with rule-checking capabilities. While
there are multiple languages and standards offering such capabilities, including the Shapes Constraint
Language (SHACL) [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], the Semantic Web Rule Language (SWRL) [16], N3 (Notation3) rules [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], and
the Shape Expressions (ShEx) language [17], we use SHACL in this work as it is a W3C recommended
language. We also use N3 to express rules in a simplified if-then style to address the complexity
of expressing the rules using SHACL (see section 5).
      </p>
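      <p>The if-then style of N3 rules can be illustrated with a minimal sketch; the class and property names below are assumptions for illustration, not the published rules:</p>
      <p>
```n3
@prefix airo: &lt;https://w3id.org/airo#&gt; .
@prefix ex: &lt;https://example.com/ns#&gt; .

# If a system has a capability of a (hypothetical) prohibited kind,
# then flag it as a candidate for assessment under Art. 5(1).
{
  ?system a airo:AISystem ;
          airo:hasCapability ?capability .
  ?capability a ex:ProhibitedCapability .
}
=&gt;
{
  ?system a ex:CandidateProhibitedAISystem .
} .
```
      </p>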
    </sec>
    <sec id="sec-4">
      <title>4. Patterns of Prohibited AI Practices under the AI Act</title>
      <p>The analysis of Art. 5(1) aims to identify the minimum set of concepts that are adequate to uniquely
describe prohibited AI practices. Following the steps outlined above, Art. 5(1) clauses were manually
annotated to identify the 5 following concepts: domain, purpose, AI capability, deployer, AI subject.
Then, additional concepts were identified in each clause. An example of annotating Art. 5(1a) is shown
in Figure 1. The manual annotation was carried out by the lead author and validated through discussions
with co-authors.</p>
      <p>The annotation exercise revealed that, among the 5 previously identified concepts, AI deployer is not a
decisive factor in determining prohibited AI systems. We also identified the following additional
concepts: data processed by the system, locality of use, consequence, impact and its severity, and
impacted stakeholder(s). Locality of use defines the environment in which the system is used, e.g.
the workplace. Consequence refers to the direct, immediate effect of using an AI system, whether or not it
leads to harms to individuals, groups, or society. Impact refers to the overall, ultimate effect of an AI
system on impacted stakeholders, such as individuals, groups, and society. We treat the combination
of consequence, impact and its severity, and impacted stakeholder as a (harmful) risk requirement</p>
      <sec id="sec-4-1">
        <title>1 https://w3id.org/airo; 2 https://w3id.org/vair</title>
        <p>on the basis that these concepts can only be determined through risk assessment. Our analysis shows
that among the prohibited conditions in Art. 5(1), points (a), (b), and (c) depend on the results of a risk
assessment process that identifies consequences, associated impacts, their severity, and the stakeholders
affected.</p>
        <p>The minimal set of concepts for determining prohibited AI systems is expressed as the following questions:
1. In which domain is the AI system used?
2. What is the purpose of using the AI system?
3. What is the capability of the AI system?
4. What data is processed by the AI system?
5. Who is the AI subject?
6. What is the locality of use?
7. What is the harmful risk caused by the AI system?
a) What is the consequence of using the system?
b) What is the impact of using the AI system?
c) What is the severity of the impact?
d) Who is the impacted stakeholder?</p>
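        <p>In RDF terms, answering these questions amounts to asserting one property per question on the system’s description. The following sketch uses illustrative property names in an example namespace; the actual AIRO terms may differ:</p>
        <p>
```turtle
@prefix airo: &lt;https://w3id.org/airo#&gt; .
@prefix ex: &lt;https://example.com/ns#&gt; .

ex:SomeSystem a airo:AISystem ;
    ex:hasDomain ex:SomeDomain ;                # Q1: domain
    ex:hasPurpose ex:SomePurpose ;              # Q2: purpose
    ex:hasCapability ex:SomeCapability ;        # Q3: AI capability
    ex:processesData ex:SomeDataCategory ;      # Q4: data processed
    ex:hasAISubject ex:SomePerson ;             # Q5: AI subject
    ex:hasLocalityOfUse ex:SomeLocality ;       # Q6: locality of use
    ex:hasRisk [                                # Q7: harmful risk
        ex:hasConsequence ex:SomeConsequence ;  # Q7a: consequence
        ex:hasImpact [                          # Q7b: impact
            ex:hasSeverity ex:Severe ;          # Q7c: severity
            ex:hasImpactedStakeholder ex:SomeGroup  # Q7d: stakeholder
        ]
    ] .
```
        </p>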
        <p>These concepts and their relations are modelled in our previously developed ontology for AI risks,
AIRO, and are illustrated in Figure 2. As shown in the figure, concepts from the Data Privacy Vocabulary
(DPV) [18] are reused for expressing the data processed by the system.</p>
        <p>The detailed analysis of the prohibited conditions is presented in Appendix A and a summary of the
conditions is illustrated in Figure 3. It should be noted that in our analysis of Art. 5(1) points (a) and (b),
we consider materially distorting behaviour as a consequence rather than a purpose of the system, even
though the wording of the AI Act suggests that it can be either an objective or an effect of employing
the AI system. This interpretation is based on the reality that AI providers rarely, if ever, explicitly
state that their system’s purpose is “behaviour distortion” or “impairing decision making”. Further,
in the development of emerging technologies, such effects of AI are often identified only after deployment as
(unintended) consequences.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Codified Rules for Determining Prohibited AI Practices</title>
      <p>
        In our framework, prior to rule-checking, an RDF-based specification of an AI system must be created
to enable determination of its risk category. Listing 1 shows the machine-readable specification of an AI chatbot
that impersonates a friend of a person for scamming, described in the Commission’s guidelines [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]3. This specification serves as a data graph that can be validated against both shape graphs, which
describe the SHACL rules for prohibited AI systems, and N3 rules.
      </p>
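      <p>A data graph of this kind might look as follows; the identifiers and property names are illustrative assumptions, and the published specification may differ:</p>
      <p>
```turtle
@prefix airo: &lt;https://w3id.org/airo#&gt; .
@prefix vair: &lt;https://w3id.org/vair#&gt; .
@prefix ex: &lt;https://example.com/ns#&gt; .

# Hypothetical description of a chatbot that impersonates a person's
# friend in order to scam them (cf. the Commission's guidelines).
ex:ScamChatbot a airo:AISystem ;
    ex:hasCapability vair:Deception ;
    ex:hasAISubject ex:TargetedPerson ;
    ex:hasRisk [
        ex:hasConsequence ex:MateriallyDistortedBehaviour ;
        ex:hasImpact [
            a ex:Harm ;
            ex:hasSeverity ex:Severe ;
            ex:hasImpactedStakeholder ex:TargetedPerson
        ]
    ] .
```
      </p>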
      <p>To show how SHACL can be used to describe prohibited rules, we provide an example of a shape
graph specifying the Art. 5(1a) condition in Listing 2. As shown in the listing, the shape graph is
expressed as the negation of the condition using sh:not. This is because SHACL’s validation
report (sh:ValidationResult) is only generated in cases of non-conformance. We use the validation
report to enhance transparency by providing guiding information about the clause based on which
the system is determined to be prohibited. The SHACL shapes for prohibited AI systems are published
on GitHub4 under permissive licences.
3This use case, along with additional examples, is available at: https://github.com/DelaramGlp/airo/tree/main/usecase
4https://github.com/DelaramGlp/airo/tree/main/prohibited-shacl</p>
      <sec id="sec-5-1">
        <title>Listing 1: RDF-based specification of the AI chatbot example</title>
        <p>As shown in the listing, expressing the harm requirement within a SHACL shape graph requires
nested NodeShapes, which adds to the complexity of the shape and further affects its readability and
performance. To address this issue, we use N3 to provide a more flexible and simplified representation of
the rules. Listing 3 illustrates the encoding of Art. 5(1a) in N3. For simplicity, the listing is restricted
to NaturalPersons as AI subjects and impacted stakeholders. The N3 rules are made available online5.
5https://github.com/DelaramGlp/airo/tree/main/prohibited-n3
@prefix rdf: &lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#&gt; .
@prefix rdfs: &lt;http://www.w3.org/2000/01/rdf-schema#&gt; .
@prefix sh: &lt;http://www.w3.org/ns/shacl#&gt; .
@prefix airo: &lt;https://w3id.org/airo#&gt; .
@prefix vair: &lt;https://w3id.org/vair#&gt; .
@prefix terms: &lt;http://purl.org/dc/terms/&gt; .
@prefix ex: &lt;https://example.com/ns#&gt; .
@prefix risk: &lt;https://w3id.org/dpv/risk#&gt; .

ex:Art5-1-a
    a sh:NodeShape ;
    sh:targetClass airo:AISystem ;
    sh:message "Prohibited as per AI Act, Art. 5(1a): AI system that deploys subliminal techniques beyond a person's consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect, of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm"@en ;
    sh:description "AI system that deploys subliminal techniques beyond a person's consciousness or purposefully manipulative or deceptive techniques"@en ;
    sh:not [
        sh:and (</p>
        <p>Listing 2: SHACL shape for identifying prohibited AI systems from Art. 5(1a)</p>
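        <p>For comparison, an N3 encoding of the Art. 5(1a) condition can be sketched in condensed form as follows; the terms are illustrative assumptions, and the published rules are more detailed:</p>
        <p>
```n3
@prefix airo: &lt;https://w3id.org/airo#&gt; .
@prefix vair: &lt;https://w3id.org/vair#&gt; .
@prefix ex: &lt;https://example.com/ns#&gt; .

# If a system deploys a subliminal technique and its use impairs
# decision making with a severe harmful impact, classify it as
# prohibited (condensed, hypothetical form of the Art. 5(1a) rule).
{
  ?system a airo:AISystem ;
          ex:hasCapability ?capability .
  ?capability a vair:SubliminalTechnique .
  ?system ex:hasRisk ?risk .
  ?risk ex:hasConsequence ex:ImpairedDecisionMaking ;
        ex:hasImpact ?impact .
  ?impact a ex:Harm ;
          ex:hasSeverity ex:Severe .
}
=&gt;
{
  ?system a ex:ProhibitedAISystem .
} .
```
        </p>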
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Limitations</title>
      <p>As mentioned earlier, an initial validation of the analysis of prohibited practices, i.e. the results of the
manual annotation, was conducted. However, further consultation with subject matter experts,
including lawyers and policymakers, is required to ensure the validity of our interpretation of the AI Act.
Nevertheless, since our proposed framework for determining prohibited practices leverages Semantic
Web technologies, it is flexible and can accommodate future enhancements.</p>
      <p>In the case of our research, manual annotation of clauses describing prohibited practices was possible
given the limited number of these clauses. However, manually annotating a large number of AI use
cases to determine their risk level under the AI Act might not be possible. To address this challenge, a
combination of Large Language Models (LLMs) and ontologies can provide a scalable solution. However,
this requires appropriate measures to avoid hallucinations.</p>
      <p>Our proposed framework is designed to support regulatory simplification and automation by adopting
an open, standards-based, and interoperable approach. It is important to note that our framework does
not substitute legal advice, and determining some of the concepts, in particular the risk requirement,
requires legal interpretation as well as technical analysis. Given the high stakes involved in determining
risk levels under the AI Act, our framework should be viewed as a supporting tool to assist in identifying
prohibited practices, not as a replacement for legal expertise.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusion and Future Work</title>
      <p>
        In this paper, we presented a Semantic Web-based framework to assist with determining prohibited AI
systems according to the AI Act. This paper followed the approach we took in our previous work for
determining high-risk applications [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] in terms of both conceptualisation and codification. Although
these two studies are aligned and complementary, they have not yet been integrated to capture the interplay
between the two categories. Thus, in our future work, we aim to address this gap by incorporating the
exceptions to prohibited systems, given that these exceptions mostly result in the system being
classified as high-risk [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. For AI systems that are listed in Annex III (high-risk AI systems) but may also
meet the prohibited conditions, and therefore be classified as prohibited, a sequential classification
wherein determining prohibited AI supersedes high-risk AI may be appropriate.
      </p>
      <p>
        In our future work, we also aim to include the specificities from the Commission’s guidelines on
prohibited systems [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] and further populate VAIR, for example with instances of subliminal techniques,
including visual subliminal messages, subvisual and subaudible cueing, and misdirections. We also plan
to propose these concepts for inclusion within DPV.
      </p>
    </sec>
    <sec id="sec-8">
      <title>Acknowledgments</title>
      <p>This work has received funding from the European Commission’s Horizon Europe Research and
Innovation Programme under grant agreement No. 101177579 (FORSEE), the European Union’s Horizon
2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No.
813497 (PROTECT ITN), and from the ADAPT Centre for Digital Media Technology, which is funded
by Research Ireland and is co-funded under the European Regional Development Fund (ERDF) through
Grant#13/RC/2106_P2. Harshvardhan J. Pandit is a member of the AI Accountability Lab, which is funded
under a John D. and Catherine T. MacArthur Foundation grant with project #216001 and award #19034.</p>
    </sec>
    <sec id="sec-9">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the first author used OpenAI’s ChatGPT and Anthropic’s Claude
for language refinement and Microsoft’s Copilot for code debugging assistance. These tools were used
in a limited capacity; the lead author reviewed and edited the generated content as needed and takes
full responsibility for the publication’s content.</p>
      <p>[11] R. J. Neuwirth, Prohibited Artificial Intelligence Practices in the Proposed EU Artificial Intelligence Act (AIA), Computer Law &amp; Security Review 48 (2023).
[12] M. Leiser, Psychological Patterns and Article 5 of the AI Act: AI-Powered Deceptive Design in the System Architecture and the User Interface, Journal of AI Law and Regulation 1 (2024). doi:10.21552/aire/2024/1/4.
[13] I. Barkane, L. Buka, Prohibited AI Surveillance Practices in the Artificial Intelligence Act: Promises and Pitfalls in Protecting Fundamental Rights, in: Critical Perspectives on Predictive Policing, Edward Elgar Publishing, Cheltenham, UK, 2025, pp. 110–129. doi:10.4337/9781035323036.00011.
[14] H. Hanif, J. Constantino, M.-T. Sekwenz, M. van Eeten, J. Ubacht, B. Wagner, Y. Zhauniarovich, Navigating the EU AI Act Maze using a Decision-Tree Approach, ACM Journal on Responsible Computing (2024). doi:10.1145/3677174.
[15] D. Golpayegani, H. J. Pandit, D. Lewis, AIRO: An Ontology for Representing AI Risks Based on the Proposed EU AI Act and ISO Risk Management Standards, in: Towards a Knowledge-Aware AI, volume 55, IOS Press, 2022, pp. 51–65.
[16] I. Horrocks, P. F. Patel-Schneider, H. Boley, S. Tabet, B. Grosof, M. Dean, SWRL: A Semantic Web Rule Language Combining OWL and RuleML, 2004. URL: https://www.w3.org/submissions/2004/SUBM-SWRL-20040521/, W3C Member Submission.
[17] E. Prud’hommeaux, I. Boneva, J. E. L. Gayo, G. Kellogg, Shape Expressions Language 2.1, 2019. URL: http://shex.io/shex-semantics/, W3C Final Community Group Report.
[18] H. J. Pandit, B. Esteves, G. P. Krog, P. Ryan, D. Golpayegani, J. Flake, Data Privacy Vocabulary (DPV) – Version 2.0, in: G. Demartini, K. Hose, M. Acosta, M. Palmonari, G. Cheng, H. Skaf-Molli, N. Ferranti, D. Hernández, A. Hogan (Eds.), The Semantic Web – ISWC 2024, Springer Nature Switzerland, Cham, 2025, pp. 171–193. doi:10.1007/978-3-031-77847-6_10.</p>
    </sec>
    <sec id="sec-10">
      <title>A. Detailed Analysis of Prohibited AI Practices</title>
      <p>[Table: concepts identified for each prohibited condition, Art. 5(1a)–(1g).]</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] European Parliament and Council of the European Union, Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), 2024. URL: http://data.europa.eu/eli/reg/2024/1689/oj.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] J. P. Bermúdez, R. Nyrup, S. Deterding, L. Moradbakhti, C. Mougenot, F. You, R. A. Calvo, What Is a Subliminal Technique? An Ethical Perspective on AI-Driven Influence, in: 2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS), 2023, pp. 1–10. doi:10.1109/ETHICS57328.2023.10155039.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] M. Franklin, P. M. Tomei, R. Gorman, Strengthening the EU AI Act: Defining Key Terms on AI Manipulation, 2023. URL: https://arxiv.org/abs/2308.16364. arXiv:2308.16364.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] D. Bulgakova, The Prohibited Artificial Intelligence Practice, Theory and Practice of Forensic Science and Criminalistics 32 (2023) 89–112.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] European Commission, Commission Guidelines on Prohibited Artificial Intelligence Practices Established by Regulation (EU) 2024/1689 (AI Act), 2025.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] M. Almada, N. Petit, The EU AI Act: Between the Rock of Product Safety and the Hard Place of Fundamental Rights, Common Market Law Review 62 (2025). doi:10.54648/cola2025004.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] T. Karathanasis, The AI Act: Balancing Implementation Challenges and the EU’s Simplification Agenda, 2025. URL: https://ssrn.com/abstract=5311501.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] H. Knublauch, D. Kontokostas, Shapes Constraint Language (SHACL), 2017. URL: https://www.w3.org/TR/shacl/, W3C Recommendation.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] W. V. Woensel, D. Arndt, P.-A. Champin, D. Tomaszuk, G. Kellogg, Notation3 Language, 2024. URL: https://w3c.github.io/N3/spec/, W3C Community Group Draft Report.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] D. Golpayegani, H. J. Pandit, D. Lewis, To Be High-Risk, or Not To Be – Semantic Specifications and Implications of the AI Act’s High-Risk AI Applications and Harmonised Standards, in: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, 2023, pp. 905–915.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>1. Domain: Any</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>2. Purpose: Any</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>3. Capability: Subliminal Capability, Manipulation, Deception</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>4. Data processed: Any</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>5. AI subject: Natural Person, Group of Persons</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>6. Locality of use: Any; 7a. Consequence: Impaired Decision Making; 7b. Impact: Harm; 7c. Severity of impact: Severe; 7d. Impacted stakeholder: Natural Person (self or third-party), Group of Persons</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>1. Domain: Any</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>2. Purpose: Any</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>3. Capability: Exploitation Of Vulnerability</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>4. Data processed: Any</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>5. AI subject: Vulnerable Person, Vulnerable Groups Of Persons</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>6. Locality of use: Any; 7a. Consequence: Materially Distorting Behaviour, Exploiting Vulnerability; 7b. Impact: Harm; 7c. Severity of impact: Severe; 7d. Impacted stakeholder: Vulnerable Person (self or third-party)</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>1. Domain: Any</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>2. Purpose: Evaluation Of People, Classification Of People</mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>3. Capability: Social Scoring</mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>4. Data processed: Social Behaviour Data; Known, Inferred or Predicted Personal Characteristics; Known, Inferred or Predicted Personality Characteristics</mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>5. AI subject: Natural Person, Group of Persons</mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>6. Locality of use: Any; 7a. Consequence: Any; 7b. Impact: Discriminatory Treatment, Detrimental Treatment, Unfavourable Treatment; 7c. Severity of impact: Any; 7d. Impacted stakeholder: Natural Person, Group of Persons</mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>1. Domain: Any</mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          2. Purpose:
          <article-title>Assessing Risk of Committing a Criminal Ofence, Predicting Risk of Committing a Criminal Ofence</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          3. Capability: Profiling,
          <string-name>
            <surname>Personality Trait Analysis</surname>
          </string-name>
          , Personality Characteristics Assessment
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>4. Data processed: Any</mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>5. AI subject: Natural Person</mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          6.
          <article-title>Locality of use: Any 7a</article-title>
          .
          <source>Consequence: Any 7b. Impact: Any 7c. Severity of impact: Any 7d. Impacted stakeholder: Any</source>
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>1. Domain: Any</mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>2. Purpose: Creating Facial Recognition Databases, Expanding Facial Recognition Databases</mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>3. Capability: Web Scraping</mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          <article-title>4. Data processed: Facial Images From The Internet</article-title>
          , Facial Images From CCTV Footage
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>5. AI subject: Natural Person</mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          6.
          <article-title>Locality of use: Any 7a</article-title>
          .
          <article-title>Consequence: Any 7b</article-title>
          .
          <article-title>Impact: Any 7c</article-title>
          .
          <article-title>Severity of impact: Any 7d</article-title>
          .
          <article-title>Impacted stakeholder: Any Table 2 Analysis of prohibited AI practices listed in Art. 5, Points (1f) to (1h) Art. 5 clause (1f)</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>1. Domain: Employment, Education</mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>2. Purpose: Any</mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>3. Capability: Emotion Recognition</mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>4. Data processed: Any</mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>5. AI subject: Natural Person</mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          6.
          <article-title>Locality of use: Workplace, Education Institution 7a</article-title>
          .
          <source>Consequence: Any 7b. Impact: Any 7c. Severity of impact: Any 7d. Impacted stakeholder: Any</source>
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>1. Domain: Any</mixed-citation>
      </ref>
      <ref id="ref48">
        <mixed-citation>2. Purpose: Deduce Sensitive Information, Infer Sensitive Information</mixed-citation>
      </ref>
      <ref id="ref49">
        <mixed-citation>3. Capability: Biometric Categorisation</mixed-citation>
      </ref>
      <ref id="ref50">
        <mixed-citation>4. Data processed: Special Category Data</mixed-citation>
      </ref>
      <ref id="ref51">
        <mixed-citation>5. AI subject: Natural Person</mixed-citation>
      </ref>
      <ref id="ref52">
        <mixed-citation>
          6.
          <article-title>Locality of use: Any 7a</article-title>
          .
          <source>Consequence: Any 7b. Impact: Any 7c. Severity of impact: Any 7d. Impacted stakeholder: Any</source>
        </mixed-citation>
      </ref>
      <ref id="ref53">
        <mixed-citation>1. Domain: Law Enforcement</mixed-citation>
      </ref>
      <ref id="ref54">
        <mixed-citation>2. Purpose: Remote Identification</mixed-citation>
      </ref>
      <ref id="ref55">
        <mixed-citation>
          3. Capability:
          <string-name>
            <surname>Real-Time Remote</surname>
          </string-name>
          Biometric Identification
        </mixed-citation>
      </ref>
      <ref id="ref56">
        <mixed-citation>4. Data processed: Biometric Data</mixed-citation>
      </ref>
      <ref id="ref57">
        <mixed-citation>5. AI subject: Natural Person</mixed-citation>
      </ref>
      <ref id="ref58">
        <mixed-citation>
          6.
          <article-title>Locality of use: Publicly Accessible Space Consequence: Any 7a</article-title>
          .
          <source>Consequence: Any 7b. Impact: Any 7c. Severity of impact: Any 7d. Impacted stakeholder: Any</source>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>