<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>AIUP: an ODRL Profile for Expressing AI Use Policies to Support the EU AI Act</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Delaram Golpayegani</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Beatriz Esteves</string-name>
          <email>beatriz.esteves@ugent.be</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Harshvardhan J. Pandit</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dave Lewis</string-name>
          <email>delewis@tcd.ie</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>ADAPT Centre, Dublin City University</institution>
          ,
          <addr-line>Dublin</addr-line>
          ,
          <country country="IE">Ireland</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>ADAPT Centre, Trinity College Dublin</institution>
          ,
          <addr-line>Dublin</addr-line>
          ,
          <country country="IE">Ireland</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>IDLab, Ghent University - imec</institution>
          ,
          <addr-line>Ghent</addr-line>
          ,
          <country country="BE">Belgium</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The upcoming EU AI Act requires providers of high-risk AI systems to define and communicate the system's intended purpose - a key and complex concept upon which many of the Act's obligations rely. To assist with expressing the intended purposes and uses, along with precluded uses as regulated by the AI Act, we extend the Open Digital Rights Language (ODRL) with a profile to express the AI Use Policy (AIUP). This open approach to declaring use policies enables explicit and transparent expression of the conditions under which an AI system can be used, benefiting AI application markets beyond the immediate needs of high-risk AI compliance in the EU. AIUP is available online at https://w3id.org/aiup under the CC-BY-4.0 license.</p>
      </abstract>
      <kwd-group>
        <kwd>AI Act</kwd>
        <kwd>ODRL</kwd>
        <kwd>AI use policy</kwd>
        <kwd>AI risk management</kwd>
        <kwd>regulatory enforcement</kwd>
        <kwd>trustworthy AI</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>
        Within the EU AI Act [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] there is a strong emphasis on intended purpose – a legal
term-of-art described as the use of the system specified by the provider, which should
include information regarding the context and conditions of use (AI Act, Art. 3). Given its
importance in the assessment of risk level under the Act [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], and in turn in ensuring safe and
trustworthy use of AI, the intended purpose of an AI system should be communicated to its
deployers in a transparent manner. In this paper, we aim to simplify the specification of
this key concept by adopting a policy-based approach. As such, we propose to extend
the W3C’s recommendation on the Open Digital Rights Language (ODRL) to fulfil the
representation of intended purpose through an AI Use Policy (AIUP) profile. AIUP
serves as a mechanism for expressing AI intended and precluded uses, as well as conditions
of use, by modelling them as permissions, prohibitions, and duties within a policy.
      </p>
    </sec>
    <sec id="sec-3">
      <title>2. Related Work</title>
      <p>
        ODRL has been leveraged for legal compliance and policy enforcement, particularly
in EU GDPR compliance tasks such as automated checking of consent permissions [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ],
expressing legal obligations [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], and modelling obligations as permissions and
prohibitions governing the execution of business processes [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. In the context of data governance,
ODRL was extended for expressing policies related to access control over data stored in
Solid Pods [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], utilised for modelling policies associated with responsible use of genomics
data [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], and used in expressing data spaces’ usage and access control policies [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ].
      </p>
    </sec>
    <sec id="sec-4">
      <title>3. AIUP</title>
      <sec id="sec-4-1">
        <title>3.1. AIUP Requirements</title>
        <p>
          AIUP is intended to be used by AI providers and deployers to communicate and negotiate
the conditions under which an AI system can/cannot be used. The competency questions,
which shape the requirements of the policy profile, are extracted from the AI Act and
listed in the following:
• CQ1. What is the intended use(s) of the AI system? (Art. 13 and Annex IV(1a))
• CQ2. What is the precluded use(s)2 of the AI system? (Recital 72)
• CQ3. To use the system as intended, what human oversight measure(s) should be
implemented by the deployer? (Art. 14 (3)(b))
• CQ4. What is the reporting obligation(s) of the deployer? (Art. 26(5))
To express intended and precluded uses, we utilise the 5 concepts identified in our
previous work [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ], namely domain, purpose, AI capability, AI deployer, and AI subject.
To further capture the context of use, we also include the locality of use.
        </p>
      </sec>
      <sec id="sec-4-2">
        <title>3.2. AIUP Overview</title>
        <p>
          An overview of the AIUP’s profile is illustrated in Figure 1. Expressing intended and
precluded uses of an AI system or component within a policy are enabled by employing
odrl:permission and odrl:prohibition rules respectively. For expressing the conditions
of use, i.e., obligations that should be fulfilled by a party in order to use a system or
component, the odrl:duty property should be employed. The vocabulary used in AIUP
is defined in alignment with the AI Risk Ontology (AIRO) [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ] and the Data Privacy
Vocabulary (DPV) [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. The development follows the ODRL V2.2 Profile Best Practices
(https://w3c.github.io/odrl/profile-bp/), which requires the terms to be defined in the
profile namespace (in this case aiup) with skos:exactMatch links from the proposed
terms to existing vocabularies.
        </p>
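        <p>To illustrate the alignment mechanism described above, the following sketch (not taken from the published profile; the AIRO term used as the alignment target is assumed here for illustration) declares a term in the aiup namespace and links it to an existing vocabulary via skos:exactMatch:</p>
        <preformat>@prefix aiup: &lt;https://w3id.org/aiup#&gt; .
@prefix odrl: &lt;https://www.w3.org/ns/odrl/2/&gt; .
@prefix skos: &lt;http://www.w3.org/2004/02/skos/core#&gt; .
@prefix airo: &lt;https://w3id.org/airo#&gt; .

# aiup:Domain is defined as an ODRL left operand in the profile
# namespace and aligned with the corresponding AIRO concept
# (the target airo:Domain is assumed for illustration).
aiup:Domain a odrl:LeftOperand ;
    skos:exactMatch airo:Domain .</preformat>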
        <p>AIUP introduces three types of aiup:UsePolicy, namely aiup:UseOffer,
aiup:UseRequest, and aiup:UseAgreement. These enable expressing offers,
requests, and agreements from/between AI providers and deployers. To address the
ambiguities around the function of odrl:isA in the inclusion of “sub-class of” relations,
we introduce semantic equality (aiup:seq), which indicates the presence of either an
“instance of” or a “sub-class of” relation. AIUP allows describing use policies for AI
components, such as general-purpose AI models, by specifying the general concepts of
aiup:AIComponent, aiup:Provider, and aiup:Deployer. However, it leaves the inclusion
of the more specific elements required for expressing component use policies for future
work. AIUP is made available online at https://w3id.org/aiup under the CC-BY-4.0
license.</p>
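        <p>As a sketch of how these policy types could interact (all identifiers below are hypothetical), a deployer might answer a provider’s aiup:UseOffer with a matching aiup:UseRequest that names itself as assignee:</p>
        <preformat>@prefix odrl: &lt;https://www.w3.org/ns/odrl/2/&gt; .
@prefix aiup: &lt;https://w3id.org/aiup#&gt; .
@prefix ex: &lt;http://example.org/&gt; .

# Hypothetical request from a deployer seeking permission
# to deploy an AI system.
ex:use-request-01 a aiup:UseRequest ;
    odrl:uid ex:use-request-01 ;
    odrl:profile aiup: ;
    odrl:permission [
        a odrl:Permission ;
        odrl:assignee ex:deployer-org ;   # hypothetical deployer
        odrl:target ex:ai-system ;        # hypothetical AI system
        odrl:action aiup:Deploy ] .</preformat>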
      </sec>
      <sec id="sec-4-3">
        <title>3.3. AIUP Example</title>
        <p>
          As an example scenario, we consider a policy for an online student proctoring system
called Proctify, previously described in [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ]. The conditions of deploying Proctify, as
an aiup:UseOffer policy, are presented in Listing 1. For brevity, we only include three
constraints, describing the intended domain, purpose, and AI subjects. The offer
indicates that the deployer should provide training to end-users of the system as a control
measure to address the risk of over-reliance on the system’s output.
        </p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>4. Conclusion</title>
      <p>In this paper, we proposed AIUP as a novel technical solution for declaring AI use
policies in an open, machine-readable, and interoperable format based on the evolving
requirements of the AI value chain, particularly the obligations of the EU AI Act. The
AIUP profile supports modelling and comparison of use policies related to AI systems
and their components. It further assists AI auditors and authorities in investigating
non-compliance and ascertaining liable parties when examining claims concerning AI.</p>
      <p>Listing 1: An excerpt of the aiup:UseOffer policy for deploying Proctify.</p>
      <preformat>@prefix odrl: &lt;https://www.w3.org/ns/odrl/2/&gt; .
@prefix aiup: &lt;https://w3id.org/aiup#&gt; .
@prefix vair: &lt;http://w3id.org/vair#&gt; .
@prefix rdfs: &lt;http://www.w3.org/2000/01/rdf-schema#&gt; .
@prefix dct: &lt;http://purl.org/dc/terms/&gt; .
@prefix ex: &lt;http://example.org/&gt; .

ex:proctify-offer-01 a aiup:UseOffer ;
    odrl:uid ex:proctify-offer-01 ;
    odrl:profile aiup: ;
    rdfs:comment "Offer for using Proctify"@en ;
    odrl:permission [
        a odrl:Permission ;
        odrl:assigner ex:aiedux ;
        odrl:target ex:proctify ;
        odrl:action aiup:Deploy ;
        odrl:constraint [
            odrl:and [
                odrl:leftOperand aiup:Domain ;
                odrl:operator aiup:seq ;
                odrl:rightOperand vair:Education ] ,</preformat>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>This project has received funding from the EU’s Horizon 2020 research and innovation
programme under the Marie Skłodowska-Curie grant agreement No 813497 (PROTECT
ITN) and from Science Foundation Ireland under Grant #13/RC/2106_P2 at the ADAPT
SFI Research Centre. Beatriz Esteves is funded by SolidLab Vlaanderen (Flemish
Government, EWI and RRF project VV023/10). Harshvardhan Pandit has received
funding under the SFI EMPOWER programme.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <article-title>Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828</article-title>
          <source>(Artificial Intelligence Act)</source>
          ,
          <year>2024</year>
          . URL: http://data.europa.eu/eli/reg/2024/1689/oj.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>I.</given-names>
            <surname>Hupont</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fernández-Llorca</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Baldassarri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Gómez</surname>
          </string-name>
          ,
          <article-title>Use case cards: A use case reporting framework inspired by the European AI Act</article-title>
          ,
          <source>Ethics and Information Technology</source>
          <volume>26</volume>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>B.</given-names>
            <surname>Esteves</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. J.</given-names>
            <surname>Pandit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Rodríguez-Doncel</surname>
          </string-name>
          ,
          <article-title>ODRL profile for expressing consent through granular access control policies in solid</article-title>
          ,
          <source>in: 2021 IEEE European Symposium on Security and Privacy Workshops (EuroS&amp;PW)</source>
          , IEEE,
          <year>2021</year>
          , pp.
          <fpage>298</fpage>
          -
          <lpage>306</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Steyskal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Antunovic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kirrane</surname>
          </string-name>
          ,
          <article-title>Legislative compliance assessment: Framework, model and GDPR instantiation</article-title>
          ,
          <source>in: Privacy Technologies and Policy</source>
          , Springer International Publishing, Cham,
          <year>2018</year>
          , pp.
          <fpage>131</fpage>
          -
          <lpage>149</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>De Vos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kirrane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Padget</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Satoh</surname>
          </string-name>
          ,
          <article-title>ODRL policy modelling and compliance checking</article-title>
          , in:
          <string-name>
            <given-names>P.</given-names>
            <surname>Fodor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Montali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Calvanese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Roman</surname>
          </string-name>
          (Eds.),
          <source>Rules and Reasoning</source>
          , Springer International Publishing,
          <year>2019</year>
          , pp.
          <fpage>36</fpage>
          -
          <lpage>51</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>H. J.</given-names>
            <surname>Pandit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Esteves</surname>
          </string-name>
          ,
          <article-title>Enhancing data use ontology (DUO) for health-data sharing by extending it with ODRL and DPV</article-title>
          ,
          <source>Semantic Web Journal</source>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>T.</given-names>
            <surname>Dam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Krimbacher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Neumaier</surname>
          </string-name>
          ,
          <article-title>Policy patterns for usage control in data spaces</article-title>
          ,
          <source>arXiv preprint arXiv:2309.11289</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>I.</given-names>
            <surname>Akaichi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Slabbinck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Rojas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Van Gheluwe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Bozzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Colpaert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Verborgh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kirrane</surname>
          </string-name>
          ,
          <article-title>Interoperable and continuous usage control enforcement in dataspaces</article-title>
          ,
          <source>in: The Second International Workshop on Semantics in Dataspaces, co-located with the Extended Semantic Web Conference</source>
          ,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>D.</given-names>
            <surname>Golpayegani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. J.</given-names>
            <surname>Pandit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lewis</surname>
          </string-name>
          ,
          <article-title>To be high-risk, or not to be – semantic specifications and implications of the AI act's high-risk AI applications and harmonised standards</article-title>
          ,
          <source>in: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>905</fpage>
          -
          <lpage>915</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>D.</given-names>
            <surname>Golpayegani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. J.</given-names>
            <surname>Pandit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lewis</surname>
          </string-name>
          ,
          <article-title>AIRO: An ontology for representing AI risks based on the proposed EU AI Act and ISO risk management standards</article-title>
          ,
          <source>in: Towards a Knowledge-Aware AI</source>
          , volume
          <volume>55</volume>
          , IOS Press,
          <year>2022</year>
          , pp.
          <fpage>51</fpage>
          -
          <lpage>65</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>H. J.</given-names>
            <surname>Pandit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Esteves</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. P.</given-names>
            <surname>Krog</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Ryan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Golpayegani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Flake</surname>
          </string-name>
          ,
          <article-title>Data privacy vocabulary (DPV) - version 2</article-title>
          ,
          <source>arXiv preprint arXiv:2404.13426</source>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>D.</given-names>
            <surname>Golpayegani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Hupont</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Panigutti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. J.</given-names>
            <surname>Pandit</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Schade</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>O'Sullivan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lewis</surname>
          </string-name>
          ,
          <article-title>AI cards: Towards an applied framework for machine-readable AI and risk documentation inspired by the EU AI act</article-title>
          ,
          <source>in: Privacy Technologies and Policy</source>
          , Springer Nature Switzerland,
          <year>2024</year>
          , pp.
          <fpage>48</fpage>
          -
          <lpage>72</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>