<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Article</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Article 22 of the GDPR: Implications for Semi-Automated Legal Decision-Making</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Peter Alexander Earls Davis</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sebastian Felix Schwemer</string-name>
          <email>sebastian.felix.schwemer@jur.ku.dk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Centre for Information and Innovation Law (CIIR), University of Copenhagen</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Norwegian Research Center for Computers and Law (NRCCL), University of Oslo</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>22</volume>
      <issue>1</issue>
      <fpage>0000</fpage>
      <lpage>0001</lpage>
      <abstract>
        <p>This paper examines the implications of Article 22 of the General Data Protection Regulation (GDPR) for legal tech tools that involve semi-automated decision-making. The authors focus on the interpretation of the term 'decision' within the provision and argue that it should be construed broadly to include recommendations or other measures leading to a particular outcome for an individual. The implications of this interpretation for legal artificial intelligence (AI) and intelligent assistance (IA) are briefly discussed, with potential increased responsibilities under the GDPR for entities that use these tools. The paper concludes by calling for further examination of the 'locating decisions' problem in the context of AI and IA systems.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Public and private entities increasingly rely on automated
tools to make better and more efficient decisions and
effectively augment human capabilities. Concomitantly,
lawmakers around the world and especially in the
European Union (‘EU’) have focused considerable attention on
addressing issues posed by increased automation in
society, further fuelled by recent technological advances,
such as large language models (‘LLMs’), including ChatGPT.</p>
      <p>These novel instruments, such as the proposed Artificial
Intelligence Act,1 will invariably impact legal automated
decision-making (‘ADM’) and legal artificial intelligence
(‘AI’) and intelligent assistance (‘IA’) in a variety of ways.</p>
      <p>In this paper, instead, we turn to a well-known instrument
– the European Union’s General Data Protection Regulation
(‘GDPR’)2 – with a somewhat overlooked but central question.
Automated tools used in legal decision-making often process
personal data, meaning that data protection rules are relevant
in assessing the lawfulness of their use. Article 22 of the
GDPR and its components have been the subject of intense
scrutiny by academics [1][2][3][4][5][6][7] and, more recently,
the provision has seen action before the Court of Justice of
the European Union (‘CJEU’) in the upcoming Case C-634/21
SCHUFA Holding and Others5 that promises to settle at least
some of the interpretive confusion. As of writing, the Advocate
General (‘AG’), Pikamäe, has handed down his Opinion in the
case, which is often – but not always – indicative of the
CJEU’s final reasoning [8][9].</p>
      <p>One aspect of the provision that warrants more thorough
examination in the context of legal AI and legal IA –
especially before the CJEU hands down its judgment in SCHUFA –
is the word ‘decision’. Conceptualising precisely what a
‘decision’ entails is especially pertinent for automated tools
that help to inform an ultimate, final ‘decision’ by a human
decision-maker – a common task that emerging legal technologies
are designed to carry out. The legal question is whether the
actions, or results, of such automated processing operations
are to be properly considered as ‘decisions’ in themselves (in
addition to the ultimate, human, decision). If the answer is in
the affirmative, entities that use these automated tools are
subject to greater responsibilities under the GDPR than they
would be otherwise.</p>
      <p>This ‘locating decisions’ problem – as coined by Binns and
Veale [10] – is yet to be considered robustly in the
literature.6 This is despite the considerable debate on Article
22 more broadly, and Bygrave [11] pointing to the difficulties
of ‘distinguishing decisions from other processes’ in 2001
under the GDPR’s precursor from 1995, the Data Protection
Directive (‘DPD’7). The A29WP/EDPB gave terse – and, as this
paper argues, misleading – guidance on this point in WP251.</p>
      <p>Legal decision-making can involve various degrees of
automation [12]. Large-scale content moderation decisions by
online platforms, for example, are regularly fully automated;
i.e. the question of whether a specific piece of information is
lawful or not is decided by algorithmic systems [13][14].
Regularly, however, only parts of a complex legal
decision-making process are automated, because full automation
may be neither feasible under the current state of the art nor
desirable in certain ADM scenarios [15]. To concretise the
analysis herein, focus is had on an example of an automated
tool that is intended to assist a human decision-maker. The
tool in question, dubbed ‘LEGALESE’, is a ‘legal tech’
information retrieval application that uses natural language
processing (‘NLP’), a subset of machine learning (‘ML’), for
legal information retrieval. LEGALESE may be used, inter alia,
to assist a public service worker to reach a decision about an
individual (such as whether they should receive a particular
welfare benefit), based on previous similar cases. In this way,
LEGALESE endeavours to make decision-making efficient,
consistent, and accurate. Critical to the application of
Article 22(1) in this instance is whether LEGALESE is making a
decision when used, or another form of output which is better
categorised as a recommendation or similar – with a decision
only taking place through the eventual human decision-maker.</p>
      <p>The paper proceeds as follows. First, the statutory
complex surrounding, and the logic behind, Article 22 is
discussed. This discussion informs the following section, which
examines where precisely a ‘decision’ is made. It is concluded
that the term ‘decision’ should be interpreted broadly, so as
to include recommendations or other measures that lead to a
particular result for an individual. The implications of this
finding in the context of legal AI or IA systems are briefly
discussed in the final section, which also functions as the
conclusion.</p>
      <p>CEUR Workshop Proceedings (CEUR-WS.org),
http://ceur-ws.org, ISSN 1613-0073. © 2023 Copyright for this
paper by its authors. Use permitted under Creative Commons
License Attribution 4.0 International (CC BY 4.0).</p>
      <p>1Proposal for a Regulation of the European Parliament and
of the Council laying down harmonised rules on Artificial
Intelligence (Artificial Intelligence Act) and amending certain
Union legislative acts.</p>
      <p>2Regulation (EU) 2016/679 of the European Parliament and
of the Council on the protection of natural persons with regard
to the processing of personal data and on the free movement of
such data, and repealing Directive 95/46/EC (General Data
Protection Regulation).</p>
      <p>... by the A29WP/EDPB, to recognise its conception by the
former and adoption by the latter.</p>
      <p>5Hereinafter SCHUFA.</p>
      <p>6Binns and Veale (ibid) provide helpful context for the
problem, but do not endeavour to solve it, remarking that
‘there seem no easy answers to this quandary in case law or
regulatory guidance’.</p>
      <p>7Directive 95/46/EC on the protection of individuals with
regard to the processing of personal data and on the free
movement of such data [1995] OJ L 281/31.</p>
    </sec>
    <sec id="sec-1-2">
      <title>2. Logic and Mechanics of Article 22 of the GDPR</title>
      <p>It is worth remarking at the outset that the GDPR’s
primary raison d’être is to lay down rules regarding personal
data protection,8 and to protect the fundamental rights of
individuals in this regard (‘in particular the right to the
protection of personal data’).9 Whilst the GDPR does directly
regulate ADM and its potential harms, it only does so to the
extent that the processing of personal data is involved. This
means that ADM tools that do not require the use of personal
data (e.g. certain ADM tools used for industrial purposes) are
not captured by the GDPR or its Article 22. It also means that
a data controller leveraging ADM tools that do process personal
data must, in principle, comply with the GDPR’s many other
requirements for that processing10 (e.g. lawful basis and basic
data protection principles), regardless of whether the impugned
processing activities fall within the scope of Article 22.</p>
      <p>Article 22 has been unpacked by the many others that have
discussed its various components.11 It being unnecessary to
cover ground well-trodden, the following high-level remarks can
be made about its logic and mechanics that are important for
providing context to this paper:</p>
      <p>• Exceptions and derogations: Whilst Article 22(1)
purports to restrict certain forms of ADM, it is ‘heavily
encumbered by qualifications’ [16]. Per Article 22(2), the
right not to be subject to ADM enshrined in Article 22(1) does
not apply if (a) ADM is necessary for the performance of, or
entering into, a contract; (b) a Member State law allows the
ADM concerned; or (c) the data subject explicitly consents to
it. Moreover, Article 22(1) only applies to automated decisions
with relatively serious consequences; i.e. those that ‘produce
legal effects concerning him or her or similarly significantly
affects him or her’.12 ADM that has a trivial impact on an
individual is not restricted by Article 22(1).13 Many legal
AI/IA use cases, on the other hand, might require a detailed
analysis of whether they produce such legal effects. Finally,
Article 22(1) applies to ‘a decision based solely on automated
processing’ (emphasis added). Ostensibly, this means that the
restrictions in Article 22(1) do not apply where there exists
human involvement in a decision – such as where a computer
simply recommends a course of action, or gives a score for a
human decision-maker to apply to a situation (e.g. credit
history, employability, visa eligibility). This conclusion,
however, is challenged in this paper as overly simplistic.</p>
      <p>• Right vs prohibition: It is unsettled whether Article
22(1) establishes a prima facie prohibition on ADM, or a right
able to be exercised by data subjects. If the former, a
controller may seek to overcome the generalised prohibition
through one of the exceptions in Article 22(2), such as
consent. If the latter, the data subject must proactively
exercise their right by, for example, demanding that a decision
is made through non-automated means. And, whilst Tosoni [1]
cogently lays out a case for the former, AG Pikamäe in SCHUFA
endorses the latter.14 If the CJEU follows the approach of the
AG in this respect, Article 22 will likely present a greater
thorn in the side of data controllers using ADM technologies
than otherwise.</p>
      <p>• Decision vs automated processing: According to its
plain wording, Article 22(1) applies to decisions based on
automated processing, including profiling. This means that a
decision is not necessarily to be treated the same as ‘the
processing that leads to it’ [1]. For example, a bank’s
decision not to allow a financial transaction to proceed may be
treated as distinct from the anti-money laundering monitoring
process that preceded the decision.</p>
      <p>• Automated processing and profiling: Profiling is
defined at Article 4(4) as ‘...any form of automated processing
of personal data consisting of the use of personal data to
evaluate certain personal aspects relating to a natural
person...’. In practice, automated decisions captured by
Article 22(1) are likely to involve some kind of profiling
[17]. However, in principle, other forms of automated
processing of personal data may also lead to a decision within
the scope of Article 22.</p>
      <p>• Right to explanation: The existence of a right to
explanation of ADM under the GDPR has been a subject of
academic discourse [2], due to its lack of explicit provision
in Article 22 (the accompanying Recital 71, however, does
indicate such a right exists). Nonetheless, the overwhelming
weight of academic ([5][16][18][19]) and stakeholder (notably
WP251) commentary concludes that such a right does exist.
Whilst explainability is not a direct focus of this paper, it
is worthwhile noting this aspect of Article 22, since a broader
interpretation of ‘decision’ leads to a concomitant increase in
the amount of ‘decisions’ that data controllers are liable to
explain (assuming, of course, that such a right does exist).</p>
      <p>With these broad remarks in mind, it is helpful to return
to one of the observations made above. That pertains to the
word solely, which denotes that even any human influence on the
decision would preclude the application of Article 22(1). This
has led several commentators [20][2][11], including the
A29WP/EDPB in WP251, to conclude that Article 22(1) is able to
be circumvented by involving a human decision-maker – i.e. a
human-in-the-loop [21][22] – with some degree of discretion
over the final decision. The only instance in which this
arrangement would be insufficient, according to the
A29WP/EDPB, is where human ‘oversight of the decision is ...
just a token gesture’15, or in other words, where humans
effectively ‘rubber-stamp’ [10] decisions made by a
computer.</p>
      <p>Whilst this conclusion is convenient, it is – potentially
– problematic, for at least two related reasons. First, it
implicitly presupposes a simple and linear relationship between
the automated processing and eventual decision. In analysing
this problem, Binns and Veale [10] explain that ‘human
intervention and/or a decision’s significance can be stratified
by stages or by particular decision outcomes.’16 The authors
refer to these instances as ‘multi-stage profiling systems’, as
distinct from ‘single-step automated decision-making’. For the
benefit of clarity, examples of these two scenarios are
depicted in the diagrams below (Figure 1 and Figure 2).</p>
      <p>[Figure 1 and Figure 2: diagrams contrasting single-step
automated decision-making with multi-stage profiling. Stages
depicted include: data collected by human; open source data;
automated data analysis (processing); and human interpretation
of the data analysis and final decision.]</p>
      <p>The second, related, problem with the ‘convenient’
conclusion above is that it assumes the relevant ‘decision’ for
the purposes of Article 22(1) is always the final decision. Put
differently, the above interpretation implies that, in a
decision-making complex, a final ‘decision’ (that may have
human input) cannot be preceded by one or more other
‘decisions’ that are based solely on automated processing.</p>
      <p>As alluded to in the introduction, the relevance of other
preliminary ‘decisions’, such as those depicted in Figure 3,
has been under-appreciated and under-studied in the literature.
On a purely textual analysis, there is no immediate reason to
conclude that the only relevant ‘decision’ for Article 22(1)
purposes is the final decision.</p>
      <p>The following section examines whether, on the proper
construction of the provision, the conventional wisdom should
prevail – or whether, alternatively, Article 22(1) should be
interpreted broadly so as to include a broad range of
‘decisions’, including those that are merely a necessary step
towards an eventual, final decision.</p>
      <p>[Figure 3: diagram of a multi-stage process in which
automated data processing produces a ranking, result, or score
based solely on automated processing, followed by a final
decision made by a human on the basis of that automated
ranking, result, or score. The open question is whether the
intermediate automated output is itself a ‘decision’.]</p>
      <p>8GDPR, Art. 1(1).</p>
      <p>9GDPR, Art. 1(2).</p>
      <p>10On this point, see further WP251 (n 3), 9-19; [6].</p>
      <p>11For a summary, see [5].</p>
      <p>12This is discussed below.</p>
      <p>13WP251 suggests that most forms of online targeted
advertising, as one example, would not fall within Article
22(1) for this reason; WP251 (n 3), 22.</p>
      <p>14SCHUFA, para. 31.</p>
      <p>15WP251 (n 3), 21.</p>
      <p>16Bygrave [5] also notes the possibility that a decision
may be ‘an interim action in a broader process potentially
involving multiple decisions’.</p>
    </sec>
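      <p>To make the ‘locating decisions’ problem concrete, the
multi-stage arrangement depicted in Figure 3 can be sketched
schematically in code. The following Python sketch is purely
illustrative – the names (Applicant, automated_score,
human_final_decision) and the scoring formula are hypothetical
and are not drawn from any system discussed in this paper:</p>

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical multi-stage pipeline in the shape of Figure 3:
# a solely automated scoring stage feeds a human final decision.

@dataclass
class Applicant:
    name: str
    repayment_defaults: int

def automated_score(applicant: Applicant) -> float:
    # Stage 1: solely automated processing of personal data.
    # On the broad reading of Article 22(1) argued in the paper,
    # this output may itself amount to a 'decision' if it
    # significantly affects the applicant.
    return max(0.0, 1.0 - 0.25 * applicant.repayment_defaults)

def human_final_decision(score: float, override: Optional[bool] = None) -> bool:
    # Stage 2: human-in-the-loop. Absent an active override, the
    # human merely 'rubber-stamps' the automated threshold, so the
    # outcome is effectively determined at Stage 1.
    if override is not None:
        return override
    return score >= 0.5

applicant = Applicant("A. N. Other", repayment_defaults=3)
score = automated_score(applicant)      # 0.25
approved = human_final_decision(score)  # False: refused, in effect, at Stage 1
```

      <p>In this sketch, unless the human actively overrides the
result, the outcome is fully determined at the automated
scoring stage – precisely the situation in which, on the
paper’s argument, the score itself may constitute the relevant
‘decision’ under Article 22(1).</p>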
    <sec id="sec-2">
      <title>3. What’s in a Decision?</title>
      <p>‘Decision’ is not a defined term in the GDPR; nor is it a
term with a generally applicable definition in EU law. 17
Therefore, in interpreting what a ‘decision’ entails, one
must abide by the canons of statutory interpretation laid
out by the CJEU [23].</p>
      <p>17Whilst an oficial ’decision’ by an EU institution is an
established concept under Article 288 of the Treaty on the Functioning of
the European Union, this is clearly distinct from the type of ’decision’
envisioned in the GDPR. See SCHUFA, para. 37. See also [5].
3.1. Textual Interpretation</p>
      <sec id="sec-2-1">
        <title>From a purely textual perspective, Bygrave [11] remarks</title>
        <p>that ‘it is fairly obvious that making a decision about
an individual person ordinarily involves the adoption
of a particular attitude, opinion or stance towards that
person.’ This kind of activity is distinct, according to
Bygrave [5], ‘from other stages that prepare, support,
complement or head of decision making’. AG Pikamäe
in SCHUFA refers to, and appears to endorse, Bygrave’s
remarks,18 which provide a sound point of departure
from a textual sense, but far from conclusively establish
a firm meaning of the term.
3.2. Systematic and Teleological</p>
        <p>Interpretation
Given the possibility for ambiguity in the meaning of the
term, one must look beyond the (English version19) text,
and towards its context (i.e. a systematic interpretation)
and purpose (i.e. ‘telos’) in light of the instrument as
a whole. AG Pikamäe in SCHUFA notes that, given the
legislature chose not to define the term, it is possible to
deduce that the EU legislature intended a broad
interpretation of the term. Unfortunately, the AG declined
to refer to authority for this assertion;20 but
justification is forthcoming through a systematic and teleological
interpretation of the provision.</p>
        <p>For one, Recital 71 indicates that the legislature did not
intend an overly formalistic interpretation of ‘decision’:
The data subject should have the right
not to be subject to a decision, which may
include a measure, evaluating personal
aspects relating to him or her which is
based solely on automated processing and
which produces legal efects concerning
him or her or similarly significantly
affects him or her, such as automatic
refusal of an online credit application or
e-recruiting practices without any human
intervention.
a tribunal or court21), are indicative of an intention for
‘decision’ to be interpreted broadly.</p>
        <p>Further, from a teleological perspective, the GDPR is
concerned, inter alia, with the protection of
fundamental rights, especially the right to protection of personal
data.22 These rights are liable to erosion [25] by public
and private actors that use personal data to feed
diferent types of ADM processes. Article 22 is designed to
mitigate against such outcomes – an overly restrictive
interpretation of ‘decision’ runs counter to that goal. From
a consequentialist perspective, it seems absurd to allow
entities to use ADM, so long as there is a ‘human in the
loop’ somewhere in the decision-making process – even
if the human involvement is inconsequential in practice.</p>
        <p>Moreover, as put by the President of the CJEU,
(co)writing extra-judicially, the court ‘must, as far as
possible, interpret the law with a view to filling any normative
lacunae, either in primary or secondary EU law, whose
persistence would ”lead to a result contrary both to the
spirit of the Treaty ... and to its system.”’23 The solely
automated criterion, as mentioned above, has been referred
to by commentators as a ‘legal loophole’. So far as this
loophole is able to be closed through a broad
interpretation of the term ‘decision’, the CJEU would be likely
to adopt such an interpretation, thereby also increasing
legal certainty when employing AI or IA systems in legal
decision-making.</p>
        <p>Finally, from an empirical perspective, the CJEU has
been (arguably) near-continuously prepared to interpret
the provisions of the GDPR and its predecessor, the DPD,
in a manner that privileges the protection of data
protection rights over, for instance, business or public security
interests [26]. There is little to suggest that the CJEU
would resile from this approach in the context of ADM.
3.3. Counterarguments to an Expansive</p>
        <p>Interpretation</p>
      </sec>
      <sec id="sec-2-2">
        <title>The obvious counterargument to the above is that it ren</title>
        <p>ders the term ‘solely’ in Article 22(1) superfluous. It
is interpretive dogma – in the EU and beyond24 – that
the judiciary should strive to give meaning to each term</p>
        <p>The inclusion of a ‘measure’, and listing of non- in a written law due to the presumed rationality of the
exhaustive examples that extend beyond formal decision- legislator. Why include the ‘decision based solely on
aumaking (e.g. as made in an oficial proceeding, such as tomated processing’ criterion, only to allow Article 22(1)
18SCHUFA, para. 37.</p>
        <p>19In principle, all oficial languages are equally relevant from an
interpretive perspective, see [23].</p>
        <p>20Bygrave [11] makes a similar remark in his 2001 work analysing
Article 22 of the GDPR’s precursor, in Article 15 of the DPD: ‘the
notion of decision in Art. 15(1) is undoubtedly to be construed
broadly and somewhat loosely in light of the provision’s rationale
and its otherwise detailed qualification of the type of decision it
embraces.’
21Compare in the context of legal high risk AI systems in the AI
Act and judicial authority Schwemer et al. [24].</p>
        <p>22SCHUFA, para. 48.</p>
        <p>23[23], citing Case 294/83, Les Veils v. Parliament, 1986 E.C.R.
1357, para. 25.</p>
        <p>24E.g. Market Co. v. Hofman, 101 U. S. 112, 115 (1879) (”As
early as in Bacon’s Abridgment, sect. 2, it was said that ’a statute
ought, upon the whole, to be so construed that, if it can be
prevented, no clause, sentence, or word shall be superfluous, void, or
insignificant’”).
to nonetheless apply where there is meaningful human 3.4. ‘Meaningful’ Human Oversight?
involvement?</p>
        <p>In reply, it is contended that the interpretation ad- As an added benefit, the interpretive approach suggested
vanced in this paper remains in observance of this prin- in this paper solves a further interpretive dilemma. It was
ciple. A decision with human input continues to fall earlier stated that the A29WP/EDPB, in WP251, regarded
outside the scope of Article 22(1). What difers is that the that ‘token’ human involvement was insuficient to
ren‘decision’ is able to be located elsewhere than the final, der the decision ‘solely’ made by automated processing.
human decision. Therefore, a decision with human in- The full relevant passage is as follows:
put might be preceded by decisions that are solely made To qualify as human involvement, the
conthrough automated processing. Figure 3 above provides troller must ensure that any oversight of
an illustration of an example. the decision is meaningful, rather than</p>
        <p>The proposition that computers can make decisions as just a token gesture. It should be carried
a precursor to further, human, decisions is plainly uncon- out by someone who has the authority
troversial from a textual or contextual perspective. From and competence to change the decision.
a consequentialist-teleological perspective [23], this ap- As part of the analysis, they should
conproach maintains the balance that the GDPR is designed sider all the relevant data.27
to strike. To reiterate an earlier observation, as a data
protection instrument, the (often discretionary)
decisionmaking process of humans is not within the remit of the
GDPR. However, the processing of personal data, which
may create or impact decisions, is. As Bygrave notes,
one of the ‘fears’ that underpinned the drafting of Article
15(1) of the DPD, the precursor to Article 22(1) GDPR, was
‘that the increasing automatisation of decision-making
processes engenders automatic acceptance of the validity
of the decisions reached and a concomitant reduction in
the investigatory and decisional responsibilities of
humans’ [11], i.e. propagating automation bias.25 Bygrave
points to the Commission’s commentary in this regard:</p>
      </sec>
      <sec id="sec-2-3">
        <title>In other words, according to this perspective, human</title>
        <p>involvement in the context of Article 22 needs to take
place in the operation of the ADM in a ’meaningful’ way.
This is notably also diferent to many other
human-inthe-loop provisions in secondary EU legislation, e.g., in
relation to content moderation where human
involvement is only required in the redress phase of ADM [14].
On the other hand, the EU’s proposed AI Act requires
’appropriate’ human oversight measures [24][27].</p>
        <p>In the context of Article 22, in any case, there is no
interpretive justification for this stance given by the
A29WP/EDPB. The stipulation merely adds a further
ambiguous step to Article 22 compliance – that data
controllers must consider when human involvement is
‘meaningful’ in a decision aided by automated processing.</p>
        <p>A broad interpretation of ‘decision’ circumvents the
need for this additional step –i.e. the assessment of
meaningfulness – to an already confounding provision. Where
there is a human in the loop, the point of enquiry for the
application of Article 22(1) is not the ‘meaningfulness’
of the human input, but the extent to which the decision
made by solely automated processing ‘produces legal
effects concerning ... or similarly significantly afects’ the
data subject. If human input is meagre, or functionally
non-existent, then the contribution to the data subject’s
position by the automated processing is concomitantly
larger. This conclusion is elaborated further below, when
examining the consequences of the interpretation
advanced herein.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>4. Consequences of a Broad</title>
    </sec>
    <sec id="sec-4">
      <title>Interpretation of ‘Decision’ for</title>
    </sec>
    <sec id="sec-5">
      <title>Legal AI/IA</title>
      <sec id="sec-5-1">
        <title>Despite, according to this paper, the phrase ‘a decision</title>
        <p>based solely on automated processing’ encompassing</p>
        <p>The danger of the misuse of data
processing in decision-making may become a
major problem in future: the result produced
by the machine, using more and more
sophisticated software, and even expert
systems, has an apparently objective and
incontrovertible character to which a
human decision-maker may attach too much
weight, thus abdicating his own
responsibilities.26</p>
        <p>The interpretation of Article 22(1) advanced in this
paper ensures that the ‘result produced by the machine’
is scrutinisable – so long as it ‘produces legal efects
concerning ... or similarly significantly afects’ the data
subject. Likewise, it ensures that human contributions
to an ultimate decision are not covered by the provision.</p>
        <p>25In the context of the proposed AI Act, e.g., referred to the
’possible tendency of automatically relying or over-relying on the
output’, Article 14(4) lit. b.</p>
        <p>26Amended proposal for a Council Directive on the protection
of individuals with regard to the processing of personal data and
on the free movement of such data COM(92) 422 final – SYN 287,
15.10.1992, 26.
more types of ADM than conventional wisdom suggests, fell within the scope of Article 22(1).30 Anti-money
launthe scope of Article 22(1) remains subject to other con- dering, job applications, welfare applications, and visa or
straints. This includes the ‘legal efects... or similarly citizenship applications, to name a few, are amongst those
significantly afects’ criterion. processes that increasingly rely on the assistance of
au</p>
        <p>Indeed, this is likely to be the key threshold question tomated data processing tools to increase eficiency and
that entities using ADM must ask when considering the accuracy, despite human involvement. As this paper
surpotential application of Article 22(1). Put diferently, the mises, those might fall within Article 22(1)
notwithstandkey point of enquiry that an entity leveraging personal ing the ‘solely automated’ criterion. Data controllers
data-fuelled ADM processes is not whether a decision has been made, or whether a human's involvement has been 'meaningful'. Rather, data controllers must look to the practical effect of the purely automated decisions – broadly defined – and their impact on data subjects.</p>
        <p>Such a conclusion is similarly reached by AG Pikamäe in SCHUFA, who writes that '[t]he decisive factor is the effect that the "decision" has on the person concerned.'28 Where a human is 'in the loop', an entity leveraging ADM must also consider whether other preliminary, fully-automated stages might, in themselves, produce legal effects or similarly significantly affect data subjects. In SCHUFA, AG Pikamäe ultimately opined that the automated calculation of a credit score could comprise a 'decision' for the purposes of Article 22(1). This was despite the eventual, final decision to lend credit having human input that could reasonably be categorised as 'meaningful'. A translation of the relevant passage is as follows:</p>
      </sec>
      <sec id="sec-5-2">
        <title>The decisive factor is the effect that the "decision" has on the person concerned</title>
        <p>Since a negative [credit] score can, on its own, produce unfavourable effects for the person concerned, namely significantly limiting him in the exercise of his freedoms, or even stigmatizing him in society, it seems justified to qualify it as a "decision" in the sense of the aforementioned provision [i.e. Article 22(1)] when a financial institution gives it paramount importance in its decision-making procedure. Indeed, in such circumstances, the credit applicant is affected from the stage of the assessment of his credit by the credit check company and not only at the final stage of the refusal of the credit, in which the financial institution does no more than apply the result of this evaluation to the specific case.29</p>
        <p>One need not think hard to imagine other types of scoring, ranking, or human decision-assistance processes with similar effect. The Amsterdam Court of Appeals, for instance, recently ruled that automated processing that presaged the firing of Uber and Ola drivers similarly produced such effects.30 Those using such tools should be aware of potential legal obligations in this regard.</p>
        <p>Many legal AI/IA systems are designed to empower a human decision-maker to make better, more accurate decisions, more efficiently. Those using nascent legal technologies like these should be particularly attuned to the possibility that Article 22(1) applies in these instances. That is because the use of legal technologies is particularly liable, by their very nature and the specific context in which they are used, to 'produce legal effects' on data subjects. Whilst this aspect of Article 22(1) is outside the scope of this paper, it is worthwhile mentioning the A29WP/EDPB guidance in WP251 on point:</p>
        <p>28 SCHUFA, para. 43 (translated from French).</p>
        <p>29 SCHUFA, para. 43 (translated from French).</p>
        <p>A legal efect requires that the decision,
which is based on solely automated
processing, afects someone’s legal rights,
such as the freedom to associate with
others, vote in an election, or take legal
action. A legal efect may also be something
that afects a person’s legal status or their
rights under a contract. Examples of this
type of efect include automated decisions
about an individual that result in:
• cancellation of a contract;
• entitlement to or denial of a
particular social benefit granted by law,
such as child or housing benefit;
• refused admission to a country or</p>
        <p>denial of citizenship.31</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>5. Conclusion</title>
      <p>The conclusions reached in this paper have implications
for those leveraging ADM more broadly, but perhaps
especially those developing, or leveraging, AI/IA ADM
processes in legal technologies. Put simply, it is
insuficient to place a human in the loop to alleviate compliance
burdens in the context of the GDPR’s ADM-focused
Article 22. Those leveraging ADM systems are unlikely to</p>
      <p>30Amsterdam Court of Appeals ECLI:NL:GHAMS:2023:793;
ECLI:NL:GHAMS:2023:796; ECLI:NL:GHAMS:2023:804.</p>
      <p>For an English summary of the cases, see https://
www.workerinfoexchange.org/post/historic-digital-rights-win-forwie-and-the-adcu-over-uber-and-ola-at-amsterdam-court-of-appeal.</p>
      <p>31WP251, 21.
be thrilled with AG Pikamäe’s remarks that Article 22’s General on the Court of Justice of the European
application ‘depends on the circumstances of each par- Union, Cambridge International Law Journal 5
ticular case.’32 This sentiment echoes others who have (2016) 82–112.
remarked that the GDPR is ill-suited to rigid, checkbox- [9] T. Tridimas, The Court of Justice of the European
style compliance. Union, in: Oxford Principles of European Law:</p>
      <p>Nevertheless, as potential landmark case, the forth- Volume 1: The European Union Legal Order, Oxford
coming SCHUFA judgment promises to bring some legal University Press, 2018.
certainty for the legal AI and IA community on the inter- [10] R. Binns, M. Veale, Is that your final decision?
section of ADM and data protection. This significant de- Multi-stage profiling, selective efects, and Article
velopment, coupled with other notable political advance- 22 of the GDPR, International Data Privacy Law 11
ments in the EU, particularly the AI Act, will continue (2021) 319–332.
to keep European legal experts occupied as they further [11] L. A. Bygrave, Automated Profiling: Minding the
delve into the intricate legal ramifications of ADM. Machine: Article 15 of the EC Data Protection
Directive and Automated Profiling, Computer Law
&amp; Security Review 17 (2001) 17–24. doi:10.1016/
Acknowledgments S0267-3649(01)00104-2.
[12] K. D. Ashley, Artificial Intelligence and Legal
AnThis research is part of the Legalese project at the Univer- alytics: New Tools for Law Practice in the Digital
sity of Copenhagen, co-financed by the Innovation Fund Age, Cambridge University Press, 2017.
Denmark (grant agreement: 0175-00011A). We thank [13] R. Gorwa, R. Binns, C. Katzenbach,
AlgorithLuca Tosoni for inspiring discussions. All remaining mic content moderation: Technical and political
errors are our own. challenges in the automation of platform
governance, Big Data &amp; Society 7 (2020). doi:10.1177/
References 2053951719897945.
[14] T. Riis, S. F. Schwemer, Leaving the European Safe
[1] L. Tosoni, The right to object to automated indi- Harbor, Sailing towards Algorithmic Content
Regvidual decisions: resolving the ambiguity of Article ulation, Journal of Internet Law 22 (2019) 1–21.
22 (1) of the General Data Protection Regulation, [15] S. Deakin, C. Markou, Is law computable?:
CritiInternational Data Privacy Law 11 (2021) 145–162. cal perspectives on law and artificial intelligence,
[2] S. Wachter, B. Mittelstadt, L. Floridi, Why a right Bloomsbury Publishing, 2020.
to explanation of automated decision-making does [16] L. A. Bygrave, Minding the Machine v2.0: The
not exist in the General Data Protection Regulation, EU General Data Protection Regulation and
AutoInternational Data Privacy Law 7 (2017) 76–99. mated Decision-Making, in: Algorithmic
Regula[3] A. Selbst, J. Powles, “Meaningful information” and tion, Oxford University Press, 2019. doi:10.1093/
the right to explanation, in: Conference on Fairness, oso/9780198838494.003.0011.</p>
      <p>Accountability and Transparency, PMLR, 2018, pp. [17] M. Brkan, Do algorithms rule the world?
Algo48–48. rithmic decision-making and data protection in
[4] B. Goodman, S. Flaxman, European union regula- the framework of the GDPR and beyond,
Internations on algorithmic decision-making and a “right tional Journal of Law and Information Technology
to explanation”, AI magazine 38 (2017) 50–57. 27 (2019) 91–121.
[5] L. Bygrave, Article 22, in: C. Kuner, L. Bygrave, [18] I. Mendoza, L. A. Bygrave, The right not to be
C. Docksey (Eds.), The EU General Data Protec- subject to automated decisions based on profiling,
tion Regulation (GDPR), Oxford University Press, EU internet law: Regulation and enforcement (2017)
Oxford, 2020, pp. 522–542. 77–98.
[6] A. Tamo-Larrieux, Decision-making by machines: [19] M. E. Kaminski, The right to explanation,
exIs the ‘Law of Everything’ enough?, Computer Law plained, Berkeley Technology Law Journal 34 (2019)
&amp; Security Review 41 (2021) 105541. 189–218.
[7] M. E. Kaminski, G. Malgieri, Algorithmic impact [20] S. Wachter, B. Mittelstadt, A right to reasonable
assessments under the GDPR: producing multi- inferences: re-thinking data protection law in the
layered explanations, International Data Privacy age of big data and AI, Colum. Bus. L. Rev. (2019)
Law (2020) 19–28. 494.
[8] C. Arrebola, A. J. Mauricio, H. J. Portilla, An econo- [21] F. M. Zanzotto, Human-in-the-loop artificial
intelmetric analysis of the influence of the Advocate ligence, Journal of Artificial Intelligence Research
64 (2019) 243–252.
32SCHUFA, para. 40 (translated from French). [22] R. Crootof, M. E. Kaminski, W. N. Price II, Humans
in the loop, Vanderbilt Law Review, Forthcoming
(2023).
[23] K. Lenaerts, J. A. Gutiérrez-Fons, To say what the
law of the EU is: methods of interpretation and
the European Court of Justice, Colum. J. Eur. L. 20
(2013) 3.
[24] S. F. Schwemer, L. Tomada, T. Pasini, Legal AI</p>
      <p>Systems in the EU’s proposed Artificial Intelligence
Act, in: Proceedings of the Second International
Workshop on AI and Intelligent Assistance for Legal
Professionals in the Digital Workplace (LegalAIIA
2021), held in conjunction with ICAIL, 2021.
[25] C. O’Neil, Weapons of math destruction: How big
data increases inequality and threatens democracy,</p>
      <p>Crown, 2017.
[26] N. Purtova, The law of everything. Broad concept
of personal data and future of EU data protection
law, Law, Innovation and Technology 10 (2018)
40–81.
[27] M. Veale, F. Zuiderveen Borgesius, Demystifying
the draft EU Artificial Intelligence Act—Analysing
the good, the bad, and the unclear elements of the
proposed approach, Computer Law Review
International 22 (2021) 97–112.</p>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>