<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Workshop on AI bias: Measurements, Mitigation, Explanation Strategies, Amsterdam, March</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Credit scoring and transparency between the AI Act and the Court of Justice of the European Union</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Elena Falletti</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Chiara Gallese</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Università Cattaneo-LIUC</institution>
          ,
          <addr-line>Corso Matteotti 22, 20153, Castellanza</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Università di Torino</institution>
          ,
          <addr-line>Lungo Dora Siena 100, 10153, Torino</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>20</volume>
      <issue>2024</issue>
      <fpage>0000</fpage>
      <lpage>0002</lpage>
      <abstract>
        <p>Credit scoring software has become firmly established in the banking sector as a means to mitigate defaults and non-performing loans. These systems pose significant challenges related to their non-transparent nature, as well as to biases inherent in the data used to train the machine learning models. Although the Artificial Intelligence Act Proposal has not been enacted yet, legal precedents have begun to emerge, starting with the ruling of the Court of Justice of the European Union (Case C-634/21). This ruling acknowledges that individuals seeking bank loans have the right, under Article 22 of the GDPR, to demand an explanation of the decision-making process of such programs. This article analyzes the evolution of credit scoring software since the SCHUFA ruling and the entry into force of the Artificial Intelligence Act.</p>
      </abstract>
      <kwd-group>
        <kwd>Artificial Intelligence</kwd>
        <kwd>Automated Decision Making</kwd>
        <kwd>Credit Scoring</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Credit risk assessment has long been the subject of
debate in both doctrine [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] and case law [
        <xref ref-type="bibr" rid="ref2 ref3 ref4">2, 3, 4</xref>
        ].
      </p>
      <p>The notion of risk concerns the evaluation of a
creditor's trust in a debtor's capacity to pay their
debts. This kind of evaluation is necessary to uphold
the integrity of the financial market, which encompasses
both borrowers funding their ventures and investors
leveraging others' savings. In assessing the
trustworthiness of credit seekers, databases are
used to document debtors' reliability, given that these
roles frequently converge in the same person.</p>
      <p>
        Using automated decision-making systems
marked a significant advancement, integrating data
on historical reliability alongside probabilistic
projections of future solvency [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>The logic behind such tools lies in the
empirical observation that human actions tend to
repeat. Given this seriality, it is considered
reasonable to calculate the probability that a given
behavior will recur through a mathematical procedure
embedded in the algorithm.</p>
      <p>
        This scoring contains an element of behavioral
analysis that could hide a social-ethical judgment [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ],
which is linked to the risk of default.
      </p>
      <p>
        This is because a loan denial is justified on the basis of
the result produced by the credit scoring software; therefore,
biases capable of negatively influencing the
algorithmic procedure [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] may lurk in the
performance of this operation [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>However, the application of the credit scoring
algorithm is justified by the fact that, at least in
abstract terms, it should treat serialized situations
uniformly, ensuring, at least in intention, uniform
access criteria by linking them to the repayment of
past debts.</p>
      <p>At this early stage, the procedure plays a decisive
role in specific contexts, enabling decisions based on
probability parameters.</p>
      <p>
        There is thus a gap that can be quantified by the
percentage of accuracy between the result processed
by machine learning and the reality principle [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], and
this space may contain errors [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], biases [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ],
hallucinations [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], or discrimination [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], depending
on the quality of the data from which the dataset used
by the machine learning system was formed [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>
        The practice of evaluating credit trustworthiness
was performed - before the advent of AI - by
employing traditional techniques [
        <xref ref-type="bibr" rid="ref14 ref15">14, 15</xref>
        ], which have
not been regulated as strictly as under the new AI Act. In
Italy, for example, only general rules are found in
the banking code, regulating only credit scoring
performed by banks and financial institutions.
      </p>
      <p>We might argue that credit scoring itself is a
sensitive topic with the potential to significantly
impact the lives of citizens, especially the less
wealthy, whether performed by AI or not. However, AI
models' tendency to be inherently opaque while operating
on a very large scale, impacting millions of people at
once, differentiates them from other techniques. For
this reason, we will focus the scope of this article on
AI models.</p>
      <p>The first section of the article focuses on Article 22
GDPR (General Data Protection Regulation) and its
implications; the second deals with a recent judgment
of the Court of Justice of the European Union (Case
C-634/21, see Fig. 1); the third examines the topic in
light of the AI Act proposal; and the last draws some
concluding remarks.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Credit scoring and the right to an explanation under Article 22 GDPR</title>
      <p>As explained in the previous section, the person
subjected to an automated predictive decision must
be able to access an explanation of the process
carried out by the algorithm, whether the result
concerns credit matters or other areas in which the
fundamental rights of the person involved are put at
risk.</p>
      <p>
        In current law, this right is recognized by Art. 22
GDPR [
        <xref ref-type="bibr" rid="ref16 ref17">16, 17</xref>
        ]. At the same time, Art. 68c of the
Artificial Intelligence Act serves as the concluding rule
for all areas not addressed by the aforementioned Art.
22 GDPR [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], despite some differences in its text;
the Act had not yet been published in its official
version at the time of writing.
      </p>
      <p>
        As is well known, Art. 22 GDPR provides for the
right of the person subject to the decision to be
informed of the automated process. As a defense
against this claim, the protection of trade secrets on
how the algorithmic software works is invoked [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ].
      </p>
      <p>
        Credit scoring programs are a sub-category of
predictive software performing social scoring [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
Generally speaking, a credit score is a rating that
assesses financial reliability, i.e., the predicted
likelihood of repayment of a loan or mortgage. It
is a score processed through a statistical procedure.
This procedure quantifies the probability of a person's
future solvency based on a combination of the
payments made in the past by the same person and on
their classification within a category of similar
subjects according to their characteristics [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ].
      </p>
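The statistical procedure just described - a score derived from past payment behavior and the applicant's classification - can be sketched in a few lines. The following is a purely illustrative logistic model; the feature names, weights, and scale are our own invention and do not correspond to any real scoring system:

```python
import math

# Invented illustrative weights: negative weights lower the estimated
# probability of repayment, positive weights raise it.
WEIGHTS = {
    "missed_payments": -0.9,   # past defaults weigh against the applicant
    "years_of_history": 0.3,   # a longer track record weighs in their favor
    "debt_to_income": -1.5,    # heavier leverage weighs against them
}
BIAS = 1.0

def solvency_probability(applicant: dict) -> float:
    """Combine the applicant's features into a probability of repayment."""
    z = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link: result lies in (0, 1)

def credit_score(applicant: dict, scale: int = 1000) -> int:
    """Express the probability as a score on a 0-1000 scale."""
    return round(solvency_probability(applicant) * scale)

reliable = {"missed_payments": 0, "years_of_history": 10, "debt_to_income": 0.2}
risky = {"missed_payments": 4, "years_of_history": 1, "debt_to_income": 0.8}
print(credit_score(reliable), credit_score(risky))
```

Note how the two applicants are ranked purely on the serialized features the model sees: the observation below that "a profile is not a person" applies to exactly this kind of reduction.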
      <p>
        Under this perspective, scholars observe that the
credit scoring system measures the prediction of a
behavior [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ], by placing the person concerned in a
category of profiles with a similar score; therefore,
this score will be decisive in denying or granting the
request based on the strict assumption that in
standardized situations behavior is serialized.
      </p>
      <p>
        Nevertheless, it should be borne in mind that “a
profile is not a person” [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. This assertion is only
apparently obvious since the serialized data collected
and treated in machine learning, precisely because
they are serialized, fail to grasp the essence of each
individual, both in the positive and negative sense.
Therefore, it is neither possible nor common sense to
consider the actual person coincident with the profile
derived from the projection of the combination of
their data [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ].
      </p>
      <p>
        Thus, the request for access to the
decision-making process by a hypothetical but plausible loan
applicant who was denied money is well-founded [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ]
in two respects: both under Article 22 GDPR, which
recognizes the right to an explanation, and under
Article 17 GDPR, i.e., regarding the actual information
from which this result was processed by machine learning [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ].
Further, such protections are reinforced by Article 8
of the Charter of Fundamental Rights of the European
Union, according to which every person has the right
to access and obtain rectification of the data collected
concerning them. It is an effect of the right to
protection of personal data relating to individuals.
According to this principle, personal data collected
must be processed under the principle of fairness for
specified purposes and based on the consent of the
person concerned or for a legitimate purpose
provided for by law [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ].
      </p>
      <p>
        In the balancing between the protection of
personal data involved in the collection
activities necessary for machine learning in
credit scoring programs and the exception
constantly raised in court regarding the protection of
industrial secrets [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ], protected by Article 17(2) of
the same Charter, it is the latter that must yield
to the request for transparency. Indeed,
transparency as to the functioning of the algorithmic
activity is necessary for understanding the logic that
governs the evaluative classification underlying the
attribution of the solvency score. Otherwise, the
purpose of the data protection principle and the
necessity of algorithmic transparency, provided for by
the GDPR and reaffirmed by the Artificial
Intelligence Act Proposal, now approved and in the
publication process, would be thwarted [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ].
      </p>
      <p>
        In this regard, the source code should be
accessible in any situation where potential
discrimination, whether direct or indirect, could emerge
[
        <xref ref-type="bibr" rid="ref30">30</xref>
        ], since the exercise of the right of access, in
defense of the dignity and reputation of the party -
being unfairly considered a bad payer is a severe
injury to reputation [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ] - is deemed to prevail over the
protection of trade secrets.
      </p>
      <p>
        As stated by scholarly opinion [
        <xref ref-type="bibr" rid="ref31">31</xref>
        ], not knowing
the source code prevents the algorithm’s traceability,
violating the minimum explanatory duty established
by European sources, such as Article 22 GDPR itself or
Article 68c of the AI Act.
      </p>
      <p>
        In this specific context, it has been explored whether it
is possible to create a fully interpretable machine
learning model. In 2018, a competition known as the
Explainable Machine Learning Challenge [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] was
launched to explain how models work transparently.
Surprisingly, some participants responded by
proposing a transparent and interpretable model,
thus demonstrating that machine learning can be
organized in a relatively transparent way [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ]. This
approach has also attracted interest in credit scoring,
with specific studies [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ] also promoted by credit
institutions. Although these studies may come from
parties directly involved in a conflict of interest, they
deserve attention [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. The decision of the Court of</title>
    </sec>
    <sec id="sec-4">
      <title>Justice of the European Union on credit scoring</title>
      <p>
        The legal case decided by the Court of Justice of the
European Union (EUCJ) started in Germany and
concerned the processing of personal data by a
private credit agency. This entity provided
information on the creditworthiness of third parties,
such as consumers, to banks and lending businesses [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ].
      </p>
      <p>The credit agency, as data controller,
processed the personal data of the profiled
persons and compiled, using statistical and
mathematical methods, the scores to be provided to
the requesting banks.</p>
      <p>The credit score assigned by the data controller
was taken into account by the scoring agency's
contractual partners, who used those results in their
decision-making process to decide whether or not to
grant a loan to the borrower. The bank refused the
applicant's credit request, basing the refusal on
the score produced by the private agency in charge.</p>
      <p>Following this, the client requested access to the
information concerning her based on Article 22 GDPR.
The German national data protection authority
rejected this request, allowing the claimant to obtain
specific information on her personal data but not on the
functioning of the negative credit scoring calculation.
The applicant argued that this last part is the heart of
credit scoring, while the agency maintained that it was
a process protected by trade secrets. The applicant
challenged the refusal in court.</p>
      <p>According to the referring court, the core of the
question was whether determining the probability of
default constituted an automated process within
the meaning of Article 22(1) GDPR, since this
provision is oriented towards protecting (natural)
persons from the discriminatory risks associated with
purely automated decisions.</p>
      <p>The question concerns the stage of the assessment
of the customer's creditworthiness at which the
automated calculation process fits: at the stage of the
assessment carried out by the third party (i.e., the
bank) on the basis of the score provided by SCHUFA, or
in the actual calculation phase.</p>
      <p>In the first case, there would be a legal loophole in
that SCHUFA would have to respond to the requesting
data subject based on Article 15(1)(h) GDPR alone,
but not based on Article 22(1), and this would amount
to a lack of protection, since on the one hand the
automated decision-making process takes place
during the first phase.</p>
      <p>On the other hand, the bank that requested the
service and to which the probability rate is
communicated cannot provide information on the
automation of the service since it is an outsourced
service.</p>
      <p>Since Art. 22 GDPR and Recital No. 71 have a
specific rationale concerning the protection of the
user against the automation of decisions without
human intervention, it must be examined how Section 31
BDSG (Bundesdatenschutzgesetz – Federal Data
Protection Act) has implemented such protection in
German law and whether it is compatible with it.</p>
      <p>In this respect, two perspectives open up:
on the one hand, Section 31 BDSG would consider only
the use of the probability rate, but not its calculation,
as an automated process, and again, there would be a
lack of protection. On the other hand, if calculating
that probability rate did not constitute an automated
decision-making procedure concerning natural persons,
neither Article 22(1) GDPR nor the opening clause of
Article 22(2)(b) could apply.</p>
      <p>The referring Court's question concerns the
definition of what is intended as an 'automated
decision' within the meaning of Article 22 GDPR and
how this applies to credit scoring.</p>
      <p>The EUCJ states that for Article 22 to be applicable,
three conditions must coexist, namely: 1. that there
must be a 'decision'; 2. that it must be 'based
solely on automated processing, including profiling';
and 3. that it must produce 'legal effects concerning
the data subject' or similarly significantly affect
their person.</p>
      <p>
        Concerning the first condition, guidance comes from
the definition provided in Recital 71, according to
which the data subject has the right not to be subject
to the legal effects produced by a purely automated
decision affecting them, such as the automatic
rejection of an online credit application or online
recruiting practices managed by algorithms
[
        <xref ref-type="bibr" rid="ref34">34</xref>
        ].
      </p>
      <p>
        Framed in these terms, the Court stated that
the decision on credit scoring referred to in the
reference for a preliminary ruling falls within the
scope of Article 22(1) GDPR, since the activity
carried out by SCHUFA is profiling under
Art. 4, point 4 of the GDPR, from which by its very nature
discriminatory results may emerge, given that it
involves data on even intimate characteristics of a
person, such as health, personal preferences, and interests
not always directly related to their behavior, as well as
professional performance, economic situation,
reliability, location or movements of that individual
[
        <xref ref-type="bibr" rid="ref35">35</xref>
        ].
      </p>
      <p>All these situations may be subject to
measurement or balancing in the light of fundamental
rights.</p>
      <p>After that, the question referred for a preliminary
ruling explicitly relates to the automated calculation
of a probability rate based on personal data relating to
a person and concerning that person's ability to honor
a loan in the future.</p>
      <p>Such a decision produces significant legal effects
on the person since the action of the credit scoring
company's client (i.e., the 'third party') to whom the
probability result is transmitted will suffer decisive
legal effects. An insufficient probability rate will, in
almost all cases, lead to a refusal to grant the
requested loan.</p>
      <p>Therefore, the calculation of such a rate qualifies as
a decision producing legal effects concerning the data
subject or similarly significantly affecting them
within the meaning of Article 22(1) GDPR. The latter
gives the data subject the 'right' not to be subject to a
decision based solely on automated processing,
including profiling. This provision lays down a
prohibition in principle, the breach of which does not
need to be asserted individually by such a person.</p>
      <p>Indeed, as is evident from the combined
provisions of Article 22(2) of the GDPR and Recital 71
of that regulation, the adoption of a decision based
solely on automated processing is authorized only in
the cases referred to in that article, i.e., where such a
decision is necessary for the conclusion or
performance of a contract between the data subject
and a data controller within the meaning of point (a),
or where it is authorized by the law of the Union or of
the Member State to which the data controller is
subject under point (b) or is based on the data
subject's explicit consent provided for in point (c).</p>
      <p>
        Some attention must be paid to this last point,
since the debtor's consent may be given without
awareness, for example by signing forms
without due care, either because the applicant is
vulnerable [
        <xref ref-type="bibr" rid="ref36">36</xref>
        ], because of a tendency
to underestimate the consequences of such an act, or
because the signature is necessary to continue with a
credit application that the applicant
hopes will be successful.
      </p>
      <p>In the cases referred to in Article 22(2)(a) and (c)
of that Regulation, the controller shall at least
implement the data subject's right to obtain human
intervention, to express his opinion, and to contest the
decision. What is more, in the case of the adoption of
a decision based solely on automated processing, such
as that referred to in Article 22(1) of the GDPR, on the
one hand, the data controller is subject to additional
information obligations under Article 13(2)(f) and
Article 14(2)(g) of that Regulation. On the other hand,
the data subject enjoys, under Article 15(1)(h) GDPR,
the right to obtain from the data controller, among
other things, "meaningful information about the logic
used and the significance and intended consequences
of that processing for the data subject."</p>
    </sec>
    <sec id="sec-5">
      <title>4. Credit Scoring in light of the AI Act</title>
      <p>The European Commission finally released the
first proposal for a harmonized legal framework on AI
at the European level. This is a unique piece of
legislation which is aimed at achieving four specific
objectives:</p>
      <list list-type="bullet">
        <list-item>
          <p>ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;</p>
        </list-item>
        <list-item>
          <p>ensure legal certainty to facilitate investment and innovation in AI;</p>
        </list-item>
        <list-item>
          <p>enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;</p>
        </list-item>
        <list-item>
          <p>facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.</p>
        </list-item>
      </list>
      <p>The enforcement mechanism of the proposal
relies on a governance system at the national level,
building on already existing structures, and
establishes a central cooperation mechanism through
a "European Artificial Intelligence Board".</p>
      <p>The most important innovation of the proposal is
the establishment of four risk categories for AI
systems, in order to protect citizens' fundamental
rights. The explanatory memorandum attached to the
proposal, in fact, notes that "The use of AI with its
specific characteristics (e.g. opacity, complexity,
dependency on data, autonomous behaviour) can
adversely affect a number of fundamental rights
enshrined in the EU Charter of Fundamental Rights
(‘the Charter’). This proposal seeks to ensure a high
level of protection for those fundamental rights and
aims to address various sources of risks through a
clearly defined risk-based approach. With a set of
requirements for trustworthy AI and proportionate
obligations on all value chain participants, the
proposal will enhance and promote the protection of
the rights protected by the Charter: the right to human
dignity (Article 1), respect for private life and
protection of personal data (Articles 7 and 8),
nondiscrimination (Article 21) and equality between
women and men (Article 23). It aims to prevent a
chilling effect on the rights to freedom of expression
(Article 11) and freedom of assembly (Article 12), to
ensure protection of the right to an effective remedy
and to a fair trial, the rights of defence and the
presumption of innocence (Articles 47 and 48), as well
as the general principle of good administration.
Furthermore, as applicable in certain domains, the
proposal will positively affect the rights of a number
of special groups, such as the workers’ rights to fair
and just working conditions (Article 31), a high level
of consumer protection (Article 28), the rights of the
child (Article 24) and the integration of persons with
disabilities (Article 26). The right to a high level of
environmental protection and the improvement of the
quality of the environment (Article 37) is also
relevant, including in relation to the health and safety
of people. The obligations for ex ante testing, risk
management and human oversight will also facilitate
the respect of other fundamental rights by minimising
the risk of erroneous or biased AI-assisted decisions
in critical areas such as education and training,
employment, important services, law enforcement
and the judiciary. In case infringements of
fundamental rights still happen, effective redress for
affected persons will be made possible by ensuring
transparency and traceability of the AI systems
coupled with strong ex post controls."</p>
      <p>
        The risk categories are related to the degree
(intensity and scope) of risk to citizens' safety or
fundamental rights and are classified into four
different categories for AI systems, among which the
high-risk ones have to comply with many
requirements and obligations. Taking inspiration
from the product safety legislation, the classification
of risks is based on the intended purpose and
modalities for which the AI system is used, not only on
their specific function. Depending on the national legal
system, the qualification of high risk may have
consequences for liability, such as that under art.
2050 of the Italian Civil Code. The proposal also draws
up a list of prohibited AI systems that fall within the
"unacceptable risk" category [
        <xref ref-type="bibr" rid="ref37">37</xref>
        ].
      </p>
      <p>The proposal, in Annex III, classifies AI systems
employed for credit scoring as "high-risk". The
decision to include such systems in this category was
most likely driven by the fact that financial
institutions play an important social role in deciding
whether to grant a mortgage or a financial instrument
to citizens. In the end, they can be the only obstacle
preventing less wealthy families from owning a house or
affording essential means for their everyday life (e.g.,
being able to open their own business).</p>
      <p>AI systems are known to perpetuate societal and
historical biases, and there is no reason to believe that
credit scoring systems would be different: by
providing safeguards, transparency measures, and
precise obligations on AI providers and users, the
legislator intended to protect citizens from such
systems.</p>
      <p>In particular, the provisions on data
governance and transparency are the most important.
As is well known, an AI system is only as good as the
data it relies on: if the data is flawed, the system will
be biased. By providing an obligation to test datasets
for biases, the AI Act will ensure that credit scoring
applications are not designed to discriminate against
groups or individuals, and by mandating clear
instructions and information, it will put citizens in
the position of being able to challenge the systems.</p>
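One way such a dataset bias test could look in practice - a hypothetical sketch, not a procedure prescribed by the AI Act - is the "four-fifths" disparate impact ratio used in discrimination testing, comparing approval rates across groups; the group labels and decision log below are invented toy data:

```python
# Hypothetical sketch: the "four-fifths rule" disparate impact ratio.
# Group labels and the decision log below are invented toy data.

def approval_rate(decisions, group):
    """Share of approved applications for one group."""
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

def disparate_impact(decisions, protected, reference):
    """Ratio of the protected group's approval rate to the reference group's."""
    return approval_rate(decisions, protected) / approval_rate(decisions, reference)

# Toy decision log: (group, loan approved?)
log = [("A", True)] * 8 + [("A", False)] * 2 \
    + [("B", True)] * 5 + [("B", False)] * 5

ratio = disparate_impact(log, protected="B", reference="A")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # conventional four-fifths threshold
    print("potential indirect discrimination: review the data and model")
```

A ratio below 0.8 does not prove discrimination, but it is the kind of red flag that the transparency and data-governance duties discussed above would oblige a provider to investigate.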
      <p>Although promising, the new regulation has not
gone as far as mandating full interpretability for AI
systems. Therefore, some biases might still be present,
and they might be difficult to detect when black boxes
are employed.</p>
    </sec>
    <sec id="sec-6">
      <title>5. Conclusions</title>
      <p>The discourse presented herein, along with the
data subject's rights to access their data, aligns with
the acknowledgment of the right to explanation,
thereby supporting the objectives of Article 22 of the
GDPR. This article is designed to safeguard individuals
from the potential hazards to their rights and
freedoms posed by automated personal data
processing, including profiling.</p>
      <p>
        In scenarios where multiple parties with varying
interests are engaged, such as the profiled individual,
the profiling entity, and the lending institution,
adhering to a narrow interpretation of Article 22 of
the GDPR could inadvertently facilitate the evasion of
the very protections it is meant to uphold, leaving the
data subject—the most vulnerable party—without
adequate legal defense. This narrow view regards the
computation of the probability rate merely as a
preliminary step, recognizing only the subsequent
actions taken by an external entity, like a credit
organization, as 'decisions' as defined by Article 22(1)
of the GDPR [
        <xref ref-type="bibr" rid="ref38">38</xref>
        ].
      </p>
      <p>Without an expansive interpretation, the
individual subjected to profiling would be deprived of
critical information necessary for their defense, as this
data resides not with the bank but with the profiling
company that collects and processes it. Conversely,
recognizing the statistical evaluation as an inherent
component of the automated decision-making
process would rightly allocate responsibility to the
profiling agency: it would be accountable for any
unlawful data processing under Article 82 of the GDPR
and contractually liable to the bank for the profiling
service provided.</p>
      <p>One may wonder whether such a principle may
remain valid even after the AI Act's entry into force,
the long process of which seems to have reached its
final stages pending final publication. We note that
Article 68c of the proposal signifies an enhancement
of the right to explanation for automated decisions.
This addition is applicable only where Union law,
specifically Article 22 of the GDPR, does not already
provide such a right. The provision introduces,
beginning with its heading, an entitlement for data
subjects to receive a 'clear and meaningful'
elucidation of the decision-making process that
involves them, particularly when high-risk AI systems
are used, and the decision significantly impacts their
fundamental rights.</p>
      <p>Under Article 13(1) of the AI Act Proposal,
individuals may request explanations from the
deployer regarding the AI system's role, the pertinent
input data, and the principal elements of the resulting
decision. Nonetheless, exceptions may apply if the
deployment of such AI systems is mandated by Union
or national law, provided these exemptions uphold
the core of fundamental rights and freedoms and are
deemed necessary and proportionate within a
democratic society.</p>
      <p>In conclusion, we believe that the AI Act might
have been slightly "braver" by mandating more
impactful transparency measures, such as
interpretability, so that the reasoning behind a
credit scoring classification would not be
hidden behind a black box.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgements</title>
      <p>This article was written with the contribution of
the “SPIDER Project”, funded by Cattaneo-LIUC
University, and by Project 101108151 — DataCom —
HORIZON-MSCA-2022-PF-01, partially funded by the
European Union. Views and opinions expressed are
however those of the author(s) only and do not
necessarily reflect those of the European Union or the
European Commission. Neither the European Union
nor the granting authority can be held responsible for
them.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Ricci</surname>
          </string-name>
          , Annarita. “
          <article-title>Sulla segnalazione “in sofferenza” alla Centrale dei rischi e la dibattuta natura del preavviso al cliente non consumatore</article-title>
          ”.
          <source>Contratto e impresa 1</source>
          (
          <year>2020</year>
          ):
          <fpage>192</fpage>
          -
          <lpage>224</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2] Cons. Stato, Sez. VI, Sent., 03/09/2009, n. 5198.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3] Cass. civ., Sez. Un., Sent., 14/04/2011, n. 8487 (rv. 616973).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4] Corte App. Palermo, Sez. III, Sent., 23/05/2023, n. 1003.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Manes</surname>
            ,
            <given-names>Paola</given-names>
          </string-name>
          . “
          <article-title>Credit scoring assicurativo, machine learning e profilo di rischio: nuove prospettive</article-title>
          ”.
          <source>Contratto e impresa 2</source>
          (
          <year>2021</year>
          ):
          <fpage>469</fpage>
          -
          <lpage>489</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Castelnovo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Malandri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Mercorio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mezzanzanica</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Cosentini</surname>
          </string-name>
          ,
          <article-title>Towards fairness through time</article-title>
          .
          <source>In Joint European Conference on Machine Learning and Knowledge Discovery in Databases</source>
          (pp.
          <fpage>647</fpage>
          -
          <lpage>663</lpage>
          ). Cham: Springer International Publishing,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>X.</given-names>
            <surname>Dastile</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Celik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Potsane</surname>
          </string-name>
          , (
          <year>2020</year>
          ).
          <article-title>Statistical and machine learning models in credit scoring: A systematic literature survey</article-title>
          .
          <source>Applied Soft Computing</source>
          ,
          <volume>91</volume>
          , 106263. doi: 10.1016/j.asoc.2020.106263
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Castelnovo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Crupi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Del Gamba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Greco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Naseer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Regoli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. S. M.</given-names>
            <surname>Gonzalez</surname>
          </string-name>
          , (
          <year>2020</year>
          , December).
          <article-title>Befair: Addressing fairness in the banking sector</article-title>
          .
          <source>In 2020 IEEE International Conference on Big Data (Big Data)</source>
          (pp.
          <fpage>3652</fpage>
          -
          <lpage>3661</lpage>
          ). IEEE. Doi: 10.1109/BigData50022.2020.9377894.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>D.</given-names>
            <surname>Pessach</surname>
          </string-name>
          and
          <string-name>
            <given-names>E.</given-names>
            <surname>Shmueli</surname>
          </string-name>
          . “
          <article-title>A review on fairness in machine learning</article-title>
          ”.
          <source>ACM Computing Surveys (CSUR)</source>
          ,
          <volume>55</volume>
          (
          <issue>3</issue>
          ), (
          <year>2022</year>
          ):
          <fpage>1</fpage>
          -
          <lpage>44</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Charles</surname>
          </string-name>
          , (
          <year>2023</year>
          ).
          <article-title>The Algorithmic Bias and Misrepresentation of Mixed Race Identities: by Artificial Intelligence Systems in The West</article-title>
          .
          <source>GRACE: Global Review of AI Community Ethics</source>
          ,
          <volume>1</volume>
          (
          <issue>1</issue>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>G.</given-names>
            <surname>Pasceri</surname>
          </string-name>
          . “
          <article-title>Le scienze argomentative tra stereotipi e veri pregiudizi: la black box</article-title>
          ”. (
          <year>2023</year>
          ):
          <fpage>21</fpage>
          -
          <lpage>41</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Dahl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Magesh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Suzgun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. E.</given-names>
            <surname>Ho</surname>
          </string-name>
          . (
          <year>2024</year>
          ).
          <article-title>Large legal fictions: Profiling legal hallucinations in large language models</article-title>
          .
          <source>arXiv preprint arXiv:2401.01301</source>
          . URL: https://arxiv.org/abs/2401.01301.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>G.</given-names>
            <surname>Cerrina Feroni</surname>
          </string-name>
          ,
          “
          <article-title>Intelligenza artificiale e sistemi di scoring sociale. Tra distopia e realtà</article-title>
          ”.
          <source>Il diritto dell'informazione e dell'informatica</source>
          . (
          <year>2023</year>
          ):
          <fpage>1</fpage>
          -
          <lpage>24</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>G.</given-names>
            <surname>Spindler</surname>
          </string-name>
          , “
          <article-title>Algorithms, credit scoring, and the new proposals of the EU for an AI Act and on a Consumer Credit Directive”</article-title>
          .
          <source>Law and Financial Markets Review</source>
          <volume>15</volume>
          .
          <issue>3-4</issue>
          (
          <year>2021</year>
          ):
          <fpage>239</fpage>
          -
          <lpage>261</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>G. L.</given-names>
            <surname>Greco</surname>
          </string-name>
          , “
          <article-title>Credit scoring 5.0, tra Artificial Intelligence Act e Testo Unico Bancario”</article-title>
          .
          <source>Rivista Trimestrale di Diritto dell'Economia</source>
          ,
          <issue>3 suppl.</issue>
          (
          <year>2021</year>
          ):
          <fpage>74</fpage>
          -
          <lpage>100</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Falletti</surname>
            ,
            <given-names>Elena.</given-names>
          </string-name>
          “
          <article-title>Decisioni automatizzate e diritto alla spiegazione: alcune riflessioni comparatistiche</article-title>
          ”.
          <source>Il diritto dell'informazione e dell'informatica 36.2</source>
          , marzo/aprile (
          <year>2020</year>
          ):
          <fpage>169</fpage>
          -
          <lpage>206</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Gallese-Nobile</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>Legal aspects of AI models in medicine. The role of interpretable models</article-title>
          .
          <source>Big data Analysis and Artificial Intelligence for Medical Science</source>
          . Wiley.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>D.</given-names>
            <surname>Schneeberger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Röttger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cabitza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Campagner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Plass</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Holzinger</surname>
          </string-name>
          ,
          <article-title>The tower of babel in explainable artificial intelligence (XAI)</article-title>
          .
          <source>In International CrossDomain Conference for Machine Learning and Knowledge Extraction</source>
          (pp.
          <fpage>65</fpage>
          -
          <lpage>81</lpage>
          ). (
          <year>2023</year>
          ) Cham: Springer Nature Switzerland.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>F.</given-names>
            <surname>Bravo</surname>
          </string-name>
          ,
          “
          <article-title>Software di Intelligenza Artificiale e istituzione del registro per il deposito del codice sorgente</article-title>
          ”.
          <source>Contratto e impresa 4</source>
          (
          <year>2020</year>
          ):
          <fpage>1412</fpage>
          -
          <lpage>1429</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>M.</given-names>
            <surname>Pincovsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Falcão</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. N.</given-names>
            <surname>Nunes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. P.</given-names>
            <surname>Furtado</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. C.</given-names>
            <surname>Cunha</surname>
          </string-name>
          ,
          <article-title>Machine Learning applied to credit analysis: a Systematic Literature Review</article-title>
          .
          <source>In 2021 16th Iberian Conference on Information Systems and Technologies (CISTI)</source>
          (pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          ).
          IEEE,
          <year>2021</year>
          . Doi: 10.23919/CISTI52073.2021.9476350.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>L.</given-names>
            <surname>Ruggeri</surname>
          </string-name>
          , “
          <article-title>La dicotomia dati personali e dati non personali: il problema della tutela della persona nei c. dd. dati misti”</article-title>
          .
          <source>Diritto di Famiglia e delle Persone</source>
          .
          <volume>2</volume>
          (
          <year>2023</year>
          ):
          <fpage>808</fpage>
          -
          <lpage>832</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>G.</given-names>
            <surname>Gigerenzer</surname>
          </string-name>
          ,
          <article-title>Perché l'intelligenza umana batte ancora gli algoritmi</article-title>
          .
          <source>Raffaello Cortina Editore</source>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hildebrandt</surname>
          </string-name>
          ,
          <article-title>Defining profiling: A new type of knowledge?</article-title>
          .
          <source>Profiling the European citizen: Cross-disciplinary perspectives</source>
          . Dordrecht: Springer Netherlands,
          <year>2008</year>
          .
          <fpage>17</fpage>
          -
          <lpage>45</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          Bundesverwaltungsgericht (BVwG) (Austria),
          <source>W252 2246581-1</source>
          , 29/6/2023.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>K.</given-names>
            <surname>Demetzou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Zanfir-Fortuna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. Barros</given-names>
            <surname>Vale</surname>
          </string-name>
          . “
          <article-title>The thin red line: refocusing data protection law on ADM, a global perspective with lessons from case-law”</article-title>
          .
          <source>Computer Law &amp; Security Review</source>
          <volume>49</volume>
          (
          <year>2023</year>
          ):
          <fpage>105806</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>G.</given-names>
            <surname>González</surname>
          </string-name>
          <string-name>
            <surname>Fuster</surname>
          </string-name>
          ,
          <source>The emergence of personal data protection as a fundamental right of the EU</source>
          . Vol.
          <volume>16</volume>
          . Cham: Springer Science &amp; Business Media,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>E.</given-names>
            <surname>Bayamlioğlu</surname>
          </string-name>
          , “
          <article-title>Machine Learning and the Relevance of IP Rights With an Account of Transparency Requirements for AI</article-title>
          ”.
          <source>European Review of Private Law 31.2/3</source>
          (
          <year>2023</year>
          ):
          <fpage>329</fpage>
          -
          <lpage>364</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <surname>Gallese</surname>
            <given-names>C.</given-names>
          </string-name>
          , (
          <year>2023</year>
          ).
          <article-title>The AI Act Proposal: a new right to technical interpretability?</article-title>
          .
          <source>arXiv preprint arXiv:2303.17558</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>J.</given-names>
            <surname>Adams-Prassl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Binns</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Kelly-Lyth</surname>
          </string-name>
          .
          <article-title>"Directly discriminatory algorithms."</article-title>
          <source>The Modern Law Review 86.1</source>
          (
          <year>2023</year>
          ):
          <fpage>144</fpage>
          -
          <lpage>175</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>V.</given-names>
            <surname>Amendolagine</surname>
          </string-name>
          ,
          “
          <article-title>La responsabilità aggravata della banca che agisce per un credito inesistente</article-title>
          ”.
          <source>Giurisprudenza Italiana 5</source>
          (
          <year>2021</year>
          ):
          <fpage>1080</fpage>
          -
          <lpage>1083</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <surname>Foa</surname>
            ,
            <given-names>Sergio.</given-names>
          </string-name>
          “
          <article-title>Intelligenza artificiale e cultura della trasparenza amministrativa. Dalle “scatole nere” alla “casa di vetro”?</article-title>
          ”.
          <source>Diritto Amministrativo</source>
          ,
          <volume>3</volume>
          (
          <year>2023</year>
          ):
          <fpage>515</fpage>
          -
          <lpage>548</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>C.</given-names>
            <surname>Rudin</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Radin</surname>
          </string-name>
          . “
          <article-title>Why are we using black box models in AI when we don't need to? A lesson from an explainable AI competition</article-title>
          ”.
          <source>Harvard Data Science Review 1.2</source>
          (
          <year>2019</year>
          ):
          <fpage>10</fpage>
          -
          <lpage>1162</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>E.</given-names>
            <surname>Falletti</surname>
          </string-name>
          , “
          <article-title>Alcune riflessioni sull'applicabilità dell'art. 22 GDPR in materia di scoring creditizio</article-title>
          ”.
          <source>Diritto dell'informazione e dell'informatica</source>
          , (
          <year>2024</year>
          ):
          <fpage>110</fpage>
          -
          <lpage>128</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>N.</given-names>
            <surname>Rane</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Choudhary</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Rane</surname>
          </string-name>
          .
          <article-title>"Explainable Artificial Intelligence (XAI) approaches for transparency and accountability in financial decision-making."</article-title>
          <source>Available at SSRN</source>
          <volume>4640316</volume>
          (
          <year>2023</year>
          ). Doi: 10.2139/ssrn.4640316.
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>J.</given-names>
            <surname>Ochmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Michels</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Tiefenbeck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Maier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Laumer</surname>
          </string-name>
          , (
          <year>2024</year>
          ).
          <article-title>“Perceived algorithmic fairness: An empirical study of transparency and anthropomorphism in algorithmic recruiting”</article-title>
          .
          <source>Information Systems Journal</source>
          . Doi: 10.1111/isj.12482.
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>M.</given-names>
            <surname>Girolami</surname>
          </string-name>
          ,
          “
          <article-title>La scelta negoziale nella protezione degli adulti vulnerabili: spunti dalla recente riforma tedesca</article-title>
          ”.
          <source>Rivista di diritto civile 5/2023</source>
          (
          <year>2023</year>
          ):
          <fpage>854</fpage>
          -
          <lpage>883</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [37]
          <string-name>
            <surname>Gallese</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (
          <year>2022</year>
          , November).
          <article-title>Suggestions for a revision of the European smart robot liability regime</article-title>
          .
          <source>European Conference on the impact of Artificial Intelligence and Robotics</source>
          (Vol.
          <volume>4</volume>
          , No.
          <issue>1</issue>
          ,
          <fpage>29</fpage>
          -
          <lpage>35</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          [38]
          <string-name>
            <given-names>E.</given-names>
            <surname>Gil González</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>De Hert</surname>
          </string-name>
          . “
          <article-title>Understanding the legal provisions that allow processing and profiling of personal data - an analysis of GDPR provisions and principles</article-title>
          ”.
          <source>Era Forum</source>
          . Vol.
          <volume>19</volume>
          . No. 4. Berlin/Heidelberg: Springer Berlin Heidelberg,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>