Credit scoring and transparency between the AI Act and the Court of Justice of the European Union⋆

Elena Falletti1,† and Chiara Gallese2,∗,†

1 Università Cattaneo-LIUC, Corso Matteotti 22, 20153, Castellanza, Italy
2 Università di Torino, Lungo Dora Siena 100, 10153, Torino, Italy
Abstract
Credit scoring software has become firmly established in the banking sector as a means to mitigate defaults and non-performing loans. These software systems pose significant challenges related to their non-transparent nature, as well as to the biases inherent in the data feeding the machine learning models. Although the Artificial Intelligence Act Proposal has not been enacted yet, legal precedents have begun to emerge, starting with the ruling of the Court of Justice of the European Union (Case C-634/21). This ruling acknowledges that individuals seeking bank loans have the right, under Article 22 of the GDPR, to demand an explanation regarding the decision-making process of such programs. This article aims to analyze the evolution of credit scoring software following the SCHUFA ruling and the entry into force of the Artificial Intelligence Act.

Keywords
Artificial Intelligence, Automated Decision Making, Credit Scoring


1. Introduction

Credit risk assessment has long been the subject of debate in both doctrine [1] and case law [2, 3, 4].
The notion of risk regards an evaluation of a creditor's trust in a debtor's capacity to pay their debts. This kind of evaluation is necessary to uphold the integrity of the financial market, encompassing both borrowers for their ventures and investors leveraging others' savings. In assessing the trustworthiness of credit seekers, databases are utilized to document debtors' reliability, given the frequent convergence of these roles.
Using automated decision-making systems marked a significant advancement, integrating data on historical reliability alongside probabilistic projections of future solvency [5].
The logic behind using such tools lies in the empirical observation that human actions tend to repeat. Given this seriality, it is considered reasonable to calculate the probability of a given behavior's recurrence through a mathematical procedure embedded in the algorithm.
This scoring contains an element of behavioral analysis that could hide a social-ethical judgment [6], which is linked to the risk of default.
This is because the loan denial is justified on the basis of the result of the credit scoring software; biases capable of negatively influencing the algorithmic procedure [7] may therefore lurk in the performance of this operation [8].
However, the application of the credit scoring algorithm is justified by the fact that, at least in abstract terms, it should treat serialized situations uniformly, ensuring, at least in intention, the consistency of access criteria by linking them to the solvency of past debts.
At this early stage, the procedure plays a decisive role in specific contexts, enabling decisions based on probability parameters.



AIMMES 2024 | Workshop on AI bias: Measurements, Mitigation, Explanation Strategies, Amsterdam, March 20, 2024.
∗ Corresponding author.
† Dr. Elena Falletti wrote sections 2, 3, and 4; Conclusions were written jointly; Dr. Chiara Gallese wrote the rest.
Email: efalletti@liuc.it (E. Falletti); chiara.gallese@unito.it (C. Gallese).
ORCID: 0000-0002-6121-6775 (E. Falletti); 0000-0001-8194-0261 (C. Gallese).
© 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).




There is thus an area that can be quantified by the percentage of accuracy between the result processed by machine learning and the reality principle [9], and this space may contain errors [10], biases [11], hallucinations [12], or discrimination [13], depending on the quality of the data with which the dataset used by machine learning was formed [8].
The practice of evaluating credit trustworthiness was performed, before the advent of AI, by employing traditional techniques [14, 15], which have not been regulated as strictly as in the new AI Act. In Italy, for example, only a general discipline is found in the banking code, regulating only credit scoring performed by banks and financial institutions.
We might argue that credit scoring itself is a sensitive topic that has the potential to significantly impact the lives of citizens, especially the less wealthy, whether AI is involved or not. However, AI models' capacity to be inherently opaque on a very large scale, impacting millions of people at once, differentiates them from other techniques. For this reason, we will focus the scope of this article on AI models.
The first section of the article focuses on Article 22 GDPR (General Data Protection Regulation) and its implications; the second deals with a recent judgment of the Court of Justice of the European Union (Case C-634/21, see Fig. 1); the third examines the topic in light of the AI Act proposal; and the last draws some conclusive remarks.

2. Credit scoring and the right to an explanation under Article 22 GDPR

As explained in the previous section, the person subjected to the automated predictive decision must be able to access the explanation of the process carried out by the algorithm, whether the result concerns credit matters or areas in which the fundamental rights of the person involved are put at risk.
In current law, this right is recognized by Art. 22 GDPR [16, 17]. At the same time, Art. 68c of the Artificial Intelligence Act, which has not yet been published in its official version as of the time of writing, serves as the concluding rule for all areas not addressed by the aforementioned Art. 22 GDPR [18], despite some differences in its text.
As is well known, Art. 22 GDPR provides for the right of the person subject to the decision to be informed of the automated process. As a defense against this claim, the protection of trade secrets on how the algorithmic software works is invoked [19].
Credit scoring programs are a sub-category of predictive software measuring social scoring [13]. Generally speaking, credit scoring is a rate that assesses financial reliability, i.e., the possible predictability of repayment of the loan or mortgage. It is a score processed through a statistical procedure. This procedure quantifies the probability of a person's future solvency based on a combination of the payments made in the past by the same person and on their classification within a category of similar subjects according to their characteristics [20].
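To make the statistical procedure just described concrete, the following minimal sketch shows how a score of this kind can be produced: a logistic regression estimates a probability of default from features summarizing past payment behavior, and the resulting rate is what a lender would compare against a cut-off. This is a generic illustration with invented features and data, not the model of any actual scoring agency.

```python
# Minimal sketch of a statistical credit-scoring procedure:
# logistic regression mapping past payment behavior to a
# probability of default. Features and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [late payments in last year, debt-to-income ratio,
# years of credit history]; label: 1 = defaulted, 0 = repaid.
X = np.array([
    [0, 0.2, 10], [1, 0.4, 7], [4, 0.8, 2], [0, 0.1, 15],
    [3, 0.7, 3], [2, 0.5, 5], [5, 0.9, 1], [0, 0.3, 8],
])
y = np.array([0, 0, 1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Probability of default for a new applicant; a lender would
# compare this rate against a cut-off to grant or deny credit.
applicant = np.array([[1, 0.35, 6]])
p_default = model.predict_proba(applicant)[0, 1]
print(f"Estimated probability of default: {p_default:.2f}")
```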
Under this perspective, scholars observe that the credit scoring system measures the prediction of a behavior [21], placing the person concerned in a category of profiles with a similar score; therefore, this score will be decisive in denying or granting the request, based on the strict assumption that in standardized situations behavior is serialized.
Nevertheless, it should be borne in mind that "a profile is not a person" [22]. This assertion is only apparently obvious, since the serialized data collected and treated in machine learning, precisely because they are serialized, fail to grasp the essence of each individual, both in the positive and in the negative sense. Therefore, it is neither possible nor common sense to consider the actual person coincident with the profile derived from the projection of the combination of their data [23].
Thus, the request for access to the decision-making process by a hypothetical but plausible loan applicant who was denied money is well-founded [24] in two respects: under Article 22 GDPR, which recognizes the right to an explanation, and under Article 17 GDPR, i.e., with regard to the actual information from which this result was processed by machine learning [25]. Further, such protections are reinforced by Article 8 of the Charter of Fundamental Rights of the European Union, according to which every person has the right to access and obtain rectification of the data collected concerning them. It is an effect of the right to protection of personal data relating to individuals. According to this principle, personal data collected must be processed under the principle of fairness, for specified purposes, and based on the consent of the person concerned or for a legitimate purpose provided for by law [26].
In the balancing act between the protection of personal data involved in the collection activities necessary for the machine learning underlying credit scoring programs and the exception constantly raised in court concerning the protection of industrial secrets [27], protected by Article 17(2) of the same Charter, it is the latter that is recessive with respect to the request for transparency. Indeed,
transparency as to the functioning of the algorithmic activity is necessary for understanding the logics that govern the evaluative classification relative to the attribution of the solvability score. Otherwise, the purpose of the data protection principle and the necessity of algorithmic transparency, provided for by the GDPR and reaffirmed by the Artificial Intelligence Act, approved and in the publication process, would be thwarted [28].
In this regard, the source code should be accessible in any situation where potential discrimination, both direct and indirect, could emerge [29], since the exercise of the right of access, in defense of the dignity and reputation of the party (being unfairly considered a bad payer is a severe injury to reputation [30]), is deemed to prevail over the protection of trade secrets.
As stated by scholarly opinion [31], not knowing the source code prevents the algorithm's traceability, violating the minimum explanatory duty established by European sources, such as Article 22 GDPR itself or Article 68c of the AI Act.
In this specific context, it has been explored whether it is possible to create a fully interpretable machine learning model. In 2018, a competition known as the Explainable Machine Learning Challenge [22] was launched to explain how models work transparently. Surprisingly, some participants responded by proposing a transparent and interpretable model, thus demonstrating that machine learning can be organized in a relatively transparent way [32]. This approach has also attracted interest in credit scoring, with specific studies [30] also promoted by credit institutions. Although these studies may come from parties directly involved in a conflict of interest, they deserve attention [6].
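To illustrate what such an interpretable alternative can look like in practice, the sketch below trains a deliberately shallow decision tree whose entire decision logic can be printed and audited. It is a generic example of the interpretable-model approach discussed above, with invented features and data; it does not reproduce any entry submitted to the Challenge.

```python
# Sketch of an interpretable credit-scoring model: a shallow
# decision tree whose full decision logic can be printed and
# audited, unlike a black-box model. Features are invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

features = ["late_payments", "debt_to_income", "credit_years"]
X = np.array([
    [0, 0.2, 10], [1, 0.4, 7], [4, 0.8, 2], [0, 0.1, 15],
    [3, 0.7, 3], [2, 0.5, 5], [5, 0.9, 1], [0, 0.3, 8],
])
y = np.array([0, 0, 1, 0, 1, 0, 1, 0])  # 1 = default

# Depth is capped so every path from root to decision
# remains short enough for a human reviewer to trace.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The printed rules ARE the model: each denial can be
# explained by the exact conditions that produced it.
print(export_text(tree, feature_names=features))
```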
3. The decision of the Court of Justice of the European Union on credit scoring

The legal case decided by the Court of Justice of the European Union (EUCJ) started in Germany and concerned the processing of personal data by a private credit agency. This entity provided information on the creditworthiness of third parties, such as consumers, to banks or lending businesses [33].
At the same time, the credit agency was the data controller: it processed the personal data of the profiled persons and compiled the scores to be provided to the applicant banks using statistical and mathematical methods.
The credit score assigned by the data controller was taken into account by the scoring agency's contractual partners, who used those results in their decision-making process to decide whether or not to grant a loan to the borrower. The bank refused the applicant's credit request. The refusal was based on the result provided by the private agency in charge.
Following this, the client requested access to the information concerning her based on Article 22 GDPR. The German data protection authority rejected this request, allowing the claimant to obtain specific information on her personal data but not on the functioning of the negative credit scoring calculation, which the applicant claimed is the heart of credit scoring, and which the agency maintained was a process protected by trade secrets. The applicant challenged the refusal in court.
According to the referring court, the core of the question was whether determining the probability of default constituted an automated process within the meaning of Article 22(1) GDPR, since this provision is oriented towards protecting (natural) persons from the discriminatory risks associated with purely automated decisions.
The question concerns the stage of the customer's creditworthiness assessment at which the automated calculation process fits: whether at the assessment stage carried out by the third party (i.e., the bank) on the basis of the score provided by SCHUFA, or in the actual calculating phase.
In the first case, there would be a legal loophole, in that SCHUFA would have to respond to the requesting data subject based on Article 15(1)(h) GDPR alone, but not based on Article 22(1), and this would amount to a lack of protection, since on the one hand the automated decision-making process takes place during the first phase.
On the other hand, the bank that requested the service and to which the probability rate is communicated cannot provide information on the automation of the service, since it is an outsourced service.
Since Art. 22 GDPR and Recital No. 71 have a specific rationale concerning the protection of the user against the automation of decisions without human intervention, it must be examined how Art. 31 BDSG (Bundesdatenschutzgesetz – Federal Data Protection Act) has implemented such protection in German law and whether it is compatible with it.
In this respect, two perspectives would open up: on the one hand, Section 31 BDSG would consider only the use of the probability rate, but not its calculation, as an automated process, and again there would be a lack of protection. On the other hand, if calculating that probability rate did not constitute an automated
decision-making procedure for natural persons, neither Article 22(1) GDPR nor the opening clause of Paragraph 2(b) could apply.
The referring Court's question concerns the definition of what is intended as an 'automated decision' within the meaning of Article 22 GDPR and how this applies to credit scoring.
The EUCJ states that for Article 22 to be applicable, three conditions must coexist, namely: 1. that there must be a decision; 2. that it must be 'based solely on automated processing, including profiling'; and 3. that it must produce 'legal effects [concerning the data subject]' or affect them 'in a similarly significant way'.
Concerning the first condition, the Court recalled the definition provided in Recital 71, according to which the data subject has the right to opt out of the legal effects produced by a purely automated decision affecting them, such as the automatic rejection of an online credit application or online recruiting practices managed by algorithms [34].
Elaborated in these terms, the Court stated that the credit scoring decision referred to in the reference for a preliminary ruling falls within the scope of Article 22(1) GDPR, since the activity carried out by SCHUFA is profiling under Art. 4, point 4, of the GDPR, from which by its very nature discriminatory results may emerge, given that it involves data on even intimate characteristics of a person, such as health, personal preferences, or interests not always directly related to their behavior, as well as the professional performance, economic situation, reliability, location, or movements of that individual [35].
All these situations may be subject to measurement or balancing in the light of fundamental rights.
After that, the question referred for a preliminary ruling explicitly relates to the automated calculation of a probability rate based on personal data relating to a person and concerning that person's ability to honor a loan in the future.
Such a decision produces significant legal effects on the person, since the conduct of the credit scoring company's client (i.e., the 'third party') to whom the probability result is transmitted is decisively guided by it: an insufficient probability rate will, in almost all cases, lead to a refusal to grant the requested loan.
Therefore, calculating such a rate qualifies as a decision producing legal effects concerning a data subject or similarly significantly affecting them within the meaning of Article 22(1) GDPR. The latter gives the data subject the 'right' not to be subject to a decision based solely on automated processing, including profiling. This provision lays down a prohibition in principle, the breach of which does not need to be asserted individually by such a person.
Indeed, as is evident from the combined provisions of Article 22(2) of the GDPR and Recital 71 of that regulation, the adoption of a decision based solely on automated processing is authorized only in the cases referred to in that article, i.e., where such a decision is necessary for the conclusion or performance of a contract between the data subject and a data controller within the meaning of point (a), where it is authorized by the law of the Union or of the Member State to which the data controller is subject under point (b), or where it is based on the data subject's explicit consent as provided for in point (c).
Some attention must be paid to this last point, since the debtor's consent may be given without their being aware of it, for example by signing forms without due care, either because the applicant is vulnerable [36], because of a tendency to underestimate the consequences of such an act, or because of the necessity of the signature to continue with the credit application which, in the applicant's belief, he hopes will be successful.
In the cases referred to in Article 22(2)(a) and (c) of that Regulation, the controller shall at least implement the data subject's right to obtain human intervention, to express his opinion, and to contest the decision. What is more, in the case of the adoption of a decision based solely on automated processing, such as that referred to in Article 22(1) of the GDPR, on the one hand, the data controller is subject to additional information obligations under Article 13(2)(f) and Article 14(2)(g) of that Regulation. On the other hand, the data subject enjoys, under Article 15(1)(h) GDPR, the right to obtain from the data controller, among other things, "meaningful information about the logic used and the significance and intended consequences of that processing for the data subject."

Figure 1: Summary of the decision
4. Credit Scoring in light of the AI Act

The European Commission finally released the first proposal for a harmonized legal framework on AI at the European level. This is a unique piece of legislation which is aimed at achieving four specific objectives:

• ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;
• ensure legal certainty to facilitate investment and innovation in AI;
• enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;
• facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.

The enforcement mechanism of the proposal relies on a governance system at national level, building on already existing structures, and establishes a central cooperation mechanism through a "European Artificial Intelligence Board".
The most important innovation of the proposal is the establishment of four risk categories for AI systems, in order to protect citizens' fundamental rights. The explanatory memorandum attached to the proposal, in fact, notes that "The use of AI with its specific characteristics (e.g. opacity, complexity, dependency on data, autonomous behaviour) can adversely affect a number of fundamental rights enshrined in the EU Charter of Fundamental Rights ('the Charter'). This proposal seeks to ensure a high level of protection for those fundamental rights and aims to address various sources of risks through a clearly defined risk-based approach. With a set of requirements for trustworthy AI and proportionate obligations on all value chain participants, the proposal will enhance and promote the protection of the rights protected by the Charter: the right to human dignity (Article 1), respect for private life and protection of personal data (Articles 7 and 8), non-discrimination (Article 21) and equality between women and men (Article 23). It aims to prevent a chilling effect on the rights to freedom of expression (Article 11) and freedom of assembly (Article 12), to ensure protection of the right to an effective remedy and to a fair trial, the rights of defence and the presumption of innocence (Articles 47 and 48), as well as the general principle of good administration. Furthermore, as applicable in certain domains, the proposal will positively affect the rights of a number of special groups, such as the workers' rights to fair and just working conditions (Article 31), a high level of consumer protection (Article 38), the rights of the child (Article 24) and the integration of persons with disabilities (Article 26). The right to a high level of environmental protection and the improvement of the quality of the environment (Article 37) is also relevant, including in relation to the health and safety of people. The obligations for ex ante testing, risk management and human oversight will also facilitate the respect of other fundamental rights by minimising the risk of erroneous or biased AI-assisted decisions in critical areas such as education and training, employment, important services, law enforcement and the judiciary. In case infringements of fundamental rights still happen, effective redress for affected persons will be made possible by ensuring transparency and traceability of the AI systems coupled with strong ex post controls."
The risk categories are related to the degree (intensity and scope) of risk to citizens' safety or fundamental rights; AI systems are classified into four different categories, among which the high-risk ones have to comply with many requirements and obligations. Taking inspiration from product safety legislation, the classification of risks is based on the intended purpose and modalities for which the AI system is used, not only on its specific function. Depending on the national legal system, the qualification of high risk may have consequences over liability, such as that under art. 2050 of the Italian Civil Code. The proposal also draws up a list of prohibited AI systems that fall within the "unacceptable risk" category [37].
The proposal, in Annex III, classifies AI systems employed for credit scoring as "high-risk". The decision to include such systems in this category was most likely driven by the fact that financial institutions play an important social role by deciding whether to grant a mortgage or a financial instrument to citizens. In the end, they represent the only obstacle for less wealthy families to own a house or to afford essential means for their everyday life (e.g., being able to open their own business).
AI systems are known to perpetuate societal and historical biases, and there is no reason to believe that social scoring systems would be different: by providing safeguards, transparency measures, and precise obligations on AI providers and users, the
legislator intended to protect citizens from such systems.
In particular, the provisions about Data Governance and transparency are the most important. As known, an AI system is only as good as the data it relies on: if the data is flawed, the system will be biased. By providing an obligation to test the datasets for biases, the AI Act will ensure that credit scoring applications are not designed to discriminate against groups or individuals, and by mandating clear instructions and information, it will put citizens in the position of being able to challenge the systems.
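As an illustration of what testing a dataset for bias can involve, the sketch below computes one elementary fairness indicator, the difference in historical approval rates between two groups (often called the demographic parity difference). The data, group labels, and tolerance threshold are invented; the AI Act does not prescribe this specific metric, which is only one of many possible checks.

```python
# Minimal sketch of one possible dataset bias check: compare
# loan-approval rates across a protected attribute. Data and
# threshold are invented for illustration.
import numpy as np

# Invented historical records: group label (0/1, e.g., a
# protected attribute) and outcome (1 = loan granted).
group   = np.array([0, 0, 0, 0, 1, 1, 1, 1, 0, 1])
granted = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 0])

rate_g0 = granted[group == 0].mean()
rate_g1 = granted[group == 1].mean()
disparity = abs(rate_g0 - rate_g1)  # demographic parity difference

print(f"Approval rate, group 0: {rate_g0:.2f}")
print(f"Approval rate, group 1: {rate_g1:.2f}")
print(f"Disparity: {disparity:.2f}")

# A provider might flag the dataset for review when the
# disparity exceeds a chosen tolerance (invented value).
if disparity > 0.2:
    print("Warning: dataset shows a large outcome disparity.")
```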
Although promising, the new regulation has not come as far as mandating full interpretability for AI systems. Therefore, some biases might still be present, and they might be difficult to detect when black boxes are employed.
5. Conclusions

The discourse presented herein, along with the data subject's rights to access their data, aligns with the acknowledgment of the right to explanation, thereby supporting the objectives of Article 22 of the GDPR. This article is designed to safeguard individuals from the potential hazards to their rights and freedoms posed by automated personal data processing, including profiling.
In scenarios where multiple parties with varying interests are engaged, such as the profiled individual, the profiling entity, and the lending institution, adhering to a narrow interpretation of Article 22 of the GDPR could inadvertently facilitate the evasion of the very protections it is meant to uphold, leaving the data subject (the most vulnerable party) without adequate legal defense. This narrow view regards the computation of the probability rate merely as a preliminary step, recognizing only the subsequent actions taken by an external entity, like a credit organization, as 'decisions' as defined by Article 22(1) of the GDPR [38].
Without an expansive interpretation, the individual subjected to profiling would be deprived of critical information necessary for their defense, as this data resides not with the bank but with the profiling company that collects and processes it. Conversely, recognizing the statistical evaluation as an inherent component of the automated decision-making process would rightly allocate responsibility to the profiling agency: it would be accountable for any unlawful data processing under Article 82 of the GDPR and contractually liable to the bank for the profiling service provided.
One may wonder whether such a principle may remain valid even after the AI Act's entry into force, whose long approval process seems to have reached its final stages, pending official publication. We note that Article 68c of the proposal signifies an enhancement of the right to explanation for automated decisions. This addition is applicable only where Union law, specifically Article 22 of the GDPR, does not already provide such a right. The provision introduces, beginning with its heading, an entitlement for data subjects to receive a 'clear and meaningful' elucidation of the decision-making process that involves them, particularly when high-risk AI systems are used and the decision significantly impacts their fundamental rights.
Under Article 13(1) of the AI Act Proposal, individuals may request explanations from the deployer regarding the AI system's role, the pertinent input data, and the principal elements of the resulting decision. Nonetheless, exceptions may apply if the deployment of such AI systems is mandated by Union or national law, provided these exemptions uphold the core of fundamental rights and freedoms and are deemed necessary and proportionate within a democratic society.
In conclusion, we believe that the AI Act might have been slightly "braver" by mandating more impactful transparency measures, such as interpretability, so that the reasoning behind the credit scoring classification would not have been hidden behind a black box.

Acknowledgements

This article was written with the contribution of the "SPIDER Project", granted by Cattaneo-LIUC University, and of Project 101108151 — DataCom — HORIZON-MSCA-2022-PF-01, partially funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the granting authority can be held responsible for them.

References

[1] A. Ricci, "Sulla segnalazione 'in sofferenza' alla Centrale dei rischi e la dibattuta natura del preavviso al cliente non consumatore", Contratto e impresa 1 (2020): 192-224.
[2] Cons. Stato, Sez. VI, Sent., 03/09/2009, n. 5198.
[3] Cass. civ., Sez. Un., Sent., 14/04/2011, n. 8487 (rv. 616973).
[4] Corte App. Palermo, Sez. III, Sent., 23/05/2023, n. 1003.
[5] P. Manes, "Credit scoring assicurativo, machine learning e profilo di rischio: nuove prospettive", Contratto e impresa 2 (2021): 469-489.
[6] A. Castelnovo, L. Malandri, F. Mercorio, M. Mezzanzanica, A. Cosentini, "Towards fairness through time", in Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Cham: Springer International Publishing, 2021, pp. 647-663.
[7] X. Dastile, T. Celik, M. Potsane, "Statistical and machine learning models in credit scoring: A systematic literature survey", Applied Soft Computing 91 (2020): 106263. doi: 10.1016/j.asoc.2020.106263.
[8] A. Castelnovo, R. Crupi, G. Del Gamba, G. Greco, A. Naseer, D. Regoli, B. S. M. Gonzalez, "BeFair: Addressing fairness in the banking sector", in 2020 IEEE International Conference on Big Data (Big Data), IEEE, 2020, pp. 3652-3661. doi: 10.1109/BigData50022.2020.9377894.
[9] D. Pessach, E. Shmueli, "A review on fairness in machine learning", ACM Computing Surveys (CSUR) 55.3 (2022): 1-44.
[10] S. Charles, "The Algorithmic Bias and Misrepresentation of Mixed Race Identities by Artificial Intelligence Systems in The West", GRACE: Global Review of AI Community Ethics 1.1 (2023).
[11] G. Pasceri, "Le scienze argomentative tra stereotipi e veri pregiudizi: la black box" (2023): 21-41.
[12] M. Dahl, V. Magesh, M. Suzgun, D. E. Ho, "Large legal fictions: Profiling legal hallucinations in large language models", arXiv preprint arXiv:2401.01301 (2024). URL: https://arxiv.org/abs/2401.01301.
[13] G. Cerrina Feroni, "Intelligenza artificiale e sistemi di scoring sociale, tra distopia e realtà", Il diritto dell'informazione e dell'informatica (2023): 1-24.
[14] G. Spindler, "Algorithms, credit scoring, and the new proposals of the EU for an AI Act and on a Consumer Credit Directive", Law and Financial Markets Review 15.3-4 (2021): 239-261.
[15] G. L. Greco, "Credit scoring 5.0, tra Artificial Intelligence Act e Testo Unico Bancario", Rivista Trimestrale di Diritto dell'Economia 2021.3 suppl. (2021): 74-100.
[16] E. Falletti, "Decisioni automatizzate e diritto alla spiegazione: alcune riflessioni comparatistiche", Il diritto dell'informazione e dell'informatica 36.2 (2020): 169-206.
[17] C. Gallese-Nobile, "Legal aspects of AI models in medicine. The role of interpretable models", in Big Data Analysis and Artificial Intelligence for Medical Science, Wiley, 2023.
[18] D. Schneeberger, R. Röttger, F. Cabitza, A. Campagner, M. Plass, H. Müller, A. Holzinger, "The tower of Babel in explainable artificial intelligence (XAI)", in International Cross-Domain Conference for Machine Learning and Knowledge Extraction, Cham: Springer Nature Switzerland, 2023, pp. 65-81.
[19] F. Bravo, "Software di Intelligenza Artificiale e istituzione del registro per il deposito del codice sorgente", Contratto e impresa 4 (2020): 1412-1429.
[20] M. Pincovsky, A. Falcão, W. N. Nunes, A. P. Furtado, R. C. Cunha, "Machine Learning applied to credit analysis: a Systematic Literature Review", in 2021 16th Iberian Conference on Information Systems and Technologies (CISTI), IEEE, 2021, pp. 1-5. doi: 10.23919/CISTI52073.2021.9476350.
[21] L. Ruggeri, "La dicotomia dati personali e dati non personali: il problema della tutela della persona nei c.dd. dati misti", Diritto di Famiglia e delle Persone 2 (2023): 808-832.
[22] G. Gigerenzer, Perché l'intelligenza umana batte ancora gli algoritmi, Raffaello Cortina Editore, 2023.
[23] M. Hildebrandt, "Defining profiling: A new type of knowledge?", in Profiling the European Citizen: Cross-Disciplinary Perspectives, Dordrecht: Springer Netherlands, 2008, pp. 17-45.
[24] Bundesverwaltungsgericht (BVwG) (Austria), W252 2246581-1, 29/06/2023.
[25] K. Demetzou, G. Zanfir-Fortuna, S. Barros Vale, "The thin red line: refocusing data protection law on ADM, a global perspective with lessons from case-law", Computer Law & Security Review 49 (2023): 105806.
[26] G. González Fuster, The Emergence of Personal Data Protection as a Fundamental Right of the EU, Vol. 16, Cham: Springer Science & Business Media, 2014.
[27] E. Bayamlioğlu, "Machine Learning and the Relevance of IP Rights With an Account of Transparency Requirements for AI", European Review of Private Law 31.2/3 (2023): 329-364.
[28] C. Gallese, "The AI Act Proposal: a new right to technical interpretability?", arXiv preprint arXiv:2303.17558 (2023).
[29] J. Adams-Prassl, R. Binns, A. Kelly-Lyth, "Directly discriminatory algorithms", The Modern Law Review 86.1 (2023): 144-175.
[30] V. Amendolagine, "La responsabilità aggravata della banca che agisce per un credito inesistente", Giurisprudenza Italiana 5 (2021): 1080-1083.
[31] S. Foa, "Intelligenza artificiale e cultura della trasparenza amministrativa. Dalle 'scatole nere' alla 'casa di vetro'?", Diritto Amministrativo 2023.3 (2023): 515-548.
[32] C. Rudin, J. Radin, "Why are we using black box models in AI when we don't need to? A lesson from an explainable AI competition", Harvard Data Science Review 1.2 (2019).
[33] E. Falletti, "Alcune riflessioni sull'applicabilità dell'art. 22 GDPR in materia di scoring creditizio", Diritto dell'informazione e dell'informatica (2024): 110-128.
[34] N. Rane, S. Choudhary, J. Rane, "Explainable Artificial Intelligence (XAI) approaches for transparency and accountability in financial decision-making", Available at SSRN 4640316 (2023). doi: 10.2139/ssrn.4640316.
[35] J. Ochmann, L. Michels, V. Tiefenbeck, C. Maier, S. Laumer, "Perceived algorithmic fairness: An empirical study of transparency and anthropomorphism in algorithmic recruiting", Information Systems Journal (2024). doi: 10.1111/isj.12482.
[36] M. Girolami, "La scelta negoziale nella protezione degli adulti vulnerabili: spunti dalla recente riforma tedesca", Rivista di diritto civile 5/2023 (2023): 854-883.
[37] C. Gallese, "Suggestions for a revision of the European smart robot liability regime", in European Conference on the Impact of Artificial Intelligence and Robotics, Vol. 4, No. 1, 2022, pp. 29-35.
[38] E. Gil González, P. De Hert, "Understanding the legal provisions that allow processing and profiling of personal data: an analysis of GDPR provisions and principles", ERA Forum, Vol. 19, No. 4, Berlin/Heidelberg: Springer, 2019.